84. The Pimply Teenager
28th June 2024 • Trumanitarian • Trumanitarian


Shownotes

In this first episode of the mini-series on ACAPS' journey in the Tech to the Rescue AI bootcamp, Chiara Rizzi, Ali Arbia and Lars Peter Nissen discuss what to do with AI. It is early days in the bootcamp and Ali and Lars Peter are quite confused, but Chiara seems to know what she is doing, so everything will be OK.

Transcripts

0:15 Lars Peter Nissen

Ultimately, the impact that AI will have on the humanitarian sector comes down to humanitarians. Follow us as we go through a bootcamp to understand what AI is for ACAPS, and how we move from ACAPS to AI-CAPS. Chiara Rizzi and Ali Arbia, welcome to Trumanitarian.

0:46 Chiara Rizzi

Thank you Lars Peter, very happy to be here.

0:49 Lars Peter Nissen

Yeah, I'm excited about this actually. And we're here because we have recently, from the ACAPS side, gone into a bootcamp. And actually, I didn't do military service, so it's the first time I've ever been in a bootcamp. It's a bootcamp on AI and it's organized by Tech to the Rescue, who were on Trumanitarian a couple of episodes ago. And basically what Tech to the Rescue has done is mobilize some of the biggest tech companies in the world to help us figure out what to do with AI. So I think we should begin by introducing ourselves. Chiara, who are you?

1:25 Chiara Rizzi

Well, I'm a senior data scientist at ACAPS. I joined ACAPS two and a half years ago, but before that I was a researcher in high energy physics. So I was working a lot with what by now is called classical machine learning: things like classification, regression, things like that.

1:40 Lars Peter Nissen

Yeah. And I think it's safe to say, Chiara, you're the only one who actually knows what you're doing.

1:44 Chiara Rizzi

Approximatively.

1:49 Lars Peter Nissen

I think that matters. But Ali, why are you here?

1:52 Ali Arbia

So I'm the training unit coordinator at ACAPS. I started right when the pandemic hit. And so my background is more in academia. So I like to think about things, but I know very little about AI, except for what an interested person would know.

2:12 Lars Peter Nissen

Ali, driving in this morning in the car, I suddenly realized why you are the perfect fit for the incubator. I think you are by far the best hallucinator we have in all of ACAPS. And please take it in the nicest possible way.

2:27 Ali Arbia

It's scary, but that might be true.

2:30 Lars Peter Nissen

Whenever we work together, we… our minds wander quite freely, I think it's fair to say, and we end up in one rabbit hole after the other. And I actually think that's probably what AI does as well. So maybe you can empathize with the AI more than the rest of us.

2:44 Ali Arbia

I'm the AI whisperer. Exactly.

2:50 Lars Peter Nissen

All right. And I guess I come to this from the point of view of the director of ACAPS. I want to make sure that we understand how this technology can serve us. I think we can save a tremendous amount of money on this; that's probably what drives me first and foremost. So there's a very practical motive for going into this. But then of course, the big prize is: can we also transform the way the humanitarian sector works? Can AI actually be a true disruptor of the current business model and simply make us deliver better services to the people we serve? Maybe let's just start with a round of reflections on why we are doing this… Chiara?

3:24 Chiara Rizzi

So I think a big part of why we do this, or at least why I'm interested in this, is to really understand what the potential of AI is and how it can apply to us. Right now, generative AI especially has a lot of hype, and we really need to understand whether this can be for us.

I think everybody shares this kind of feeling of 'AI is important, I want to do this, but I don't quite know how, and whether it's for me'. Jean-Luc Montprivier, one of the lecturers at the bootcamp, shared a statistic saying that 80% of CEOs think that technology is important, but only about 6% are actually satisfied with how they implement it. So this is a shared feeling. And I think it applies to us as well.

4:27 Lars Peter Nissen

Yeah, I think so as well. And as the CEO or director… I think I'm in the 6%. I think I'm actually quite happy with how we're evolving. I can also see a slowing down as we have grown much bigger than we used to be, but I think we are quite a creative bunch actually. Still, in order for us to come to terms with AI, we really have to make a special and dedicated effort to get on the train. Ali, what are you thinking?

4:52 Ali Arbia

AI seems like this huge thing on the horizon, and I'm still not sure how much of it is hype and how much of it is really a tectonic movement. I would like to at least have a little bit of a better understanding of to what extent there is hype and to what extent it's really something that will change almost everything in the way we operate, or the way we see the world operating. So I think it's about finding out what will happen.

5:32 Lars Peter Nissen

Yeah, I remember hearing the story about elevators: when they started having buttons and you didn't actually need an elevator operator anymore to make it go up and down, you still had a guy or a gal standing in the elevator for a long time, because people simply didn't trust that complicated technology. And I don't want us to be that. I want us to actually start pushing some buttons now that we can, and make it practical. For us as ACAPS, what is AI and how can it begin to make small, tangible differences in our day-to-day work? And then, building on that experience, I think we will get to the moonshot, right? The big stretch: can we predict crises in a way we can't today? There's a whole bunch of things we can think about, but I think it requires that step-by-step getting used to pushing the buttons.

6:27 Chiara Rizzi

And for us as ACAPS, I think it's also important to figure out how much of a player we need to be in this change. Meaning, should we be proactive and be really involved in AI development, or should we just sit back and wait for big tech and companies to do this for us and then take advantage of what they develop? And this is one of the things that I hope we will figure out during this boot camp.

6:53 Lars Peter Nissen

I think that is a key question. Ali, what are you thinking about that?

6:57 Ali Arbia

I think for me, one other aspect, actually, to pick up your image of the elevator, is to understand which floor the elevator will actually bring us to. And if it turns out that sometimes taking the stairs might be better, that we still take the stairs.

7:19 Lars Peter Nissen

Yeah, nice. I think, Chiara, with you, I am very much on the latter option. I think we should do as little as possible ourselves. I mean, trillions of dollars are going into training these models and research is going on left, right and center. And it would be stupid and arrogant of us to think that we can do something really brilliant. I think we can leverage the technology, but we have to tap into that incredibly strong stream that's there, and not pretend like there's a special AI for humanitarian action.

7:47 Chiara Rizzi

And this technology is evolving so quickly that I think it's fantastic that Tech to the Rescue offers us the opportunity to be brought up to speed.

8:00 Lars Peter Nissen

I very much agree. You know, we have experimented internally. We have had a first model called Sophia. I don't even know if we call it a model, but we have an AI called Sophia who helps us pick out some indicators. My concern has been that we would close our eyes to everything else that's going on because Sophia is such a beautiful creature that that's all we need. And then AI for us becomes Sophia and we totally miss the development. So I also think that this bootcamp is an opportunity for us to get a much broader perspective on what AI is and adopt a really agile approach to how we deal with it.

8:34 Ali Arbia

I think it's also important to remember that the bootcamp can only do so much. It is a starting point, because things are developing so quickly, so fast, that we have to keep our eyes on the ball and keep at it in the coming years as well.

8:55 Lars Peter Nissen

Yeah, I fully agree with that. So I think it's fair to say that we all go into this very happy that this opportunity is there. It's exciting. We are all confused as to what exactly it'll be, but we think the only way to break down this big black box called AI is to get started, do something concrete and then learn from that. So let's talk a bit about the bootcamp. Ali, just explain the setup to us.

9:20 Ali Arbia

So this is the first cohort. There are about 10 organizations participating, and the focus is disaster management. We meet twice a week, and the whole bootcamp runs for seven weeks.

9:35 Lars Peter Nissen

And they're four-hour sessions, which is… really tough, to sit and listen to really techy stuff for four hours. And I think I'm a bit too much of a space cadet for that.

9:50 Ali Arbia

It's hard for anyone to concentrate after 90 minutes, I think, not just you, Lars Peter.

10:00 Lars Peter Nissen

Thank you. That's very kind. All right, Chiara, just tell us the content that they cover in this bootcamp.

10:03 Chiara Rizzi

Well, they have a very ambitious plan, because in this bootcamp they want to go from the basics all the way to us having ideas for prototypes. So they cover all the basic understanding of AI, but then also how, in practice, to go through the steps of developing a prototype that, at the end of the seven weeks, we'll be able to pitch to tech companies who could potentially help us implement it in practice.

10:33 Lars Peter Nissen

Yeah, so not only are they teaching us about what AI is, which I personally found really useful, getting the actual tech broken down into different steps and so on; they will also help us make it concrete and get some prototypes going. And then they'll get us access to tech companies who will help us develop them, and the resources for actually getting the compute and so on.

10:57 Chiara Rizzi

Yes, exactly. So we'll have the possibility to be matched with a tech company that will actually be able to help us implement our project, but we'll also have some advantages in terms of computing resources.

11:17 Lars Peter Nissen

Very cool. So what do you think of the bootcamp so far? Ali, any reflections?

11:22 Ali Arbia

I'm still a little bit confused. On one hand, it sometimes got quite into the technical details. On the other hand, there is also a strong business perspective that is covered. And in my head, I'm still jumping back and forth between the two and somehow I don't see the big picture yet.

11:46 Lars Peter Nissen

Chiara?

11:47 Chiara Rizzi

Well, so far we have attended the first two weeks, and I have to say that for the moment I'm very positive about it. At the beginning I was a bit skeptical, exactly because of this very ambitious program that they want to cover, and also because they face a very challenging situation: the audience is very diverse, with people attending the bootcamp who are CEOs but also very technical people. It's very difficult to find something that is interesting for everybody, and I think they're doing a very good job with that. From my side, for example, I find the technical part maybe a bit easier to understand, but all the business side is new to me. So I find that very challenging and interesting.

12:33 Lars Peter Nissen

Yeah. And for me, it's almost the opposite. I've done a couple of episodes on Trumanitarian around AI, and that of course has been at a very strategic level: what does this mean for the industry as such, and blah, blah. It can become very speculative, and it's fun, and I think it's good to be in that space. But to then be confronted with, you know, what it means to vectorize your data, and very, very technical things that I didn't even know existed, has helped me make some connections. I'm still utterly confused. So I think, Ali, Chiara is probably the only one inside her comfort zone as of now. But it has started such a productive train of thought in my head. So I'm really happy actually, even though I don't get half of what they're saying.

13:22 Ali Arbia

I think for me, that's probably one of the reasons for the confusion: I'm neither familiar with the business side nor the tech side, but I come into this like I come into too many other things. I know a little bit about everything, but nothing really. So that might be one of the reasons for my confusion. I hope it will get better.

13:44 Lars Peter Nissen

Yeah, and I actually think that the combination of our three perspectives is part of the strength here because you have to bridge from the very technical to the use case to the business for this to work. It's a powerful technology, but it's also very dangerous and it won't necessarily give us a net positive. And so we have to be very critical and we have to challenge each other's perspectives as I think we've done a little bit in our discussion so far. So we're two weeks in and we have already actually identified a number of use cases or a number of potential prototypes that we could begin working on. And Chiara, since you're the one who knows what you're doing, maybe you should just take the lead on this and then we will comment.

14:32 Chiara Rizzi

Ali at the same time was also hesitantly pointing at me, so yes, I guess I'll give this a shot. For the moment we are still in the brainstorming phase, where we have many ideas about possible projects. Some of these ideas were pre-existing, things we had thought about before; other ideas were generated through the bootcamp. So let's see a few of them. Well, first of all, one big topic at the bootcamp, and in generative AI in general, is large language models, LLMs. We are thinking about whether we could start from an existing model and make it learn about us, either through retraining or through giving it access to our repository of analytical reports, to really teach the model how we speak. To teach it our language. This would then open up a lot of potential applications, ranging from a chatbot that we could use to interact directly with our reports, or that our users could use to interact with our reports, to quality control and many other applications.
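To make the idea a bit more tangible: one common way to "give a model access to a repository of reports" is retrieval, where the passages most relevant to a question are found first and then handed to the language model as context. The Python sketch below illustrates only that retrieval step; the example passages, the question and the prompt wording are made-up placeholders rather than the actual ACAPS pipeline, and the call to a real LLM is left out.

```python
# Minimal sketch of retrieval over a report repository: rank passages by
# similarity to a question, then build a grounded prompt for an LLM.
# The passages, question and prompt below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for passages drawn from a repository of analytical reports.
reports = [
    "Flooding in the region has displaced thousands and disrupted markets.",
    "Access constraints limit the delivery of humanitarian assistance.",
    "Food insecurity is driven by drought and rising food prices.",
]

question = "What is driving food insecurity?"

# Represent passages and the question as TF-IDF vectors, rank by similarity.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(reports)
query_vector = vectorizer.transform([question])
scores = cosine_similarity(query_vector, doc_vectors)[0]
best_passage = reports[scores.argmax()]

# Build a grounded prompt; sending it to an actual LLM is omitted here.
prompt = (
    "Answer using only the context below, in the style of our reports.\n"
    f"Context: {best_passage}\n"
    f"Question: {question}"
)
print(prompt)
```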

15:50 Lars Peter Nissen

Yeah, the way I think about that one, and it's probably the one I'm most excited about as well, is that the large language models are like pimply teenagers, basically. They've had a basic training, and they know how to talk. You can't always understand what they're saying, but it's a very broad spectrum; they know a little bit about everything. And we'll then take one of these pimply teenagers and put them through university, the ACAPS University. We'll teach it everything we know and make it much sharper and much more passionate about our perspectives. We've probably written more than a thousand reports since the beginning of the project, and so we will simply feed all of those into the teenager, train them on that, and hope that out comes a young academic who's really sharp and who can help work with us on what we do. Is that stupid?

16:39 Chiara Rizzi

No, and I love this metaphor of the teenager going through ACAPS University. I think it's very fitting, especially because even after university, people can still make mistakes. And what we need to figure out is how this newly formed academic will perform.

17:00 Lars Peter Nissen

I think I made most of my mistakes during university. But that's okay. Yeah. Anyway, Ali?

17:09 Ali Arbia

I was just thinking through the metaphor. As someone who spent many years at university, I hope at some point I will leave university and actually do something useful.

17:29 Chiara Rizzi

But this is what our trained model should do for us: be useful. And this is still something that we need to fine-tune and understand. OK, once we have acquired this new database of knowledge, how do we actually put it into practice to make our life easier, to save time, to save money?

17:44 Ali Arbia

I was just thinking about when you introduced it: you said that we teach it how we speak. I would have said we teach it how we think…

17:58 Lars Peter Nissen

Hmm, interesting.

17:57 Chiara Rizzi

That's an interesting difference. And I on purpose said how we speak and not how we think.

18:04 Ali Arbia

I'm sure you did.

18:06 Chiara Rizzi

It's very challenging, and it's very difficult anyway to attribute to a machine the capacity to think in general, but especially to say that it thinks the way ACAPS does. We can definitely teach it the language. We can teach it the embeddings. It's difficult to say whether it will really think the way we do and propose ideas that are similar to what we would have thought of natively.

18:38 Lars Peter Nissen

Yeah, exactly. I think that is exactly where the use case goes from interesting, maybe helpful, to truly transformative, right? Because of course, the danger in doing this is that we simply feed it our narrow-minded way of thinking about the world, and when something new comes along, it won't pick it up. But if it can detect some of the underlying patterns in what we've seen in more than a thousand crises and start bringing up some of that, the way we think, Ali, or the analytical framework that we actually apply whenever we approach a crisis, if we can tease that out of it, I think that'll be incredibly powerful. And then there's the use case that our colleague Natalie came up with: when we do scenario building, we generate 40, 50 mini scenarios to kick off, to get people thinking a bit laterally. Maybe the AI can just do that with the push of a button. That would be incredibly powerful. Maybe those hallucinations will actually be really useful when you are thinking about forward-looking analysis, because you do need to get out of the box.

19:46 Chiara Rizzi

And I think it's also essential to start from simpler applications. Not to demand too much at the beginning: simple tasks that can still be useful and save time, rather than overly ambitious goals. I mean, it's important to have ambitious goals, but they need to be broken down into steps that are achievable.

20:06

How do you eat an elephant?

20:08 Chiara Rizzi

Exactly.

20:09 Lars Peter Nissen

Yeah, one bite at a time.

20:11 Ali Arbia

And I think this is why it's great to have you on board, Chiara, because with the whole excitement about AI, it's really difficult sometimes to keep this pragmatic perspective.

20:19 Lars Peter Nissen

Yeah.

20:24 Chiara Rizzi

But I think all of the projects that we are discussing are feasible. They are all technologies that have already been implemented elsewhere. It's just a matter of fine tuning them for us.

20:39 Lars Peter Nissen

Cool, so that's the new ACAPS AI analyst. We talked about what we should call the person: Cassandra, The Virtual Analyst, the ACAPS Intelligence, The Co-pilot, but somebody's using that one already, so I guess we shouldn't. We haven't quite come up with a name yet, but that's the first use case. Let's take one of the other ideas we've been talking about.

21:06 Chiara Rizzi

We also have a few other use cases. For example, well, you mentioned before, Lars Peter, that we already have an AI tool in ACAPS, Sophia. This is great and it's already helping us a lot with data collection, but maybe it could be better. So another idea we have is to use the opportunity of this bootcamp to brainstorm about how Sophia could perform better. Or another option is to focus more on classical machine learning and predictions, since in ACAPS we do collect a lot of numerical indicators. And often, as is so often the case in the humanitarian sector, we are faced with information gaps. So we could try to think whether there's a way that artificial intelligence can help us use information that is already available to fill these information gaps for us.
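As a rough sketch of that gap-filling idea, classical machine learning can estimate a missing indicator from the indicators that are available, for example with a regression-based imputer. The example below uses scikit-learn on a tiny made-up matrix; the numbers and column meanings are illustrative assumptions, not ACAPS data.

```python
# Minimal sketch of filling information gaps in numerical indicators:
# each missing value is estimated from the other indicators via
# regression-based (iterative) imputation. The matrix below is made up.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Rows are crises, columns are numerical indicators (e.g. displacement,
# food insecurity, access score); np.nan marks an information gap.
indicators = np.array([
    [1.2, 3.4, 0.7],
    [2.1, np.nan, 0.9],
    [0.8, 2.9, np.nan],
    [1.9, 4.1, 1.1],
])

# Model each indicator as a function of the others and fill the gaps.
imputer = IterativeImputer(random_state=0)
filled = imputer.fit_transform(indicators)
print(filled)
```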

21:56 Lars Peter Nissen

So the first couple of weeks were very much around the foundations. We even talked about Alan Turing. And now I feel like this week is much more practical, beginning to actually see some examples of how things are trained and so on. And then what are we doing moving into next week?

22:13 Chiara Rizzi

Next week we'll be moving into more ideation and prototyping. So of course we don't yet know the content of the bootcamp, but what I hope to see is how we can bring our ideas forward in a more practical way.

22:26 Lars Peter Nissen

And I have to say that those two weeks are run by Ideal. And I'm just hyper excited about that. That's one of the companies I find most fascinating in the whole world. They're just absolutely awesome. So I'll be there every single minute of those two weeks. Yeah.

22:40 Ali Arbia

I'm curious about next week, and also a little bit nervous, because so far it has been a lot of big-picture, general reflections. That's much more my turf than what is coming up next week. So that will be interesting.

22:53 Lars Peter Nissen

Yeah. Welcome to Accountability.

22:57 Chiara Rizzi

Oh, I'm so excited for you to get more concrete.

23:00 Ali Arbia

I'm not surprised.

23:02 Lars Peter Nissen

Yeah, we'll be there for you, Ali, if it gets too concrete. Great. And then I guess once we've spent a couple of weeks ideating and have some more specific prototypes, we will move into actually making it a reality. And that, of course, will be really exciting. Great. Thank you, both of you, for coming and spending half an hour discussing these things. I'm so excited about this bootcamp and I think it'll be really good. I look forward to our next conversation about how we move from ACAPS to AI-CAPS.

23:35 Chiara Rizzi

Thank you for having us.

23:36 Ali Arbia

Thank you, it was fun.
