AI and Forensic Science: A Conversation with Prof. Ruth Morgan on Innovation, Ethics, and the Future
Episode 15 • 28th August 2024 • Women WithAI • Futurehand Media
Duration: 00:36:24


Shownotes

In this conversation, Jo Shilton interviews Professor Ruth Morgan, a Professor of Crime & Forensic Science and Vice Dean (Interdisciplinary Entrepreneurship) at UCL, about her work at the intersection of science, technology, and the humanities.

They discuss the use of AI in forensic science and the challenges of evidence interpretation, and they also touch on the importance of interdisciplinary approaches and the impact of culture and context on scientific observations.

Professor Morgan shares her journey into forensic science and the need for evidential underpinning in forensic techniques. The conversation highlights the role of AI in fingerprint and DNA databases, its potential to enhance evidence analysis, and the opportunities and challenges it presents. It discusses the shift from traditional forensic methods to the analysis of digital footprints and the vast amount of data now available, and also delves into the challenges of bias and regulation in AI, the need for interdisciplinary collaboration, and the importance of human creativity in the age of AI.

Takeaways

  • Forensic science requires interdisciplinary approaches that bring together diverse ways of thinking and seeing.
  • The interpretation of evidence in forensic science is influenced by culture, context, and the scientist's perspective.
  • The development of fingerprint and DNA databases in forensic science has relied on algorithmic tools for searching and identification.
  • AI has the potential to enhance evidence analysis in forensic science and improve the understanding of what evidence means in a forensic context.
  • AI is transforming forensic science by enabling the analysis of digital footprints and the vast amount of data available.
  • The shift from traditional forensic methods to AI-based analysis presents challenges in finding relevant material in a large ocean of data.
  • Bias and regulation are important considerations in the use of AI in forensic science.
  • Interdisciplinary collaboration is crucial in harnessing the full potential of AI in forensic science.
  • Human creativity and craft are still essential in the age of AI, and AI should be seen as a tool to enhance human capabilities.

Transcripts

00:00 - Joanna Shilton (Host)

Hello and welcome to Women WithAI. Today I have the great pleasure to welcome a Professor of Crime and Forensic Science who works at the intersection of science, technology and the humanities, with a focus on real-world challenges and impacts. She's also done a TED Talk, so I'm very excited for what I'm sure will be a very thought-provoking conversation. But before we jump into the podcast and how she's using AI, let me tell you a little bit more about her.

00:23

Professor Ruth Morgan is Vice Dean for Interdisciplinary Entrepreneurship in the Faculty of Engineering Sciences at University College London. She's not only a Professor of Crime and Forensic Science but also the Founder and Director of the UCL Centre for the Forensic Sciences and Co-Founder and Co-Director of the UCL Arista Institute, which is a new interdisciplinary institute seeking to transform how research engages with the real world. She's a strong advocate for interdisciplinary approaches that bring diverse ways of thinking, seeing and doing to address global challenges. Ruth is a regular speaker, including that TED Talk, which you can find on YouTube, on forensic science, the need for creativity and imagination in science and the interaction of science with policy and culture. She has acted as a special advisor to the House of Lords Science and Technology Committee and has been recognised as a World Economic Forum Young Scientist. Ruth also loves travelling, exploring ideas and having conversations with people who are passionate about what they do, whether that is making wine, designing buildings, writing sci-fi or performing magic.

01:23

Professor Ruth Morgan, welcome to Women WithAI. Well, thank you. My first question was going to be can you introduce yourself? But really I want to find out who do you speak to that performs magic.

01:37 - Ruth Morgan (Guest)

Oh my gosh, I'm part of this amazing institute. I'm a performer in residence at the Centre for Performance Science, which is a collaboration between Imperial College London and the Royal College of Music, and they just bring all these different people from really different walks of life together, because there's this idea that what you do is very much part of who you are. So we talk about performance in music or performance in magic, but we also need to think about doctors who perform, scientists who perform and pilots who perform, and this idea that performance is the thing that holds all of these things together. And if you think about it in that framing, actually there's so much to learn from all those different professions and capabilities. So it's a really, really diverse group, and I always learn something. It's amazing.

02:33 - Joanna Shilton (Host)

Yeah, because you're not just defined by your job. Everything that you do is who you are; you bring yourself to it, don't you?

02:41 - Ruth Morgan (Guest)

you, you know? Are you in the:

03:22 - Joanna Shilton (Host)

I love that. So, well. Following on from that, how did you become a professor of crime and forensic science, and how are you using AI?

03:34 - Ruth Morgan (Guest)

So, I think it's fair to say I haven't had the most linear of journeys. People do often ask, you know, how on earth did you get into forensic science? And telling them that I started off in geography is often not where they're expecting me to go, but I did.

I started off in geography really fascinated by people and their environment, how people influence their environment and how environment influences people. So I was working in environmental reconstruction, trying to work out how we could reconstruct past environments by looking at the clues we can gather from particular environments, and I was particularly focused on using minerals to do this. And we worked out that, basically, instead of using those same approaches and techniques to reconstruct events over tens of thousands of years, if you shorten the timeline you could do the same thing, but it would be useful for reconstructing events over tens or hundreds of hours. So actually that could then become really useful in terms of reconstructing crime events.

So that's sort of how I ended up thinking about how we can do this in a crime reconstruction, in a forensic science context, which took me then into the world of forensic science and realising that there's so much amazing science that can be useful when you're trying to reconstruct what's happened: what happened, when, who, where.

years ago. So in:

05:49

So a great example would be DNA that we find on a knife handle. The assumption is that the person who used that knife last would be the person whose DNA you find on that weapon. When you start doing a lot of research into this and sort of simulating it and doing a lot of experimental studies.

What you start to find is that actually, sometimes, the last person to use the knife isn't leaving the majority of the DNA. Sometimes it's the person who's used the knife the most over the last few months. The last person is often only a partial profile, or maybe not present at all, and you can also get DNA on there from somebody who's never even touched the knife, because they've been in contact with somebody who's been in contact with somebody who has touched the knife. So the sensitivity of the techniques is great; we can now tell that's definitely person X's DNA.

06:48

How did it get there? When did it get there? You need to be able to answer those questions in a forensic context, because it might be really relevant that your DNA's on that knife, but it might be that it's been in your knife block for the last two years, and actually that means very different things.

So, yeah, that was my kind of foray into forensic science, and identifying that that issue is the same across all forms of trace, whether it's DNA, finger marks, minerals or other small particles that you can get, whether that's illicit drugs or pollen grains or whatever it might be.

So, yeah, that's how I got into forensic science and set up the centre at UCL, and built a team to really try and tackle that question: what does it mean when you find it?

07:40 - Joanna Shilton (Host)

That sounds like incredibly important work, especially if no one was really looking into that or realising it before. That's incredible. And how are you using AI? I mean, has machine learning always been a part of that, or is it something that's always been there and we're just calling it different things now? How do you use it?

08:02 - Ruth Morgan (Guest)

Yeah, I love the question, because it's really interesting when you start thinking about it. Obviously AI has really just blasted onto the scene in the last 18 months, two years. Everyone's talking about it, and the capabilities that we're seeing emerge just explode your mind, don't they? But it is interesting to see quite how much the algorithmic approaches and tools have been embedded into life for years in the run-up to that, and in forensic science it's a similar situation. So two of the big breakthroughs, I'd say, in forensic science over the last 30 years have been the development of a fingerprint database and the development of a DNA database, and those are only possible by having algorithmic tools to be able to search and to identify similarities within those databases. It's interesting.

08:56

I think CSI is a great franchise, and I love it and was an avid watcher of it. When we used to watch TV rather than streaming, there were a certain few years where, whatever time of day, whatever channel, you'd always be able to find an episode of CSI. It was prolific, wasn't it? The difference, I guess, is the media portrayal of it: you put your finger mark lift into a scanner, and then suddenly you get a flash, and here's the matching print in the database, and this is the picture of the person, and here's a blue dot on a map where their phone is currently based, and you can jump in your car and go and apprehend them. It's never been like that. It's always been more of a: okay, here's your mark, and here's a selection of reference marks that have some similarity. You still need the expert fingerprint examiner to then go and look and make those comparisons between them.

10:06

So, that's to say, yes, we have had algorithmic tools in forensic science for a number of years. But I think what we're really seeing, and what I'm finding quite interesting, is that with the advent of mobile devices, we don't just have a physical footprint in the world, which is where I started, in terms of trying to work out from the grains of mud on your shoes where your shoes have been in a physical environment. We're now seeing that people increasingly have digital footprints; people are in the digital world as well as the physical world. And so we've got this shift in forensic science, which before was predominantly about trying to find the tiniest trace and then, once you've found it, being able to get as much information out of that tiny, tiny speck of material as possible. Now we're seeing a real change in that.

11:09

With the digital footprint that people have, there's actually so much data out there. Every time you change your phone, you just take all of the previous material with you, and each phone we have is getting exponentially richer, if you like, in terms of data. But that means the challenge is not trying to find the tiny speck in the first place; it's about trying to find the relevant material in a very vast ocean of other material. It's the same issue, what does it mean when you find it, which is the thread that runs across. But I think we're seeing a real change from this particular focus on trying to find the tiny speck of trace to being able to find what's important in a large ocean of data, and I think that's where we're seeing AI really offering opportunities to do an awful lot more than we would have been able to do before.

12:12 - Joanna Shilton (Host)

Time saving, I guess, because it can check so much more. But does it do the predicting in the right way? Because if a human's looking at it, we've got biases, haven't we? You might think, well, their fingerprints are on it but theirs aren't, or their DNA's on it less, but I think it was them anyway. Does AI do that too? What are the challenges when using AI?

12:39 - Ruth Morgan (Guest)

I was gonna say, I think there's lots of challenges and lots of questions, and I'm not sure we've got many of the answers yet. Some of the work that we've been doing has been to think about this, because AI has come onto the scene in pretty much every discipline that you can think of, whether it's computer science and physics, or geography, or law, or psychology; every discipline is thinking about AI because it's such a front and centre issue. And I think what's been really interesting in terms of that prediction piece has been the contrast that you can see between different disciplines. So there are some disciplines that are really focused on the algorithms and the tools and the capability and the efficacy of those tools, and they're predominantly based on a maths underpinning, so an understanding that anything you observe in the world you can characterise, you can code, and you can create an if-this-then-that kind of language. But then you've also got other disciplines that are more focused on the societal, human part, and their starting position is that humans evolve and change and there's a huge amount of variability within a group of humans. And so I think that's where the prediction challenge really comes to the fore, because there are those disciplines that are taking a more mathematical approach.

14:13

Where the potential for prediction is really quite exciting because of the levels of sophistication that these tools are beginning to get to.

14:27

But for it to be truly predictive, I think you've also got to enmesh into that an understanding that there is a huge amount of variability, that humans are not immutable and they do change and they do evolve and they do adapt. And so, when we're looking at things like predictive policing, for example, there are actually a lot of challenges; there's a lot of road still to travel to get to the point where we've got really effective and reliable algorithms that can also incorporate the variability and the dynamism of humans and their adaptability. And then you factor in adaptations within some of the AI approaches, and it's starting to get quite complex.

15:16

So I think we're definitely on a really interesting road, and the predictive capabilities are extraordinary and they will get more extraordinary. But we also have to factor in this idea that people change too, so this isn't something that can be solved and then it's done. This is something that's going to have to go together and constantly iterate and evolve.

15:40 - Joanna Shilton (Host)

Those unintended consequences of using it, because you've got what we assume it's going to do. And how are you seeing that in forensics, in terms of regulation? Is anyone regulating it?

16:00 - Ruth Morgan (Guest)

Yeah, I mean, I think again it's really interesting. We've got this amazing capability, and immediately the broad narratives have been: there's a huge amount of potential good and benefits to this capability coming our way, and there's a huge amount of potential harms and problems coming our way with it. What do we do with that? And I think a big push has been to think, right, how do we control it? How do we regulate? How do we create opportunities for innovation to ensure that we get the very best that's possible, make the biggest difference and produce the greatest good, whilst mitigating and reducing the chances for harm? And I guess that's been a really interesting conversation to be part of and to observe, because it works brilliantly, and I'm putting this through the forensic science lens, if you've got the good actors, for want of a better word, those that are wanting to do good in the world; they may well have very clear intentions for how AI and AI tools can bring about benefit. But you've also got nefarious actors, those that are seeking not to do great things, and regulation doesn't really account for the nefarious actors. We've seen that in lots of other spaces. We've got rules about what chemicals you can buy and how you can store them and utilise them, to try and minimise the damage that chemicals can do, but that doesn't mean it's not possible to get hold of these things and build bombs. So the regulation is there, but it only goes so far in terms of the intended consequences of those two groups. But what's particularly interesting, something we're particularly interested by, is this idea that regulation doesn't actually help you when it comes to the unintended consequences of either of those groups.

17:57

So a great example, I think, is diesel cars. There was a big problem in terms of lead, so let's change the kind of fuel that vehicles are utilising, and you end up in a world where we have an awful lot of diesel cars. It's now clear that that was actually a significant problem in terms of the pollutants and the health impacts, and I wonder if it's the same with AI, that there will be unintended consequences of the good capabilities of AI, and regulation can't help us with that. And so one of the things we've been thinking about is how do you get a really broad understanding of our world and our society, which has got, you know, people, their physical environment, their digital environment, and all of that's interconnected and relational and it's happening right now. But things that happen now will ripple out into the future and are changing things and directing our course for how things are going to look in five years' time, 50 years' time.

19:12

How do we think about AI in such a complex system, and do we need to think differently about how we do that? Traditional approaches to regulation, like for chemicals and building bombs, are quite linear, quite procedural; they take time, and the pace of AI and its developments is just unprecedented in that sense. So I think these traditional, linear approaches to regulation are only going to get us so far, and we need to start thinking a bit more broadly, thinking around those corners. Do we actually need to build something that's more like an ecosystem, one that can have its own checks and balances, that can correct itself, absorb shocks and cope with that evolution of people, towards some kind of equilibrium where the worst things are mitigated against and the good is able to grow and thrive? But yeah, that's just some of where our research is taking us.

20:19 - Joanna Shilton (Host)

You're right. It's such a massive thing to think about, isn't it? You know, I've been thinking about ChatGPT and large language models, and the biases in there, and how AI is predicting based on

20:32

what it's been fed. You've got to make sure it's been fed equal amounts of data, not just the data that exists, because that's the trouble: the data doesn't exist about everyone or about everything. It's just what someone, as you say, whether that's for good reasons, bad reasons, nefarious reasons, good actors, bad actors, what someone somewhere down the line has decided to put online or to put somewhere. Because I've heard you speak about ecosystems; you've talked about something called the meadow and the importance of having diversity. So can you explain it? You'll explain it much better than I'll be able to.

21:07 - Ruth Morgan (Guest)

Well, yeah, so this is where the Arista Institute has come out of, really. There's this observation that there are an awful lot of big challenges, like AI, that are remarkably persistent; they're incredibly complex and they impact us across the board in terms of scale and geography and everything. And so there's a question: is it that we need new knowledge and new understanding to be able to address these challenges, because they're very persistent and the current ways aren't working, you could argue? Or is it that actually all of the knowledge and insight and perspective that we need does already exist, but it's all very fragmented and it hasn't been connected and we haven't been able to bring it together? And the more we thought about this and the more we explored it, it became really quite clear to us that there's a lot of mileage in that latter idea, that everything we need is already there, but we just haven't connected all the right bits at the right time in the right way. And the metaphor for this is a meadow. So, thinking particularly about the university setting: in the university you've got loads and loads of different disciplines, spanning everything from pure science to applied science, to engineering, to arts, humanities and social science. And if you think of that like a meadow, which is lots and lots of different plants all coexisting in a particular space, you can start to get a sense of the potential of a university, because it's got everything there. It's all very eclectic, it's growing, it changes through the seasons, it can cope with lots of different things. But what's interesting in the ecological perspective on a meadow is that the most critical part of that ecosystem is not the soil or the climate or the introduction of a species. It's the bee, the bee that travels from plant to plant, cross-pollinating as it goes. And so this idea: if the challenge we've got requires the bringing together of different insights, but in different ways to what we would normally do in an individual disciplinary setting, how can we do that? And if we think of the university like a meadow, and we think about how we can enable people to be bees in that meadow, we start getting somewhere in terms of building bridges between different disciplines and different perspectives, and reimagining how we might approach a question, or how we might ask a different kind of question that we haven't asked before.

24:00

Text messages, as I said, are becoming really, really important as evidence, alongside physical evidence. But what we've observed is that if you've got a series of text messages, there's a huge amount of technical data you can get from them, in terms of when were they sent, what device were they sent from, who were they sent to, were they deleted, geolocation, where was that message sent from geographically. But there's also a huge amount of insight that can come from those who've got a literary background or a social science background or a psychology background or a linguistics background, and traditionally we've just focused on the more science part of that. We haven't actually brought in those with those alternative perspectives.

24:53

And then you start thinking, OK, so if we bring those two together, we start seeing what the computer scientist is seeing in those messages and what a literary scholar is seeing, and vice versa, and actually they're seeing completely different things because of their background and experience and training. You can then start making some interesting connections and maybe asking slightly different questions, and then you can start thinking about what are the privacy issues here, what are the ethics issues here, what are the justice issues here, what are the societal influences here, and start trying to wrestle with that question in a way that's a bit more engaged with the complexity of the system that you're operating in. I don't know if that makes sense, but I hope that gives you a sense of it.

25:41 - Joanna Shilton (Host)

Yeah, no, so we all need to be more bee, and that's...

25:49 - Ruth Morgan (Guest)

That's super cool, because, sorry, I just get so excited. There are thousands of species of bee, but there are only six or seven species that can make honey. And so there are all these bees going out there doing the cross-pollination, but there's only a very small number that can actually then bring that pollen back and transform it into honey, in a way that will hopefully engage people who are maybe outside of that meadow and enable them to do things. So that's one of our critical things: it's, you know, be more bee, and also let's create space for those that can make honey to do that, and see what might come out of it.

26:28 - Joanna Shilton (Host)

That's what AI should become like. You know, it needs to be fed with everything from everyone, everyone's perspectives, everyone's knowledge, not just a really small section of the population. Yeah, definitely. All right, we've cracked it!

26:44

This is all we need to do now. And this as well goes back to how you're not just your job; like, if you can do a bit of magic on the side, or maybe make some wine, or maybe, I don't know, sew quilts. It's all those bits and all that information that we as humans have gathered that makes us who we are, and how we're so good. And I think that's the trouble: people are just putting stuff into AI and thinking it will spew this out, but it's like, well, unless it's got all the learnings, how can we possibly use it? Because it is a tool, isn't it, really? And, as you say, it's not AI that's doing this, it's the humans that are using it, or deciding to pick and choose what they've got from it. No, I love that.

27:26 - Ruth Morgan (Guest)

It does make so much sense, and I love the fact that you just, yeah, winemaking, quilt making and magic. Just thinking, though, there's a huge amount of science involved. You can't make a wine without understanding chemistry. You can't do magic without, I think mechanical engineering is probably the closest I can think of, something that enables that. And yet they're nothing if they haven't got that craft brought into them as well. So you can make alcohol, but to make a beautiful wine is actually far harder. You can do something that's technically brilliant, but it's not magic unless you are positioning it for your audience and enabling them to engage with it, absorb it, take it away and let it do all these beautiful things. When we see a beautiful magic show, you take it away with you and it gives you joy, but it also gives you a "huh", or "oh my gosh, how did they do that?", or "wouldn't it be cool if...?"

28:31

Yeah, but science and craft, science and art, there's something really powerful about that in terms of what makes us human. And I think that's at the heart of the AI conversation: how do we use the science and AI with an understanding of people and society, and bring craft to that, so that where we go is somewhere that's brilliant for people rather than not.

29:02 - Joanna Shilton (Host)

Because obviously you're in the university, how are students being taught? I mean, in higher education and these sorts of settings, are students being taught to use it, or is it still, oh no, don't let them access it, because they might be using it for plagiarism, or they'll be cheating? Or are they being taught how to use it? Are you involved in that?

29:23 - Ruth Morgan (Guest)

Yeah, that's a huge part of things, and I'm really taken with a lot of the research that's coming out which talks about the future of work, and this idea that, for those about to go into university, let's say in the next three years, a large proportion of the jobs they will be doing five years from graduation might not even exist right now. And so, as universities, we want to be training our graduates with a lot more than just knowledge about stuff, for want of a better way of putting it. We want to be thinking about how we enable them to develop and grow and get really great at analytical thinking and critical thinking and creativity, because those are the kinds of skills that are actually going to make you a very successful person down the line, because we can't necessarily plan for a particular suite of jobs from a particular programme; that job might not exist in five years' time, and something else will have come along. And so, yeah, creating that resilience and capability to pick up new things, to make connections, to problem solve, I think that's what becomes really important. And so it's been really interesting.

30:52

You know, how has AI blasted into the higher education scene? Because I think, to start with, it was a bit like, oh my gosh, there's now a freely available tool that enables students to write an essay in two minutes, and this is a disaster; we've got to make sure that we regulate against that and ensure it doesn't happen. Through to now, where I think we're seeing some incredible examples of how people are embracing this technology and instilling it into the way that they teach and what they're teaching. There's an amazing academic in the US who has basically built an entire degree programme to teach how to use, I think it's focused on LLMs, so the large language models, and he used ChatGPT to create that programme, to teach it and to deliver it. And I think there are some really interesting pedagogical things there.

32:01

You know, it's not just about a tool over there that you can bring in occasionally when you need it, all the way through to how do we use this tool to help us learn, and to give us insight into the ways this kind of tool can evolve and change things, physically and also digitally. So I think there's something really interesting about that, and we are definitely seeing a move away from the "oh my gosh, this is something that students can use to cheat" through to "this is an exciting capability that we've got to ensure our students know how to handle, and to do so really, really well, so that they're prepared for whatever comes in the next year, five years, ten years". So, yeah, I think that's definitely a watch-this-space kind of thread, that one.

33:02 - Joanna Shilton (Host)

It's sort of a beginning, and you're right, it's not that AI is going to take those jobs. It's that actually we'll grow with it as we learn how to use it, and there'll be more opportunities, and hopefully all that ethics and diversity and equality and everything, if we're weaving all that into it now, will pave the way for the future.

33:26 - Ruth Morgan (Guest)

And I think it also really helps us; it really focuses the mind on what it is that humans are uniquely brilliant at. Because, yes, you can produce an essay in three minutes with the right kind of prompts, but what about that essay is great, and what about that essay isn't, because it hasn't been crafted by a human with the kind of insight that a human can bring? One of the things I'm quite excited about is how we actually celebrate human creativity even more, which seems a little bit back to front, but I do wonder if we're going to see the magic and the beauty of human capabilities more starkly than maybe we could before, because there'll be a lot of roles and a lot of activities that humans maybe won't need to do, or will be curating rather than producing.

34:24

That means that what the human does bring is so unique and so special that maybe we'll be able to envisage a time in the future where actually we celebrate what humans can do more, we appreciate the human aspect to a greater degree, and we craft roles and the future of work in a way that almost elevates the human. I'm not sure.

35:00 - Joanna Shilton (Host)

Yeah, that's made me feel really positive about the future. I've loved our conversation, Ruth. Where can anyone find you? What's the best way? I've mentioned that your TED Talk's on YouTube, but how can people find out more about everything you've told us?

35:15 - Ruth Morgan (Guest)

Oh, that's so kind. So I'm on LinkedIn, Professor Ruth Morgan, and if you Google UCL Arista Institute or UCL Forensic Science, it will come up. And I've also got a website, ruthmorgan.com.

35:39 - Joanna Shilton (Host)

Oh no, sorry, scrap that: ruth-morgan.com. But we'll put links in the show notes to all of those so people can find you really easily. And yeah, it's been absolutely wonderful to speak to you. Thank you for your time. Professor Ruth Morgan, thank you for coming on Women WithAI.

35:53 - Ruth Morgan (Guest)

Thank you. Yeah, it's been a pleasure.
