The Legal Maze of AI: Media Manipulation and Beyond with Kelsey Farish
Episode 17 • 25th September 2024 • Women WithAI™ • Futurehand Media


Shownotes

Join us for an enlightening discussion with Kelsey Farish, a top media and entertainment lawyer who has transitioned her expertise from traditional IP law into the cutting-edge world of artificial intelligence. Kelsey shares her unique journey and how her passion for the movie industry, coupled with a crucial incident involving Scarlett Johansson and deepfakes, sparked her deep interest in the legal implications of AI. We dive into the challenges of protecting one's image in the modern digital landscape and the intricacies of navigating multiple legal systems that often struggle to catch up with technological advancements.

Takeaways

  • Existing laws are inadequate for regulating AI-generated content, such as deepfakes, due to the complexities of intellectual property and publicity rights.
  • The use of AI in the film industry raises concerns about content inflation and job displacement, but also offers opportunities for democratisation of filmmaking.
  • Individuals can protect their image rights through contracts, even in the absence of specific legislation.
  • The legal landscape surrounding AI and media is evolving, but it will take time for laws to catch up with technological advancements.
  • AI can be used in film production to create realistic scenes and finish films when actors pass away.
  • The right of publicity protects individuals' control over their image and how it is used in films and online.
  • AI can be used as a tool to detect harmful content and manipulated videos online.
  • Transparency and education are crucial in understanding and navigating the use of AI.
  • AI can provide access to new ideas and concepts, but it's important to challenge and look beyond algorithmic recommendations.
  • There is a need for laws and regulations to ensure responsible use of AI and protect individuals' rights.

Find Kelsey Farish on LinkedIn

Visit Kelsey's website and find a spotlight on Kelsey's AI Media Expertise

Kelsey on X

Kelsey is also an advisory board member of vera.ai

Transcripts

Jo Shilton (Host)

Hello and welcome to Women WithAI, the podcast dedicated to amplifying the voices and perspectives of women in the ever-expanding field of artificial intelligence. Today, I'm thrilled to be speaking to a media and entertainment lawyer, someone with extensive experience helping clients manage legal risks while protecting their creative vision and business goals, who is recognised as one of Europe's foremost legal experts in generative AI. But before we jump into having a conversation, let me tell you a little bit more about her. Kelsey Farish is a media and entertainment lawyer with expertise that bridges the gap between traditional media law and new technologies, which has led her to specialise in Gen AI. At Reviewed and Cleared, she supports a variety of clients across the audiovisual, publishing and media sectors, as well as those in the creative and performing arts.

01:16

She was recognised as one of the up and coming female lawyers in [year], and she wrote one of the first peer-reviewed papers on deepfakes and media law.

02:18

So, Kelsey Farish, welcome to Women WithAI. Thank you so much for having me on the show. It's a pleasure to have you here. So, that all sounds phenomenally exciting. Can you tell us, please, about your journey from traditional IP law to becoming an expert in AI and media? What sparked your interest?

02:36 - Kelsey Farish (Guest)

Sure. Well, I mean, just listening to you chat through what I've done over the last six years, I felt like I just needed a nap listening to that. It's been a really busy time. I should say that it was all really kind of a happy accident, and it was sparked by just a real, genuine passion for the movie industry. I'm originally from the West Coast, I've always really been into films, and part of that has meant, you know, staying on top of Hollywood gossip and trends, right?

03:07

And about six years ago I came across what we now know as deepfakes. Scarlett Johansson was interviewed by the Washington Post, and she was complaining, rightfully so, about this new technology that was putting her face on top of pornographic actors' bodies, and it was being shared on the dark web, right?

03:29

And in this interview she said that there was absolutely nothing she could do about it from a legal perspective, because there were no laws that could regulate that sort of content, much less its spread on social media or Reddit or otherwise. And being a relatively junior lawyer at the time, I thought, oh, that can't be right. Surely there's something, some law out there that can help her out. And so I sat down and I basically did a lot of research over the course of a couple of months, and I realised that the situation is really complicated. So I ended up writing, as you said earlier, a peer-reviewed paper on it, and it turns out that it was one of the first big pieces of research about AI and media law, and from there it just really picked up momentum. I've just always been really interested in how people can protect their image, and AI completely distorts that whole ecosystem from a legal perspective.

04:28 - Jo Shilton (Host)

Yeah. So how... why is there no law against it? Is it because it's so new? And are you working towards there being a law, or is it just too difficult?

04:39 - Kelsey Farish (Guest)

So the issue is really this. If we take Scarlett as an example: Scarlett herself, when she works on a movie, doesn't actually own the copyright in the videos that people take of her. And so if someone was to use that content in a way that was unlawful, or, you know, unwanted, like it harmed her reputation in some way, she can't use traditional intellectual property laws to get it taken down. And so that's the first obstacle. The second is, even if she could use intellectual property laws to get it taken down, well, you can't just flip the switch and turn off the internet, right? A lot of this stuff, once it's online, it's online, and it's going to get shared far and wide. So we know that we can't really rely on copyright. Well, there are other legal tools in our toolkit, so we can look at things like publicity laws.

05:34

But publicity laws and rights of persona and personality (they're called different things depending on where you are in the world) only apply in certain situations. So, for example, in the United Kingdom you typically have to be a well-known person in order to have the right to your own image or your own face. It sounds crazy, I know. Whereas in other jurisdictions it's kind of a human right, you know: you automatically have the right to your own name. So we look at these things from different angles.

06:05

And it's not really because it's created by AI. It's just because the whole ecosystem is complex, and it gets tricky because AI and the content it generates is so lifelike that audiences and individuals who are viewing the content are getting increasingly confused and manipulated. Speaking about it from a legal perspective, it's not so different than if you had, say, a photograph that you were then sharing online without permission. But most of the time you can look at a photo and be like, well, okay, this has been doctored, or, you know, it's a sketch, it's been manipulated. AI just kind of confuses that whole thing; we just don't know as viewers what's real and what's not. So this is a long way of saying it's complicated because of outdated laws, there aren't enough laws on the books, and even if there were laws, they're really difficult to enforce.

07:03 - Jo Shilton (Host)

Because she managed to get them to stop using her voice, didn't she? With the other, more recent case?

07:08 - Kelsey Farish (Guest)

Yeah, that's right. But again, she wasn't really relying on a legal doctrine or a legal principle there.

07:15

She was using PR. She basically did an open letter that she shared with journalists, and then that public pressure... and, you know, OpenAI didn't want to get a bad reputation. They didn't want to be seen to be doing something that was maybe unethical. But can you imagine a situation in which someone, let's say you, were having your voice used? Are you going to be able to afford an expensive lawyer or a fancy PR team to put together a letter? And even if you could, and it was published, let's be real: who's going to pay attention? Usually people are paying attention to celebrities or other powerful people. So again, this becomes a much broader conversation about the rights of everyday people who are being impacted by these technologies as well.

08:03 - Jo Shilton (Host)

So what happened when you did the research into it? It got picked up, is that right? Because no one else had done that?

08:11 - Kelsey Farish (Guest)

Yeah, that's right. I guess I was just really stubborn and I wanted to see; I was just really curious, right? And this is a great example of letting your natural curiosity, and the things that you're naturally interested in, kind of lead your way.

08:27

I suppose, because at the time there were a few academics who were thinking about deepfakes and generative AI, but most of them were focusing on it in the political context. So people wanted to explore what could happen if you had a deepfake of, say, the president threatening nuclear war or something. People were really focusing on that from a national security perspective, and I was one of the first practising lawyers to really say: well, that's fair, and that's a really important topic, and we also need to discuss the criminal implications of intimate image abuse, as it's now known. But there's this whole other body of law, which is intellectual property, which is publicity, which is everyday people and the rights that we have to our own face. And I think it was just because I was one of the first movers in this space, and really, really obsessed with solving this problem, that it's now kind of become, I guess you could say, my life's work.

09:24 - Jo Shilton (Host)

I just think that the issue of how we as individuals appear in the public sphere, how our face, our voice, our LinkedIn photos are used... you know, these are not just silly, trivial things for a celebrity gossip magazine, even though that's where a lot of the case law begins. In this day and age, it has to do with all of us who have a presence online. Yeah, we had a situation at work recently where someone decided to go to one of the leadership debates before the elections took place. You know, he didn't go in a capacity representing the council or anything; he just went along, and he's there in the front, and he got picked up by the BBC. They were filming, and whenever someone was saying something, he was nodding his head or shaking his head.

10:13

And, you know, I think the Daily Mirror picked it up and said, oh, you know, shaky head man, noddy head man, they're reacting to Nigel Farage exactly the way the rest of us are thinking. But unfortunately it also got picked up by some kind of far-right Twitter group. They must have reverse-searched his image to find him, and then they picked and chose what they were going to take from LinkedIn to say, well, he's doing this, he said that, and it was completely unfair.

10:36

I mean, it's all blown over now, but for a weekend it was really awful. And thinking about it, you know, it was quite funny at first, and then it's like, well, no, it's not funny. But you're right: people can do whatever they like with your image once it's out there.

10:52 - Kelsey Farish (Guest)

That's right. And I want to pick up on a word that you said there, which is unfair. So often we see things that are mischaracterisations of what people have done or said, and, you know, if you take a second to pause and think critically, you can evaluate the full picture, the circumstances, right? Who knows, maybe a facial grimace on camera is because you've got something stuck in your shoe and you're kind of playing with your sock or whatever. The law, unfortunately, doesn't really care what's fair or not. Most of the time it just wants to know what's in the statute book or what a judge has said. Now, of course, the law is informed by social and normative constructs of fairness and equality, but a lot of the time, just because something hurts your feelings, or it seems unfair, or it doesn't really pass the common sense test...

11:46

Unfortunately, that's not good enough to bring a lawsuit. So again, taking that as an example: presumably, if you're on a television show, you'll sign a waiver that says, yes, you can use my image or clips of me, and you can share them on social media or what have you. And then what happens once that is shared

12:10

is really left to other people, and other people can misuse it. But let's say someone misuses a clip of me on YouTube, right? Like, they take an image of me from an interview that I've done, or a podcast, and they manipulate my voice with AI and say that I'm going to vote a particular way, when I'm not. If I wanted to get that taken down, there are a lot of steps I'd have to go through. It's going to be expensive. It's going to be annoying.

12:32

It's going to take a lot of my time. And a lot of the issue as well is that it's really complicated: there are a lot of steps involved, and it's also going to be psychologically draining. So it's a hugely complicated issue.

12:50 - Jo Shilton (Host)

Is the law...? Well, I'll just ask you this question, since you were mentioning the political context of images: you said just before we started recording that you'd been speaking about Donald Trump and the Swifties, and sharing pictures of Taylor Swift fans saying that they support Trump. Can you expand on how that comes about, or what that means?

13:30 - Kelsey Farish (Guest)

Sure. So just for your listeners, for their context: Donald Trump recently shared a couple of images on his Truth Social media platform in which we have some AI-generated young women who are wearing T-shirts that say things like Swifties for Trump, and he also shared an image of Taylor Swift herself stood in front of an American flag, pointing her finger, saying, you know, Taylor Swift wants you to vote for Donald Trump. And he reposted these and said, okay, I accept. As in, I accept this endorsement. So there are a couple of issues here.

13:52

The first is: if you have an image of a well-known person like Taylor Swift being used in the context of a political campaign, that's a big no-no. So her lawyers are probably going to be writing to his campaign saying, we don't approve of this. They might also seek to send a cease and desist letter, if they can show that he's sharing images that he doesn't have the copyright to. And if they can demonstrate that her reputation or her brand has been damaged (typically there has to be a financial implication, but if they can argue it), then they could also sue him for defamation, right? Because it might be an untruthful statement that he's purporting to be factual, and that, in turn, harms her reputation. Those are a couple of things.

14:41

The other issue has to do with these images that are AI-generated: these people that he showed the pictures of, they don't exist in real life. So you might ask, well, what's the harm? It's just a fantasy image, right? Well, because it says Swifties, and because it's using her name and her brand out of context... I could go on and on, but you can see why there are a lot of different angles that we have to explore here. The good news is that, typically speaking, when you are a political candidate there are rules about what you can and can't say. So in this case, he's attempting to manipulate the public into thinking, potentially, that a well-known individual is supporting his campaign when they're not, and that's just... it's not the done thing, right?

15:27

So I'd expect that her lawyers are going to be pretty busy this week.

15:32 - Jo Shilton (Host)

And is the law catching up? I imagine it's quite a slow and difficult process to get laws changed, because a lot of these laws exist from before... well, maybe from before the internet existed, before AI.

15:45 - Kelsey Farish (Guest)

Yeah, they do.

15:46

The main copyright law that we have here in the United Kingdom, that's from 1988.

16:05

I use this example all the time, but, like, we have to think, too: when we're outside and we take a picture of a flower or whatever, we have copyright in that image that we take.

16:18

However, it took a Supreme Court case for the law to recognise that photographs are capable of copyright protection, because before then the law basically said you have to be a human in order to have copyright protection over your work, and that makes sense, right?

16:34

But then the camera came along, and the court decided that the photographer's creative choices made the image his, and that happened in the 1880s.

16:56

So I think that what we're seeing now is yet another example of: wait a second, the technology has progressed, and even though certain people might have these quote-unquote common sense approaches to things, you need the law to catch up. And yes, people on both sides of the Atlantic are trying to figure this out, but I think it's going to take a couple more years before we get some new updates to intellectual property law. In the meantime, we are seeing more and more regulation of social media, as well as more and more regulation of things like political campaigns. But again, we have to be really careful not to override freedom of expression, or freedom of speech as it's known in the United States, because we don't want to, say, outlaw the use of AI in creating content just because it could be harmful.

17:46

Because AI can be used for a lot of really cool creative things as well. What it's going to depend on is the context of that image, or that video, or that piece of music.

17:55 - Jo Shilton (Host)

Yeah, because you've just come back from the Locarno International Film Festival, where you were invited, weren't you, to discuss AI and filmmaking. So how does that work? What's the film industry concerned about, would you say?

18:08 - Kelsey Farish (Guest)

Yeah.

18:11

So a lot of filmmakers right now are rightfully really concerned about two main things, and the first is content inflation.

18:18

The ease with which we can now create media with artificial intelligence means that there's just a lot more out there, and a lot of it can be pretty good. I'm not saying a full feature-length, cinema-ready film, but we can use AI to create pretty good clips or B-roll or, you know, enhancements. So the more content that's out there, the harder it is to get your production noticed by financiers or people who want to go and see your project, right? The other issue is, of course, job displacement. Actors are worried about having digital replicas made of themselves without consent, and people like colourists and editors and people who work as film extras, they're also worried about being put out of a job, because AI can now do it a lot faster, a lot cheaper, maybe not as good. But, you know, budgets are tightening and, with inflation and costs and all the rest of it, it's just creating a really crazy, uncertain time for the industry right now.

19:23 - Jo Shilton (Host)

Yeah, because I guess you might think, oh no, surely you must have lots of money to be able to do it, and it'll be all the high-end, top-grossing, high-budget films. But maybe, though, is it opening it out, so that people who haven't got much money, or haven't got the kind of big studio behind them, can try and get into the filmmaking industry?

19:47 - Kelsey Farish (Guest)

Yes, and that whole concept is known as the democratisation of filmmaking, and we see this as well. People who might not normally have access to expensive, studio-quality special effects teams can now use generative AI tools to create pretty cool stuff that audiences are going to want to see. Or, indeed, they can use software that will help them improve upon their script or, you know, create more interesting or nuanced characters, right? We can use ChatGPT to review a script and offer improvements or suggestions. And so these things are really exciting. But again, the flip side of it is, well, wait a second, you're potentially putting a screenwriter or an editor out of a job.

20:30

But an interesting comment was made by a producer who used to be at the British Film Institute, actually. Her name's Katie; she's at Henway Films. She made the comment, which was: well, if we don't somehow make films with our reduced budget, then everyone's going to be out of a job. You know what I mean? So I think this is an example of a necessity for people to maybe adapt with the times and grow with AI, as opposed to saying that it's horrible and evil and no one should use it. I definitely think there's a grey area in between. We need to be cautious of it, and we need to use it responsibly and ethically. But saying, oh no, all AI is bad and we shouldn't use it at all in our creative endeavours, I don't think that's necessarily the right approach either.

21:17 - Jo Shilton (Host)

And I guess, yeah, I'd say from a lawyer's perspective that it's probably challenging and exciting. Yeah, it's really challenging. Like a whole new world.

21:27 - Kelsey Farish (Guest)

It is, and some of the things, too... you know, we've spoken, you and I, in the last few minutes about things like legislation and things like lawsuits, but there are other things that we can do to control the use of AI. And it's kind of boring, maybe, but it's what I do day in and day out, which is draft contracts. So, even though a law doesn't exist right now that says you can or can't do this with AI, if you were working with... say you're an actor and you want to work on a film project, and you don't want them to use AI on your face, you could write into your talent agreement: hey, I'm happy to be an actor in your film, but I'm going to include this clause that says you, studio, will not use my face and manipulate it with AI.

22:14

So there are different approaches that everyday people (you don't have to be a Hollywood celebrity) can use to have some control and agency over their personal identity, or over their other creative content. So it doesn't have to be legislation; you can also use things like contract law.

22:34 - Jo Shilton (Host)

Yeah, so that's why people need you.

22:37 - Kelsey Farish (Guest)

Yeah, exactly. I spend a lot of time looking over a lot of fine print, and usually clients will say, well, can I do this, can I do that? And I'm like, I mean, you can have a go. We can write it into the contract, and it's up to the other party to decide if they want to accept it. You know, you might not get away with it, but it's always a really interesting thing, because I get to spend a lot of time negotiating and thinking about the common sense implications of what they want to do to get their story made, or their business deal across the line.

23:10 - Jo Shilton (Host)

Yeah, because I guess sometimes you want to protect your likeness, and you want to make sure... but then I'm just thinking, was it Gladiator? I mean, how old that film is now I don't know, but someone died before the end. I can't remember his name; this is terrible, I should have done the research. But, you know, one of the actors died, a really famous one, so they were able to use some of the old footage to put it together so they could finish. I mean, it obviously wasn't good for him that he died before it finished filming; it was awful. But that's the kind of... yeah, so there's always more than one side to the story.

23:41 - Kelsey Farish (Guest)

And it's interesting that you say that, because that's actually how the statutory right of publicity was born in California. So basically, you had a very famous actor, and he passed away, and the film studio wanted to use old clips. Nothing to do with AI; they just wanted to take his old clips and put them into a new film. And because they had the rights to the film, and obviously the film studio had the copyright in the clips, they're like, well, this is cool, we're just going to do this. Well, not so fast, said his kids. This actor's kids said: no, you can't use our dad in this way. He never would have wanted to participate in another film, and, oh, by the way, he's going to 'act', quote unquote, in this new film?

24:22

And that dispute is ultimately how the right of publicity ended up being written into statute; that's the California Civil Code.

25:03 - Jo Shilton (Host)

Yeah. And I've fallen for... well, I've fallen for scams. Or you see pictures and you think they're real, and you're like, look at this. And then you're like, hang on.

25:28 - Kelsey Farish (Guest)

I mean, I've fallen for deepfakes as well. A couple of years ago (again, to use US politics, because even though I'm an English lawyer, I'm originally from the States) I retweeted a voice deepfake of a particular former president saying something really idiotic. And because it matched my own political views and, I suppose, my own biases, I didn't challenge it. I didn't think, oh wait, did he really say that, before I retweeted?

25:56

No, I just retweeted it, because I was like, yeah, he's an idiot, of course he said this crazy thing. And then it took me a few minutes before I thought, wait a second, that's crazy even for this person. But yeah, I say that all the time: even me, someone who's an expert in this field, I fall for them too.

26:14 - Jo Shilton (Host)

We're saying that AI can create harmful content, but can it be used as a tool to try and detect that, though?

26:20 - Kelsey Farish (Guest)

Yes, it certainly can. So that's the vera.ai project that I've been working on, and here you have incredibly intelligent computer scientists from all over Europe who are working to use AI as a detection tool. What they'll do is use AI to analyse certain behaviours and certain sentiments that people are picking up online, and they can group them. It's crazy: it looks like a spider's web. If you zoom out and imagine Twitter or other social media platforms as this crazy web, you can actually visualise little clusters and overlaps. And if there's a lot of activity around a certain phrase (so, for example, Southport and immigration), if those words are being picked up a lot more than normal, then AI can hone in on those areas, and it can also see if videos are being used that have maybe been manipulated. So again, artificial intelligence, at the end of the day, is just a tool, and what we do with it? That's where the real critical thinking needs to come in.

27:28 - Jo Shilton (Host)

Yeah, yeah. It's how we use the tool as humans, not just the tool itself.

27:33 - Kelsey Farish (Guest)

I think that's right. And something, again, that I heard at the film festival was: we need to be the ones using AI.

27:38

AI shouldn't be using us, or, I would say, taking advantage of us. And part of that, not to go into a full political or socio-economic criticism of capitalism...

27:53

But I think we also have to hold these very powerful individuals who are the CEOs of these tech companies to account as well. Because social media platforms love sensationalism; crazy stuff online drives advertising clicks, and that's how this whole model works. They want insane, crazy, fascinating, wonderful stuff to be on Instagram or wherever, so that you look at it and you buy stuff off of it. And unless we find the right balance between protecting people from what they're seeing, and also allowing people to express themselves and explore new ideas, well, the whole problem is just going to become exacerbated. And I don't have the right answer; I don't know what the right answer is. But I would say that, as the mother of a small kid myself, I am a little bit worried about what's to come, unless we start adding some safety rails and guardrails to our online ecosystem.

29:01 - Jo Shilton (Host)

Yeah, because I don't want it to take away creativity, or stifle anything, or for it to become the norm that you're not thinking for yourself. I mean, looking to the future of AI, I think we've sort of gone through the 'it's going to take over the world, it's going to take all your jobs' stage. Actually, no, it's not. It can be used for good, or it can still be used for bad. I mean, are there any upcoming trends that you're either concerned about or excited about?

29:27 - Kelsey Farish (Guest)

That's sort of two questions, really. Yeah, I would say the trend that I'm most excited about is what you said earlier: it's about giving access to different ideas and concepts that people otherwise might not have had access to. I think of it like taking a group of school kids to a modern art museum, kids who might never have had the chance to see works of modern art. In the same way that when we go to museums or concerts we get inspired by what we're experiencing, and then we turn around, go home that day, and maybe write a poem that was inspired by something we read or saw. We're not ripping off the existing creativity, necessarily; we're just inspired by it. And I think that we can have, dare I say, dialogues with these generative artificial intelligence platforms, where we can say: hey, I've got this story, or I've got this idea, this business plan; help me refine it. The AI tool can draw upon billions upon billions of files and utilise that information. The issue is: are they doing it in a lawful and responsible way? In the same way that you pay for a concert ticket or you pay for Spotify: you don't just download files off the internet anymore.

30:51

I mean, I had Napster when I was younger, so I know all about pirating content. But I think that's really exciting. On the other hand, you know, the tools for improving people's faces or their skin tone or their weight, that's really scary, because it's sort of addictive. You take a selfie, you've got a spot on your cheek, you go into the tool, you airbrush that away. Oh well, actually, if I just press one more button my hair will look better, and if I just press one more button then my teeth will be perfectly white. You know, it's a slippery slope, and I do worry about the mental health, especially amongst young girls, who are going to be growing up with that. And yeah, that's worrying to me.

31:55 - Jo Shilton (Host)

Yeah, I think that's where education is going to play a massive part. Not just, well, not letting students use it, and not embracing it too readily without the right rules and regulations in place, but also just talking about it, and about the fact that what you see isn't always real.

32:09

I mean, I grew up with it. I was talking to a friend the other day, and we were like, oh, do you remember Kate Moss and heroin chic in the late 90s? It's like you're supposed to be really skinny, or you're supposed to look like this or that. It's always been there, but it's just easier to access now, because everyone's got a mobile phone, kids are getting them, and you don't know what they're sharing at school. So, yeah, I think it's good to educate people.

32:31 - Kelsey Farish (Guest)

You and I both grew up in a similar era, and I remember looking at photos of Kate Moss or similar, and, you know, I wonder... I'm not saying that it necessarily would have, but I wonder if I might have been spared a little bit of anxiety, or maybe a little bit of stress over my own body image, if every single photo that I had seen of a super-thin or beautiful model in a magazine had had a warning label that said: hey, this image has actually been Photoshopped. Or: hey, reminder, this person's job is to not eat very much and to wear clothing and walk down a runway.

33:11

This is not what 99% of other people look like. And I'm not saying that we need to have, you know, a police force or a nanny state or anything like that. But you were absolutely right when you talked about conversation. It's asking: is what we're seeing actually real? And just knowing that things that look real might not be. Yeah.

33:31 - Jo Shilton (Host)

Well, I've seen more and more people, even just on LinkedIn, saying that this content was, you know, curated by a human but made better with AI, or generated by AI but curated by the human. I think it's that caveat, that label, that will have to go on everything. Because, you know, we as adults sort of think, oh yeah, obviously that's airbrushed, obviously no one looks like that. Obviously, obviously, obviously. But as a young, impressionable child, whether you're male or female, that's going to be difficult to tell.

33:50 - Kelsey Farish (Guest)

Absolutely. And, you know, a lot of artificial intelligence has been happening without us even really knowing it for years and years now. I mean, I'm old enough to remember when Netflix DVDs came in the post, and it was a really exciting thing to sit down at my dad's computer and pick out the three DVDs that we wanted that week. I could choose one, my brother could choose one and my dad could choose one, and then you'd watch them, pop them back into the post, and then you got your next three DVDs, right? And now you log into Netflix or other streaming platforms, and my Netflix account looks very different from my husband's. Why? Because artificial intelligence knows my demographics.

34:35

It knows my viewing habits, it knows my other activities, and it uses that information to curate (there's that word again) recommendations. And this is great, because it means that I'm actually getting shown things that I am probably going to want to watch, rather than some random sports biography or something that my husband wants to watch. But it comes back to transparency. I think that if these tools or these platforms said: actually, hey, Kelsey, you're being shown stuff that we, the artificial intelligence, think you want to watch...

35:09

But hey, if you opt out of this, or if you turn off this personalisation feature, maybe you're going to see something that you might not have been shown, something that might broaden your worldview in a different way. And that, I think, is really exciting as well. So I kind of want to get to a situation where maybe we're challenged and inspired to look beyond the algorithm.

35:35 - Jo Shilton (Host)

Yeah, no, you're right, because otherwise that's when it starts to limit what you see. You're only going to see the things that it thinks you want to see, and it's not going to know that actually you might have an interest in shark fishing.

35:47 - Kelsey Farish (Guest)

Yeah, exactly. I think that's right. And, you know, we all get really comfortable in our own routines, and I've probably listened to the same Spotify playlist about 19 million times. Why? Because I know what I like. But I don't know about you; I just remember that feeling of listening to the radio and hearing something for the first time that you wouldn't have found some other way. And yeah, it's exciting to explore that new avenue.

36:12

So again, the thing that I'm discussing now is echo chambers and biases, and that's not just an AI issue, it's a broader social media issue. The real question is: do we just let the tech giants sort this out for themselves, or do we need a law that says, actually, if you're going to use an algorithm to recommend content to people, you always need to give them the right to opt out, in the same way that we see with cookies? You know those cookie banners that are so annoying? They're actually really good, right? Because then we can make the decision for ourselves. So, I don't know, maybe I would like to see a pop-up that says: hey, Kelsey, do you want to opt out of this personalisation and maybe see something that you otherwise might not?

36:59 - Jo Shilton (Host)

That's a good idea; you should do it, right? Well, I could talk to you all day, Kelsey. But for any young professionals or students who are interested in the intersection of AI and the law (because you've got such a cool job), what advice would you give them on how to prepare for their careers?

37:14 - Kelsey Farish (Guest)

I've been really lucky in that I basically got lost on my way to drama school and ended up at law school instead, right? And so I've just always been really interested in film and television.

37:26

My natural interests in those areas led me to pursue intellectual property, and that, in turn, led me to AI.

37:34

I've been interested in these things for a long time, and I would say that it all comes back to following your natural curiosity.

38:01

And if you're interested in it, there's a chance that someone else is going to be interested in it too. So don't be afraid to just sit down and put in the hard work and learn as much as you can about a subject or subjects that interest you. And, who knows, you might be the first person to ask a question like: wait a second, is there really nothing Scarlett Johansson can do about AI-generated media? It's just because I asked the question. I sat down, I read a lot of books and articles, and no one had the answer, so I tried to find the answer myself. And you'll see this as well with scientists and other explorers. It's just that I asked the question, and I had the courage to pursue finding an answer.

38:42 - Jo Shilton (Host)

Fantastic, I love that. That's great advice. Brilliant. But where can people find you, Kelsey? Is LinkedIn the best place?

38:48 - Kelsey Farish (Guest)

Yeah, LinkedIn's great. Also kelseyfarish.com; I have a lot of stuff up there, whether it's, you know, little bits of advice for people who are creatives and want to know how to protect themselves. I'm doing a lot on reputation at the minute. So, kelseyfarish.com, and I'm always very happy to have a virtual coffee with people who want to chat through these issues. I'm going to be trying to go to as much of the upcoming BFI London Film Festival in October as I can, so I'll be around for real-life coffees then, and trying to catch as many films as I can too.

39:23 - Jo Shilton (Host)

Brilliant, fantastic. Can't wait. Kelsey Farish, thank you so much for coming on Women WithAI.

39:29 - Kelsey Farish (Guest)

You're so welcome, thank you.

39:30 - Jo Shilton (Host)

Awesome.
