E56 - Navigating Legal Complexities in AI and Creativity with Will Charlesworth
Episode 56 • 14th June 2024 • Creatives With AI • Futurehand Media
Duration: 01:05:49



Will Charlesworth is a media and technology lawyer, Partner at Keystone Law and member of the Artificial Intelligence All Party Parliamentary Group, who discusses the legal considerations around AI and its impact on the creative industry.


  • Processing data with AI requires understanding and compliance with data protection laws.
  • AI is being used in the legal sector for tasks like document review, contract analysis, legal research, and predictive analytics.
  • Human oversight and interpretation are still important in the legal field, as AI cannot replace soft skills and certain decision-making processes.
  • The ethical implications of AI and biases need to be carefully considered and managed.
  • Retrieval augmented generation involves using AI tools to retrieve information from specialised AI systems.
  • AI audits are important to assess the impact and compliance of AI systems.
  • The All-Party Parliamentary Group on AI (APPG AI) fosters a dialogue between policymakers, industry experts, and academics to inform legislation and promote ethical AI use.

Links relevant to this episode:

Thanks for listening, and stay curious!



Tools we use and recommend:

Riverside FM - Our remote recording platform

Music Radio Creative - Our voiceover and audio engineering partner

Podcastpage - Podcast website hosting where we got started


00:02 - David Brown (Host)

Morning Will.

00:03 - Will Charlesworth (Guest)

Very good morning, David. Thank you very much for having me on your podcast. It's very exciting to be able to talk with other people about AI and to share my experiences.

00:15 - David Brown (Host)

Well, thank you for saving me. For those listeners out there, I got a little stuck last week because someone dropped out, and I put out a call to my network saying, can anybody come and save me and hop on the podcast? Thankfully you were one of the people that replied, and I had quite a few people reply as well, so now I have some bookings out for the next couple of weeks, which is amazing. What's really interesting about you, and why I really wanted to have a conversation with you, is that you tick several of the boxes of types of conversations that I like to have, and I'll get you to give your sort of background in a second. But you know, you're a lawyer, you're working around IP and that sort of stuff, you're working with companies, and you have some thoughts on particularly the legal considerations around the creative space. So that's totally interesting.


But you've also worked in electronics and stuff like that, so you have a little bit of a different industry background to bring in. You're also on the UK APPG, the All-Party Parliamentary Group on artificial intelligence, so that's quite interesting as well, and maybe we can talk a little bit about that where you can. And then you're also an illustrator, so that just ticks all the boxes for the podcast. So maybe, if you want to, I'll let you give a little bit more detail about that, and then we'll just jump into a conversation, if that's okay.

01:44 - Will Charlesworth (Guest)

Yeah, that's fantastic. Thank you very much. That's a very kind introduction. I'm glad I'm ticking those boxes.


Yes, so I'm a media and technology lawyer at a city law firm, Keystone Law, primarily based in London. My work involves the protection of intellectual property and reputational rights for a range of clients, primarily those in the technology and creative sectors, as David said. Before becoming a solicitor, I worked for an electronics company in Hong Kong and in the US for a time, so I have an insight into how intellectual property in particular is applied in the real world across different territories: not only the technical challenges of that, but also the cultural differences as well, and I try and bring that understanding to my conversations with clients who perhaps aren't in the UK. As you said, I've been a member of the UK Parliament's All-Party Parliamentary Group on AI for, it's coming up to a year now actually, time flies, and that's been absolutely fascinating.


I'm happy to talk more about my time on that, and I'm also honoured to have been asked to be an advisor to Projectus, which is a specialist technology company currently focused on blockchain technology. And, as you said, in my spare time I'm also an illustrator, so I like to see things from a 360-degree angle. So when my clients, some of whom are artists and other creatives, are talking about the value of protecting intellectual property, I can understand it at a very base level as well, in terms of the protection that I expect for my own work, the value of the creative industries, and how AI at the moment is going to impact on that and how the landscape's going to change. But yeah, that's me.

04:12 - David Brown (Host)

Yeah, thank you. Yeah, like I said, ticks loads of boxes, so that's amazing. I guess my first question really is about. I'd like to dig into the legal bit just a little bit, and I've had some guests in the past. I've had an IP lawyer who specifically focused a lot on patents and that kind of thing and how AI might impact that part of the business, and I've also had a friend who works for the UK embassy in Washington DC, and she looks at sort of you know, very high-level IP and copyright type stuff.


th of June in:

05:33 - Will Charlesworth (Guest)

Okay, so do you mean in terms of where we are with IP law and AI, or specifically in terms of how my legal practice is using and adapting AI?

05:49 - David Brown (Host)

Um, well, yeah, they're two different questions, aren't they? I'm definitely interested in how your firm is using it internally and that sort of thing. But maybe let's start with, I guess, what are you telling your clients?


I mean, in a large sense. Obviously, I know you get paid lots of money to advise customers, but where do you think the law sits at the minute, and what are the major things that people need to look out for or need to be doing at the minute?

06:26 - Will Charlesworth (Guest)

No, and yes, my legal rates are extremely reasonable; I had to get that one in there. Yes, it's a very good question, and there are some legal issues that people do need to consider, which have perhaps been overlooked with the excitement of AI. So AI is extremely exciting, and businesses, creative businesses and creative artists themselves are increasingly adopting AI. They're becoming increasingly reliant on it to assist with lots of different tasks, and it starts with ChatGPT and goes from there to more sophisticated programs.


But what I'm finding, speaking with clients and also looking in more detail at the AI systems that are being adopted and used, is that there are some legal issues that people need to be aware of, some risks that can be managed and mitigated. So it's not all going to be doom and gloom, and I don't want anybody to get depressed; we're not going to abandon AI. But the more you understand about it, the better prepared you are, because I don't want people to be caught out: the excitement of AI and increasing reliance on it could potentially lead to some issues. So, when we're talking about AI and the risks, we can look at it from the start, which is training. For AI to be as effective and accurate as it is, it needs to be trained on as large a data set as possible. Depending on the type of AI technology, that can involve an organisation loading into the system a large amount of data. That could be graphics, it could be designs, it could be other information and other data, which is all well and good: you want to feed it more because it gets more accurate, and the better the information going in, the better the result coming out. But there are some legal risks around that. So, for example, personal data. If employee data, contact data or other people's data is put into the system itself, and I'll come on to this in a little bit, that can constitute a misuse of personal data if you don't have the right consent around using it. From an IP point of view, in terms of training the AI, there could be an infringement of third-party copyright.


So the learning process often involves storing and making copies of copyright works which, without permission, is likely to infringe copyright, and there is a notable legal case ongoing at the moment, Getty Images v Stability AI, which is currently going through the High Court and which will hopefully determine that issue. There are lots of complex legal arguments going on around it, and this is probably not the right place to get into them. But, generally speaking, if you are storing and making copies of other people's works that are protected by copyright and you don't have their permission, generally that will infringe copyright. And we also come on to another risk as well: trade secrets and confidentiality. If you are inputting confidential information, or other data that's a trade secret, into the AI to train it, the misuse of that is also a legal risk. So that's training the AI.


We then come on to the middle bit, the processing of the data, and one of the key risks arising from processing concerns personal data. I appreciate that when I start talking about personal data and data protection, a lot of people start to switch off, and it can be a very, very dry subject, but it's arguably one of the most important obligations that falls on any business, whether it's creative or not. The danger with AI is its black box nature: you're not quite 100% sure how it's processing that data, which algorithms are working, how the data is being tokenised and how it's being churned around. That makes it difficult for you as an organisation to say that you fully understand how the data is being processed, because you have an obligation to say whether it's being processed, sorted, organised or used.


You need to understand that, because it is an obligation on you. There are restrictions on processing that come into play as well which, if you don't know how the AI system is processing the data, could potentially present an issue, particularly if you haven't obtained the necessary consents.

11:56 - David Brown (Host)

That's all part of the GDPR, right?


That's correct. That's sort of been around for ages. I worked in ad tech back when the GDPR came into effect, and I'm sure you'll remember there were many, many hours and much ink spilled talking about that. I think we could still have arguments over GDPR and whether it's actually any use at all in the world of tech. But my question, or my thought, I guess, is: I wonder if it isn't.


The black box nature of AI almost makes it better in my mind, because nobody has access to it. If you know what it's doing, then somebody can figure it out, and they can figure out where to get access to information or where to break the chain or whatever, whereas with AI, when it just goes into the black box, it's probably beyond human comprehension anyway. So even if we did get it to try and spit out some sort of a log of why it's doing what it's doing, nobody could understand it anyway, because it's gone so far beyond what we could even figure out. That almost feels safer to me than what we do at the minute, I guess, is what I'm trying to say.

13:13 - Will Charlesworth (Guest)

If you have the necessary consent from the data subject, from the person, the individual, then that may not necessarily be an issue. But I think it's important to be able to explain to your data subjects, and there are various ways that you can do that, how their data is being processed if it's being put into an LLM.


But there are some unresolved issues in terms of what that data actually looks like once it's been churned by the AI system itself.


So, yes, there are some issues to consider in terms of how the AI system is using that data and what consents you can actually reasonably get from people. One is perhaps not so much about the storage of the data, but something that's relevant when you look at the outputs from the AI system, and one of the issues that comes up is how much AI is relied upon for decision making, in terms of, say, discrimination, diversity and equality. Because AI systems are increasingly being used for decision making, there is a risk of skewing outcomes towards, or ignoring, certain protected characteristics such as age, race and sex, and that can expose an organisation to potential liability if there isn't human oversight around it. So the issue of how AI deals with individuals and their data is a hot-button topic at the moment, certainly.

15:38 - David Brown (Host)

Yeah, 100%. And that whole topic is interesting, and I've talked about this a lot. This is sort of season two, and it feels like the season two discussion is going to be a lot around, you know, how does it work, and what's this whole concept of trying to eliminate bias and things like that? And for me it's: whose bias do you choose? Because every single human is biased and has biases. So whose bias is better than someone else's bias?


And what the AI does is reflect back the data that it's given, so it gives us an accurate picture, probably more accurate than we want, of what we're actually like. Now, the way to fix AI is to change the data that's going into it by doing things differently, so that the AI starts to give us different results. It's not, in my mind, to create some fictional world that doesn't exist and have AI then try and tell you that that's how it is, because it's not how it is. And I think there's a danger in that. It's like what happened to Google with their thing, when they said, give us images of the founding fathers, and it came out with this multiracial kind of thing. That's where that breaks down. That's the perfect example. You can't force one thing to overwrite history or to give a totally different picture than what's there.


And for me, that feels like that's going to be the topic of the year: talking about how we manage that. Do we want a bunch of tech bros in Silicon Valley deciding what those biases should be? Or do we want a Western liberal view of that? Or do we want a Far Eastern view, or a Southern Hemisphere view? I don't know what the answer is, but that's where it feels like the risk is to me, because now what you're getting is battling political views, and then people are putting those views into those models and having them give results based on that. I'd kind of prefer it to have no filter forced on top of it, and for us to be able to see what the data is actually saying, but I don't know if you have any thoughts on that.

17:59 - Will Charlesworth (Guest)

I mean, that's a really interesting point. And, as you say, the AI models are likely to differ depending on where you are in the world and depending on local laws, territories and cultural norms within them. Yes, ideally you want the AI to be as neutral as possible, but I suspect, inevitably, it can only reflect the data that is fed to it, to a certain extent. So I suspect that your AI is going to look very different in California, and perhaps here, than it does elsewhere, in other jurisdictions, and it wouldn't surprise me if there are restrictions and other things put into AI systems used in different territories to comply with probably stricter laws.


I mean, I think it highlights, insofar as it can help, why it's important to have a human review or oversight of the decisions that are being taken, insofar as that is possible. And this may date really quickly, and it's not just me being a lawyer saying, well, clearly the individual is the one, because I understand the power of AI, and I understand how, currently in the legal sector in particular, it's processing and analysing documents and all of that data far quicker and in far more detail than human beings could; otherwise you would need a huge room full of paralegals to process that information.


But what's still important is the interpretation of that by a human being, just as a safeguard, if nothing else. But also there are things that the AI can't do. There are a lot of soft skills that it can't do, and that's why, so far, while a lot of jobs are vulnerable to AI, it's perhaps not moving as fast as we think, because there's still a lot of soft skills involved, like client relationship management.

20:45 - David Brown (Host)

To continue on that thread a little bit, it's interesting that you say that, and I've mentioned this in quite a few shows recently because I find it fascinating: there was a study where they took a set of ethical questions and gave them to a bunch of humans and to a bunch of AIs, and then they had people rate who they thought gave the best answers, and AI won in both of those. So it gave what real people considered more ethical answers, and more empathetic answers, than the actual humans did. And of course, if you take something that has, you know, a billion people's life experience put into it, and then you ask it a question about how you should react to a situation, or what's the ethical stand on this, it's taking all of those experiences together and giving you an answer. Whereas on an individual basis, I only have my personal experience from my life to base my empathy and my ethics and everything on, and you only have yours, and so we may have very different answers. But if you start to pull everybody together, then you start to get something that comes to the middle, right? You get the mean. And I find that fascinating. And again, it gets back to some of these other things where I think AI would have an advantage, because it takes away that individual viewpoint. Of course we're all in a bell curve, but somebody may be an outlier, and it eliminates that outlier status and keeps everything in the middle. It'll make everything really boring, because we need those people on the edges, right?


But sometimes, I don't know, I'm trying to be glass half full about this. Going back to the HR thing and that sort of stuff: I totally get that it has issues for what we want it to do. But in some instances it may actually be better, because what it would do is take an extreme reaction from a human and go, well, actually that's not the best reaction, because this would be the best course of action. I don't know, it just seems interesting. I don't know if you saw some of the articles about that stuff. I'll try and find it and put it in the show notes again, and I'll send it to you separately if you want to see it.

23:15 - Will Charlesworth (Guest)

Yes, please. And I agree that AI is moving extremely quickly and, to an extent, yes, it will produce those answers based on, as you say, a billion people's experiences rather than just one person's experience. Being glass half full, as you say, it's quite easy to say, well, of course AI could replace pretty much everything, and the only thing I should go and do is something involving highly skilled manual labour, because at the moment that's not something AI has particularly been able to grasp, on a very easy level anyway. But I still think, currently, what I'm seeing with clients and with friends and other people in the legal sector is that AI is being used in a very positive way to assist with tasks that are either more mundane, or as a backstop, as a check, or even as a starting point for a lot of things. So, for example, AI can be good at research, albeit we all saw the report of the US lawyer that relied too heavily on AI in his legal submissions; it invented cases and invented quotes, and he was caught out by that. But Lord Justice Birss, an IP judge in the UK, used AI relatively recently to summarise some points of law within a judgment he was giving. In his own words, he called it jolly useful. Love it, only a judge could come out with that. And certainly from the legal perspective, the Law Society has said it presents incredible opportunities for us to use AI as a tool that's extremely useful, not necessarily something that's just intended to replace anybody and everybody, because at the moment it's just not good enough. It will obviously get to that point, but at the moment it's just not good enough.


I think there's a desire from people as well.


I think people still want to know, and it may be a flaw in our nature, but I get the impression that people still want to know that there was a human involved somewhere if a decision is being made about them, some sort of oversight on that, even if it's just to tick that off. So, for example, in the HR example, even if an AI is making a decision, I think you still have the right, in law anyway, to have that overseen by a human being, and to know that, even if that human is flawed, which we all are, we all carry our own prejudice and our own bias, but also our own experience as well. If that's still implemented, I still think there's a desire for that. But as things move incredibly quickly, I will be seen as a dinosaur, and those younger than me will be saying: we trust the machine, we don't trust the human.

26:51 - David Brown (Host)

Well, that's it. Yeah, that's a great point. And how long? So here's the question: how long before we get to that point? Because you can see somebody in a large organisation going, yeah, that lady Kathy in HR is a real bitch and she doesn't like me. I don't want her making a decision about me, I'd rather have the AI do it.


And I guess the corollary question that goes with that, and maybe this will segue us into talking a little bit more about how you're using it internally, is how long before you kind of say, well, actually, you know... I know that's not today. I have lawyers in my family, so I talk to them about this all the time, and I get that paralegals do a lot of other stuff that's needed by humans at the minute. But how long do you think it'll be before we get to the point that AI will be good enough that people will actually start to trust it, maybe more than humans, and you'll actually say, well, I'd rather an AI do it? Frankly. Not to put you on the spot or anything.

28:04 - Will Charlesworth (Guest)

No, no, no. I mean, I think that to a certain extent people are already doing that anyway. I can quite easily see people asking for an opinion and then going away and putting it through, say, GPT or Jasper AI or any of the other programs, and even Google now uses AI as part of its search algorithm, to validate and verify. With the speed that AI is moving, I can see things getting to a point where, as the AI gets more accurate, we're able to rely on it. From a legal perspective, I think you have to adapt, and you have to run with the technology and understand it, because not only are your clients using it, but your competitors are as well. And if there's an advantage to, say, analysis of information or documents, which is one of the things that AI is particularly good at, then you need to be on that curve as well. As to how quickly that will happen, I'm not sure I could say entirely. There are things that slow things down.


So just because the technology is there doesn't necessarily mean that it's going to be implemented straight away. There is going to be a curve of trust in it. In any kind of regulated industry, there are also going to have to be insurers and people who are happy to rely on that information, and there will have to be safeguards in place, I'm sure, in terms of the service that it delivers to clients as well; it will have to be of an extremely high standard. What I would perhaps say is, just because you're relying on the technology, and it could replace junior people, it could replace paralegals, I think there just has to be a lot more work first. I don't know, it's difficult to put a timescale on it, because it would just become really outdated very quickly, I think.

30:38 - David Brown (Host)

And it'll be faster than we think, I suspect, or it'll be way longer. It'll be one of the two, you know what I mean. Either somebody will figure it out and the dominoes will fall very quickly, or it will just get mired in the muck and it will take years and years, and no one will ever do it. That's kind of how I feel it's going to go, one way or the other.

31:04 - Will Charlesworth (Guest)

I agree. I think there is a certain obvious commercial incentive for, say, AI companies to put out lots and lots of material and to hype up the press and journalists about how amazing the AI is, how accurate it is, and how it will take everybody's jobs within the next four to six months, because they want to promote their technology as being the best and absolutely incredible. Whereas, in fact, in reality, how much people rely on that technology, how much it is trusted, and the quality of the data that goes into it are all a little bit unknown. And I would wager that we're probably behind where we say we are, in terms of not only adopting and relying on it for more important and serious tasks, but also in terms of its actual functionality and how much you can actually trust it day by day, I think.

32:13 - David Brown (Host)

Is there a legal GPT that you know of?

32:19 - Will Charlesworth (Guest)

In terms of that, there are companies, say, Thomson Reuters and other organisations, which have legal software, and there are lots of different companies offering legal AI software out there. The main areas are document review, so analysing large volumes of documents and highlighting key potential information, and contract analysis as well, so identifying potential risks and liabilities.


Legal research is where AI has probably most notably been integrated. The legal research tools that we have for searching case law and legislation are now supplemented by AI. That's still at its more elemental levels: the more it's used, the more it learns, the better it gets. So it just relies on time, on people using it, analysing the quality of the results and feeding that back into the machine so it can get more effective.

Predictive analytics is a big area. As I said, the Law Society sees it as a very promising area, where AI can predict the outcome of legal cases, which is meant to provide lawyers with better decision making. There has been a lot of discussion as to whether parties would, for example, in a mediation or an arbitration, so that's effectively like a court process but a private process between the parties, entrust making submissions to an AI bot, with the bot effectively being the judge, to come out with the answer. I'm unsure as to how much that has been used, or the willingness to rely on that, but I can certainly see that's where it's going.

34:40 - David Brown (Host)

Yeah, I never thought about that one. That's an interesting one. One of the other things as well, and I don't know, again, I'd like to get your opinion on this: I've kind of always thought that eventually what we're going to see is what in the database world we would call a federated solution. And of course, they've come up with a special term around AI, which is RAG, Retrieval Augmented Generation, which essentially means that if you ask, say, ChatGPT a question and it doesn't know the answer, you may ask it a legal question, for example.


And, making this very simple, it can go off to a specialist AI in the background, or a mathematics AI, or it could be a chess AI, or whatever. And so when you talk to something, what you're talking to is the conversational piece that sits on top, and it has connections to all these different specialist tools that it can use. So when you ask it something, it doesn't make up an answer; it goes to that AI and says, okay, this AI knows about this topic, and blah, blah, blah. And it feels like that's the direction everything is going now. And again, we've been doing that in the database world and the data analytics world for decades, where we call it federated. You've got all these different databases that sit in different places that can't connect to each other.


If you think about government or something, it's like, well, again going back to data privacy and those sorts of things: maybe you work on the parking team in a local council. Well, you can't see the data that's in, I don't know, the council tax database, because it's not the same database; they're two totally separate systems. And so you say, well, this person can't go over there because of data privacy and blah, blah, blah. So what you do is put a layer of software on top that has access to all those different ones, and it pulls it together and gives it to you in one place so it can answer a question. If you ask, does this person live in the council area, it can go and cross-reference, but it doesn't give the person direct access, if you know what I mean.


So it sounds like that's kind of what you're talking about, because you might end up in a situation where there are even specialist AIs within the legal realm that do different tasks and have been trained to do different things. But you could have one node, one AI that you talk to, and ask it a question like: is there anything funny in this contract that I need to be aware of? And somebody's put a random line about fish in there, just to see if you've read it, and it'll go: well, there's this random comment about fish, I don't know where that came from. That's an old trick we used to do in contracts when I worked for a company, just to see if people read them.

37:55 - Will Charlesworth (Guest)

That's a good one. I've heard of people putting deliberate mistakes in either contracts or in certain content as well, so you can tell if it's been copied or, as you say, whether somebody's actually properly read it or not. Exactly, and that was the old Van Halen story from years ago about the brown M&Ms.

38:19 - David Brown (Host)

I don't know if you know that story, but one of the rider clauses in all of their contracts was that they had to have a bowl of M&Ms in their dressing room, and it had to have no brown M&Ms in it.


And they used it as a test. I mean, they thought it was a funny thing, but they always knew whether someone had actually read the contract or not. So when they showed up, if they had just an ordinary bowl of M&Ms, or didn't have any M&Ms at all, then they knew that no one had actually read the contract. There you go, showing my age a bit, but there you go.

38:54 - Will Charlesworth (Guest)

No, yeah, I understood the reference.

38:57 - David Brown (Host)

It's fine for you, but for someone like me who runs a small business and doesn't have a lot of extra money: a lot of times, if I need a simple document like an NDA, I'll go on one of the legal websites and just buy one for 30 pounds or whatever it is. Okay, it's not super custom to my business, but it covers me for 99% of what I need covered. What's interesting now is that if people send me contracts, I can literally use an AI tool and ask: is there anything in here that I should be aware of?


Anything that's unusual for this type of agreement. Sometimes it comes back with a list of bullet points but says, basically, no, this looks like a standard agreement, and you go, okay, great. But sometimes there is stuff in it, where it's like, well, this type of agreement doesn't usually have this type of clause, and you're like, oh, interesting. So for regular people too, I think that could be really useful for trying to understand the documents they're being sent. Yes, I can see that.
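As a rough, non-AI baseline, the "anything unusual?" check amounts to comparing a contract's clause headings against what's expected for that type of agreement. The expected-clause list below is invented for illustration; a real review tool would be far more sophisticated.

```python
# Clause headings typically found in a simple NDA (illustrative only).
EXPECTED_NDA_CLAUSES = {
    "definitions", "confidential information", "obligations",
    "term", "governing law",
}

def unusual_clauses(headings):
    """Flag headings that don't normally appear in this agreement type."""
    return [h for h in headings if h.lower() not in EXPECTED_NDA_CLAUSES]

doc = ["Definitions", "Confidential Information", "Fish", "Governing Law"]
print(unusual_clauses(doc))  # ['Fish']
```

This would catch the "random line about fish" trick mentioned earlier, though only an actual language model could judge whether a plausible-looking clause is substantively unusual.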

40:05 - Will Charlesworth (Guest)

So, yeah, both of your points I can see. What you're saying about the federated solution, about incorporating lots of different AIs talking with each other, I think is very important, and it's performing a similar service to, say, your traditional lawyer. You might have a commercial lawyer, and you'd go to them with a few different queries, and then they would go into the firm: there may be an employment-type query, a specialist IP query, a corporate share-type query or something like that. They would be your one point of contact; they would go and find the information and present it to you. And I can see that it is better to have a specific AI solution which is extremely good at one thing, rather than something general which is okay but will hallucinate a lot, just create things out of nowhere, or simply isn't able to cover it. And that has been a trend in law as well. Because clients have so much choice now, they're able to go online, and I've had clients like this in the past: they will have different lawyers, maybe at different firms, for different aspects of the various things they do, because they want to pick and choose the best particular lawyer for each task. Now, I must say that hasn't happened at my current firm, Keystone, because everybody is very senior, experienced and specialist at what they do, so we've been able to connect clients with people within the same firm. But from a consumer's, a client's, point of view, it's very good that you're able to go out and do that to a certain extent. And if you have what used to be called a consigliere, one point of contact as a lawyer, they would then normally go out and fish for that information. So I can see that happening.


One of the things you also have to be careful of, to bring it around again, is confidentiality: how much data, and what data, you're putting in. Data confidentiality is something that needs to be considered with AI, particularly in legal cases, where you can have some extremely sensitive commercial and other information. So you can't necessarily just go into ChatGPT and start putting in all of this data, because, whilst you might say it's all being churned around and nobody knows what's happening, you don't know where that data is going to end up and how it's going to be used. To go back to Getty Images and Stability AI: Getty found that Stability had used its images, because Stability started to generate AI images which had the Getty logo on them, having analysed so much of Getty's content. Yeah, I remember that. But it was kind of like looking for the fish in the contract.


Effectively it was the same tell. Ah, I see, you've clearly learned that from somewhere.


And if it had our watermark on it, then yeah. Which is hilarious, by the way. Yes, all I'm going to say in that case is that there are some really interesting legal arguments being advanced from Stability AI's point of view in terms of their defence. It's a very difficult one, because a lot of defences rely on the use you're making of the work being non-commercial, but obviously with AI.

44:04 - David Brown (Host)

It's going to be commercial use. I'm conscious of time, and I do want to talk about the APPG at some point, but we have a couple of quick things to cover before then. Oh, now I forgot what I was going to ask... oh, I know. Thinking about the legal stuff and working with clients: I know you also mentioned that you do some sort of an AI audit or something like that. What's that about?

44:31 - Will Charlesworth (Guest)

Yeah. So, as I was saying, there are lots of risks and, to be honest with you, I don't blame clients for not being aware of them; that's why you have lawyers, to be quite geeky and to think of these issues. So the AI audit effectively goes in and maps the terrain as far as possible, understanding what technology is being used, either by an individual creator or by an organization, an advertising agency, a marketing agency or others. It asks all those questions and maps exactly what's being used and where, because most employees might not realize how much they're relying on AI, and it's always interesting to find out what's happening. So you go in there and map: what's being used, by whom, what data is going into it, and what is being produced by the AI.


But how is that being checked, and how is it being deployed, from an oversight point of view? So we go in and just talk to the client about that. We can go in and do all of the really boring stuff, like mapping the controller-processor relationship for data protection purposes. We also think about it from an employment aspect: whether the use of the AI is compliant with current employment law and regulations, because employment law changes almost on a daily basis, which, for my employment colleagues (I'm not an employment lawyer), makes it extremely exciting. But, as any HR manager or anybody in a position of responsibility for other people has found out, it can be a bit of a minefield if you're not 100 percent compliant, or if questions are asked and you're not quite sure how that's going to work. Exciting wasn't the word that came to mind.


But that's okay, we can run with it. Dramatic and fear-inducing, then, if you don't have somebody you can rely on to advise you in that respect. So, yes, the AI audit basically goes in, understands what you're doing, and helps you to navigate that field. Because, to be honest with you, the existing law is having to keep up with AI, and at the heart of AI is a software system. Whilst the algorithms may be mysterious and unknown to a certain extent, there are lots of things we can put in place: understanding, for example, what terms and conditions you're using the AI systems under. Some of them sound really nice and generous, ChatGPT's says it's all for you, don't worry about it, it's all fine, but they push all the liability back onto you in terms of ownership and liability, and some are more restrictive. There are other systems that will claim ownership over part of what's being produced or generated. And, to come back to the data point of view, but also from an intellectual property point of view, you need to be able to map out what's being produced, by whom, and whether that's covered by your existing policies internally.


Or, if you're an individual, a sole trader, somebody who is commissioned or contracted to do things, are you protected in terms of your own IP?


Because it's the most valuable thing that pretty much any business has. Whether it's creative or non-creative content, the data and information a business holds is its most valuable asset. So it's important to be aware of it and to have protections and exclusions in place where those need to be. Because, whilst it is early days and there are a limited number of cases that we're aware of going through the courts, that will change. Traditionally we're about six to nine, maybe twelve, months behind the US in terms of court cases, litigation and liability, and the US is having more and more cases going through about the role AI plays within the creative industries and within businesses as well. We will catch up very quickly, and I just want to make sure that clients are best advised so they can manage and mitigate that risk. You can't eliminate risk, because it's a part of doing business.

49:21 - David Brown (Host)

Yeah, of course, but you can manage and mitigate it.

49:31 - Will Charlesworth (Guest)

So how long does the audit take? Normally, it depends on the size of the business. It initially starts with a conversation, so we'll have a discussion about how the business operates and what is essentially in place at the moment, and if it's only limited use, it can be a relatively short process. If it's more complex, then we can bring their IT teams, their marketing teams and other people into the process. So it's a bit of a how-long-is-a-piece-of-string question: it can be relatively short, but for a more complex organization it could be something more involved.

50:09 - David Brown (Host)

But from, say, the director's point of view, they want to be certain that they have ticked that box to make sure they're still compliant. Okay, and another thought that came up while we were talking about that, again before we get on to the APPG. I don't know this a hundred percent, so anybody out there, don't quote me on this, but I was talking to someone the other day who's very knowledgeable about this sort of thing, and he mentioned that if you use the API, so if you connect to ChatGPT, for example, through the API rather than typing stuff in and copying and pasting, it actually doesn't store any of the information passed through the API. So if you have a question about a document that has IP in it, for example, and you're asking through an API connection, it doesn't store or use any of that data for training; it only uses its existing training data to do the analysis. Which is an interesting little wrinkle, and I think for a lot of businesses that could be hugely important. So if you're using a third-party tool, or even if you get an IT team to build you an internal tool with your own interface to it, that could be a way to protect your information.


I don't know if that's true or not. I'm going to be at the AI Summit for the next two days, and I'm going to try and wrestle someone from OpenAI to the ground and make them answer the question for sure, just so I don't give anybody any wrong advice. I'm not a lawyer and this doesn't constitute legal advice, but that's my understanding. So, if it's true, that could be a good way for businesses to think about having that extra layer of protection: at least they know their data is not being recorded and used, it's just being analyzed in the moment. Certainly, if you can create that closed environment, much like a private blockchain, a closed and private blockchain, for example.
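For illustration, a document-review call of the kind described might be assembled like this. The payload follows the general shape of a chat-completions request; the model name is a placeholder, and whether API inputs are excluded from training by default is exactly the policy point being discussed, so check the provider's current terms rather than relying on this sketch. The code only builds the request body; it doesn't send anything.

```python
import json

def build_review_request(contract_text: str) -> str:
    """Assemble (but don't send) a chat-style request body asking an
    AI service to flag unusual clauses in a contract."""
    body = {
        "model": "example-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system",
             "content": "You review contracts and flag unusual clauses."},
            {"role": "user",
             "content": "Is there anything unusual in this agreement?\n\n"
                        + contract_text},
        ],
    }
    return json.dumps(body)

payload = build_review_request("1. Definitions. 2. Fish. 3. Governing law.")
print(json.loads(payload)["messages"][1]["role"])  # user
```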

52:13 - Will Charlesworth (Guest)

If you have that closed environment, almost like a sandbox, that you have full control over, where nothing goes outside of your organization and the AI is just using its analytical tools on that data and producing a result that essentially goes nowhere, I can see the use of that. If that was able to ensure confidentiality, that the data went nowhere, and the client was happy for their data to be analyzed in that respect, I can see that being a very, very positive way forward. And I would encourage you to wrestle a person from OpenAI to the ground; they need to be held very accountable for a lot of things that are being released.

53:02 - David Brown (Host)

So watch the news tomorrow, definitely. We'll see. Some American guy wrestled someone from OpenAI to the ground at the AI Summit.

53:12 - Will Charlesworth (Guest)

That'd be hilarious if nothing else.

53:16 - David Brown (Host)

Okay, so I do want to talk about the APPG. Again, maybe for those people listening who aren't in the UK and who haven't worked with the public sector like I have for the last decade, if you could give a one or two minute overview of what that is. And then I'm really curious: what is it you actually do, what do you talk about when you meet up, and how does it actually work?


So if you could just share a little bit on that, I think that'd be really interesting. Sure, yeah.

53:51 - Will Charlesworth (Guest)

I've been involved, as I say, with the APPG AI for about a year now, and for me it's been a fascinating journey. So the All-Party Parliamentary Group itself examines the impact of AI across various sectors, and what it intends to do, as it says, is foster a dialogue between policymakers, industry experts and academics. It collects a lot of information, and it meets about once a month, although, because Parliament will be broken up, I think the next meeting is actually in September. It aims to inform legislation, to promote ethical AI use, and to look at the implications for society and the benefits of AI for the general public. I was introduced to it through Projectus, a technology company I'm associated with, as I say. The group meets about once a month and collects lots of information and evidence, so it invites leading figures, and each meeting has a different focus. For example, on education, you had leading figures from the world of education; one of the leading people from, say, Duolingo was there, along with other people adapting technology to education. And you have the creative industries.


One of the most interesting meetings I was at was on generative AI and its impact, because that obviously is foremost in the minds of my creative and artistic clients in particular. The head of AI for Sony Music was there. They had a representative from DACS, the Design and Artists Copyright Society, which represents artists, ensures they get fair pay and royalties, and protects their rights. And they had people from other technology companies and organizations, so you're hearing different perspectives from different sides of the technology line: those that are creating the technology, adapting it and using it, and are very excited to do that, but also the people, industries and organizations that are directly impacted by it. So, for example, in that generative AI meeting, there was reference to a report that came out earlier this year from DACS, which assessed the impact of AI on the creative industry, and one of the themes that came out of that meeting was the need for transparency. What original, copyright-protected work has been used to train AI, and has there been, or is there going to be, fair compensation for that use? DACS in their report said, I think, that around 84 percent of respondents.


So artists and creators would sign up for a licensing mechanism whereby they would be paid whenever their work was used by AI. So I think people are open to the reality that AI will consume your data and your information; all they want is the transparency to know when it's being used, and ideally to consent to it, or not, which would be the normal situation. If I came and said, can I use your graphics, or can I use your music for this, and you said, that's fine, you can pay for a licence, you'd have various restrictions around that to protect the integrity of your work, et cetera. And if that hasn't happened, then ultimately it's an infringement of copyright. Ideally, what the artists want is just fair compensation, because you can see that AI models are valued extremely highly, and they will be making commercial use of that training data, of that input data.

58:26 - David Brown (Host)

I wonder, just to jump in. It makes me think of a fascinating conversation I had with a guy about a year ago, talking about medical records in the NHS. For years and years, even if you wanted to see a copy of your medical records, you couldn't get it, because the doctors were like, well, no, they're our records, they're not your records. And particularly over COVID, there was a huge step change in the view of the population, and I'm going to say normal people, people outside the NHS, that it's like: hang on a minute, no, that's my data, it's about me, so it's my data, not your data. Actually, we need to give you permission to see my medical records, not the other way around. And he said they now have an app where you can actually see your medical records; you can use the NHS app and some of the other tools to get access to them.


But he said what caused that was a change of view: people started to realize, through social media and online advertising and everything else, that, hang on a minute, that's information about me, that's my personal information, so I own it, and if you want to use it, you have to pay for it or you have to ask me. And it sounds like we're getting towards that same thing here: okay, it's fine, you can license my data. I don't know how it would work, but I could see something almost like Spotify, which is maybe not the greatest example, but it's the same kind of thing. And I think we might get to that point with personal information as well. As an individual, you'd go: okay, well, you've used some of my personal information to, say, target ads at me, so if you want to target an ad at me, you have to pay me part of the cost of that ad, for example. So that's quite interesting.


So you get these presentations from people, sorry, that was just a little aside. You get these presentations and all that sort of stuff, and then does the group write that up and make a recommendation to government? I mean, I know it's an all-party parliamentary group, so it's cross-party, and representatives from all the different political parties are part of the group, and that's part of the reason for it. So do you write papers, do you make recommendations? Or is it literally just a group that, because of who's on it, people want to come and talk to, and so it's a good way for politicians and, I'm going to say influencers, but you know, people within the industries, to get information that maybe they couldn't normally get?


I think it's a good way of highlighting important issues. In terms of the politicians that attend, the value they get is, yes, direct exposure to people who are leaders in the industry, people they want to talk to, because inevitably the evidence given, what's been said by various people, will be fed back into parliamentary discussion. If there's a private member's bill or something like that, then that information is all fed back, so it's an opportunity to ask questions of the people that are there. It's all recorded, and what's been said is all publicly available.


I think the primary value is to the politicians: to understand exactly where we are in terms of the technology and the various points of view of interested parties.


Because, as I say, particularly on generative AI, there are people falling on different sides of the line in terms of AI, its usefulness, its application and what should happen.


I mean, the one common theme that came out of it was transparency in terms of what data has been used. But it's a good opportunity just to raise questions and raise issues as well, because each of the people invited to speak has five or ten minutes to give their own talk, their own representations, and to feel that they are heard, outside of the usual social media and other statements that they might make. And, from my point of view, it's just extremely useful to be able to speak to those people. Either you can raise a question or you speak to them afterwards, and you feel part of that dialogue. And, yes, you're hearing things perhaps a bit before they're generally released in press releases or on social media, so it's a little bit of an advantage in understanding what's happening within that space. Sounds great.


Sign me up. I can send you a link to it.


This is my own personal APPG that I try to pull together and share with everybody. So I know the feeling. Thank you very much. We're just now a couple of minutes over an hour, and I'm conscious we're getting on in time. I assume people can just go to your law firm's website, which is keystonelaw.com, and get information if they want to do an AI audit or anything like that. Is that right?


That's right. Yes, go to the Keystone website, find my profile and contact me, and I can give you all of the information you need about that and chat to you about it.


Perfect. There we go, everybody. If you want your AI audit, give Will a call and he can help you out. Will, thanks very much for your time. I really appreciate it. It's been a fascinating conversation and, yeah, thanks again.


My pleasure, David. Thank you very much again for having me. It's been really great to talk about these issues. Cheers, we'll speak to you soon. Bye-bye. Bye-bye.



