Welcome to Creatives WithAI™, the weekly show that brings you AI education, industry insights, and straight-talking reactions to how AI is impacting creative businesses.
This week, we are exploring the intersection of AI and Intellectual Property with our special guest, Kayleigh Nauman, the UK's first North American Intellectual Property Attaché and Policy Advisor.
In this insightful episode, we delve deep into the complex world of AI and IP, examining the influence of AI on IP laws, the role of education, and the need for cross-industry collaboration. We also tackle pressing questions surrounding the ethical and legal implications of AI-generated outputs and how governments and creative industries are navigating the choppy waters of copyright law in the age of AI.
Join us as we discuss the potential impact of political shifts in the US and UK on AI regulations, the role of the EU, and take a peek into the future of AI and its impact on various industries.
Whether you're a tech enthusiast, a legal professional, or simply curious about the intersection of AI and IP, this episode is a must-listen! So, grab your headphones, sit back, and let's dive into this thought-provoking journey into the future of AI.
Links relevant to this episode:
Thanks for listening, and stay curious!
//david
00:00 - David Brown (Host)
I mean, I think that's actually a good place to start. Why don't we just, you know, start from there? Yeah, obviously, everything that we're saying is your opinion and not the opinion of the UK government or anything like that, and it's certainly not from my standpoint. But anyway, Kayleigh, welcome to the show.
00:17 - Kayleigh Nauman (Guest)
Thanks, I'm really excited. I know we've been talking about doing this for a while now, so apologies for the grand delay, as it were.
00:26 - David Brown (Host)
I know you're super busy. Your schedule is way busier than mine. Maybe if you could just give everybody a little bit of background on who you are, what you're doing now, and kind of just a couple of minutes on how you got where you are, and then we can pick up from there.
00:41 - Kayleigh Nauman (Guest)
Yeah, definitely so. I am the United Kingdom's first-ever North American Intellectual Property Attaché and Policy Advisor. So that means that I work for the United Kingdom's Intellectual Property Office at the British Embassy in Washington, DC. It is my job to be an expert on all things US, Canada and Mexico IP for the British government. So, I'm not a British IP expert, I'm a North American IP expert, which
01:12
I know is sort of an interesting nuance, and it confuses folks quite often. My role is very much in the policy space, so I was hired to help negotiate trade agreements on behalf of the United Kingdom with the US, Canada and Mexico bilaterally, not all at once. So that is what I mostly do. I do a little bit of business support, and then I track all policy developments in the IP space in all of North America, report back to the British government, and analyse potential impacts.
01:42
Before this, I worked for a member of Congress, so I actually come from a legislative policy background, from the Hill.
01:49
Prior to this role, that is, which is also part of the appeal in having me: I know how government things work, most of the time. So yeah, I came to this, like I said, from the Hill. I was actually working really in the trade and international space, a little bit in the IP space from a telecom perspective, but mostly from a trade perspective. So I was a foreign policy person before I was an IP person, but I've been in this role for four years now. It's been really interesting when you talk about the creative space, because when I first started, everything was copyright, safe harbours and intermediary liability. It has slowly morphed. There were certainly discussions of AI at the time, but those have morphed dramatically in the last four years, from a policy perspective in particular. So I kind of follow the winds, I guess you could say. Whatever people are talking about is suddenly what I need to be paying attention to, though I have a team of one, so periodically something pops up that's news to me, but then I get to learn, so that's always exciting.
02:58 - David Brown (Host)
Yeah, and you were born in Wyoming.
03:01 - Kayleigh Nauman (Guest)
I was born and raised in Wyoming.
03:03 - David Brown (Host)
Which is amazing in the mountains.
03:05 - Kayleigh Nauman (Guest)
I know, which is a beautiful place but a very weird place. Most people are like, "You're the first person from Wyoming I have ever met", which I know. I think there are less than 500,000 people in the state, so I'm one of the rare Wyomingites who's made it further than Colorado.
03:25 - David Brown (Host)
There's a... I'm trying to think of what his name is. There's an influencer on YouTube, which is a shout-out; actually, that isn't true, this is an endorsement. It's called Dry Creek Wrangler School, and it's an old guy who basically just records to the camera. So he'll sit down and do sort of 15, 20 minutes, and he's just an old cowboy who basically trains people on how to ride horses and how to camp and all sorts of things, and it's amazing.
03:56 - Kayleigh Nauman (Guest)
One of the best YouTube channels out there. One of my favourites is that all of the Wyoming ranches now will sell experiences where you can go help wrangle the cattle, which is the worst part of their job. So now people pay them a lot of money to come do the hardest part of being a cowboy.
04:13 - David Brown (Host)
Exactly.
04:14 - Kayleigh Nauman (Guest)
Absolutely genius.
04:16 - David Brown (Host)
That sounds amazing, yeah, so he's like everyone's kind of granddad, and he spits advice, which is, you know, like don't go quietly and all sorts of things.
04:27 - Kayleigh Nauman (Guest)
So I just got turned on to... Like "pull yourself up by your bootstraps".
04:31 - David Brown (Host)
Yeah, yeah, yeah, yeah. And anyway, sorry, that's a bit of a little Wyoming chat. There we go.
04:36 - Kayleigh Nauman (Guest)
I love that.
04:38 - David Brown (Host)
So I wanted to have you on, obviously, because IP, copyright, all that sort of stuff around AI, everyone is worried about that, and particularly people who work in the creative industries, I think. Because in the technical industries, we've had machine learning for decades, and that's a very well-trodden path now, and the whole world runs off machine learning actually. But these new sorts of generative AI tools and the large language models have really encroached on the knowledge workers. You know, we've had revolutions in the past, like the Industrial Revolution, and the invention of fire and all this sort of stuff that really moved society along, but nothing's ever come for the smart people before.
05:27
And I think that's what's really different about what's happening now, and I wanted to ask, obviously not from a legal standpoint, but just from working in it and dealing with it every day and seeing how this stuff happens at the higher levels. I guess my first question is: how are people thinking about it, and how seriously is it being taken at the governmental level? And then what are the general thoughts around it, about maybe what's the best way forward? Yeah, definitely.
06:08 - Kayleigh Nauman (Guest)
So this is an interesting one. The answer is it's being taken very seriously, and people are thinking about it from a lot of different facets in the US. One of the big issues, though, is that you need a lot of education, especially for members of Congress, who any kind of policy will really need to come from, right? There has certainly been a big push to have a lot of really educational work done on AI, first on the Hill, which is very smart, right. You know, the first thing that a staffer pointed out to me was: we don't want a repeat of the Mark Zuckerberg hearing, wherein a bunch of senators were asking him how to use Facebook. Right, they were not understanding even sort of the basic level of operability, so they're definitely trying to avoid doing that again.
06:54
There have been, I think, getting close to dozens of hearings on AI in general, broadly, on the Hill, and certainly at least three or four specifically on AI and intellectual property, on both the House and the Senate side. Like I said, most of those thus far have been 101s, so they've been sort of broad across different sectors. There was one on AI and patents in the Senate, and one on AI and copyright. But then at the working level, while Congress makes the policy, right, the US Patent and Trademark Office and the US Copyright Office administer the policy as far as AI and intellectual property is concerned, and at the working level there's been a much more in-depth review of what that means. So the US Patent and Trademark Office has done, technically, three calls for comment on AI and patents.
07:43
In particular, this was following the DABUS case, wherein the inventor of DABUS, whose name I just totally blanked, filed a patent application in the US saying that his AI model was the inventor of, I think it was like a cup for someone with an illness that would make them shake. The US ruled that he couldn't, that only a human is allowed to invent per the US Constitution. The US Copyright Office ruled the same thing when he did the same, trying to register a copyright for the same thing. So that is to say, that particular case was really the impetus, I think, for a lot of us. Similarly, the UK, you know, kind of ran its first AI and IP review separately following that case, and very much from that standpoint initially: just broadly, what is this and what should we be paying attention to?
08:44 - David Brown (Host)
Similarly, the UK says only a human can invent something, right? Can we just stop, and can I jump in right there just quickly? I know there's a lot of stuff, and a lot of this stuff is really detailed, but one thing just jumped out at me when you were talking that might be valuable to sort of separate, which is: what's the difference between intellectual property and copyright? Because I think there's a difference between those two, and we don't need a law class, but you know what I mean. When we say IP and when we say copyright, I think we mean two different things, and maybe let's just be clear on what those are.
09:22 - Kayleigh Nauman (Guest)
Yeah, so I would actually say, when I say intellectual property, I'm looping in copyright. So, copyright is a type of intellectual property, as are patents and trademarks. There are some other ones that I think we will skip because they're not relevant to AI; we don't need to confuse folks. So certainly, if I say intellectual property, I mean basically, in this instance, patents and copyright, but I can certainly try to be more specific.
09:48 - David Brown (Host)
So it's patent and copyright that are different. Got you? Okay, that's fine. And I think it's really interesting, and this is probably the discussion, I would imagine, that's happening everywhere, which is: what is a person, and how does that qualify? Because the thing that I always find confusing, and maybe you can help me understand this, is that companies are legal people. So if you work for a company and you sign a contract that says you're an employee of that company, then you assign the rights to any IP that you develop for that company to that company, and that company, as a person, then owns the rights to whatever it is that you had.
10:31
And I don't see where AI is any different than any other employee of a company. Or, if an employee at a company uses a tool to create something, like Microsoft Office, Microsoft doesn't have any claim over anything because somebody used Microsoft Office, or Microsoft Project to manage a project or whatever. So I guess I don't see where AI is any different in that. Is that the general feeling, or have I totally missed the point?
11:04 - Kayleigh Nauman (Guest)
So interestingly, from a stakeholder perspective, this has totally come up, and a few stakeholders have argued that at some point AI will be inventing things and we should treat it like a corporate entity.
11:15
And that's already in US law. From a government perspective, there is no question about what a human is; that isn't even part of the discussion. So the most recent US Patent and Trademark Office call for comment was actually aimed at sort of defining how many steps removed a person can be from an AI output before that output is just no longer protectable with intellectual property, period.
11:39
So the question isn't, you know, how do you protect the output; the question is, is it even protectable? The argument right now is there is still a human influential enough in the process that, if the output is protectable by intellectual property protections, that protection applies to the individual who's responsible for managing the AI. But at what point is it just no longer even an intellectual property output? That call for comment wrapped up this summer, and we don't have the final report out yet, but it will result in guidance from the Patent and Trademark Office, I think inclusive of whether or not to disclose if AI was involved in an invention and, if so, what that format needs to look like and what other information should be included. The Copyright Office has already required that for copyright applications. I feel like I should also say real quick, for folks, knowing that you're based in the UK: in the UK, copyright is an automatic right, meaning you don't file for any protection.
12:40
In the US it technically is as well, but if you want to be able to enforce it, generally speaking, you need to register your copyright with the Copyright Office. It's not an application, it's a registration. But there's already been guidance issued by the Copyright Office basically just saying, if you've used AI at all in your creative process, you need to disclose that you did, and the AI parts are typically considered by the Copyright Office not to be protectable by copyright. There was actually a really influential case that the Copyright Office ruled on, where it was a graphic novel that the author had written, and she had used, or sorry, they had used, a generative AI for the images. The Copyright Office initially revoked that person's copyright registration, saying it was generative AI and therefore not a human creation that could be protected. And then, after some pushback, they said that the novel itself could be protected by copyright, but not the individual images; that those were open source.
13:42 - David Brown (Host)
Right, right. And in your opinion, as a person who just works in the business, how do you think this is going to play out? Because... no, I'll just leave it there. How do you think it plays out?
14:02 - Kayleigh Nauman (Guest)
That's a fair question. I think we're not yet at a space where AI is really smart enough to do anything on its own, right. It's still very much operating on whatever has been programmed into it, even if it's many steps removed. And you know, there's been an argument from tech folks for a while that it's a black box, we don't know how it's analyzing the inputs and giving you an output. And now they've said, actually it's not; it's hard to track, but it's trackable, right. And I think, to the extent that it is trackable, that is a pretty good indication that we're not yet at a space where the AI output is going to really be something as unique as maybe what you would consider to be coming from a human mind. So in the immediate term, and even in my longer term, I don't think that governments are going to be thinking outside of that space. Right, it's going to be: is it too many steps removed for this to be protectable by intellectual property protection, and what bits of information do we need about the AI involved in this process from you to ensure that that is indeed the case, right? And in the future,
15:10
certainly that could become a much broader discussion, but I think in the immediate term, and we talked about this a little bit the last time that we spoke, right, the bigger question is sort of: what are the impacts of generative AI on creators, versus, you know, is the IP infrastructure sort of robust enough?
15:28
Because the answer that all of us have found thus far is yes, right. IP, especially in the US and the UK, Canada, you know, Europe, the European Union, frankly, Japan, South Korea, Singapore, our infrastructure is really robust. It can handle the impacts of AI, because, truly, in terms of the inventive process, it's actually not anything that new, right, it's the same kind of story. But the actual impact of something that can produce so rapidly and so prolifically is the bigger concern. And to that point, you know, there's been a lot of discussion about licensing, there's been a lot of discussion about right of publicity, and those have been some really interesting, sort of more recent discussions that we've been having in this space.
16:24 - David Brown (Host)
Right of publicity. What is right of publicity? What does that mean?
16:27 - Kayleigh Nauman (Guest)
Yeah. So right of publicity is interesting because it basically means that you, as an individual, have the right to determine whether or not your image can be used in something, particularly something that would have a commercial significance. Right, I sound like such a policy person saying it that way. So you know, if someone wanted to take your picture off your Instagram and use it for a marketing campaign, you would have the right to say, no, I didn't give you permission to use my image. Interestingly, the UK, the US, we don't have this as a federal right.
16:56 - David Brown (Host)
Yep.
16:57 - Kayleigh Nauman (Guest)
In the US, something like 35 or 37 states have a right of publicity, so there's been a big discussion about whether or not that needs to be a federal right, and a draft bill has actually been shared in the Senate with stakeholders. So it hasn't been introduced, but they've drafted an initial bill and are sharing it with folks to see, you know, how to proceed forward on that, and if it's a viable sort of solution for any kind of outputs that could, I guess, be trying to use someone's image without their consent.
17:34
And a big concern, or sort of the initial point of discussion that led to this discussion on right of publicity, was that a lot of artists are saying, you know, with generative AI, quite frequently, originally, to get an output you could even say, I want something that looks like this person or this person's artwork.
17:54
A lot of models changed it so you couldn't do that anymore, but then if you knew the models well enough, you could use the right terminology to get that anyway. So there was this big concern, one, about those outputs looking like something that, say, the artist who did all of the concept work for Doctor Strange would have done. And what's also in that discussion then is, from the same perspective, this artist testified in the Senate AI and copyright hearing. You know, she also said: no one ever requested my permission to use any of my art to train AI, so no licenses were issued, I have received no remuneration for the training of this AI, but the outputs are now commercialized, and so people are now paying for outputs from AI that shouldn't have been trained, and I would not have given consent for it to have been trained, on my own art. So there's a very dynamic interplay between these two sorts of considerations.
19:00 - David Brown (Host)
So what I find interesting about this is there's been a very similar conversation in photography circles, again for decades, because if somebody's standing on the street and I want to take a photo of them, I can take a picture of them and I can use that picture for pretty much whatever, because they're standing in public and there's no expectation of privacy or anything. And this always comes up. I think the typical case that everybody cites is, you know, there's a man in a park taking pictures of kids playing on equipment, and everybody goes, he must be some pervert or whatever, you have no right to take pictures of my kids. And it's like, well, you're all out in public.
19:46
There's, you know, there's no expectation of privacy, and it feels kind of like the same thing. But, again, I'm not a lawyer, so I have, you know, no idea. So it's interesting, the right of publicity. And I guess if I put it in an art exhibit and I'm not making any money from it, that's one thing, but if I'm putting it in an ad for, I don't know, detergent or something, that's a whole different thing, and I'm using it to make money. I guess there are two different circumstances, so certainly.
20:18 - Kayleigh Nauman (Guest)
And certainly, I think, at first instance, at first sort of view, yes, the photography situation certainly seems really relevant here. But in terms of right of publicity, it's finding a specific individual and trying to profit off of the image of that very specific individual, versus sort of more of a, I don't even know, there's a word that I'm not thinking of, I can't remember right now, but it's certainly more of like a piece that you just take sort of to speak to the broader image, not that you're trying to individually focus on someone. The interesting thing about this draft bill in the US, which the UK is following, because, like I said, the UK doesn't have a right of publicity either, is that it only applies to AI. So it's certainly really interesting from that perspective, because, you know, at the state level, where they have a right of publicity, it's broad, right, it's not just an AI output. But if the federal right would only be regarding an AI output, not only would that really limit it, but certainly it could age it very, very quickly.
21:22
it won't be until later into...
22:02 - David Brown (Host)
So yeah, we'll touch on elections later. I do have some thoughts about that and a question about that, so I'm hoping we'll get to it. But actually, let's go there now, I think. No, sorry, I'm going to go back. I can edit all this waffling out. So, while you were talking, and going back.
22:30
Obviously, right of publicity was one thing that I picked up on, and then another part of that, though, and it's a question I have a lot of the time, and it combines a couple of things that you've mentioned. One is it combines sort of the black box idea with artists' art being used to train AI. And my challenge to that, and this is what I guess you always hear, but again it's nice to get a professional's opinion on this, is: what's the difference between that and training an art student at university? Because an art student will go to university. If they're studying painting, or if they're doing a module on, you know, classic art or whatever, they'll be told to go to a museum to find a painting, and they need to duplicate that painting, or they need to paint in that style.
23:20
Or if you're a photographer, you'll be given a piece of work to say, okay, you need to go do a profile picture, and you need to do it in the style of Rankin. What's the difference? Because when a human does that, that then becomes part of our knowledge set, and we use that when we then go and take black and white portrait photos of people. What's the difference? Because we're a black box as well, so nobody knows where we got that from as a human. I guess the core of my question is this: why are we treating AI so much differently than we treat an actual human being? Because all it's doing is acting just like a human does.
24:04 - Kayleigh Nauman (Guest)
I knew you were gonna ask this question. I was totally prepared for this one. The first thing I would say is, certainly, at face value, this seems like a really relevant question, but I really think it's a completely false equivalence. The first thing being that, not to get too potentially philosophical, depending on people's religious views, right, but machines are man-made.
24:26
They couldn't do anything that they do without us having programmed them to do it. And we are not; we are bags of mush, I suppose, who have become sentient and are able to come up with our own concepts. So there's that one, right. As much as they seem like the same thing, machines are not humans, right. And as much as someone might be trying to program them to act like humans, one, they don't, and two, they're still not.
24:51
They're still very, very limited in how they're able to operate and what information they've been given, versus a human, who is exposed to all sorts of inputs at all points in time, and you literally cannot identify how someone learned something, because our memories, as a number of cognitive studies have shown, are not actually that good.
25:11
So we start to create new narratives and we misremember things legitimately, all the time. I'm also a yoga teacher and I like to read studies about neuroscience and find it fascinating, so that's where that's coming from. Two, even if a human has read a bunch of novels and then writes their own novel, if they have directly copied someone, they are held responsible. And I think the best example of this is the guy who wrote the Da Vinci Code. He wrote this novel, it was huge, right, it totally blew up, it became a movie, this very famous movie star played the main character, and then he got sued by another author whose work, it turns out, he had copied, and he lost, right, and he had to be held accountable for that.
25:57
Or my other favorite: Snoop Dogg, in like the 90s, sampled The Police's, or was it Sting's?
26:03
"I'll be watching you." Yeah, didn't license it, lost a case recently, and he now has to back pay Sting for having used his work without permission. So just because an AI is putting out something that might be creative doesn't mean that it shouldn't be held any more accountable than a person, from another perspective, right. But the problem is the AI itself isn't what's culpable, it's the person who programmed it. And certainly, from a creative standpoint, one of the big arguments we're hearing here, especially in the US, and this is going to get into another finicky piece of copyright that I'm going to try really hard to not make super technical, okay, is that a lot of AI models were trained under exceptions to copyright law, educational exceptions to copyright law, right. So as an educational model, it could scan just about anything it wanted.
26:56
Now a lot of those same models have pivoted and have gone public and are charging, so that would require a license; it's no longer under the exception, right. So this is another big debate that's being had in terms of the licensing discussion, in the US in particular, and in the UK as well, but to a little bit of a different degree, because the US has something called fair use. Fair use means that if you are going to use a copyrighted material for education purposes, or if you're only going to use a very, very small snippet of it, and a couple of other exceptions, then you don't need a license. They call this fair use. In the UK we have fair dealing: you license pretty much no matter what. There are some exceptions for education, but they operate a little differently.
27:49
So there's a bunch of concern about fair use being exploited in the US to train these models, to the disadvantage of artists and creators. We talked a little bit about this before, I remember. In particular, something that comes to mind is, last Christmas or so, there were two apps that came out where you could put in like 10 images and it would give you, I don't even remember how many, hundreds back for like four bucks. One was a bunch of different artistic styles, and one was what you might have looked like at a bunch of different points in history, right?
28:23
And a bunch of artists came out and said: that looks exactly like my art on Instagram, where I have a public account because people need to find me so that I can have customers, so that I can sell paintings that take me months to complete and that I sell for a couple grand, because I need to eat. And they didn't realize that, in having that public-facing Instagram platform, they had given Meta permission to use their page to train their AI models. Yeah, right.
28:52
So there's a big discussion to be had here about education, certainly, but also, how do we make sure at the front end that, if there is any regulation to be done on AI, it is ensuring that AI is operating ethically from a whole number of standpoints, right, in the intellectual property space, and certainly, from the copyright perspective, that it's licensing when it should be licensing? You know, with a lot of models, there's a question of: do we make them go through that black box and identify things that should have been licensed, and do we go back and do that? And, of course, the argument from tech has been, that's huge. And the argument from music has been, we can do it for every single song ever written that's on Spotify, so you can figure out how to do it for all of those images.
29:37
So none of this has been resolved, and certainly I don't think I could tell you what I think the best practice would be.
29:45
But there's a lot to be said for this becoming a big can of worms if it's not resolved sooner rather than later. Which is also interesting, because Senator Schumer, the Democratic majority leader in the Senate, has a safe AI initiative, and they've been having a number of AI forums where they've discussed AI in defense and AI in other areas. They haven't gotten to IP yet, but the understanding was that the outcome of that would be a big legislative package on artificial intelligence, and this has really been led by one of the New Mexico senators. But yesterday, that senator and five others introduced a piece of legislation on AI that would be a bigger package, and it's not clear whether it was supposed to supersede any of Schumer's work or not, but it certainly seems like it was jumping the gun. Again, very unlikely to become law.
30:37 - David Brown (Host)
This is going to be a very long-term discussion, but the fact that it was introduced, that it came out before any of the discussions on the Hill have really wrapped up per se, is very interesting. So, I don't remember the last time, because it wasn't that long ago that we actually had dinner here in Tunbridge Wells, but I don't remember if we talked about my cynical view that tech is getting involved in all the regulatory discussions because they want to slow the whole process down. Because by getting involved they can drag out meetings, and they can say, oh, I can't do this, maybe next week or next month or whatever. And the longer this goes with no resolution, and the more confusing they can make it, the better it is for them, because it means they can pretty much operate with impunity in the background while there's no regulation. That's my totally cynical view of the situation, because, just from my 54 years of experience in life and working in business, that seems like a sound business strategy, from a purely predatory business standpoint. What was I going to say? So, yeah, anyway. So that's my opinion on it: it's going to take ages. And it's going to take ages mainly because I think the tech companies want it to take as long as possible. Oh, I remember, I was going to go back, because of what you were talking about. You were saying that through this discussion...
32:11
Obviously a lot of the Western countries feel similar about this. So you've got the UK and the US and the EU, and, broadly speaking, the ideas around this are the same. What's the risk of this going to a race to the bottom, where you get some random country in the world that goes: we don't care about copyright. If you want to come and put your models here, and you want to build your models, and you want to do your servers... What's to stop some country from doing that? Because it only takes one in that instance, and then literally everybody will just go put their companies there that do the AI, they'll put all their servers there, they'll do all their training there, and then it's sort of open to everybody else. Has anybody thought about that, or...?
32:56 - Kayleigh Nauman (Guest)
No, and certainly I understand where that thought comes from. My counter is no one's done that before. All of the big tech platforms are already regulated in a number of ways in Western countries, and most of them, I was going to say, play ball. But that's not true, right? Because you can see, every time the EU tries to regulate US big tech, US big tech is sort of like, oh, look at this lovely loophole you created for us, and it starts over again. But certainly it's not like they've left the US or the UK or what have you and gone elsewhere already, and so I think that whatever would have to happen to get them out of these bigger countries would have to be very dramatic, and the US government's not going to do that, the British government's not going to do that. But I am certainly not an EU expert, so it's certainly a possibility.
33:47
But there are already a lot of developing nations that don't really have IP protections, right. A lot of them might even be signatories to international treaties on intellectual property, but those all have grace periods for low-income countries, so a lot of them haven't had to implement, and I don't know what that timeline might look like for quite a few of them. But that is to say that that could already happen. I think part of it, too, is that while in principle that seems like a good idea, infrastructurally speaking, a lot of lower-income and developing nations don't have the infrastructure to support even the Wi-Fi presence that you would need in order to be able to operate a tech-based business. You know what I mean.
34:32 - David Brown (Host)
But if that was a government strategy, they could double down on that, and that could change really quickly, because if they said, hey, come to us, you can set up all your server farms here and we'll give you tax breaks and everything else... Obviously, it's a theoretical sort of idea, but it feels like the tax thing, right. These companies go to massive, massive effort to dodge tax, and they'll move money around all around the planet at different times, and they know exactly where they need to move the money and when they need to move it and what country they need to go to. And it's just a big shell game, and I just get the feeling that this whole AI regulation thing is going to turn into a massive shell game where they just keep moving all the stuff around, because they've got servers in these places anyway. It's like whack-a-mole, right. The governments are just going to keep trying to catch them and find them in different places.
35:24 - Kayleigh Nauman (Guest)
I don't know, it's certainly a possibility. I mean, at the moment, sort of back to your point about big tech trying to slow down the discussion: they're certainly participating in it, and it's probably likely that to some extent they are trying to slow it, right. That is just to their benefit, right? That said, regulation does change pretty often and they adapt. They don't like it, right, but I just got the new iPhone and now it has a USB-C port. That's pretty cool, I don't have to have a million different ports in my house anymore. So change is certainly possible. I think that that is actually a lot of
35:56
the concern at the working level: this could become drawn out very quickly. The biggest hurdle truly is just educating enough members of Congress to build the coalition you need to pass a bill. And part of that is, frankly, some of them are very old and these things are all very weird to them. Some of that is, you know, you might have a member of Congress who kind of gets it, but their bigger priority is the war in Ukraine, or their bigger priority is their home district's water being poisoned by chemicals, right?
36:27
So there's also, you know, the people who are having to whip the caucus, or the members of Congress, to even be interested in something. Happily, so many people are talking about AI that I think that this is definitely the time, right. Members of Congress are hearing about AI from all of their constituents, young and old, and so that is really garnering a lot of the interest on the Hill. Something we haven't mentioned yet is that the White House released an executive order last week or the week before on artificial intelligence. Yeah, I saw that in the news.
37:02
Yeah, I think it was officially their second one, but they've been doing work on AI since President Biden took office, pretty much. This one was particularly interesting, though, because it called on Congress to pass privacy legislation, and the thing is that the US does not have online privacy legislation.
37:19 - David Brown (Host)
They have no.
37:20 - Kayleigh Nauman (Guest)
GDPR equivalent. There was some momentum last Congress, I would say before the pandemic, to really get moving on it. The pandemic, very understandably, slowed things down pretty dramatically. But the Biden administration had not previously weighed in, really, on online privacy, so the fact that that was a big statement in the executive order was of particular interest to me. Whether or not Congress galvanizes, I think, TBC. Granted, now that they've actually funded the government through next year, as of yesterday, maybe we'll start that discussion again.
37:54
The other thing that was interesting is, yeah, the White House has really avoided talking about IP, intellectual property, sort of broadly in general over the last three and a half years. But they also had a provision in there on copyright, asking the Copyright Office to report back to the White House on whether or not there should be increased regulation on AI in relation to copyright, which is interesting. Certainly the Copyright Office has already been doing that review. They just extended a call for comment that they issued this summer; it was originally supposed to wrap up last month, and now they're taking comments through December. That's really interesting because my understanding is they already have more than 10,000 comments.
38:38
So by the time they get the rest of the comments in next month... it's going to take them a while to go through all of that.
38:44 - David Brown (Host)
They should use AI to analyze it.
38:47 - Kayleigh Nauman (Guest)
They might actually.
38:49 - David Brown (Host)
Which is a perfect use for AI.
38:51 - Kayleigh Nauman (Guest)
I don't actually know if they have an AI tool internally, and I'm going to have to ask them. They will eventually, yeah.
38:58 - David Brown (Host)
But this is an interesting point and again it gets back around to what you use AI for, and I think there are some very valid uses.
39:07
If you do a big survey, for example, and you've got 250,000 responses to a survey that you need to analyze, putting all of that data into a language model and then asking it to come back and analyze the results and the things that you get from it, that's an absolutely valid, 100% excellent use of AI, because it can digest enormous amounts of information that people can't and would struggle to do, and it can go in and pull out themes and stuff like that from it. And that's a separate thing from creating original content; that's taking something and analyzing it and saying, here are the results. And obviously, we've got machine learning that does things like run all of our transport systems and everything else, and that runs quietly behind the scenes, the whole world runs off of it, and I don't think anybody has any problem with that really, because it's just a tool that helps make things operate faster. So I think that's a good sort of way to do it. I mean, I know I'm a bit flippant going, they should use AI, but they absolutely should use AI.
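For anyone who wants a concrete picture of the theme-extraction workflow David describes, here is a minimal sketch of one way it could be done, assuming the OpenAI Python SDK and an illustrative model name. The batch size, prompts and model are placeholders for illustration, not anything the Copyright Office or any government team actually uses.

```python
# Minimal sketch: batch a large pile of free-text responses and ask a language
# model to pull out the recurring themes. Assumes the `openai` package is
# installed and OPENAI_API_KEY is set; "gpt-4o-mini" is a placeholder model.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # swap in whatever model you actually use

def extract_themes(responses: list[str], batch_size: int = 200) -> str:
    """Summarise recurring themes across many survey/comment responses."""
    batch_summaries = []
    for start in range(0, len(responses), batch_size):
        batch = responses[start:start + batch_size]
        prompt = (
            "Here are survey responses, one per line. List the recurring "
            "themes as short bullet points, with a rough sense of how often "
            "each one appears:\n\n" + "\n".join(batch)
        )
        reply = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        batch_summaries.append(reply.choices[0].message.content)

    # Second pass: merge the per-batch summaries into one de-duplicated list,
    # which is the part a human analyst would actually read and sanity-check.
    merge_prompt = (
        "Merge these per-batch theme summaries into a single de-duplicated "
        "list of themes:\n\n" + "\n\n".join(batch_summaries)
    )
    final = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": merge_prompt}],
    )
    return final.choices[0].message.content
```

The two-pass shape (summarise batches, then merge) is just one way around context-length limits; the point is that the model condenses volume, while a person still reviews the final theme list.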
40:13 - Kayleigh Nauman (Guest)
I mean, certainly there's been a lot of discussion, and the US Patent and Trademark Office, the US Copyright Office and the UK Intellectual Property Office speak about these things at the working level all the time. The UKIPO is actually using AI models to an extent in some of these sort of back-end, just administrative actions. The US Patent and Trademark Office is actually working on finalizing a model to help review trademark applications, following a particularly iffy situation where Chinese applicants managed to get a whole bunch of fake applications filed in the US. I don't actually know where the Copyright Office is on this, but that is to say that at a government level, absolutely, we're talking about how we can use a lot of machine learning models effectively, particularly in administrative tasks. I think the concern is human error and not training it properly and therefore missing things. And then, therefore, is there a need for humans to review some of the work that's being done, just to ensure that things aren't falling through the cracks, and if so, to what extent, and what does that look like?
41:19 - David Brown (Host)
I mean, no, you're absolutely right. It'll be interesting to see the research when it comes out, because somebody will be looking into it. But what's the rate of error of a human versus an AI?
41:31 - Kayleigh Nauman (Guest)
And.
41:31 - David Brown (Host)
I suspect I know what the answer to that's going to be.
41:33 - Kayleigh Nauman (Guest)
I imagine the AI is doing a very good job.
41:36 - David Brown (Host)
Yeah, probably. I do, you know, want to add a little bit to the mix, and I may have mentioned this, I think I've mentioned this before, but I saw a presentation by the team that runs the data team at Number 10 in the UK. They have a data modeling team and they do a lot of statistical analysis, and they very openly use AI to help them churn through the enormous volumes of data that they have. But the interesting thing about it, and this goes back to something that we touched on earlier, is it's not a black box. It actually gives a report at the end of why it chose to analyze the data in the way that it did, the algorithms that it used to do the analysis, and then the variables that it found. So it gives them a very detailed report that they can analyze afterwards, and a human can review it and say, did it use the right procedure to analyze this data, is that what we would have done, et cetera. And in the presentation that she gave at the time, she said: we've never once had it do anything that we wouldn't do, like as an error; and, in fact, we've learned from it, because it's actually done things that we would have never thought to do, which were brilliant and actually gave us better results. So there is that as well. So there are tools out there; even though a lot of the large language models are these huge, they call them black boxes, some tools are specifically designed to actually give you that report, so that it's not a black box and you can check their work.
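The "report on its own choices" idea can be sketched in the same spirit: alongside any analysis, the model is asked to return a structured audit (method, variables, caveats) that a human can review afterwards. This is a minimal illustration under the same assumptions as the earlier sketch (OpenAI Python SDK, placeholder model name and schema); it is not the Number 10 team's tool, just the general pattern of making the work checkable.

```python
# Sketch: ask the model for a machine-readable audit trail of its proposed
# analysis so a human can review the choices before trusting any numbers.
# The JSON schema, model name and instructions are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; requires a model that supports JSON mode

AUDIT_INSTRUCTIONS = (
    "Analyse the dataset description below. Respond with a JSON object "
    "containing: 'method' (the statistical approach you would use and why), "
    "'variables' (which fields you would include), and "
    "'caveats' (assumptions and limitations a reviewer should check)."
)

def analysis_with_audit(dataset_description: str) -> dict:
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": AUDIT_INSTRUCTIONS},
            {"role": "user", "content": dataset_description},
        ],
        response_format={"type": "json_object"},  # force parseable JSON
    )
    report = json.loads(reply.choices[0].message.content)
    # A reviewer can now inspect report["method"], report["variables"] and
    # report["caveats"] before any downstream results are relied upon.
    return report
```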
43:15
I think this is only an intermediate step, though, because, and again, this gets back to the errors and everybody double-checking everything all the time, this is just an adjustment period. But we will get to the point, I think, in the next few years, where we will have done all of that checking to death, and we'll realize that 99.9% of the time, the AI is going to give a more consistent, better result and answer to any question that you ask it than any human will be able to give, except for maybe the...
43:47
44:31 - Kayleigh Nauman (Guest)
I think we're sort of stepping outside of what I do a little bit, but certainly it seems so, right. It's a computer program, sort of, at its base, which shows you that I am not a tech person. I apologize to everyone who's an AI expert who just cringed. It's fine.
44:45 - David Brown (Host)
We're just talking opinions here, it's fine.
44:48 - Kayleigh Nauman (Guest)
At the base of it, it's 1s and 0s, and that's what computers are more than great at, right, running the numbers and doing the data analysis. So, totally, I think we're going to see it become more and more ingrained. And certainly, we already accept a small amount of human error, so if 1% error or 0.1% error from an AI is the outcome, you know, I'm sure human error is huge by comparison. I think, when we come back to the intellectual property perspective here, the concern isn't really AI models that do sort of those administrative tasks. The concern is generative AI models, particularly commercialized, private ones. And private meaning people in the public can pay to use them, but it's a private entity; not private like in-house. Those are certainly the models, right, where we're already having the discussions, and those are the places where it can get very tricky very quickly if we don't sort of have guardrails immediately, or at least from a policy perspective.
45:51 - David Brown (Host)
That raises an interesting question: what if somebody built an AI and they said, look, we're going to train it on everything and it's totally free, and everybody can use it freely for anything that they want? There is no economic benefit to the company that develops it.
46:11 - Kayleigh Nauman (Guest)
I mean, certainly, you get a lot of those similar arguments from open source folks. The problem is, training it on things that should be licensed is still going to be a problem, right? If they're somehow able to only scan through open source things and then make the output open source, okay, but it's never going to be quite that straightforward. You know, some companies, to avoid this, have, you know, like the Adobe machine learning model, only used licensed Adobe images. Point blank, that's all it has scanned, that is the only thing that they have used. And then Adobe even says, for paying users, if you do end up in a situation where you are now being sued for copyright violation, they will reimburse your costs.
46:58
Yeah, I saw that. Like, that's how much confidence they have in that model, and a few other places have now followed suit. I think ChatGPT has now followed suit, which I thought was bold. Yeah, that's interesting. I don't know if I feel as confident about their inputs as I do about, say, Adobe's.
47:12 - David Brown (Host)
Yeah.
47:13 - Kayleigh Nauman (Guest)
But the other side of this, too, is there are a number of pending AI cases in the US right now that I'm tracking in particular, at least like seven or eight, and the outcomes of these cases will also be very indicative of how we move forward on regulating generative AI in particular and what precedents are to be set. The US, obviously being home to a lot of these platforms, I think will end up sort of ad hoc regulating a lot of other countries, whether they like it or not. Certainly, I think that's why you see the EU trying to get really ahead of this curve. Certainly the UK is interested in doing so as well, though obviously isn't quite as far ahead. So this is the US.
47:56 - David Brown (Host)
We have to have an election in...
48:43
I don't know what the feeling is in the US. I haven't been tracking it too much and not actually living there and being able to talk to people on the ground, I don't have any real feel for what it might be like. But I certainly know in the UK that if there isn't a change to the Labour Party, that's going to be a miracle. They're like 30 points ahead in the polls at the minute. So I think that's going to be a philosophical change.
49:09
So, regardless of the party that's there, what I wonder is, you know, we're going to have a whole new group of people, and the current government's view has been that they want the UK to lead, and they've really been working hard to try and position the UK as the leader on AI regulation and everything else. They obviously had the summit a couple of weeks ago here, where they were trying, you know. So that's been something the current government has done, but there's no guarantee that the next government that comes in is going to care. They may have a whole different platform, and they may go: yeah, AI is important and we'll be happy to participate, but we don't really care because we've got bigger fish to fry. And the same thing could happen in the US. And I'm curious to know what you think about that. Again, it's not about which party or who you want to vote for; it's more about what do you think the change is going to be, and how do you think that's going to affect the whole landscape?
50:03 - Kayleigh Nauman (Guest)
Yeah, that's... I love this crystal ball question. So certainly, you know, it's easier for me to speak to the US. Certainly my viewpoint here in DC is that the election is still very neck and neck, right. Obviously, no actual election has started yet. Trump has not chosen to participate in any of the Republican debates. But I think the understanding is it will be Trump and it will be Biden, and it's going to be pretty close.
50:32 - David Brown (Host)
You know, what's the feeling there? Because over here, the feeling seems to be, from what I've seen reported, that even though he doesn't participate in any of the stuff, he's still the leading candidate.
50:45 - Kayleigh Nauman (Guest)
Yeah, yeah. For the podcast listeners, yes, that is 100 percent the case. He just has such a cult of personality, right; people who support Trump want to follow Trump. Unless he manages to get a felony before November, it very much looks like he's the candidate, right?
51:04 - David Brown (Host)
Yeah.
51:05 - Kayleigh Nauman (Guest)
So we'll see how that goes. From a congressional standpoint, I don't think that the dynamic on AI will change much at all, right. There are already some party lines here, very minimally, you know. I think it's just the extent of regulation: Democrats typically want more, Republicans typically want less. Republicans tend to be a bit more, let industry do what it wants, and Democrats tend to be a bit more concerned about, you know, online safety, as it were. So we'll see those debates continue. If they do privacy first, I think that will actually address a lot of those concerns. So we'll see how that goes. From a White House standpoint: does Donald Trump know what AI is, right? Does he
51:50
have staff briefing him on it.
51:54 - David Brown (Host)
Would he listen if he did have staff briefing him on it?
51:58 - Kayleigh Nauman (Guest)
I sort of imagine, personal opinion, that his idea of AI is very much like the robots in the movies that are basically humans, you know what I mean. So whether or not that is something his White House, if he came back, picked up on right away, honestly, very hard to tell. Certainly, Congress will keep pushing. Needless to say, we haven't seen platforms from Trump, or really Biden yet, on what they foresee the next four years looking like. What do you think? It'll
52:35 - David Brown (Host)
be interesting. Do you think there'll be a swing in the lower house, in the houses as well? Because it's different. I don't know how many people in the UK know how the American system works, and vice versa, but what will happen here, sort of, is that the party that wins, the party that has the most seats in Parliament, will then put somebody up who will be the prime minister. So it kind of goes together over here, whereas in the US it, funnily enough, works completely differently, and that's on purpose. So, obviously, the way it works in the US, you could end up with a president of one party and the Senate of another; you could have a Republican in and the different parties split up. Do you see there being swings in the houses as well? Or do you think this might just be the president, and it's probably going to be the same otherwise? Or is it literally just totally open and no one has an idea of what's going to happen?
53:27 - Kayleigh Nauman (Guest)
I haven't actually heard from anyone on what they think will happen, so, right, the opinion that I'm about to express might be totally wrong. But yeah, for the folks who might be listening or watching who are not super familiar with the US system: every single member of the House of Representatives will be up for election, all of them.
53:44 - David Brown (Host)
Right.
53:46 - Kayleigh Nauman (Guest)
And then, I think, about a third of the senators. Senator Joe Manchin, who has been a big moderate Democrat, some people would argue an actual Republican presence in the Democratic Party, has said he's going to retire. So that is certainly really notable, though normally, that's a West Virginia seat and normally people would say that a Republican would be taking it over. But it looks like there's a strong Democratic presence.
54:14
So that'll be interesting to watch. Senator Dianne Feinstein passed away; the governor of California has appointed someone to see out the rest of her term, but whenever that term is up, we'll see that election. That said, that was a California seat and it will very, very likely stay Democrat, so that won't dramatically change the party dynamics. That is to say, in the Senate, I think we're not likely to see a dramatic change; I think it's going to stay pretty similar. We might end up in a situation again where it's split 50-50, meaning the vice president is the tie-breaking vote.
54:47
Certainly that's why we haven't seen more from Vice President Harris, sort of generally, during this presidency, because she had to be in DC in case of a tie-break vote. In the House, who
55:00 - David Brown (Host)
knows.
55:02 - Kayleigh Nauman (Guest)
And in part I say that because we had the midterm elections not too long ago and everyone was saying the House was going to flip dramatically and be very much majority led by the Republicans. And it didn't. I think the Republicans only have a majority by like five or six seats, if I remember correctly.
55:22
So it's interesting too because, with the Senate being almost evenly split, the Democrats only have one seat over the Republicans right now, and with the House sort of being similar. If that continues, that's also a really interesting trend, because that requires more collaboration and cooperation, and frankly, we're not seeing any desire to do that, right. I think the other big wild card here is Senator Mitch McConnell. The Republican leader in the Senate does not appear to be doing very well. I have no insider knowledge of this, right; they're keeping this very close hold.
55:56 - David Brown (Host)
Just what we see on the news.
55:57 - Kayleigh Nauman (Guest)
What we've seen on the news is that he's appeared to have a couple of seizures during news conferences, like press conferences. But he very, very closely holds the Republicans together, bicamerally, across both chambers, and there's really not a person who could step into a power vacuum if he retires or passes.
56:21 - David Brown (Host)
Interesting.
56:22 - Kayleigh Nauman (Guest)
And that's very much like, say, the Speaker of the House situation recently, right, where someone that none of us were familiar with is now in the seat. I think we would have a very similar sort of, okay, who's taking over? And when they do take over, what is that dynamic, and will they be able to whip their caucus as effectively as McCarthy does, or sorry, as the Senator does, as Speaker Pelosi did in the past, right? So those are some other sort of outlying things here that are going to make a huge difference in the way that the Senate would operate, and we just don't know.
57:00 - David Brown (Host)
Yeah, I know that's a little more political than maybe people were expecting, but I do think it's hugely important. On this discussion of how we move forward with AI and the regulations and the guardrails and all that sort of stuff, the parties that are in power make a huge difference to the approach and how that happens. And if we're going to see a change in potentially both countries that are going to have major influence on what happens moving forward around AI, I think it's potentially a huge concern, because it's highly likely that Labour is going to come in in the UK and they're going to say, look, we've got to focus on the cost of living, we've got to focus on sorting out the NHS.
57:48
There have been loads of strikes. We've had strike action for the last 18 months to two years: the train drivers have been striking, the NHS, everything, and there's a lot of civil unrest.
58:02
People don't have any money at the minute, the cost of living is skyrocketing, and they may just say, look, AI is a distraction right now. We've got to get our ducks in a row and really figure this out, and it could just get put on the back burner, which may not be good for anyone, or it means the EU is actually going to end up taking the lead on it. And while the US is distracted, if Trump gets in, the media is going to completely ignore AI. No one's ever going to talk about AI again; it's going to be wall-to-wall coverage of Trump, and nothing else is going to get a look-in for months. That's my concern, and that's why I think it's important. So, yeah, thanks for the viewpoint. Again, I know this is all just our opinions and we're just talking about it, but I think it's important.
58:56 - Kayleigh Nauman (Guest)
It's definitely going to make a huge difference, totally right.
58:59 - David Brown (Host)
Yeah, 100%. So I have a few questions I always ask at the end, but before we do that, is there anything you think we should have talked about that we didn't? Something you think is particularly important that you want to highlight for everybody, just to say, you know, this is something you need to think about?
59:21 - Kayleigh Nauman (Guest)
Honestly, no, I think we really covered the gamut of it.
59:26 - David Brown (Host)
I'm sure we'll get like 15 million questions in the comments going, why didn't you ask this? Why didn't you ask that?
59:31 - Kayleigh Nauman (Guest)
So, please, we can do a follow-up sometime.
59:34 - David Brown (Host)
Exactly. Yeah, put them in the comments if anybody has any questions. I'm sure I forgot to ask you loads of stuff.
59:41 - Kayleigh Nauman (Guest)
I've forgotten the names of several things, so I will make sure to send you the links to those so that you can share them.
59:47 - David Brown (Host)
Brilliant. And I'm going to put links to more information on things like the right of publicity.
59:47
I'll also put a link to the most recent executive order that came out, in case people want to look at that. Anything else we think of, I'll put in. I was just saying to Kayleigh earlier that I have a whole new podcast hosting platform which enables me to do really, really nice show notes and things I didn't have the capacity to do before, so dropping in these links will be much easier. I'll put all that in. Okay, well, if you don't think we really missed anything major, then that's really good, I guess.
::I think we covered everything that I can think of, so go ahead.
::So I've just got a couple of questions that I ask everybody. Number one: in your mind, is AI male or female?
::I think of it as very asexual.
::Okay, interesting. And not if, but when you have your AI assistant, because you will have one someday, what will you name it?
::It'll always be based on a cartoon. So, for example, I have a robot vacuum named Frobo, which is a frog robot from a Disney cartoon.
::Nice.
::So it'll certainly be something similar, but on the spot I cannot think of anything.
::Okay.
::I'll reflect.
::Okay, cool. Send it to me if you can think of something and I'll put it in the show notes.
::What's the name of the AI in She-Ra? Oh, I can't think of it right now. Maybe that; I'll have to look up what it was.
::Yeah, I was going to say, we're furiously typing now.
::I've found my table's a little shaky, so I'm trying not to type.
::Yeah, that's fine. I just find it interesting as well, because we are going to get to that point, and I've said this many times, but I can't wait to have an AI assistant, because it will probably be much better at remembering things than I am, and it will probably be a lot better at getting stuff done. So if I have a system that can help me with that, great.
::My partner and I don't have an Alexa or an equivalent, you know, none of these things. I sometimes ask Siri to add things to my grocery list.
::Yeah.
::So personally, I assume that if and when we get to that point, my partner and I will be late adopters ourselves.
::Right.
::And we'll get to that point where we sort of have no choice.
::Well, I got early access, again because of the podcast. Somebody reached out and I got early access to an AI that runs on Signal. So basically, you just message it back and forth on Signal and you can ask it to do things like, can you find the lowest price for this particular thing, and it'll go out and find it, or can you book me a hotel, and stuff like that. You can ask it about holidays, like, I want to go to New Zealand, can you find the best hotel in Christchurch in February, and it will go away and do the research for you.
::But it's very commercially minded, so it's more about doing those sorts of things. That's kind of what I think AI assistants are going to be like. It's not going to be there to be your friend; it's going to be there to make a dinner reservation for you or to remind you that your anniversary is coming up in two weeks, if it knows that. So, yeah, I can't wait. I think it's going to be fun.
::It would be nice to be able to say, send that email I keep avoiding sending. Go. Yeah, exactly.
::Yeah, and just don't tell me what's in it, just send it. Cool. And the last thing is, I usually just like to get people's opinion, and this is kind of based on sci-fi. You know, we've had loads of different sci-fi films in the past.
::We've got everything from, you know, Star Trek to... I guess the opposing views are Star Trek on one side, where you have the utopian vision, and then Mad Max on the other side, where the world's completely fallen apart, maybe not because of AI, but we've literally regressed back to sort of pre-computer days. And then there's tons of different options in the middle. There's Blade Runner. Somebody the other day mentioned some Iain Banks novels where people actually live quite well with AI and have a symbiotic relationship with it. And I'm just curious to know where you sort of fall on that. If you think about the AI future in 50 or 100 years, what's your kind of sci-fi vision of that, if you had to equate it to something?
::I'd say, sadly, not super sci-fi, right? Like, from a Jetsons perspective, we were going to have flying cars and robots that could dress us and all sorts of amazing tech by now.
::So I don't actually think we're going to see, fundamentally, a humongous shift in the structure of society. Certainly, in how we operate day to day, I think there will be some major changes. We might see more sort of self-checkout-type solutions in a lot of places. Certainly, I know that my colleague who works on labour is already being lobbied a lot on concerns about stuff like that. But truly, in day-to-day how we live, I don't personally see a major difference.
::Interesting. Okay, cool. Awesome. Kayleigh, thank you very much for your time this afternoon.
::Thank you, David. It was so great that we finally got to do this.
::And, like I said, if there are a bunch of comments or questions, you can always follow up.
::I'm sure. I did one on mental health a couple of weeks ago, did the whole interview and everything, and then I told some people about it and got absolutely caned: why didn't you ask this, why didn't you say this, why didn't you do that? And I'm like, put it in the comments and then I can go back and say, hey, let's do a follow-up episode to answer them.
::So yeah, if anybody has any comments or any questions, let us know. I'm just waiting for the very techie folks to be like, no, that's not right, you said it this way, that's not what it does. I'm an idiot.
::I don't know what I'm talking about. I worked in data and data analytics for a long time, but I was on the customer side. I'm not a data scientist, I'm not a data engineer, right? I just worked with the customers to try and interpret the results, not on how it all works. So I don't know either.
::I know how government is thinking about regulating how things work.
::Yeah, and I know you have to be careful as well with what you say, so no problem. Okay, again, thank you very much. It was lovely talking to you, and hopefully it won't be so long next time. But thanks for coming on the show, and I'll speak to you later.