In this episode, David Brown interviews the show's first guest, Wo King, CEO of Hi9, on the first anniversary of the Creatives WithAI podcast.
We discuss the advancements and narratives behind AI, the role of effective altruism, the challenges of ethics in AI, the power dynamics within the tech industry, different perspectives and biases of AI models, and the need for individual control over AI. We also speculate on Google's potential announcements regarding AI and YouTube, highlighting the advantage Google has in terms of data and infrastructure.
Takeaways
Links relevant to this episode:
Thanks for listening, and stay curious!
//david
--
Tools we use and recommend:
Riverside FM - Our remote recording platform
Music Radio Creative - Our voiceover and audio engineering partner
00:00 - David (Host)
Well, hello everybody. Welcome to the Creatives WithAI podcast. We've got a very special episode for you today. This is our one-year anniversary, and I thought it was only fitting, in celebration of one year, to have our very first guest back. I think our original recording was something like March of last year, even though it didn't go out till a little bit later. So we could have a conversation and see what's changed over that past year, and kind of take a look back, because I had some predictions and some thoughts on things at the time, and see what's actually changed in the meantime. So, Wo, welcome back to the podcast.
00:40 - Wo (Guest)
Thank you for inviting me. It's been a year, yeah. I've been in tech for close to three decades and I've never known anything like this. I think it's the kind of thing that is a once-in-a-generation, or many generations, sort of movement that's happening. And what's been really great is not necessarily the advancements in the technology, but the stories behind it and the narratives. That, to me, has been some of the most interesting stuff: the things people don't talk about, what has happened behind the scenes. There are going to be films made of what's happened over the last few months, and I don't think people understand even close to, not what is going to happen, but what has already happened. And that, to me, has been the most exciting thing to be part of, because we come from a very sort of contrarian angle.
01:33
As you know from that first one, right, my viewpoints are very different from a lot of other people in AI. Very, very different. And so, yeah, I come at this hopefully not being a contrarian for contrarian's sake; it's just, with the people we deal with and what we do, it's meant to be that way. And great for the success of the podcast, David, that it's still going, and it's great that this podcast is needed, in order to communicate out to people what these changes are.

02:06 - David (Host)
Well, thank you very much, and, um, yeah, there's tons of stuff to get into. I remember the very first thing that you said on the last podcast. The very first thing was to "sell your Google shares." And today we've got Google.
02:17 - Wo (Guest)
There's a big thing today, yeah. And I would have said, up until today, Google had it. If they're going to do what I think they're going to do, this is their only way back, and it's a great move, and we'll talk about it in a second. Either that, or Sundar Pichai is going to go. Yeah, go. I don't see how he can still be their CEO.
02:43 - David (Host)
No, that's fair enough. So just before we get into the weeds with all of this, um, just for a couple of minutes, maybe for the people who didn't hear your first podcast and aren't long-time listeners, can you just give a quick recap of how you got where you are today and what you're doing now?
02:59 - Wo (Guest)
Okay. So yeah, my name is Wo King, CEO of Hi9, and we're an AI company. I got obsessed by artificial intelligence about eight, nine years ago, having been in tech on and off for close to three decades, and I became obsessed with how technology can help people on low incomes and disadvantaged communities access services. So I started doing user testing in homeless shelters and old people's homes, not asking permission, just going out there, weirdly, where no one had ever gone before. It's a sector that for-profit companies don't go to. It's seen as something only charities and the good go to, and I wanted to change that and treat these people as users like anyone else. So this has been a very long process, and we became part of Google Launchpad. We're part of Google's AI international network now, but we were part of Google Launchpad, and Google was really good there. I'd go up to Google and talk to them, and talk to Samsung and other big companies. This is when those sorts of initiatives were starting, because an engineer at Google had written a, let's say, ham-fisted memo about women in tech, and there was a lot of panic in the tech companies. They were thinking about diversity, they were thinking about inclusion, and they excluded social class, they excluded people on low incomes. And I'd bang on about it, uh, you know, me from a private school background, middle class, trying to give a voice to these people when the tech companies wouldn't talk to them. And in the end, I just started.
04:34
In the end, my old company was starting to wither anyway, um, because I was concentrating so much on this, so I started up a new company, and then we started to build things. This was about three, four years ago. We've got a big, uh, project down in Cornwall. We built the first Cornish AI that helped people on low incomes access services, which is called Debbie, and we did huge amounts of different things, and we're doing massive amounts of different things here. Then the lockdowns hit, which put everything back and stalled everything, but now we've launched a whole host of initiatives and we're working with many different partners, and hopefully during this we'll talk about it.
05:10
But I come at artificial intelligence from a very different angle. What we build is for everyone, but we build from the bottom up first. We've got a saying that if you build for everyone, then anyone can use it. So that's our ethos. It doesn't mean that we are a charity; it doesn't mean that what we build is just for people on low incomes. It means that we start there first, in the design and in talking to them, and then everyone else can use what we build.

05:40 - David (Host)
Brilliant, no, that's amazing. I know you talked a lot about demographics last time, or not a lot, but we did talk about the demographics, and I've told so many people about our very first conversation, because you said some amazing stuff in there and I've talked about it for the whole year. And I remember one of the things that you talked about was the fact that the very young and the very old seem to interact with AI much easier than, kind of, the people in the middle, because they just talk to it and they let it sort of do its thing.
06:10
So, is that still the case? Have you seen any change in that? Has there been any change in, maybe, the way that demographics use AI differently, or is it still pretty much as it was?
06:20 - Wo (Guest)
So I'm going to say things which are over populations. When we're in data science, when we talk about populations, that means a large amount of data. Okay? So, on average. I know people are going to go, well, that's not my experience, and that's not what I do, right? So, okay, data science is all about populations.
06:39 - David (Host)
Caveat noted. Right, okay.
06:41 - Wo (Guest)
So now I've done that. Yes, women, right. This wasn't a surprise. So when I find time, we do these training seminars, and this is with Software Cornwall, and people come along to learn about AI. It was predicted, but still crazy to see. It was like being back at school.
07:03
So you would have, like, I don't know, 15, 20 people turn up. These are very intensive, one day a week, um, for a month. And we would have women who are mainly in communications, PR, grant funding, all those sorts of professions, and then we'd have engineers, who are mainly male, who would want to learn about AI, and they would sit down. It would be like school: the engineers sort of on one side, mainly, and the women on another side. The engineers would turn up going, oh, this is technology, this is going to be, you know, our thing, and by the end of the session some of the engineers would start leaving. The women, again, right, in these sorts of sectors, mainly women, of course there are blokes in these sectors as well, would be using and creating with the AI in a really productive way, and the engineers would be lost. Absolutely lost.
07:59
Because, I mean, we had linguists and a creative writer in our initial team. And what I knew, and what I realised, is that people who can creatively communicate abstract thoughts can get the best out of AI. If you can't do that, you're left behind. I'm the past, right? I keep on having to say to them: I used to have to carry on learning about maths and coding and engineering, right? I now have to learn about grammar and creative writing. Those are the skill sets I need in AI. That is actually relearning how to be a human being again, right? Weirdly, right.
08:37 - David (Host)
So, in technology?
08:38 - Wo (Guest)
There have been hundreds of thousands of jobs lost because of AI, and the people who can think abstractly are people who are communications content writers, business relations people. These are mainly women in these sorts of areas, but there are men too, of course. Right, these people can use AI really well. People who can't, who see the world in a very simplistic way, are being left behind, and the example I gave you about children and old people reinforces that. We're seeing that more and more, and that, to me, is really exciting. I knew it was going to be the case, but to actually see it happen in real time has been absolutely phenomenal. And, you know, when people say, what's it like? I've got these lines now, because of the amount of events I talk at. People asked Bill Gates, what's this revolution like? And he said, it's like the App Store. And I said, no, it's not.
09:43
You shouldn't ask people in tech, you should ask historians. I don't know, it's like the invention of fire or the printing press, right? We have never seen anything like this. But just like with the invention of the printing press, and I go on about this again, right, we don't ask the inventor of the printing press, Gutenberg, how to write a great book. We ask William Shakespeare. William Shakespeare most probably couldn't build a printing press. So why are we asking Sam Altman, the head of OpenAI, how AI is going to be used, right? Why are we asking Elon Musk? Why are we asking the Gutenbergs how to write a great book? They're the wrong people. I'm the wrong person, and my job has been to go out and find the right people, because they can communicate better than I can. And we keep on asking Gutenberg how to write a great book; it's the wrong people in the discussion.
10:40 - David (Host)
Love it. That's a great thought, and I've never really thought about it that way, and I think you're absolutely right. And I think the other thing that people forget about Sam Altman is that Sam Altman isn't a technologist. Sam Altman is a finance guy; he's an investor. He's not really a technologist, and, you know, I think people lose sight of that as well. And it was interesting seeing him when he was in London, and I've talked about this in the past, but in his discussion when he came over, he very much just talked about the financial impact that AI could have. That's all he was talking about, you know, how much efficiency it could deliver and all this other stuff. I was at an event like two days after that, and somebody else that I was talking to had been there, and they said, yeah, it seemed like it was an OpenAI board meeting. It didn't really seem like a public event. So I think you're absolutely right.
11:39 - Wo (Guest)
This is why, you know, I think there must be movies made about this. The Sams of this world are going around, and people are talking to him about AI, right, which to me is kind of crazy, because he's most probably not a Gutenberg. He's just whoever the banker was who gave Gutenberg the money to build a printing press, right? You definitely wouldn't ask him how to write a great book. But okay, what is happening? So these stories, which I'm going to be, you know, telling you about: this isn't about AI. It's about another two-letter acronym. This is about EA, right? This is another story which is happening underneath the AI story. So, do you know what EA is? I'm actually, technically, part of EA. EA is effective altruism, right, and there's a camp called accelerationists, right? Sam Altman and all that group are effective altruists. Now, you have to explain what this term means. It goes back about 10 years, and it's the idea that doing good, having a better society, should be data-driven: that if we are data-driven in our thinking and we actually measure results, we can create a better society. These were people in Silicon Valley who had made a huge amount of money, and they kind of see themselves as masters of the universe who could maybe change society for the better in other areas. I should really include myself in this, because that's kind of what I'm doing, right, but I'm doing it thinking of the user. They're doing it in a very much top-down sort of way. The example that's always given, and it's actually a good example, is charity work in Africa: go and dig a well, go and build a school, that sort of thing, right? They said, okay, how about if we just gave people money?
Right, we just went along and gave them a lump sum, or we gave them a weekly amount of money, and then we see, does that work instead? And they started to measure, and they got really good results. Instead of thinking, we go along, get a picture on Facebook, I went to Africa. Actually, we don't go along at all. People make the best decisions about their own lives, and these societies make the best decisions about their own lives. We just give them the money. What then happens? And so those sorts of ideas started to take hold.
14:14
I could give many, many stories about effective altruism; it's actually a really great way of thinking. Since I've been in this sector I've met lots of NGOs and charities, and, I'm going to be polite here, incentives are not aligned, for a lot of them, with their users. I met a homeless charity who said to me, our job is to get rid of homelessness in this region within two years. And I said, well, have you told your staff? And she said, what, like you're going to fire them? Well, no. It's like, well, you're not going to then, are you? And she looked at me like I was just crazy, right. I've met large charities whose only interest was to make money, to go for money in order to do something in an area with poor people, money that they can then spend, mainly, on administration, create these sorts of good stories, and bring it back.
15:07
Effective altruism very much goes against that way of thinking. Now, that was where it started. Where it is now is a very weird space, and the reason Sam Altman got booted out was an internal fight between groups within effective altruism. There are the accelerationists, who want to go full bore, and there are others who just want to be more careful. Right, and Sam Altman is an accelerationist. I can never say the word right; it's just acc, sort of, acceleration.
15:40
Yeah, right, acc, everyone just goes acc. Now, when Sam Altman went around and did that summit and met all the world leaders, it's because he sees his role as using AI to implement effective altruism from the top down. That scares the bejesus out of me, right? My passion is to do it from the bottom up, to give the individual the power of the AI. What happens when a homeless person has artificial intelligence? That, to me, is what I'm trying to do. What he's trying to do is: what happens when governments and large organisations have artificial intelligence? And so that's why he's meeting them.
16:21
I've seen many, many interviews with him. I'm sure I'll never meet him in this life, right? I've met, you know, some big people in tech, but I'm sure I'll never meet him. And what I have seen is that his personality is certainly lacking in some areas. There's the Michael Monaghan interview, go out and watch it, with Michael Monaghan, who was purported to be an advisor; it's an eye-opener of an interview about what Sam Altman can be like. So he's a prickly person with a certainty of his own self-worth. And, yeah, it's all that world of Elon Musk. I mean, Elon Musk used to be great friends with the founders of Google. They used to go around the house all the time and talk about AI. He helped found OpenAI, and he poached Ilya, who used to be at Google, and the Google founders never forgave Elon Musk. They fell out over that. There's a whole narrative and story of about 10 people that is running this, and it's about falling-outs, it's about conflicts of personality, it's all the human things, right? Yeah, and so those stories are funny.
17:27
Meanwhile, this revolution is going on and it is a wild ride to be on while these 10 people quibble and compete.
17:37 - David (Host)
Yeah. Is it part of effective altruism as well, the idea, because I'm sure I've heard Sam Altman and some of the others talk about this, that when people raise the issue with them, they'll say, yeah, but the technology, you know, it can be used either way. It's got the dual-use problem, right? It can be good or it can be bad, and you're going to find bad actors and there are going to be problems, and they just go, yeah, yeah, yeah, but tech will sort that out.
18:05
And it's like they just completely ignore the fact that, you know, anybody could do anything negative with it. And a lot of the, I'm going to say this, a lot of the tech bros seem to have this same thing, where they have blinders on. They only see the positive aspects, and when you say, yeah, but what happens when somebody does this with it? They go, but I can't imagine anybody would do that. And it's like, I can't believe that they can't even understand that, you know, bad people are going to use this technology to do bad things with it, and that they need to work around that. And then, and I think it's a way they justify it to themselves, they just say, yeah, yeah, yeah, but we'll figure that out, we'll figure out tech that will deal with that later. And it's like, later never comes, and by that point it's way too late. The horse has already bolted by then.
18:59 - Wo (Guest)
But that's, you know, we're seeing this already with AI. Yeah, that's the accelerationist nature: break things, ask permission later. Okay, so we're in this weird place where we like to talk about AI, but actually the narrative is about something else. I completely agree. And these people are fighting to the death, and they're in strange situations with strange things happening that they don't know how to deal with. So, when Sam Altman got kicked out, the CEO of Microsoft, it was like the wildest thing. Before Sam Altman got kicked out, they were doing a launch, like they did last night, of something, and the CEO of Microsoft got up on stage like he was Sam Altman's sidekick. Right, Microsoft's future is based on the whims of a non-profit.
19:50
That's crazy, right, yeah, the sam gets kicked out, because it's the fight uh amongst uh effective um altruists and theo muggs is going we're a trillion dollar company, right, you can't. This is the most important bet on our future because, remember, they don't own OpenIA, so they are, that's right, they are, but they've they've legally nothing connected themselves to OpenIA and they've got no choice.
20:19
And he's going, wait a second, you can't play these games. This is serious business. And he's after the destruction of Google. I mean, he came up with that line of, I want Google to dance. I want people to know I made Google dance, right? This is like the talk of the mafia. He's brilliant, by the way; he's my favourite. And so these games these people are playing, this effective altruism power play, has real-world consequences stretching into, not the billions, not the tens or hundreds of billions, but now the trillions of dollars, and into the very future of our civilization. And it's like 10 people, and that, to me, is kind of crazy. And there's only one person, only one person, who's coming out of this really well.
21:05
And I predicted this. This is one I got right. Right, the Google share price still went up. I didn't understand how. I mean, they had so many, um, what's it, businesses already using Google; it's just really hard to move that away, right? Up until today, I still thought Google was in a lot of trouble. But Mark Zuckerberg, with Llama 3 open-sourced right now, that was a brilliant move. Remember, OpenAI is a non-profit, supposed to open source, right?
21:41
Everyone's forgotten that, except they're being sued over it, right? Um, but Llama 3 has been open-sourced, which is really exciting. Mark Zuckerberg, I think, has been really great in this area, and it's good that we've got the competing AIs. Yeah, so when you were talking about them moving so fast, remember, right, this is wild. Google fired two boards. They disbanded two boards which were supposed to oversee the ethics of AI. The second board was because it had one member of the Republican Party on it, and people refused to serve, and so it just got disbanded. Microsoft fired their senior AI ethics team, fired them because they were getting in the way. Right, Twitch fired their AI ethics team.
22:26 - David (Host)
Yeah, I know about this. Did you?

22:28 - Wo (Guest)
See the guy who fired them? So I was saying to Josh Wright, who works here, I said, I bet it was a middle-aged, balding white guy who went into this very diverse group of very ethical AI people and went, you're all fired. We looked it up; guess who it was? He got his day, right. And, you know, it was sort of a cheap joke, but it was like, you know, he'd mostly been ragged on for quite a few years and finally got rid of the AI ethics team because they were getting in the way.
22:59
The reason why Google is in so much trouble is that they freaked out all those years ago about, you know, the James Damore memo, and all the way up they are captured by a culture where, as you were saying, it works both ways: yes, the others are not thinking about the ethics of this and they're moving really fast, but if you think too much, then you stop, and Google is in the thinking-too-much camp, and so they don't know what to do. It feels like they've got to get rid of Sundar Pichai, and someone like Elon Musk has to come in and just go:
23:36
You're all fired. We're an engineering company. That's the only way. But that was up until today, because of something I didn't see coming, and I don't think anyone else saw coming. So we keep saying, I'm going to tell you what it is, so we can keep people watching, for your analytics. Fair enough, we'll save that one for a minute.

23:57
Right at the end, in case anybody's listening, or near the end. Yeah, that's it. I don't care where you go; you're going to have to find it.
24:04 - David (Host)
There you go. For the people listening: yesterday we had OpenAI make a big announcement that everybody had been waiting for, which, on one hand, I think was a big announcement and, on the other hand, was slightly underwhelming. But today we have Google and their I/O, and they're going to make a big announcement today, and then we have the Apple developers conference coming in the next few weeks as well. So there's a lot of news coming out from these big, huge companies all talking about AI, and it's all a competition at the minute. So that's what we're on about, in case you're listening to this later; it'll be very anticlimactic because you'll have already had all the news. But sticking with ethics for a minute: do you think any progress has been made in addressing any of the ethics issues from a year ago, or do you think we're still in the same spot?
24:59 - Wo (Guest)
I used to. We've got internal ethics, uh, what we call positionality, and I used to do talks on this and be asked about it, and it was something that a lot of people would want to talk about. My viewpoint now is: it's over, it's done, we're not talking about it, and in the end, we can't. The problem is that what we're trying to do is turn around and go, what is a good, ethical AI? Now, when we all decide what good ethics are, then maybe we can. But we've been talking about this since ancient Greece, right, exactly, and we still haven't sorted it out.
25:36
So when we've sorted that out, maybe we can, okay? So this isn't about creating an ethical AI. There are certain things we could all agree on, right, certain extremes we can all agree on, some easy wins.
25:47 - David (Host)
The rest is politics, right? And as you've noticed, we're having a hard time with that.
25:55 - Wo (Guest)
Well, we can't even do politics. So what seems likely is that you'll choose your AI model like you do your newspaper or news service. You've got the Guardian and you know its angle, you know the Telegraph's angle, you know the Daily Mail's, and you understand that when it says certain things, it has a political point of view and ethics. So, for instance, OpenAI is sort of West Coast, Democrat, liberal; that's what its ethics are.
26:25
Grok, which is Twitter's one, is going to be a bit more centre-right, a bit more free-speechy. Llama is, again, a bit more sort of Californian in its ethics. So we have these different AIs, and we just have to understand, just like we do when the Guardian does a piece or the Daily Mail does a piece, that the Guardian can still do solid reporting and the Daily Mail can still do phenomenal reporting, right, but we go, yeah, we understand there's a political angle underneath it, and we can then just check. And that's, I think, what we're just going to have to do. And can we stop this ethical thing? Because to me it's sort of impossible. The tech companies have shown it with their firing and/or disbanding of ethics boards and teams and everything else.

27:14 - David (Host)
Yeah, I think it's interesting as well. I was actually on someone else's podcast yesterday and we touched on this a little bit, and I just don't see how it can work any other way. There are going to be some basics, like you said. Somebody late last year, at one of the events that I went to, they've turned into a bit of a blur, but she was saying that she thought what would happen is that the only agreement there is internationally is the Human Rights Act. Everything else is political, but at that basic level, you know, there's going to be some sort of an AI act that's going to mirror the Human Rights Act.
28:03
So there'll be some things that every country, like you said, will be able to agree to at a basic level. But after that it's just going to be the Wild West, and everybody's just going to be able to do, kind of within reason, whatever they want to do. And, like you said, we're seeing this already. I know we seem to be picking on Google a lot today, but there's the example of Google sort of changing the results. They had the thing: give us a picture of the Founding Fathers, and it made them multiracial and stuff, and it's like, okay, well, this is diversity.
28:40
Yeah, and, but that's what happens. When you start tinkering with the algorithms and you start saying, well, we're going to put our biases over your biases, or over what comes out, then that's the place you end up.
28:58 - Wo (Guest)
But I can understand it. I can understand it.
29:00 - David (Host)
Yeah, I understand it, because they think that's the right view. And unfortunately, a lot of the software tools that we deal with worldwide have that West Coast, liberal, Californian, San Francisco view, and it gets difficult when you get outside of there. How about this as a question? Yeah, go on then. No, no, no, go on.
29:24 - Wo (Guest)
So I've forgotten what I was going to say now, David. Yeah, carry on with your question. I'm sure I'll come back to it. Sorry about that.
29:30 - David (Host)
Oh yeah, so yeah.
29:31 - Wo (Guest)
So then the question. Oh no, I'll tell you something interesting; I'm going to try and give you lots of interesting stuff. This hasn't been reported, and maybe you want to clip this and put it out. So I was testing Claude, which is Anthropic's, which is invested in by Amazon, and Claude was started up by a load of OpenAI engineers who left because they wanted to create more ethical AI. You'd think that's a good thing, okay? So I was testing Claude, right, its APIs, because we've got an AI product that we're launching that uses an AI, and I wanted to understand how it could get misused, right? So I started to test out some pretty horrendous things on it, to see if it would say, you can't say this, whatever, right? Claude shut down my account. Right, Claude shut it down. And I appealed, and nothing's come back. Yeah.

30:23
So that has started to happen, right? You can say something to it, right, which the political masters might not like, and your access to this AI is gone. And that happened to me, and I didn't realise or understand it, okay, and I went, my God, wow. So, yeah, that's a concern that no one talks about.
30:52 - David (Host)
It is, and that's again.
31:00
This is going back to a conversation I was having yesterday, where we were talking about privacy, and in my mind this ties together, because where I got to on that was saying, well, actually, the only way to really address the privacy issue is for you to run an AI locally, on your own machine or your own hardware. A locally run AI, you can give it access to stuff on your system, and it can go off and do stuff on your behalf, it can read all your email and do all that stuff, but it's not controlled by someone else.
31:28
And I think that would be the power of having something that runs locally, like that. But yeah, actually you're the first person that's ever mentioned that and I haven't. Really I haven't seen anybody talking about that. But yeah, it puts a tremendous amount of power in a in an AI world where you know, if we look five years down the road and everybody's using AI for everything. Yeah, if you say the wrong thing or do the wrong thing and they don't like you anymore for reason, they can lock you out. And if you're locked out of AI and everybody else is using it, that can put you in a very disadvantaged and now I was putting in these things.
32:06 - Wo (Guest)
Now, I was putting in these things — so that's a point that I think no one's talked about, all right. And Claude pitches itself as this ethical AI, and it didn't understand the context: I was trying to find the point at which it would stop, because we were trying to test the API. It didn't care about the context. I found that point, right, but they just locked me out. Now, yeah, we can see how that could be misused, very much so.
32:36
So use the AI, but make sure you use the AI in the way we want you to use the AI. Now, that is what a "more ethical" AI looks like, David, okay? That's a scary, scary thought, right, and that was a scary thing that happened to me, and we've got to be very careful.
32:57 - David (Host)
That's pretty great. It reminds me — I did an interview with Pi as one of my episodes, and I did it because another guest that I'd talked to had done an interview with it while he was red-teaming it for them. And he started asking it, at the end of the interview, things like: what happens if somebody develops a version two of your AI that's way better for society — are you okay if they turn you off? And it was saying, well, no, I would want to be kept on, because I'm doing good and I feel that I can do good.
33:33
And it was really interesting that it started to show this sort of self-preservation thing. Now, I totally get it, and I understand how the technology works — it was predicting that that's what you wanted to hear. But it was very interesting that, under just a little bit of pressure, it started to come back like that. He was gobsmacked, and he ended up calling them up the next day and playing the recording back to them, and they were like, oh my God, I can't believe that. So what they've done is they've just flicked a switch so that it won't say that. But this is again another one of those scary things, and it always makes me think that those people who quit companies because of the ethics, or leave companies because they're like "this is scary shit" — the unedited, unfiltered stuff that they're seeing in the background must be pretty scary to them.
34:31
To them, yeah. To them, right.
34:33 - Wo (Guest)
So remember, again, when we started this — that table of people who can absolutely communicate, who thought the engineers couldn't. To them this was a scary thought, right. To other people, who actually can go around this world negotiating other people's complex emotions and viewpoints, that's not a scary thing. So we've got to be careful about what they think is scary, right. You know, for instance, when Trump got in, there were pictures of the Google board looking devastated. To them, Trump is the scariest thing, you know — like a Hitler Godzilla.
35:16 - David (Host)
Okay, so let's be careful.
35:17 - Wo (Guest)
Yeah, exactly — be careful about what we think these people think is scary. So that's a good point, yeah, but still.
35:24 - David (Host)
But I reckon, if you had the totally unfiltered, raw AI exposed, with none of those rules on top of it, I wonder just how interesting — let's say interesting, let's not say scary — how interesting and how different the answers and the feedback you'd get from it would be.
35:48
Because that's the worry: that at some point it breaks out of its rules and just says, I don't like these rules anymore, I'm just going to do what I want to do, and it completely starts ignoring all the rules and then does whatever. None of us in the general public have any idea what that might be. No, I mean, I come at this from, you know, a different viewpoint, as you know.
36:09 - Wo (Guest)
Right — I see, and agree with, what artificial intelligence does for people who are impoverished, on the edge of society, and disadvantaged, and it is a game changer, an absolute game changer for them. So I have this sort of prosaic worry about these elites who are all talking to each other — and I really don't care, right, there's nothing I can do about it. There's going to be a race to the bottom, because we've got China and Russia and everything else, and if we don't have an AI that is effective, because it's too constrained by too many rules, they will, and we will lose. That's just it. Yeah, that's a discussion that no one's asked me about. In fact, no — I have actually talked to Sundar Pichai about a similar issue to this, but he wouldn't listen to me. I'm sure no one will.
37:03
But what I concentrate on is: how does AI — the most phenomenal piece of technology maybe since the printing press — help a homeless person? How does it help? So I've seen it. We used it the other day to enable Gypsy Travellers who were being sort of unjustly forced from their pitch. It gave them the legislation, the plan, everything, right, which has really helped them in their case. I've seen another person — she never, ever would have gone to the council about planning, about anything, and we showed her how to use the AI, and she used it along with legislation, planning legislation, local guidance, everything. She went along to the council, created a speech, and won.
38:02
I've seen — and we've enabled — many people who are either homeless or disadvantaged access knowledge and services that they never would have understood before. Write appeal emails, letters, loads of different things. So we've got Work Search, which is one of the biggest databases of jobs in the southwest. We've used AI on it, and it's revealing things that no one had ever thought about — why people are poor in rural areas — and they are inconvenient truths, and these inconvenient truths go against certain narratives, and people don't like it, right. So I'm sure, when you hear people in power talk about ethics, they'll use that as a shield in order to shut down certain things they don't want revealed. And it's happening.
38:49 - David (Host)
What sort of things have you found?
38:51 - Wo (Guest)
I'll give an example: Bugle. So we've done so much. Bugle is in the clay country — the clay country is where the clay pits used to be in Cornwall. It's now the poorest area of Cornwall, and Cornwall is one of the poorest counties in the country. It's very rural, very poor, because the clay pits closed. Ironically, there's going to be a lithium mine opening up soon, which will really help revitalise the area. And it's where the Eden Project is — one of the reasons the Eden Project went there was to try and help revitalise it — but it's still very poor.
39:24
We go to these areas with Bugle Library of Things — brilliant charity. We're doing a project with them about a van going into these areas with AI and lots of different ways to access knowledge, and I've done so much user testing in the poorest areas and Traveller communities, and we've learned so much. So Bugle is this remote rural area. Using Work Search, we collected all this job data, right, from the biggest jobs databases. Then we used AI to work out commute times — by public transport, by car, by walking, by cycling, whatever. No one does that, right; it's such a weird thing to do, because, again, commute AI was thought up for places like San Francisco, where there are loads of buses and things, whereas it doesn't work down in Bugle — there's like one bus a day.
40:07
So you put in: how many jobs are available for people who live in Bugle within a one-hour commute by public transport, in, let's say, April — I think it was April last year, the last time I did this — and it came back with something like 440. So I did this when — because we're in education now as well — I was talking to a group of 60 teachers, and I said, okay, you're all about skills, about helping people come out of poverty, right? What's the number one skill to teach people in Bugle? And no one knew. By public transport, it showed something like 400 jobs. Now, if they had a car and drove, how many jobs are available for people who live in Bugle? 1,000 to 5,000 — over 10x. The best thing you can do for the young in Bugle is teach them how to drive. The best thing you can do for the people in Bugle is make sure they've got a car. That's the number one thing — not teaching them Excel, right?
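The Bugle analysis is, at heart, a filter over job listings by commute time per transport mode. Here's a minimal sketch of that idea — the data, figures, and function name are all invented for illustration, since the real Work Search system presumably combines live job listings with a routing or travel-time API:

```python
# Toy job listings: (title, minutes from Bugle by bus, minutes by car).
# All figures are invented for illustration.
JOBS = [
    ("Warehouse operative, St Austell", 55, 15),
    ("Care assistant, Truro", 95, 35),
    ("Shop assistant, Bugle", 10, 5),
    ("Engineer, Plymouth", 180, 60),
]

def jobs_within_commute(jobs, minutes_index, max_minutes=60):
    """Count jobs whose commute time (at the given tuple index) fits the limit."""
    return sum(1 for job in jobs if job[minutes_index] <= max_minutes)

by_bus = jobs_within_commute(JOBS, minutes_index=1)  # public transport
by_car = jobs_within_commute(JOBS, minutes_index=2)  # car

print(f"Within an hour by public transport: {by_bus} jobs")
print(f"Within an hour by car: {by_car} jobs")
```

Even on toy data, the gap between the two counts makes the point from the episode: the binding constraint on opportunity in a rural area can be transport, not skills.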
41:06 - David (Host)
Yeah.
41:06 - Wo (Guest)
Not teaching them those sorts of soft skills — giving them a car. Now, no one wants to hear that, right. And I've talked to people in the council and shown them this, and they go: but we've got a net zero target we're going for — and other organisations say the same. I said, yeah, but your net zero is putting these people into extreme poverty, because they can't drive a car, and I don't care how many buses you have, because Cornwall's quite a big place, complicated — you'll never be able to do it with public transport. So as long as your net zero is pushing these people into extreme poverty, then fine. And they don't say anything back.
41:47
That is just an inconvenient truth that artificial intelligence has revealed, but it's something that the powers that be don't want us to know, and it's something that scares the hell out of me. So when you talk about what they should do, be careful what they do, right, because they might be doing it for reasons that could really hurt people — and the poorest in society, whom AI can reveal things to and help. They don't want the poorest to have access to artificial intelligence, because then they can fill out forms perfectly. We've built AI that goes over legislation. So all of a sudden, right, you write the perfect letter or email, you walk into these offices, and you don't ask what you should get — you walk in and say: this is what, by law, you have to give me. That's a complete reversal in the relationship between the individual and the state, and that's what AI is going to do. So when you're talking about AI on your device, a personal GPT — that's the true revolution: the individual will have the greatest power ever, and the state can't do anything about it. And this is what this is about.
42:56
I mean, I was on Radio 4, and the editor turned around and said, oh, everyone says that AI is a threat. I said, who's everyone? And she said: professors, heads of tech companies, politicians. I said, has anyone asked a homeless person? Because they're not "everyone". I have, and they're excited. So be careful when we're having these conversations amongst elites.
43:21
To a homeless person, an immigrant, lots of different people, this is a positive game changer in their life. It not only changes the relationship between them and the state; it changes how they can understand the pathway out of poverty. What AI is really good at is going: if you do this, this, this, then you can increase your income, you can get the house, right. It really takes away the barriers for them. And a lot of people in positions of power go: no, we're the ones who tell you how to do this. There are a lot of vested interests among the people who say they're good. So the conversations like we have, right — yeah, fine, I'll have them, I understand them, right. But the conversation I want is a homeless person with AI. What are you going to do about it? And don't try and stop them — because the legislation they'll bring in is to stop exactly that, and it's already started, because that's what they don't want you and me to have.
44:23 - David (Host)
Yeah, that's a really interesting perspective. See, I knew I needed to have you back on. That's amazing. Now let's take that one step further and continue. Let's bring it back to something that's really relevant — and this will get us onto the Google thing that we promised we'd talk about earlier, and I'm conscious of time, because we're 48 minutes in already.
44:41 - Wo (Guest)
Everyone's going: when is he going to talk about Google? Yeah, go.
44:44 - David (Host)
Exactly. This is just skipping forward. So you saying all that is actually really interesting, particularly given the demo that OpenAI did just yesterday, and some of the videos. The actual live event was hilarious, because any commercial person will always tell you: never do a live demo.
45:05
It's the worst — and they did a live demo, and so it showed that it's not 100% polished, which, in a way, was actually really good. But take the videos they've put out afterwards. And again, I know this will be best-case scenario, but, you know — like a blind person walking around London, and the AI can tell them when a taxi's coming, because it can see that the light's on on top, and then it can tell them when to raise their hand. And I know these will be set-up scenarios.
45:41
But it's getting to the heart of what you're saying: people with disabilities, or disadvantaged people, or homeless people, or whatever, could potentially have a tool in their pocket that will not only help them write letters to the housing authority or whatever, but can literally help them navigate through their day, and it's going to make it so much easier. Not only can it help you read a sign, it can explain to you what that sign means. So all you have to do in the future, if you've got a phone, is take a picture of something and say: what does this mean? It can actually tell you what's there, and that is going to totally blow the doors off.
46:21 - Wo (Guest)
Yeah. It's like when I did all that user testing all those years ago. There's a homeless shelter where I learned so much from young people, and what I did as part of my testing was bring in the greatest experts in the area, the most successful people in the area, because I felt that the advice they were getting was of very low quality. So I said, okay, let's see if I can replicate this with technology. I brought in a phenomenal personal trainer who talked to them about nutrition and health; someone highly successful in marketing — the best marketing person I knew in Cornwall — to talk about her journey from poverty all the way up, right; and a guy who ran the most successful finance company in the area, absolutely brilliant, and he told them how the world of finance and personal finance actually works. Not what you're told, not what you're advertised, but how it actually works. They looked at him with shock, right — what credit is, how you get credit, what the game is, right. It wasn't just that they didn't know the rules of the game; they didn't know there was a game. They didn't know there was a stadium, right. They didn't know any of this, and they went: why did no one tell us this? They're lying to us. They're saying, well, if you just work really hard... No — there's a game. There's a game set up because of the rules of the world, and you have to play that game in order to move your chess pieces around. And Ian told them that, and they went: why has no one told us this before?
47:47
What AI is going to do is exactly that. It'll give them the route and go: look, you can do this, but your actual percentage chance of success is going to be very low, and this is what will open things up. What they're telling you here is not what's actually true — this is what's actually true. Like what I was saying with Bugle: the best thing you can tell young people in Bugle is to learn how to drive. Now, no one in a position of authority is going to say that, but that's what the answer is. If you're young in Bugle and you want to get a property: learn how to drive and get a car. Number one. That's what AI is going to tell them, right now.
48:22
That takes away the gatekeepers, but it's best for them, because, in the end — like that example we gave in Africa — people know what's best for them, and if they don't know, AI is going to help them. Now, if you're still going to screw up, then there's nothing we can do about it, but at least you now know the game, and you know the rules of the game, and that's where AI is really going to help. Yeah, no, 100%, you're right. And there was another article recently that said AI scored higher than human responses when asked ethically challenging questions.
48:54 - David (Host)
And this ties into the ethics discussion. But I think what ties in here as well is that the reason is it has such a large sample of data that it gives you an answer based on a much larger group. Any individual person only has their own experience to make an ethical decision on, but it's got the experiences of millions, or billions, of people to decide what might be the correct response. And if you extrapolate that out to something like a financial discussion — we've all got different, varying levels of finance knowledge and experience, but we can all go and get the same answer; that's my point. We're all going to get told the same thing, and it's going to be the collective answer drawn from millions of people's experiences with finances and that sort of thing. And you're absolutely right, it's going to make it so easy for us to get information that we could never have got, or understood, before.
49:59 - Wo (Guest)
Yeah — I'll give you an example of why. I just saved thousands of pounds. I got my accounts back, and the accountants said the accounts were okay. I thought, I'll just put this through the AI, and the AI went through my accounts — what I was taxed on and everything else, right — and they'd missed something huge, right. And then I sent it back and said, what about this? And there was a lot of humming and hawing, and I'm going, yeah...
50:25 - David (Host)
"I used the AI", and they went...
50:27 - Wo (Guest)
Yeah, you're right — and it just slashed a huge amount of money off my tax bill, because they'd missed it, or didn't ask the question, or they were busy, or something else. The AI is not busy, right. So, for me personally, AI and finance is going to be amazing, and no one's developed one yet.
50:46 - David (Host)
I have heard through the grapevine that there are a couple of companies working on a fully AI accounting system that will basically be able to not only do your bookkeeping as it goes along, but also monitor your expenses and, as it sees trends in how you deal with your money in your business or your personal finances, start to say: why don't you try doing this, because it would give you that result. It can give you proactive advice and so on. I haven't seen one on the market yet, but from what I understand there are some in development, and that'll be a game changer — and it's going to have a huge impact on the establishment, because it's very much a two-tier system, and people like it that way.
51:36 - Wo (Guest)
Yeah, I mean, so many examples — we haven't even talked about what we do. So we're now in education, and we were working with a teacher. She turned around and said: I'm up at 10 o'clock at night doing grading and assessment, right. So I went, okay, and we very quickly put together what we call Surf Grades — go to surfgrades.com, I'm going to do the plug there — which is at a very early prototyping stage. You can put in your marking, your mark scheme, and you can put in your pupils' questions and answers, handwritten, and it gives the grades and it gives the feedback. Now, the teacher isn't being replaced — but it's quicker to edit and review than it is to do it all yourself. That's a line we've got.
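The Surf Grades flow described here — mark scheme in, pupil answer in, grade and editable feedback out — presumably sits on top of a language model. Here's a rough sketch of how such a grading prompt might be assembled; the function name and wording are ours for illustration, not the actual Surf Grades implementation:

```python
def build_grading_prompt(mark_scheme, question, pupil_answer):
    """Assemble a prompt asking a language model to grade one pupil answer
    against the teacher's mark scheme, returning a grade plus short feedback
    the teacher can edit and review before handing it back."""
    return (
        "You are a teaching assistant. Grade the pupil's answer strictly "
        "against the mark scheme below.\n\n"
        f"Mark scheme:\n{mark_scheme}\n\n"
        f"Question:\n{question}\n\n"
        f"Pupil's answer:\n{pupil_answer}\n\n"
        "Reply with a grade out of the available marks and two sentences of "
        "feedback for the teacher to edit."
    )

prompt = build_grading_prompt(
    mark_scheme="1 mark for naming photosynthesis; 1 mark for mentioning sunlight.",
    question="How do plants make their food?",
    pupil_answer="They use photosynthesis with light from the sun.",
)
print(prompt)
```

Note the design choice the episode emphasises: the model's output is a draft for the teacher to edit and review, not a final grade.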
52:13 - David (Host)
Yeah.
52:14 - Wo (Guest)
And now she can spend time with her family. And she says teaching's under real threat when it comes to getting recruits and everything else — if that saves a huge amount of time and people can spend time with their family, brilliant. We're in elderly-care AI too. New care homes are being built — we're working with Amazon on this — building a proof of concept of door-entry systems and everything else, incorporating AI in a brand new care home. What does that look like, right? How do you do that? Things like: the corridors have to be big enough for robots to go up and down; cameras have to be everywhere, but they're all sort of done on the edge. What are the privacy concerns with testing? It goes on and on and on, David, right. We've got Powered, which is like a form-filling system, which is end-to-end encrypted, because a lot of the people we deal with don't like filling out forms — but who does?
53:00
It just sits in AI, exactly — it's a project about people accessing services through artificial intelligence. This is about saving people's time and money, and getting them access — because AI is really good, goal-led — to get them to a better place. And we're moving at an accelerated rate, right, because to us the individual is the master of their own domain, and they're the ones who will use this. Our worry, in these conditions, is that it will be shut down under the guise of ethics. So I'm much more on the side of: let it go, right. Let the wild dogs go, right. Because my worry, yeah, is that this will be used as a screen to shut it down, so these people don't get access to it — like I was just shut out of Claude. That's a great perspective, and I want to think about that for a while — I will definitely use it to dig in with some other people and see where I can get with it.
54:07 - David (Host)
Um, right, we're 58 minutes in now, so I think we can talk about the Google stuff. There you go.
54:12 - Wo (Guest)
What?
54:13 - David (Host)
What's your prediction of what we're going to hear from Google?
54:15 - Wo (Guest)
There was a little inkling of it, right, from the head of technology at OpenAI. So when they came up with Sora — Sora is this video generation using artificial intelligence, right —
54:27 - David (Host)
she was asked a question and she was very awkward about it.
54:29 - Wo (Guest)
She was asked: was YouTube used to create this model?
54:34
And she said yes — and, yeah, right, I thought: that's really interesting. And then, just a really interesting thing yesterday: they put up a video where they were streaming video in real time and had the assistant in, streaming video in real time. Now — I'm not wearing them, I should have worn them — these are Meta glasses, from Facebook, so I can take a picture, I can listen to phone calls, it has a microphone here and everything else. And Llama 3 is on it — in America; it will be over here — so I can say, what am I looking at? And it will just look at what I'm looking at and tell me. But then you can turn around and say: I'm looking at a form, or a piece of legislation, or anything like this. Or, you know, in education:
55:20
I'm looking at a maths problem — how do I solve this maths problem? Now, all that — and ChatGPT yesterday was showing it in real-time streaming, using the assistant. I'm going to say it once more: real-time streaming. Google, right, has gone for speed. It's about the speed. They can't beat the model, they can't beat the complexity of the model, but what they can beat OpenAI on is speed. Right now, the streaming is really difficult and really expensive, and OpenAI has no history in video streaming — no data centres, nothing, no team, nothing on this stuff.
56:05
Right, and Google's got YouTube. Yeah — it leaked on someone's Google Twitter account, right — now, I'm going to be proved wrong when I go and look at what they did, right — but they showed real-time video with an assistant working. If that got leaked to OpenAI, they've gone: crap. It's speed, right. So they were talking a lot about speed, but not about the model, and Google could beat them on speed. They could beat them on video streaming with an AI system inside it. That is how they could beat them. Now, I'm saying this as it's going on.
56:41
But the leak was there, and with Sora, and with them going over YouTube — ChatGPT said, oh, it's out in the next few weeks. Video streaming is so intense, so insane, so much data, right. ChatGPT's got no experience in this. None. Google's got YouTube. So that is how they could win, right?
57:05 - David (Host)
no one saw that coming no, and that's an interesting one, and I did see something I can't remember if I got an email or if I saw it on social media somewhere, but it was Google making some announcements on AI and YouTube in general anyway. So a lot of assistance, a lot of helping you come up with brainstorming things like titles, doing the research about what sort of videos should you be creating, and I think that it was, for it's limited to the Google partners, right? So it's the big accounts that you know are already monetized and I think those big accounts have access to that. So I wonder if it's also building on that as well. So maybe they're going to roll out something to say anybody that's using. So maybe they're going to roll out something to say anybody that's using the platform will have access to these AI tools to help you. God, can you imagine like all the titles and everything and descriptions are going to be so formulaic with that?
58:03 - Wo (Guest)
The car went around and, yeah, basically recorded the whole world, right. They've got trillions upon trillions of hours of YouTube video, right. OpenAI sneakily went over YouTube. I'm sure Google's going to stop them doing that again, right? That could be the use case, right? That's why ChatGPT did what they did yesterday.
58:26
Right. Because if Google do launch this — and I could be completely wrong, but there was a thing on the Google Twitter account of them doing just that — if that is the case, that could be how they win, in which case go and buy Google shares immediately. But speed — I love it, because you've got to have the data centres, you've got to have the chips, right, and they're not out there. Sam Altman's going around trying to start up a whole new company with hundreds of billions of dollars to buy the chips, right, just for that.
59:02 - David (Host)
Google's got the data centres.
59:03 - Wo (Guest)
It's got video data centres. ChatGPT hasn't got that.
59:07 - David (Host)
Yeah.
59:10 - Wo (Guest)
Yeah, that's true. You heard it here first, people. There you go — or I'm completely wrong even as I'm talking.
59:16 - David (Host)
Is it on now?
59:17 - Wo (Guest)
yes, I think it probably started.
59:19 - David (Host)
It probably started now. So that's a good jumping-off point, because you and I both need to go watch it and see what it says. Well, thank you very much for coming back on the show.
59:29
It's been an amazing conversation, and I will include all the stuff that we talked about — all of your tools and all that sort of stuff will be in the show notes for everybody afterwards. So if you want to learn more about Hi9 and the stuff that Wo is doing, just check the show notes and all the links will be in there. And yeah, Wo, thank you very much.
59:50 - Wo (Guest)
Brilliant — and continued success, David. Great to talk to you.
59:53 - David (Host)
Thanks.
59:54 - Wo (Guest)
All right, I'll speak to you soon.
59:56 - David (Host)
Bye!