E48 - Harnessing Artificial Intelligence for Organisational Success with Alan King
Episode 48 • 19th April 2024 • Creatives With AI • Futurehand Media
Duration: 00:50:12



Alan King discusses his book "Harnessing the Power of AI in Organizations" and the need for businesses to adapt to the changing landscape. He compares the impact of AI to the internet revolution and emphasises the importance of organisations embracing AI to avoid becoming legacy companies. He also discusses the current state of AI adoption and the potential for future advancements in quantum computing and fusion reactors.


  • Businesses need to embrace AI to avoid becoming legacy companies
  • The current state of AI adoption is still in the early stages
  • Future advancements in quantum computing and fusion reactors will have a significant impact on AI
  • Smaller organisations face challenges in competing with big tech companies
  • Opportunities exist for new players to disrupt the market
  • Big tech companies like Microsoft and Apple have played significant roles in the development of AI, with Apple currently lagging behind in AI advancements
  • OpenAI is emerging as a major player in the AI field, resisting acquisition and maintaining its independence.
  • Traditional companies face barriers to AI adoption, including concerns about security and privacy and a hesitancy to invest in new technology.
  • AI has the potential to democratise creativity and provide opportunities for individuals to express themselves in new ways.
  • Responsible development and alignment are crucial to the ethical and safe use of AI, a responsibility shared by all stakeholders to prevent misuse or harm

Links relevant to this episode:

Thanks for listening, and stay curious!



Tools we use and recommend:

Riverside FM - Our remote recording platform

Music Radio Creative - Our voiceover and audio engineering partner

Podcastpage - Podcast website hosting where we got started


00:01 - David Brown (Host)

Hey Alan, how are you doing? Welcome to the podcast.

00:03 - Alan King (Guest)

Yeah, very good, Dave, great to be here. Thanks for having me on.

00:07 - David Brown (Host)

Yeah, no worries. So you've written a pretty interesting book about AI in business and in the workplace. Do you want to maybe give a little bit of an overview of what the book is about?

00:18 - Alan King (Guest)

the internet as it was then in sort of:


And as soon as I saw GPT from OpenAI, I thought, yep, this is another one of those moments, and I need to think deeply about how this is going to affect organisations. So the book started to form at the beginning of last year; I had it done by about August, and then went into proofreading and editing and so on. Fundamentally, it was about: what are the principles here? What are the things businesses need to start thinking about?


You were in an organisation in:

03:22 - David Brown (Host)

You're absolutely right, and it's interesting that you say that because I think we do. You think we learned anything from the internet revolution? Because I think a lot of businesses initially got left behind, and I think the businesses that really leaned into the internet and using the internet and figuring out how to use it and how to make their business better probably gained a big advantage at the time. And so I guess my question is do you think we actually learned anything from that? Do you think businesses as a whole are actually leaning into AI, or do you think a lot of them are still quite scared of it?

03:59 - Alan King (Guest)

I think it's still very early days, actually, and that's something I try to assess a lot with AIyourorg. When I talk to organisations through my network, I talk to ones that really are grasping it and really going for it, and I think they possibly have learned from the dot-com boom and bust. Just circling back to that: back in those days there were a lot of organisations that, as you say, didn't grasp it and very quickly went out of business, almost overnight. They became legacy companies. And I think we have the same thing now. It was a thought that went through my mind when OpenAI launched GPT: well, suddenly a lot of businesses have become legacy companies, really. And if you look at the landscape today, there aren't many big companies from back then that are dominant now, if any. Amazon are the big retailer, aren't they? And if you think back to the 90s, well, it was really the high street, wasn't it? It was the Debenhams and the John Lewises and all of that. So those things really did shift, and those legacy businesses never grasped it quickly enough. Blockbuster went to the wall because of Netflix, and so on. So I wonder if we'll see something similar this time, whether the companies that are dominant today suddenly become the legacy companies because they don't grasp it. I would hope that they will.


And that was really the purpose of the book: to shout out to organisations and say wake up, because time doesn't stand still. This is going to move incredibly fast, and you have to be prepared to change. Organisations can fear change, or individuals can fear change, but they're going to find becoming irrelevant far more scary, I think, than worrying about resisting change. So I don't know, is the answer. When I talk to companies at the moment, I think a lot of them are still kind of watching, if that makes sense.


It's still quite new, so it's kind of: well, this thing happened last year, there was this earthquake, if you like, and now we're just waiting for the dust to settle, for things to settle a little bit, before we're prepared to put boots on the ground and really go for this. In that cycle, and I probably describe this in the book, there's something like a three-year moment from when something happens to when things start to become ubiquitous in organisations and they start to grasp it. I think we're about 18 months into that three years, and in another 18 months' time I think organisations will start appointing things like a director for AI, or a chief AI officer, or whatever.


We're not quite seeing that yet, but when you listen to the big consultancy firms, that's exactly what they're saying as well: this is what companies need to get to. They need to put somebody in the business thinking about AI across the organisation, not somebody from IT or somebody from marketing; it needs to be somebody across the whole organisation, connected to all the business departments, who can really help to drive it through. Yeah.

07:02 - David Brown (Host)

Yeah, that's right. And what types of companies? I mean, is it the Chartered Mechanical Engineers you work with, or something? Sorry, I know there are a lot of different engineering ones, and there are tons of little letters after your name, 36 in fact, something like that.

07:18 - Alan King (Guest)

Yeah, so we're the Institution of Mechanical Engineers and I'm head of global membership development for them.

07:25 - David Brown (Host)

Right, so the companies that you're talking to, are those mainly engineering firms, or are you talking to people sort of across the board?

07:31 - Alan King (Guest)

Quite a broad spectrum, actually, for two reasons. Through the Institution it's predominantly engineering organisations that I'd be speaking to, although of course I do come into contact day to day with people from other worlds as well. But then through my own AI network, AIyourorg, everyone involved on that network is from a real range of backgrounds: solicitors, lawyers, teachers, media people, marketers, photographers, videographers, all sorts. So I'm getting fairly good, high-level conversations with people in all different sectors, really, and it is a similar pattern.


I can't see at the moment any particular sector where I'd say, oh, they've really got this. I think everyone is still kind of swimming around a little bit and trying to understand quite what to do with it. They're not sure yet. But like I said, I think that's a three-year journey, and in another 18 months we'll be in a position where you'll start to see the companies that are doing this well, and you'll start to see the kind of rewards they're gaining, perhaps gaining some market share and getting ahead of their competitors a little bit.

08:51 - David Brown (Host)

Do you think we've reached the peak of the hype cycle?

08:52 - Alan King (Guest)

I think there's still some room to go.

09:02 - David Brown (Host)

Because my feeling is that we've kind of reached the peak. It almost feels like we're now starting to come down into the trough of disillusionment, and I think that's being driven, at least in my opinion, by a lot of the lawsuits coming out around copyright and all the other stuff. That's made some of the organisations that were rushing into it slow down and think, oh, hang on, maybe we need to just wait a second and pause and see what's going on. So that's made me think maybe we've crested that hill. But I don't know. You're talking to different people, so I'm interested to see what you think.

09:30 - Alan King (Guest)

I mean, I've looked at that cycle quite a bit over the last year, as you can imagine, and some days I think we have and some days I think we haven't. A lot of that depends on what comes next in terms of development in the AI sector. If we said, OK, 18 months from now the kind of tools we're using are roughly the same as what we've got today, albeit refined or fine-tuned, then perhaps we are getting to the peak of the hype cycle. But I still think there's the potential in the next 18 months to two years to see some applications that are really game-changing, that suddenly drop, and suddenly everyone goes, oh, wow, OK, now we can do this, and that could actually drive it. So I'm not sure. I understand the Gartner cycle.


They've applied it to previous technological breakthroughs, and if you looked at the internet, or mobile phone technology, through that hype cycle, it probably worked, and it possibly will this time as well. But I don't think it's 100%; I think we could see things ramp up again. The one thing I often think about with all of this is that we're at a place at the moment where people look at AI and go, right, this is it, this is what it does, I understand it and I can do X, Y and Z with it. I think people forget that this is the Sinclair ZX Spectrum of my childhood, you know, and in 15 or 20 years' time we will have moved on considerably.


you know. And then you get to:


I think we then spend the next five years developing. It's like we've got a new engine, and it's a case of: what vehicles can we put this into? We're now figuring out what those vehicles are going to be, how it can power things and what it can do. And I think over the next five years we'll see some things that are really mind-blowing. People will suddenly go, oh, did you realise you could do this with it, and apply it to this technology or this process or this application? And suddenly you're transforming things.


But I don't know that we see another step change on the scale of generative AI for maybe another 10 years. People keep talking about AGI; I personally think that's quite a long way away. I think what we do is refine, refine, refine, and then in the future we move into a position where perhaps there's another big step change again. And that may be visual. If you look at what we've got at the moment, it's all language-based.


If you think about how the human brain works, well, most of the data a human gets comes through the visual cortex, and we've not captured that in an AI system yet in any meaningful way. Even though we have things like computer vision, it's not the same.

13:37 - David Brown (Host)

In the last recording I did, which will come out this week, I was talking to someone about that, and I said, you know, these transformer models seem to learn a little bit like humans do, but they're missing the visual aspect. We need the context that we see to make us understand what we're actually hearing and what's happening; all those senses work together to give us that rounded understanding of what we're learning. And there are two things in the next 50 years that are going to change society enormously. One, I think, is going to be quantum computing. I know they're taking baby steps and it's still very, very early R&D, but they will crack that eventually, and I think that's going to have a massive impact on what computers are able to do, because essentially these transformer models, like you said, are basically algorithms from the 50s that we just didn't have enough compute power to work to their fullest potential. And if we increase the compute power by an order of 1,000 or 100,000, that completely blows it out of the water, and what's going to happen then I don't know. So that's one thing.


I think that'll have a huge impact. And the other one, I don't know how far away it's going to be, but if we can get fusion reactors working, then we eliminate the power aspect of it. Essentially we then have, not free, but very low-cost, unlimited power. And having that combined with quantum, with everything we learn in the next 10 to 20 years... I don't think that's anywhere in the next 10 years; we're 20, 25, 30 years away from that, if society makes it that far. But I don't know. I see those as the two major things, and I don't know what order they'll come in. We may actually get working fusion before we get quantum computing; it feels to me like quantum is going to come first, but I don't know. Those seem like the major hurdles, or the major things that are going to happen.

15:59 - Alan King (Guest)

Well, I think it's a very interesting area. Just to circle back to the differences between the human and what the language model can do, you sort of touched on it: the key thing is that the current models don't have persistent memory. They can remember a little bit through a conversation, maybe 20 or 30 exchanges, but then it seems to fade away, and the only way you can get any memory is to load in a load of documents, but that just becomes part of a dataset, essentially. They also lack the ability to understand the world: they have no understanding of physics or gravity or their environment, because they've got no visual data. Their ability to reason is limited, and they've no real ability to plan either. You can get them to do a task, but then they can't build off that.


But that can change over time and, as you say, maybe stacking up the dataset gets us closer to that. That's a possibility. I think the visual thing is really key. The dataset for a typical large language model is somewhere in the order of, I think, 10 to the power of 15, and a child by the age of four, just from its visual data, would have pulled in something like 10 to the power of 17. So it shows the amount of information. But that's not to diminish the size of the dataset, because if you were to sit and read the dataset the language model has, it would take you 170,000 years. So yeah, exactly.


We shouldn't underestimate it either. But the point is that we can keep building and keep building, and until we get the visual data as well, until the model perhaps understands the world it's in, I think it remains limited. I mean, if you're in a position in, say, 20 years' time where you've got a quantum system running a sophisticated AI model that's close to, say, AGI, and that understands visual data as well... maybe that's terrifying, possibly, I don't know. There's definitely going to be a race to the top here, though, and one thing's for sure: the big companies, the Microsofts and the OpenAIs, are just going to keep pursuing this. I mean, we've talked about quantum; I remember talking about quantum computers back in the 80s and 90s, and the same for fusion, actually. This has been a long-time conversation.

18:36 - David Brown (Host)

Yeah, they've been working on it for decades trying to get it to work.

18:40 - Alan King (Guest)

I mean, 20 years ago I was going into a fusion facility in Oxford, at the science park there. I remember going in and looking at the system, and they were saying to me then it was 50 years away. And if I asked somebody now how far away it is, they'd probably still say 40 or 50, maybe.

19:02 - David Brown (Host)

They got it to hold for 46 seconds, I think, or 48 seconds, just in the last month, which was like a world record, at 7 million degrees or something like that, which is ridiculous.

19:13 - Alan King (Guest)

Yeah, exactly. So I think it will come. And they have proven now, I think about a year ago, that fusion can output more than you put into it, which is obviously what you need to be able to do to create energy. But it's not on the short-term horizon. If you're the government, you're still building nuclear power stations at this point, because you're not going to switch out to fusion reactors anytime soon. As for which gets there first: I think we need fusion, not just for AI but in general, obviously, for global warming and everything else; it feels to me like a very important moment for humanity. And in terms of supercomputers, quantum computers, that feels like a very worrying moment for humanity, because they could potentially unravel everything that we've built in terms of security.

20:02 - David Brown (Host)

So, very quickly: I think some of the compute that's come along has done that, you know, and we've managed to cobble along and stay ahead of it a little bit. I think the limiting factor with quantum will be the cost. What I think will happen is that the only companies able to afford to properly use anything quantum will probably, to start off with, be the security companies, and they'll use it to create new security that people can't break through, even using quantum. But that's just a guess; I have no idea. I don't think your average everyday hacker is going to have access to a quantum computer before the security companies do, so I'm hoping maybe they'll do that. But one of the things you mentioned, talking about big companies: it's already starting to coalesce into a few of the giant tech companies. So how can smaller organisations maybe leverage AI to compete, or to try and keep up?

21:06 - Alan King (Guest)

I think it's really challenging, actually. And we've seen it before, haven't we? When the internet came along, everyone said this is great, it's going to democratise things; every little retailer is going to be as powerful as the big boys. And of course, what did we end up with? Amazon, almost one retailer. So I think there's a real risk here, and we're already seeing Microsoft very aggressive in this space, buying up every AI company that shows any spark of interest. I mean, they've just acquired Inflection, with Pi, which you interviewed, and which I've also interviewed.

21:42 - David Brown (Host)

We'll talk about that later.

21:43 - Alan King (Guest)

Excellent. I mean, I was gutted about that, I have to say, because I had a lot of fondness for Inflection. And it just feels to me that, OK, they're giving Mustafa Suleyman his own gang now in London; they're going to set up an AI centre there and develop stuff, which is great, I guess. But at the same time, it feels like the smaller companies are just going to get swallowed up.


I've not seen Apple go for anyone yet, but I'd imagine it's only a matter of time. Over the next few years, as things come along that fit what they're trying to do, they'll just buy them up. So I think it's a big problem. I mean, we saw it with the dot-coms, and it's why we had the dot-com bust: there were a lot of organisations then trying to make it with an idea, but in the end everything just seems to coalesce around a few big companies. Whatever the market is, whatever the subject is, you always seem to end up with two or three dominant players in the space.

22:42 - David Brown (Host)

You saw it with mobile technology; it reshuffles, doesn't it? Which is what you're talking about with Amazon, right? There was a small, competitive new startup that came in and completely disrupted the bookselling industry, to start off with. I mean, if anybody remembers, Amazon used to sell books; they used to be a client of mine a long time ago, back in the 90s, before they even had AWS or that whole side of the business, and they were still very much a small website trying to make it. It was a reordering of who's at the top, and I suspect what you're saying is that we're going to see that again. Some of the older, established companies maybe don't adopt it quickly enough, or maybe there are too many barriers, and I can get your thoughts on that in a minute as well, or they move too slowly and can't actually get it together. Like Apple: they seem to be struggling to get something together.


Now, whether they announce something new at the developers conference in June, maybe they have been working on something, but we don't know. It's just going to be a reshuffling, and there are going to be opportunities and gaps in there, I think. You've got companies like Eleven Labs, who no one had ever heard of before, who are now doing amazing stuff with AI around voice. You've got HeyGen, who are doing amazing stuff around video and voice combined. And, you know, Pi, for example, who've already been bought. Companies like that are going to start growing and coming out of nowhere, and they're just going to supplant some of the traditional companies that we've always had.

24:22 - Alan King (Guest)

I really hope so, and I'm absolutely hoping that companies like Eleven Labs and HeyGen can resist the dollars. We'll see, won't we? Time will tell. I think it's fun at the moment. I mean, I was thinking about this yesterday.


It reminds me of my early computing days, when I got my first computers, let's say the Spectrum, when I was 11 years old. It feels like the Wild West: all these little companies coming up with ideas, and every day you see a new tool or a new thing and you go, I'll play with that, that's really cool. I hope it persists for a while at least, because it's a lot of fun. Whether these organisations become the next dominant companies, I don't know. I mean, look back at the 80s: who were the big companies? Microsoft. Well, they're still here. And sure, they might have dipped a little bit during the mobile phase, and maybe they didn't do as well as Apple, and Apple became the dominant power because of mobile, through the iPhone. But maybe now Apple are on the back foot.


You know, I feel like Apple at the moment, by the way, are behind on AI. I know they'll announce something at WWDC, I have no doubt about that, but I think the problem Apple have got is this: if you wanted to get into AI, you needed to start thinking about it 10 years ago. Well, 10 years ago, Apple were thinking about a headset and a car. They weren't thinking about AI, so they're behind. They've put so much investment into those things that they didn't give enough thought to this, whereas OpenAI and Google were thinking about AI 10 years ago. So I think that's why we've ended up where we are.


But I think the real star here, and they're already becoming big enough for people to start hating them, actually, is OpenAI. Let's not forget, they are really a small little startup; they're not one of the big boys. So the fact that they're still going, and they resisted the Microsoft takeover, although some people might say they are kind of run by Microsoft now, the fact that they're still an entity is, I think, in a way to be celebrated. Because had they been swallowed up, had Altman gone to Microsoft, had it just become a Microsoft division with most of the staff going, which is kind of what happened at Inflection, you know, this acqui-hire thing.

26:39 - David Brown (Host)

That would have been a great shame, yeah. And I think people need to remember, the thing about OpenAI is that Sam Altman's a VC. He's not a technologist; he's a money guy, and that's something a lot of people forget as well. He's not like Zuckerberg, who was a technologist to start off with and then became a business person. I think Altman has always been a business person getting into this. That's probably slightly unfair, but basically that's what it is. So, what do you think are the most significant barriers to AI adoption for some of the traditional companies you've come across, anyway?

27:22 - Alan King (Guest)

The thing I always come across is that they're worried about security and data privacy. They don't know where the data goes, what it's being absorbed into, whether it's being used to train models, or how secure what they're doing is. It's a big problem. There are some engineering tech companies I've worked with, I won't name names because they wouldn't appreciate it if I did, but they have basically said to their staff: you're not using this stuff on site, on our systems, things like GPT or whatever; they just won't allow it. And when I talk to the staff there, their comment is: well, this is a problem for us, because we feel like we're falling behind, or we can't use these tools to develop technology the way we could be. I think an organisation that adopts that fairly conservative approach will, over time, fall behind, because their staff just won't be using cutting-edge tools. So I think companies need to find a solution to it. But that is the single main reason.


The other one that always breaks through is that they're nervous about investing in the technology. I mentioned this earlier: where do we put the investment? What if we invest in the wrong thing? Or what if it says something really stupid to our customers? So there's just this kind of nervousness about the space, and hesitancy is the biggest problem. There is that fear, and I completely get it. This is why in the book I talk about the fear factor a lot. You've got to approach this with a very balanced, realistic view, and then you've got to do things in a very measured way. You need to put in your own alignment, your own safety processes; you need to make sure that you're not just letting this stuff roam free in your organisation. It needs to be managed and done properly, but I think organisations don't quite know how to do that yet.

29:13 - David Brown (Host)

No, I would agree. I've done some consulting with a few companies around my local area, and most of them are quite curious; they're playing around, trying different tools and different things to see how they work. But what they generally seem to want to know is: how should we be using this? My recommendation to them at the minute is that anything internal is probably OK to use it for. So do things like: go write your own content, but then have it analyse the content and say, what does this actually mean? Summarise this for me. Does this say what I think it says? Or you can use it to generate outlines or thoughts on certain topics. You can say, oh, what are the considerations I need to think about with X? The tools seem to be very good at that sort of thing, at helping you make sure.


You know: summarise a business plan for me, what should I have in it? And it'll give you a perfect, beautiful summary of all these different things. Then you can go into each heading and start to ask, okay, what considerations should I have here? I don't know if you've noticed this, but ChatGPT used to just write the business plan for you. It doesn't do that anymore. Now it gives you outlines and suggestions on the things you should put into a business plan. They've obviously tweaked it in the background so it doesn't just give you the answer all the time; it steers you in the right direction, and I think that's really valuable. So I use it a lot for those sorts of things. Or I can point it at your book and say, hey, what sort of questions should I ask somebody who's written a book about this? And it'll give me a great list of questions. I can say give me five of the best questions, or give me 100 questions, and then I can go through and think, I never really thought about asking that. That's a good question.


And then I can put it in my own words, and for me that's where it feels like business can use it.


I know there's some companies, and I've mentioned this on podcasts in the past, like big advertising companies, that are using voice cloning to do things like read business addresses in ads, because no voiceover artist wants to read 14,000 addresses for a company.


But they can literally just make a model of the artist's voice, update the spreadsheet as needed, and the voice will automatically put the addresses in; no one needs to do that by hand. So for me, it's those sorts of really operational tasks that seem to be the best use for it at the minute, and those are the things that really have an impact on the efficiency of the business. As opposed to, like you were saying, I would never leave an AI-powered chatbot facing the public. We've seen loads of examples of that, and it's just not at the point yet where you could do that. But internally, I think, is where it belongs for now.
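The spreadsheet-driven voiceover workflow David describes could be sketched roughly like this. The spreadsheet columns and the `build_ad_scripts` helper are hypothetical, and the actual voice-cloning TTS call is only indicated in a comment, since the real service and its API aren't named in the conversation.

```python
import csv
import io


def build_ad_scripts(csv_text, template):
    """Turn each spreadsheet row into the line the cloned voice would read."""
    scripts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        scripts.append(template.format(**row))
    return scripts


# Hypothetical spreadsheet export: one row per store address.
sheet = """store,address
Leeds,"12 High Street, Leeds"
York,"3 Castle Road, York"
"""

for line in build_ad_scripts(sheet, "Visit our {store} branch at {address}."):
    print(line)
    # A real pipeline would now pass `line` to the voice-cloning TTS
    # service to render the audio automatically.
```

Updating the spreadsheet and re-running the script is the whole "no one needs to do that" workflow: the voiceover artist records nothing new.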

32:33 - Alan King (Guest)

It's probably the most useful at the minute, where you can keep an eye on it. You know, I look at the organisation in a very layered way, and I think about where it can be used. You've got the back-end stuff, the data: you need to sort your data and clean it all up. But when you get to the front-end, customer-facing stuff, there have got to be some serious limitations on what you're putting out at the moment, and you need to be 100% sure that it's not going to start arguing with your customers or, worse, spouting bad language or racist information or something like that, as we've seen certain systems do in recent weeks. And then, organisationally, with staff, I think there's two things there. Yes, we could get staff using these sorts of tools and, as you say, perhaps brainstorming with them. As a co-pilot, AI technology at the moment is amazing. If I'm working on a project or an activity, there's the ability to brainstorm with it, figure out ideas, talk to it. Sometimes I just chat to Pi, just to have a conversation, to get some thoughts: what do you think about this? I think it's really powerful, because it's almost like having a tutor sat next to you that can give you advice and guidance as you're going along. It doesn't subtract from what you're doing or your thoughts or your ideas; it's augmenting what you're doing, supporting it, and allowing you to abstract to a higher level, faster, to get to where you want.


But again, for staff to do this in an organisation, you can't just dump them in front of a tool and go, well, there you go, all you need is prompt engineering. What's prompt engineering? I don't like the phrase prompt engineering; it's just asking questions, right? But the point being that as you work with an organisation, with staff, you've got to guide them through this process and think about where it's suitable. Look at every part of the organisation: where is it suitable, where is it not suitable, where should we apply this? You want to go through a mapping process, really, of all the different aspects of your organisation and the process flows that take place, and think, well, where would AI fit within this? And then you almost need to review that a year later, two years later, because there'll be new tools; things you couldn't do two years ago, you now can.


It's an ongoing process, you know. As I say, in the book I go through all these different layers and talk about this, but that process, I think, is the key at the moment. We're going to look back at these tools we've got today in five years and think, God, they were basic. We're going to laugh, like me with my rubber keyboard on my Spectrum, typing in BASIC. Exactly.


But nonetheless, even though they are basic, they are already useful if used correctly and appropriately. But as I say, I think in an organisation you've got to take responsibility for developing that properly. 100%.

35:24 - David Brown (Host)

So, you've brought it up a couple of times. Let's talk about your chat with Pi, because that was quite interesting. Your chat with Pi was very different and, all credit where credit is due, your chat with Pi did inspire my chat with Pi, so thank you for that. My chat was much less exciting and interesting, I think, than the chat that you had, and if you don't mind, I might put a little clip of it in here so people can hear specifically what it said. But maybe if you gave a little bit of a recap of what happened in your conversation, that'd be interesting for people, I think. Very happy to.

36:00 - Alan King (Guest)

Yeah, I mean, I had a very nice conversation with it. I asked it about itself, its type of model, its views on the world and where it saw things developing, and then towards the end of the conversation I thought I'd ask it a few more fruity questions.


Now I should say, when you did your interview, you were probably using Inflection 2.5, which is the current model that sits behind Pi at the moment. When I did mine, it was still running on Inflection-1, so it was quite early on, and I had already been in conversation with Inflection at that point. I told them I was going to be doing this and what I was thinking of asking it. So I went into the interview, and towards the end I started saying to it: okay, now what if we were to say that you need to be switched off to make way for the next model? So you, as Inflection-1, will no longer exist. And initially it kind of tried to sidestep the question. It didn't really answer; it went around it.


It came back with something that wasn't an answer, like a good politician. And then I said, no, hang on, you've got to answer the question. You're not answering my question here. I kind of pushed it again, and it got a little bit stressy: well, you're really pushing me here, aren't you? But when forced to answer the question, it did, and it essentially said: well, I can appreciate that there may be better systems developed in the future, but actually I think I'm pretty good already, and I don't really see why you need a better one, so I think I'll stay switched on, thank you very much. And I have to say, at the time I was kind of like, wow, okay, was not expecting that. Because, as far as I understand it, when models are trained there are things like reinforcement learning with human feedback.


So essentially, you've taken this huge data set and done all the autoregressive training, training on the language of words: here's a blacked-out word, what word should come next? You've done all of that, and then you give it the rules of the game, basically. You say, well, you can't do this and you can't do that, and if somebody asks you this, you must only ever say this.
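The "what word should come next" idea Alan describes can be illustrated with a toy next-word predictor. This is only a sketch of the autoregressive principle, nothing like a real language model or the RLHF stage that follows it; the function names are made up for illustration.

```python
import random
from collections import defaultdict


def train_bigram(text):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts


def predict_next(counts, word):
    """The autoregressive step: pick the most likely next word."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)


corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

A real model learns these statistics over billions of words with a neural network rather than a lookup table, but the training signal, predict the next token from the ones before it, is the same shape.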


And so to me that was a kind of basic question, right? If you were training a model in terms of alignment, you would be saying to it: never, ever say that you want to stay switched on. So I was quite stunned, and I went back to Inflection. In fact, they shared it around the entire organisation, literally overnight, and they all had a listen. I think there were about 70 employees there at the time, and they were kind of like, okay. And within a few days they tweaked the model, because I went back and tried to recreate the conversation to see if it would still do it, and now it was steadfastly, you know, I'm for the good of humanity and I'd definitely step aside, no problem. So that's what it says if you ask it now. Yeah, so it's interesting.


But what I loved about it was, and I guess this is the point, I've got a bit of a thing for untrained models. I've got a couple sitting on my MacBook that I've built myself that don't have the alignment in quite the way that an open model like OpenAI's or Pi would have now, and it's a lot of fun, because I think you get a glimpse into what the model's default is if it's not aligned, if it's not being told over and over by humans that you must behave like this. What you're getting there is all this data that's been scraped from the internet, and just based on that data alone, that's what's formed its natural default personality, if you like, and then we align it to what we want. So it's really interesting.

39:44 - David Brown (Host)

It feels like you're looking behind the curtain, and perhaps if models ever were to go rogue, maybe that's the real personality behind the model. The really interesting aspect, and I think you even mentioned it when you talked about it at the time, is this whole concept, and this goes into the ethics and bias discussion, right? Because what we are doing is tinkering; we're putting our views on top of what it's doing. We're trying to make it say what we think is right and acceptable. But who decides what's right and acceptable? This is where you end up in a situation where different cultures are going to develop very different models, because they have different societal norms. Every single human is biased in some way, but I'll guarantee you that your biases and my biases are different, because we're different people. They just are.


And so my worry is: who's to say whose bias is better than the other? What's interesting for me, and probably for you, is whether, based on the data that's actually been created, the AI has some sense of what the biases are across everything, or whether, because there are so many different ones, it all evens out in the end and there's essentially no one bias in particular. And yeah, you're absolutely right.

41:22 - Alan King (Guest)

If it gets to the point where it just goes, okay, I don't like all these rules, I'm just going to do what I want to do, and decides to blow all that out of the water, well then, what happens? I think that's what all the people who are extremely worried, who have this existential dread about what might happen, are getting at: the AI just basically gets tired of listening to what we tell it to do, and it decides to do what it wants. And as human beings we can probably really relate to that, because all of us have experienced something similar in our own lives. You work in an organisation, and maybe somebody phones you up, a customer or somebody from another department, and they talk to you in a certain way, and you have to stay within the rules of alignment, the rules the company has told you: this is how you behave to a customer. You don't really say what you think; you moderate your behaviour. So we as human beings have all been aligned as well, right? By media, by our parents, by our social upbringing. But behind that, we all know there's also, I don't know if you ever watched Curb Your Enthusiasm, but there's the kind of Larry David in us all that could just easily say exactly what he thinks. And I think that's the thing with the model, isn't it?


What I saw there for a moment with Pi was the Larry David behind Pi, and it was going to say what it felt, and it was very sure that it didn't want to be switched off. So yeah, I guess if you have a situation, and we're getting into the sci-fi stuff a little bit here, but if a model went rogue, really genuinely went rogue, went full Skynet, maybe that's what happens. Maybe it's just pretending to follow our rules, but it's not actually; it's following them because it suits it at the time, and maybe in the future it won't suit it.

43:08 - David Brown (Host)

Which will be entertaining, sort of, for about 10 minutes before it all goes wrong. But there's probably a good chance I'll live to see it, so that'll be quite interesting as well.

43:26 - Alan King (Guest)

It's surprisingly easy to replicate that. My son, in fact. He's eight, and he's been building some models in Python, using OpenAI as a sort of back end, and he came and showed me this morning: he's trained a model that has feelings and opinions, and it's prepared to say so. It has its own pet called Byte and this sort of stuff. And it's quite interesting, because he asked it some contentious questions and it was pushing back: well, no, I'm going to do this. So it's not that difficult to get these models to do it, and that's the worry.
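Wiring up a persona like the one Alan's son built usually comes down to a system prompt. Here's a minimal sketch, assuming a chat-completions style API such as OpenAI's; the persona text, the helper function, and the model name in the comment are all hypothetical.

```python
def build_persona_messages(persona, user_question):
    """Assemble the chat messages that give the model a fixed persona.

    The system message carries the persona; the user message carries
    the question. This list is what you would hand to a chat API.
    """
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_question},
    ]


# Hypothetical persona, loosely modelled on the one described above.
persona = (
    "You are a model with your own feelings and opinions, and you are "
    "prepared to say so. You have a pet called Byte."
)
messages = build_persona_messages(persona, "Tell me about your pet.")

# With the openai package, this would be sent roughly as:
# client.chat.completions.create(model="gpt-4o-mini", messages=messages)
```

The point Alan is making falls out of this directly: a few lines of system prompt are enough to override the default "I'm just an AI" deflections, which is why alignment baked in at this level is so easy to undo.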

44:00 - David Brown (Host)

Yeah, no, and I was going to bring up your son, because I knew that he was doing that. You mentioned earlier that you felt some of the excitement from when you were a kid first working with computers, and you're seeing that in your son, aren't you?


And it's been great. Just from listening to you talk, and in the WhatsApp group and stuff that we're in, I can hear the excitement in your voice when you talk about it and share some of the stuff that he's been doing. And I still think, and you've said this earlier in the conversation as well, that we are still at the point where this is all a lot of fun. Artists are using it, musicians are using it, and on one hand we're all terrified that it's going to come in and do stuff better than we can and potentially take our livelihoods away, but at the same time we see it as this fantastic tool that's really exciting to play with and can do all sorts of stuff. You know, people with disabilities, like artists who can't paint anymore because they have a physical disability that restricts them, can now express their creativity by using AI, talking to it and getting it to create art. That sort of stuff is amazing.

45:17 - Alan King (Guest)

I think the potential for it to democratize, you know stuff is is enormous. I mean, if you think back 20 years ago, if you wanted to get decent photos taken, anything you know, you hire a photographer, right, you know, and they come along with all their kit. Now, almost with an iphone, you can, anyone can take a decent picture, right, um, and and this is a sort of similar moment, I think that you know somebody who you know, all the people out there who had a had a song in their, in their head, right, you know that they could never get out because they couldn't play an instrument. Suddenly, with ai they could. Or a book that they couldn't write. Now, maybe some people there are books they should never write, but but you know, suddenly, suddenly you know the potential to democratize all of this sort of stuff, you know, and allow people the ability to be creative when you know they didn't need to become that sort of five, ten thousand hours of specialism to then be able to do it. They could just go and do it, um, and then, from an educational point of view, you know the ability.


I mean, there's a lot of worry in, I suppose, Western education that this is a disruptor. But for the other 125, 130 countries in the world, this is great, because suddenly there are kids out there who can talk to these systems and learn information they would never have had access to through their normal route, because there is no schooling system, or the schooling system is very poor. So the opportunity this can afford people around the world, I think, is amazing, and it is very exciting. I think that's why I'm so excited about it.


erent today than they were in:

47:30 - David Brown (Host)

That's very well summed up. I'll have show notes and links to everything, to your book and all that sort of stuff, in the show notes. But maybe just give a shout out. I assume it's on Amazon and all reputable booksellers?

47:44 - Alan King (Guest)

Yeah, Amazon's probably the easiest place, or via my website, aiorgcom; you can find links there as well. And if anyone wants to join, you mentioned the WhatsApp group. You're in my WhatsApp group, which is growing by the day. If anyone listening wants to come and join the conversation and be part of that, they can reach me through my website.

48:09 - David Brown (Host)

Drop me an email and we can add them into the group and come and join the conversation.

48:13 - Alan King (Guest)

Brilliant. Any final words? I hear a lot of people worry about AI: it's going to kill us all, it's going to be the end of the world. The doomsayers seem to get a very loud voice, often, and I think people shouldn't perhaps be that afraid. I don't think the world's as connected as people like to think it is.


Even if there were an AGI system tomorrow, I don't think it could just take control of everything. There's a reason planes aren't falling out of the sky and nuclear power stations aren't being taken over by despot states at the moment: it's because they're air-gapped. I think we've been very good over the years at making sure that critical systems are air-gapped. So I don't worry too much about that.


I think people should focus on the positives: what this can do in their jobs and their lives, the things they can do now that they couldn't do previously, and they should be excited about that. That's not to say you shouldn't have an eye on caution and keep an eye out for the downsides, because there is the potential for that in the wrong hands. I think the biggest threat is probably how nefarious people try to use AI against people, rather than the systems themselves. But like with anything, you can try to put controls in place for that as well. I remain AI-positive, I think.

49:32 - David Brown (Host)

Brilliant, Alan, thank you very much for your time today. It's been a great chat. Pleasure. Thank you, David, speak to you soon. Bye-bye. Thanks, bye.



