Artwork for podcast Creatives WithAI™
Creative Intelligence and the Role of AI with Dudley Nevill-Spencer
Episode 55 • 7th June 2024 • Creatives WithAI™ • Futurehand Media


Shownotes

Today's conversation covers a wide range of topics related to AI, machine learning, virtual influencers, and the future of technology. Dudley Nevill-Spencer, Chief Innovation Officer at Live & Breathe, shares insights on audience intelligence AI, operational AI, virtual influencers, and the impact of AI on the future.

The discussion also delves into the potential of quantum computing, the role of regulation in AI, and the importance of creativity in the future.

Takeaways

  • Audience Intelligence AI involves using language analysis and meta-language analysis to understand the emotions and psychology of target audiences for effective advertising.
  • Operational AI involves using AI to improve the performance metrics of operations within a business - so job ‘x’, like writing a brief, normally takes 16 hours; with operational AI, it takes 8 hours and is twice as good.
  • The use of AI in creative intelligence allows for the generation of multiple images, words, and stories to aid in the creative process, although it is not yet at the point of perfect ad output.
  • The development of virtual influencers and virtual assistants owned by brands has the potential to revolutionise customer engagement and brand-consumer relationships.
  • The future of technology, including quantum computing and AI, presents both exciting and challenging prospects, with the need for regulation and a focus on creativity and human touch.
  • Encouraging creativity and storytelling in children is essential for preparing them for a future where human creativity and individual ideas will remain crucial.

And to answer the quantum computing energy question, I found this: "Today, quantum computers’ electricity usage is orders of magnitude much less than any supercomputer, and this is counting all the different quantum architectures available." (Link to the full article below)

Links relevant to this episode:

Thanks for listening, and stay curious!

//david

--

Tools we use and recommend:

Riverside FM - Our remote recording platform

Music Radio Creative - Our voiceover and audio engineering partner

Podcastpage - Podcast website hosting where we got started

Transcripts

00:00 - David Brown (Host)

Dudley, thanks very much for saving me today because I had somebody drop out, and you very nicely agreed to come on and have a quick chat with me this afternoon. So, first of all, thanks for doing that.

00:12 - Dudley Nevill-Spencer (Guest)

Call me Last Minute Larry.

00:14 - David Brown (Host)

Okay, and just before we get started, we'll do the traditional podcast thing. So if you can tell everybody listening, just give them a little bit about who you are and what your background is, so they kind of understand where you're coming from.

00:30 - Dudley Nevill-Spencer (Guest)

So I'm Dudley Nevill-Spencer, Chief Innovation Officer at an integrated marketing agency called Live and Breathe, and also co-founder of the Virtual Influencer Agency, which I will explain more about later, I'm sure. But by way of background, I actually started off in life back when I was just 20, working for a US investment bank, where I was a futures trader for eight years, including in the pit, where you shouted at each other with funny hand signals and everything.

01:06

d use machine learning around:

02:29 - David Brown (Host)

Yeah, of course, and I know before we actually got on the call there were a couple of different areas you said you might want to talk about that you were working on. One was trying to help streamline the creative process in agencies, but more from an operational standpoint, which I think is really interesting because that's what we're starting to see in a lot of industries now - using AI and some of the new tools to do exactly that - so I'd love to dig into it. And the other was going back to what you said a minute ago about understanding people, and being able to use AI to understand the psychology of humans to do that stuff. So I don't know which one of those you want to attack first.

03:16 - Dudley Nevill-Spencer (Guest)

Yeah, I think the operational AI is really interesting. So the focus to date, from an AI point of view, has been on trying to use language analysis and sort of meta-language analysis to understand the emotions of different target audiences and then, from that, figure out what their psychology is and, therefore, what kind of ads or words or images or stories you should tell to generate awareness or consideration or loyalty. So if someone's neurotic, you know, show them an image of a bunch of people having fun, so they think they're missing out. At its simplest, if there's someone who's full of agreeableness, then show them a bunch of people happy together, all cheersing. That's going to attract them.

04:02

At its basics, yeah. So that's from the research point of view, and that then sort of moved into creative intelligence - trying to generate multiple images, words and stories quickly so that you can create really good pictures, or really good what used to be scamps, which is what our creative division at Live and Breathe are doing now. It just helps you visualise the ideas a lot faster and also generate more ideas. We're still not at the point where you can perfect that and just have an output of an ad, even if it's on social - if you do, it's very, very simple, way too simple - so we're not there yet, but it really helps with that creative process and something you talk about an awful lot, which is collaboration between the AI and the human.

04:49

But where I'm starting to move now, and where we as an agency are starting to move at Live and Breathe, is using RAG to try and retrieve assets - say for the research team. You've got thousands of asset documents from 30 years of research work, and there are thousands of strategic routes or creative routes sitting on your server. So it's getting the machine to find the relevant ones for you, package them together, summarise them and say, right, here's a bunch of stuff we've done before.

05:27

Let's use this to stimulate the next pitch or the next brief or the next piece of work. So that's much more operational, and, as an AI product developer, the goal you're always looking for is to take a job and create a benchmark. So job X takes Y hours, and then, when you put the AI tool on it, job X takes Y hours minus 50%. What that tool is really looking to do is reduce pitch time, briefing time and creative time.
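
To put rough numbers on that benchmarking idea - a minimal sketch, assuming you simply log how long the same job takes with and without the tool; the figures here are invented:

```python
from statistics import mean

# Hypothetical timings (hours) for the same job, e.g. writing a brief,
# logged before and after an operational AI tool is introduced.
baseline_hours = [16.0, 15.5, 17.0]   # job X without the tool
with_tool_hours = [8.0, 7.5, 9.0]     # job X with the tool

def reduction(baseline, with_tool):
    """Percentage reduction in average time for the same job."""
    return (mean(baseline) - mean(with_tool)) / mean(baseline) * 100

print(f"Time saved: {reduction(baseline_hours, with_tool_hours):.0f}%")
```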

05:59 - David Brown (Host)

Interesting, and just for the people who don't know, can you explain what RAG means?

06:04 - Dudley Nevill-Spencer (Guest)

Yeah, it's like asset retrieval.

06:08

So, at its simplest, what we're really talking about here is an interface where you get to talk, for example, with a GPT, and then it goes off as an agent.

06:17

So it's doing something for you as an agent of you, and it's going off and finding things in multiple servers which it thinks are going to be useful for you. So it's asset retrieval, but it's a machine retrieving the assets for you. At its simplest, instead of typing something like 'non-alcohol analysis' into that little search icon on Windows and hoping it pulls something up, it's like talking to your very own data analyst and saying, go and find me everything on that. It goes into all your servers, it pulls it all out for you, it summarises it all for you, and it also ranks it in what it thinks is the order of relevance. And because you've got a GPT on top of it, it talks to you and it can communicate with you.
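
For anyone who wants to see the mechanics behind that, here is a stripped-down sketch of a RAG (retrieval-augmented generation) loop - embed a query, rank stored documents by similarity, then have a chat model summarise the top hits. The document store and model names are illustrative assumptions, not the agency's actual stack:

```python
# Minimal RAG-style sketch: embed a query, rank stored documents by similarity,
# then hand the top hits to a chat model to summarise. Everything here is
# illustrative only - made-up documents and off-the-shelf model names.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = {
    "brief_2019_energy_drinks.txt": "Strategic routes for a fizzy drinks launch...",
    "research_2021_beer.txt": "Audience research on non-alcoholic beer buyers...",
}

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scored = sorted(
        documents.items(),
        key=lambda kv: -float(np.dot(q, embed(kv[1]))),  # similarity ranking
    )
    return [text for _, text in scored[:k]]

def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarise the retrieved assets for a strategist."},
            {"role": "user", "content": f"Question: {query}\n\nAssets:\n{context}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("What have we done before on non-alcoholic beer?"))
```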

07:06 - David Brown (Host)

So it's faster because of that interaction, which is amazing. And I found an article last night talking about RAG around AI, and I think the context of it was something I've talked about, probably for the last year, which is that the model they were describing was more like a federated model. So what you have is a controlling AI, and if you ask it to play chess, it doesn't try and play chess itself.

07:35

It uses an API connection to go to a chess AI, and then it brokers the conversation. It's a similar type of thing - it's able to go out to other sources to get a specialist in that area, or to look for data across the systems and stuff. I'm literally just sharing that on LinkedIn today, but yeah, it's really interesting - that's a much more complex version, and that is where we're going.

07:58 - Dudley Nevill-Spencer (Guest)

But even the version I was talking about before is actually RAG within your own system, within your own file network, and actually it's really difficult to do. That hasn't been nailed yet. So again, you know, it's always this thing where we're looking at the exciting stuff, but when it actually comes down to operational use, does it work? We talk about agents pulling information from other GPTs or other agents, but if you keep it really simple and you just talk about trying to pull information from your own servers, then to actually make it work properly you really still need someone to tag everything with metadata. And if you've got 20,000, 30,000, 40,000 documents, somebody still has to go through it. You can use clustering algorithms to look at the words within each of the documents and come up with its own taxonomy, or you can give it a k - you know, you can give it a taxonomy, say 30 different topics.

09:03

So if you're an advertising agency, it might fit into one of these boxes, right - find the nearest fit into one of these topics and put it in there. But guess what? It's not perfect; it doesn't work. To make it work really, really well, you still need a human to go in there. And it's like that with all AI at the moment. What you see out there advertised as product - when you actually get into the weeds of trying to implement it as a business, there's still an awful lot of intricate work you need to do to make it perfect. And when you've got a business, it's got to be perfect, right? You can't have it pulling in a bunch of research or strategy or ideas from, you know, car research and applying that to research about non-alcoholic beer, because it's going to mess the whole thing up.
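
As a rough illustration of the clustering route Dudley mentions - giving the machine a k and letting it group documents into topics - here is a generic sketch with made-up documents, not the agency's pipeline:

```python
# Illustrative sketch only: cluster documents into k topics with TF-IDF + KMeans,
# the "give it a k" approach described above. Documents and k are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Creative routes for a non-alcoholic beer launch targeting younger drinkers",
    "Audience research: electric car buyers and range anxiety messaging",
    "Strategy deck: fizzy drinks category entry in European markets",
    "Brief: sustainability storytelling for an automotive client",
]

k = 2  # number of topics you hand the algorithm
vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(docs)

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

for doc, label in zip(docs, labels):
    print(f"topic {label}: {doc[:60]}...")
# As noted above, the assignments still need a human to check and correct them.
```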

09:50 - David Brown (Host)

Yeah, it's a great point, and it's getting better and better every day. In my experience it's way better at that type of task than it is at a lot of other types of tasks. If you point it at something and ask it a question about what it sees, it's much better at analysing that than it is at an open-ended question, where it can end up hallucinating. But there was something else I wanted to go back to that you mentioned when you were talking about understanding psychology and people of different personality types. How easy or difficult is it to work that out from looking at people's profiles from an advertising standpoint? I'll just leave it at that - how easy or difficult is it to do?

10:44 - Dudley Nevill-Spencer (Guest)

So the first thing is, you know, it's not perfect, right - it's really signposts pointing you in a direction, which is much better than not having any signposts. If you've identified really good target audiences, then you can scrape their language from social, and you've cleaned out all the bots and all the brands, which again you kind of have to do manually to get it to really work. So it's the same thing: you can get the machine to pull it all in - what we call the machine-qualified data - but then you still need a human, one of our team in the research division at Live and Breathe, to go through it and go, yeah, right person, yep, right person, yep, right person.

11:26

So we like to try and get a thousand to two thousand of those individuals. All of them are anonymised, so the client will never, ever see who they are, and in our systems there's no reason to, so we never see who they are either - it doesn't matter. But then you go and analyse their language. Now, from all of the research we've done, we've seen that if you can get about 5,000 words of someone chatting, you really do understand them. It's just the nature of human language that the way we talk is really an expression of our psychology, and you can't get away from it unless you're making a concerted effort to pretend to be someone or something else. And if you do that, then yeah, it doesn't work.

12:05

But most people on social absolutely don't do that. Then you tag onto that what interests they have, what accounts they've engaged with, what content they've shared, and you start to get this incredibly accurate map of who they are. So the way we work is - let's say we're in a category, fizzy drinks - we'll find people who have talked about purchasing fizzy drinks, so you know they're a category participant. Then you extract a thousand, two thousand of them in each territory, run your analysis to understand their psychology, and you're looking for massive over- or under-indexing in terms of psychology. Then you create the copy and the imagery around those over- or under-indexes, and it works very nicely.
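
To make the over- and under-indexing idea concrete, here is a toy sketch with invented trait scores (nothing to do with the real model): compare the audience's average scores against a population baseline and flag the big deviations.

```python
# Illustrative only: compare an audience's average personality-trait scores
# against a population baseline to find over- and under-indexing traits.
# The scores here are invented; in practice they'd come from language analysis.
population_baseline = {"openness": 0.50, "agreeableness": 0.50, "neuroticism": 0.50}

audience_scores = {  # mean scores for ~1,000-2,000 scraped category participants
    "openness": 0.55,
    "agreeableness": 0.68,
    "neuroticism": 0.31,
}

for trait, score in audience_scores.items():
    index = score / population_baseline[trait] * 100  # 100 = in line with baseline
    direction = "over-indexes" if index > 110 else "under-indexes" if index < 90 else "in line"
    print(f"{trait}: index {index:.0f} ({direction})")
# Over-indexing traits drive the copy and imagery; e.g. high agreeableness
# suggests imagery of people happy together, as in the example earlier.
```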

12:50 - David Brown (Host)

r a company back in the early:

13:07 - Dudley Nevill-Spencer (Guest)

That's brilliant, that's amazing, isn't it? What are they there for? And then give it to them.

13:13 - David Brown (Host)

Exactly. We worked with an airline - I won't say which airline specifically, but we worked with an airline - and we got a 44 percent uplift on people booking extras like rental cars and hotels and transfers and all that sort of stuff. And they ended up turning it off, because it was outperforming other parts of the business, and the people in the other parts of the business essentially got angry because they now weren't the best-performing part, and politically, internally, they forced them to turn it off.

13:50 - Dudley Nevill-Spencer (Guest)

That is a great story. So now you could attach that to some generative content on the web, right? Not just pulling it from afar, but actually generating it on the fly and presenting it. Yeah, brilliant, I love it. Let's do it.

14:03 - David Brown (Host)

Yeah, which is crazy - that you can do that - and I don't think a lot of people realise all the stuff that happens in the background, and has done. That was just simple behavioural analysis, and we were doing some predictive modelling. We'd go in and do a big study, figure out which variables were predictive, and then you could just target people based on that. It was really, really simple. But, like you said, now with the new tools you could start a conversation.
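
For anyone curious what "figure out which variables were predictive and target on that" can look like, here is a generic sketch with fabricated data - the technique in miniature, not the actual airline project:

```python
# Illustrative sketch: fit a simple model to find which variables predict
# booking an extra (car hire, hotel, transfer), then target the likeliest customers.
# The data is fabricated - this shows the general technique only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 15, n),   # days until departure
    rng.integers(0, 2, n),    # travelling with family (0/1)
    rng.integers(1, 20, n),   # previous bookings
])
# Fabricated outcome: family travellers booking further ahead buy more extras.
booked_extra = (((X[:, 1] == 1) & (X[:, 0] > 7)) | (rng.random(n) < 0.1)).astype(int)

model = LogisticRegression().fit(X, booked_extra)
print("coefficients (which variables are predictive):", model.coef_.round(2))

# Score new customers and target the top of the list with extras offers.
scores = model.predict_proba(X)[:, 1]
print("top prospects:", np.argsort(scores)[::-1][:5])
```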

14:30 - Dudley Nevill-Spencer (Guest)

You can dynamically create stuff, if you could be confident that it wouldn't go completely off the reservation and just start doing something crazy. Yeah, and that's the trick right now. That kind of brings us to the virtual humans thing. So let's say you want to get people into your system, you want to create a relationship with them, you want to get them onto that website talking to a virtual assistant or whatever. Before that happens, how do you create awareness? By ads, or by virtual influencers - by a character that a brand owns, that lives on social and represents the brand, with a backstory.

15:13

So it's like an entertainment concept. There isn't a lot of it happening yet, but there is a whole ton of interest and a whole load of discovery projects going on, because it really feels like brands have woken up to the fact that at some point you're going to be able to generate characters very quickly.

15:40

You're going to be able to ask a GPT to have a particular personality and communicate with that personality, and our social feeds are going to be full of generated characters. We think everything should be watermarked so you know who owns it - whether it's made by a creator to earn money, whether it represents a real human, or whether it's owned by a brand. But if you create one as a brand, then every single follower you have is a bit of data you own, as opposed to using a traditional influencer, where you don't own any of that and all you're trying to do is get that influencer to point to your website or your channel and hope that some of the followers come along. Whereas if you create your own character, you own all of them.

16:24

And GPTs can really help with that conversation, but they still need moderation at the moment. You can have much better, much quicker, much cheaper conversations with multiple people going on at once, with just one person community managing and just authorising, authorising, authorising. So that's where it's at at the moment, and that does really work.
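
That "one person authorising" workflow is essentially a human-in-the-loop queue. A very rough sketch - generic code, no real platform API - might look like this:

```python
# Illustrative sketch of the human-in-the-loop moderation described above:
# a GPT drafts replies in the character's voice, and one community manager
# approves or rejects each draft before anything is posted.
from dataclasses import dataclass

@dataclass
class DraftReply:
    fan_message: str
    draft: str
    approved: bool = False

def draft_in_character(fan_message: str) -> str:
    # Placeholder for a GPT call made with the character's persona prompt.
    return f"[in character] Thanks for the message - you said: '{fan_message}'"

queue = [DraftReply(m, draft_in_character(m)) for m in [
    "Love your last post!",
    "What do you think of the new flavour?",
]]

for item in queue:
    print(f"Fan: {item.fan_message}\nDraft: {item.draft}")
    item.approved = input("Authorise? (y/n) ").strip().lower() == "y"

to_post = [i.draft for i in queue if i.approved]
print(f"{len(to_post)} replies authorised for posting.")
```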

16:48 - David Brown (Host)

There are a couple of really famous examples of that, aren't there? There are a couple of virtual influencers that I've seen in the news - I don't remember specifically which ones they were, but they tend to be women, in my memory - and some of them have millions of followers. It's crazy.

17:07 - Dudley Nevill-Spencer (Guest)

So there are loads of virtual influencers owned by creators that have done really, really well. Imma.gram is a really great one over in Japan - I-M-M-A dot gram. If you have a look at her, she's really good, because she's done lots of really big brand collaborations, with brands like Ikea and Apple and all kinds, and has lots of interaction with the audiences. Really great engagement rate, really nice content, really great character. And years ago, when I used to talk about this, people used to say, oh, I'd never talk to a robot, it's just crazy. Now, of course, no one says that anymore - no one's said that to us for about six months. But there were these articles we used to do, like in Adweek and everything, and it would be like us against, you know, the editor or whatever: you'll never talk to a robot; well, this is why you will talk to a robot.

18:05

But, yeah, that conversation's gone, thank goodness. Imma.gram's a really good one to look at, but there are also a number of brands that have started their own characters. In France, Brittany have created their own character, which talks all about the coast and the food - a really lovely character - and then communicates in text on Instagram and on other channels. There are a lot of those coming out now, and that is what has just started to take off. What we're trying to do is create virtual influencers on social, but then use that as the awareness part of the funnel, get people onto the website and then have that character on the site as a virtual assistant.

18:51

Of course, the personality is then shrunk. You don't want them talking in full personality, because at that point they're fully representing the company and really it's customer service - and at that point you're chatting more about product, not about general life. But that is a really good funnel from awareness to consideration, all the way to purchase and loyalty. That's what we're trying to connect at the moment, and that's what a lot of companies, big companies, are really interested in running tests on.

19:28 - David Brown (Host)

Interesting. So where do you see this going over the next, you know, sort of five or ten years? Are we going to get to the point - and I know everybody talks about this, and I've said loads of times that I would love to have a personal assistant that would essentially be a real virtual assistant that could actually do things for me - but do you think, as a brand or an organisation, we're just going to have somebody like that, where instead of having a chat box down in the corner it's actually going to be the picture of a person, there's going to be an avatar, and you'll just be able to chat with the avatar and it will do everything that way?

20:07 - Dudley Nevill-Spencer (Guest)

Yeah, there's a load of issues with that. There are a lot of companies that tell you you can already do it, but you can't do it easily and you can't do it with brand safety yet. It's very close, though, so I think what you'll see is iterations, right. The first thing is, at the moment, through a nice text chat, you can get decent conversation which is very brand safe within, effectively, a conversation tree - certain subjects are fully safe and fully automated, and if you go outside of that, then you probably need a human to come in. I think that's where we are at the moment. Very soon - and we are actively doing this - there are a couple of things, such as booking tickets, booking restaurants, buying things, where when you, David, ask this particular company and this avatar agent to do it for you, the only time you'll have to hit the keyboard is when you have to put in your little password for your credit card.

21:13

So that point is happening right now, and you will start to see some of those come up - I would like to think properly, with some small companies and some big companies doing it, but big companies in the next three to six months. So a few functions being done for you. However, when it comes to being able to chat to the character, and it looks at you and chats back to you - the problem there, of course, is that we can convert the text the GPT comes up with into lip syncing relatively straightforwardly, but there is always a streaming cost to do that, because it takes a lot of compute with a GPU to convert that text onto a rigged face and make it say it on the fly, quickly. It just costs a lot. So with quantum computing coming in, I think you'll be able to do that and it will fly, but access to quantum computing is still too limited and the cost too high at the moment, so we're not there yet.

22:08

So I think it's within the next three years that that starts to happen. But then that brings in the whole question about power consumption, because of course GPTs suck up so much power and so much water and so much energy. I mean, I'm hoping we bring on some fusion tokamaks soon and have unlimited energy - exactly, then that's not a problem.

22:31 - David Brown (Host)

I have a question on that, just not to distract, but building on the quantum thing. I've said for a long time that I think quantum is going to be the next major leap in AI, and we probably won't get to a true AGI until we have a workable quantum computer, but nobody ever seems to talk about that. So it's interesting that you brought it up. And secondly - I don't know this - but do you have any idea what the power consumption is like on a quantum computer as opposed to a normal one? Would we get a thousand or a million times the processing power for the same amount of electrical power, or does it need massive amounts of electrical power? I don't know the answer to that. I don't know if you do either.

23:16 - Dudley Nevill-Spencer (Guest)

Yeah, when it comes to power per output, I couldn't give you an answer on that. I don't know. I just know more about the speed of compute and the volume of actions.

23:30 - David Brown (Host)

I'll take that away for the audience as an action point, and I'll put it in the show notes if I can figure it out.

23:35 - Dudley Nevill-Spencer (Guest)

It's a really good one to look at, isn't it? Because you always like to look at things in three- or ten-year timeframes, and I think in three years we will have lots of interaction going on, but it's not going to be ubiquitous until we get that quantum thing coming, which is probably five to ten years away. So it's always that thing, isn't it, where you have the technology, but then you also need the power, the distribution, the adoption. We might actually have the technology almost right now, but we don't have those other things to make it all work. But a stripped-down version, which could really affect CRM and improve awareness, consideration and sales - yeah, we've got that now. It just needs to be implemented properly.

24:22 - David Brown (Host)

Yeah, 100%. And like you said, it also feels to me like a lot of that stuff is going to happen around the same time.

24:31 - Dudley Nevill-Spencer (Guest)

Like you know.

24:32 - David Brown (Host)

I mean, our generation has seen - well, I'm assuming you're around my age - but our generation has seen everything from black and white TVs to now having AI, and if we're lucky, we'll all still be around when we have essentially unlimited power. We'll have AGI, we'll have quantum computing, we'll have all these things. It'll all arrive at the same point, and it's going to be like a crazy 10-year period when all these technologies are coming out and they're all going to figure out how to work together. I mean, it's equally scary and equally exciting at the same time, but if we can manage to do good with some of it, it could be incredible.

25:16 - Dudley Nevill-Spencer (Guest)

Yeah. I'm always an optimist, but I'm a realist at the same time, and I completely agree that you're going to get this coalescence. I also think you're going to get an AI pushback, and you've kind of started to see it. You get the hype, and then you get people saying, well, I'm not using it, it hasn't changed my life, even though it's affecting absolutely everything. You're going to get a bunch of that for a while, for sure, because you always do. But where I'm hopeful is that, unlike the last leap, when we had the internet and then social, there was no regulation, right, and we messed that up so badly - with such dire consequences that we're feeling today, with disinformation and young people suffering so much, with the fracturing of groups and society because of the way we messed that up with no regulation - that you're seeing an awful lot of regulation now with AI, and you're seeing politicians competing to be at the front of the regulation curve, which traditionally I wouldn't have liked. But actually, for this, I think it's brilliant. And you've got everyone involved - you've got China involved, you've got the US involved, you've got the AI summit - and I really like the way the UK is approaching it, which is really about outsized negative effects.

26:45

And then the way Europe's approaching it is completely different, which is thousands of individual laws, which we need. And the US is completely different again, which is, well, we're going to motivate you by saying if you want any government contracts, you have to do this, this and this, otherwise you can't get any of our billions - which is also really needed. And then there's the thought about licensing GPTs, you know, and if someone uses one without a licence... It's great, it's really, really good. Will it be enough? I'm not sure. I'm very hopeful, and we're certainly in a place for this next step which we've never been in before, with more awareness from politicians and more awareness from the public, who are demanding it. So I'm hopeful.

27:28 - David Brown (Host)

And a slightly personal question: do you have kids?

27:31 - Dudley Nevill-Spencer (Guest)

Yeah.

27:34 - David Brown (Host)

How do you feel about the future they're moving into? Because I have five kids as well - I have a 17-year-old son - and, you know, it's a crazy world he's trying to move into, and trying to help him understand what skills he might want to have or what direction he might want to take is pretty confusing at the minute.

27:54 - Dudley Nevill-Spencer (Guest)

Yeah, it is really confusing, because it moves so quickly. So, you know, I've got two children, 10 and 6, and for them I probably would have been advising something a little bit different a few years ago - similar, but different. A few years ago it was all about kind of world creation, and the theme is always creativity, right - using tools to aid creativity, but with a human touch. So, whatever happens in the future, humans having relationships and stories to tell other people will be incredibly important.

28:41

And so what I've always done, what we've always done as a family, is encourage our children to be as creative as possible and to develop their own worlds and their own characters, which they love doing. And you're going to have a future, I think, where attribution of an idea to an individual is going to be really important - we're kind of built that way. Yes, that human can use a machine to help them come up with the idea, but in the same way that Tolkien created a world - that's kind of what we teach our kids. And whatever the tools are that you end up using to perpetuate and build these worlds, that doesn't matter, we'll just use them as they come in. It's all about experimentation, but having them think creatively is what we've always encouraged - in song, in word, in story, in character. Yeah, that's been our focus, and they're both very creative and they love all that, so it's good.

29:49 - David Brown (Host)

Brilliant. I think that's very well said, and that feels like a really good jumping-off point for the conversation as well, because I also know you've got a hard stop. So, Dudley, thank you very much again. That was a really good conversation.

30:03

I enjoyed that. Thank you very much - love your podcast too, and thank you for having Last Minute Larry on. Really, no problem, and I'll put links in the show notes to all the stuff and your companies and everything else we talked about as well. But, yeah, brilliant. Well, thanks very much and we'll speak to you soon. Thank you. Bye.
