94 — Endurance and Insight: Research Resilience with Ray Poynter
Episode 94 • 22nd January 2024 • Greenbook Podcast • Greenbook
Duration: 00:43:14


Shownotes

What do ultramarathons and market research have in common?

In this episode of the Greenbook Podcast, we sit down with ESOMAR president Ray Poynter, who shares his journey from computer programmer in 1978 to forerunner in online and social media research. Ray discusses the significant industry shift towards more in-house client research, facilitated by advancements in technology. He offers insights on how the internet and AI are revolutionizing market research, forecasting that AI's impact will be closer to that of the internet than to the more incremental change brought by mobile phones. The conversation also covers the democratization of research through technology, where Ray emphasizes the growing importance of skills like empathy and storytelling, areas where AI may fall short. Additionally, they touch on challenges in data quality and potential regulatory hurdles as AI is integrated into the industry. The episode concludes with Ray drawing an analogy between his enthusiasm for ultramarathons and the relentless commitment required to navigate and excel in the evolving landscape of technology in market research.

You can reach out to Ray on LinkedIn.

Many thanks to Ray for being our guest. Thanks also to our producer, Natalie Pusch, and our editor, Big Bad Audio.

Transcripts

Lenny:

Hello, everybody, and welcome to another edition of the Greenbook Podcast. I am Lenny Murphy, and today I am joined by someone I consider to be my mentor, whether he wants to claim that or not, Ray Poynter, founder and chair of NewMR and president of ESOMAR. Welcome, Mr. President. Glad to have you.

Ray:

[laugh] Thanks, Lenny. And what a wonderful introduction.

Lenny:

[laugh] Well, I’m glad you still think that. And I’m going to add to that a little bit as well. I mean, Ray and I—and we go back a long ways now, back in the early days with this whole social media thing, and I will see if you remember this story. The very first webinar I was invited to do was a NewMR webinar on the future of research. I think it, you know, was the annual piece. I’d never done that before. Right? Here I was a recovering CEO and then kind of emerging into this influencer role. And we had just gotten—the dog had just had surgery.

Ray:

The dog [laugh].

Lenny:

Do you remember? And the dog started barking because he saw a squirrel. It was right outside. So it was a great learning experience. One, don’t have the dog in the office when you’re doing a webinar; and two, everybody just thought it was funny as hell, which alleviated my embarrassment. So do you remember that?

Ray:

Oh, yes [laugh].

Lenny:

Yes? [laugh] Oh, and then we went on to many other firsts after that. But anyway, it’s a joy to have you on. We haven’t spoken in a while. Thank you for taking the time.

Ray:

Pleasure. Pleasure.

Lenny:

All right. Now, for those who don’t know much about you, why don’t you give a quick bio?

Ray:

So I’ve been around a long time, pretty much always in the research industry. I joined as a computer programmer in ‘78, with the very first Apple IIs that were imported into the country. We had something like numbers seven and eight, and we were writing statistical software for them. And then I started looking at people using it and said, “Actually, no, use it like this. This is how you do the research.” And it kind of went from there. So I was in the first wave of using personal computers and got into quite a lot of international research. How can you use Apple II computers in Thailand? How can you use them in Japan? And so on. Then we started doing some dialing work. We did some modeling. We did some computer-simulated respondents back in the ‘80s, with people dialing into these VAX computers. Moving along, we started doing a lot more online research back in the mid ‘90s, and social media research when that came along. I’ve got, like, a really short attention span. I’ve been lucky to spend some time in some great companies, usually not very long. So I was a year at Millward Brown. I spent a year with a fantastic company in the UK called Research Business. Mostly I’ve run my own ship. These last 10, 15 years, it’s mostly been consultancy, training, lots of writing, some teaching at universities, lots of going to conferences.

Lenny:

Yeah, absolutely. And, you know, you’ve been integral to our conferences at IIEX, often hosting IIEX Europe, and we’ve even once or twice gotten you over here to the US. And, you know, the thing that has always impressed me about you, Ray—and I’m not blowing smoke, but for our audience—is that you’ve been around for a long time, longer than me, yet you’ve always kept one foot in best practices and learning best practices, and one foot in the future. And I think that’s just been an incredible asset for the industry as a whole. What’s driven that? It sounds like maybe you’re an early adopter to begin with.

Ray:

Yeah, but not of everything. You know, I call BS quite often on things that are coming through, and sometimes I’ve been right. But I have a passion for how things really work, and I want to use new stuff to enable me to do that or to tackle problems we couldn’t tackle before. And when you first came on the scene, you were a bit young, but now you’re beginning to get the advantage of having seen some of these hype loops go around. And that is one of the advantages I’ve got after, what, over 45 years now in the research industry. You know that these things are going to come back in 10 years’ time and things are going to keep cycling back. And people say, “Oh, this is going to change everything.” Well, usually not. The internet changed a lot of things [laugh], but most things are kind of incremental. And we’re going to get onto it later, but AI is going to be another one like the internet. It’s not going to be another one like mobile phones, which changed a lot of things but didn’t cause a rethinking.

Lenny:

All right. In those days, when we were going back and forth on things, there was Radio #NewMR, right? Like, one of the first—

Ray:

Oh, yes.

Lenny:

—the first podcast, we had wonderful debates about mobile at that time. And you introduced the concept for me that was important of ‘fit for purpose’ and ‘pragmatic innovation,’ right, in that—I think that perspective that you’ve brought of ‘yeah, this is cool, but what impact is it really going to have?’ And I thought mobile was going to change everything overnight. It took us 10 years, right, before it actually— [laugh] we really felt the impact. So interesting perspective on all of that. There’s a lot of places we can go. And you brought up the two letter word, the dreaded two letter word, so why don’t we kind of segue into that with—through your years, your experience as an influencer, now your role within ESOMAR, kind of summarize where you think we’ve been the last few years and where you think we’re going, in a broad way, and we can just dive into specific things from there.

Ray:

So one of the big trends—and there’s a study that started as a NewMR study and is now a collaborative study with ESOMAR, the MRS, the Insights Association, and lots of other organizations—is a study called the users and buyers study, which looks at just clients and what proportion of their work they are doing in-house. You can’t do it by value because they don’t charge themselves money, by and large. So we have to do it on percentage of projects. And about 50% of projects are done internally by clients around the world. And it doesn’t vary that much. It’s a little bit lower in Asia, but not much lower. A little bit higher in North America, but not that much higher. And people are talking about why clients have changed what they want. I don’t think they have. I think they would have always liked to do that, but they couldn’t when you had to go out and knock on doors or phone people up, when you needed to print questionnaires and mail them out to different parts of the country or program computers. Now we’ve got all of these platforms, and there are thousands of insights- and research-related platforms. So, all of a sudden, lots and lots and lots of things—and, yeah, one of the first well-known ones was SurveyMonkey. But everything that’s followed down that line since then, the Qualtrics and all of these things, has enabled companies to do lots and lots of things internally. They’ve been able to treat it financially in a different way, so they can spread their cost. Online communities, which is something I was really involved with right from the outset, again allowed them to do that, because quite often the online community platforms come with very simple-to-use interfaces to write your surveys or to do your online discussions. And you’ve already got your people, and you know who those people are—we’ll come on to data quality in a moment. So that has been the single biggest driver. It’s not that anybody changed what they wanted. Something became possible, and that’s going to come into play with AI as well in a moment.

Lenny:

Yeah. So I’m actually really glad you brought this up because I’ve wanted to ask this question. I heard that number and read the report. I always read the NewMR and the ESOMAR reports. And my first thought was, ‘‘Well, I don’t know.’’ But then for me, my thought was, ‘‘Well, that must include CX,’’ you know, customer experience and everything that falls under that category. And if that is the case, then, yes, I buy that 100 percent. So a clarifying question, does that include CX?

Ray:

Absolutely includes CX. And this is one of the contentious issues about whether CX is in the research world. Because, of course, most CX people would say no, and quite a lot of traditional market research associations who would like to have the same set of ethics and codes that we had 30 years ago would say no. But when you look at the tools that are used, what people are trying to achieve, the similarity is much bigger than the difference.

Lenny:

Yeah, okay.

Ray:

That’s not with my ESOMAR president hat on.

Lenny:

[laugh]

Ray:

That’s the Ray hat [laugh].

Lenny:

No, I get it. And I agree 100 percent. Right? We talk about the democratization of research.

Ray:

Yeah.

Lenny:

And that’s what the technology has allowed. And actually, I was even thinking about it yesterday that it’s not just democratization anymore. Now it’s diffusion, you know, of insights across the enterprise with access to more tools. And I think that’s where you were going, and yes, AI is just going to make that even more ubiquitous. But anyway, thanks for the clarifying opportunity. Where were you going to go next?

Ray:

So that, like, is certainly the biggest change in recent times. The internet was a phenomenal change, and we should look back at what that change was. Before that, most data collection companies would have a screener saying, have you done a survey in the last six months? And if you said yes, you were screened out. And the method of sourcing people approximated to random probability. So we would use random digit dialing if we were doing telephone work in large parts of the world. And to some extent, that still happens. People would use grids to go and knock on doors to do surveys. With the internet, one of the biggest changes was not the internet. It was the switch to online panels. You know, Gordon Black and companies like this, and Harris, and so on, coming through and really changing our attitude to interviewing the same people over and over again. And now, in the panels and in the online communities, people are doing 50 surveys, 100 surveys a year. Most participants—

Lenny:

Or a day [laugh].

Ray:

I meant the real ones [laugh].

Lenny:

Yeah. We’ll talk—that’s when we get to data quality. Okay.

Ray:

Yeah.

Lenny:

So keep going.

Ray:

But this means that more participants see more surveys than the people who write surveys see surveys, which is an interesting power–knowledge shift that’s gone on in that process. So that is a really big change. And the internet was successful because it was faster and cheaper. It was not as good. It was not as good for two big reasons. The first is the sample was nothing like as good as with the old methods. The second is both telephone and face-to-face had an interviewer correcting all the mistakes and the crap that the researchers had put into the questionnaires, and they were sort of wording it slightly differently so it would actually function. All of a sudden, with online, there is nobody keeping people honest. There’s nobody energizing the participants. There’s nobody sense checking it. Of course, there are a handful of things which were better. If I ask you, how often do you clean your teeth, I do get a more honest answer online than I do if I’m standing two feet away from you looking at your face. But generally, we recognize that it wasn’t as good. But damn, it was cheaper and faster, so I’ll have a lot of it, please. That’s going to become really relevant soon. And the next biggest change, actually, is the one you’ve just mentioned, the diffusion, the number of players. The percentage of our business that is now Ipsos, Kantar, etcetera, has been going down and down and down. And the amount from companies you’ve obviously heard of but that sit on the periphery—where do Forrester sit? Where do Gartner sit? All of these things. It’s become much harder to define exactly what the industry is because of those sorts of changes. And CX...

Lenny:

Right. Well, and brands, so Walmart Luminate. Google Surveys is a great example, right? Amazon. And of course, you know, we saw that happening with dunnhumby, you know, the spin-off from Tesco. And I think we just see more and more—not think, I know—that diffusion. What’s a supplier? Absolutely. And also, what’s a buyer, and where does the buyer sit within the organization? It’s certainly not within the insights organization anymore, at least not the majority. So I would say it’s increasing the pie. And as ESOMAR shows, the pie—global turnover—is increasing. The pie is getting bigger, but it’s got to be cut a whole lot more [laugh] for a lot of other players.

Ray:

So I think that then takes us through—we’re seeing a lot of homogenization around the world. So if you do a face-to-face interview with beer drinkers in Lagos in Nigeria, and people in London and people in Jakarta, let’s say, you get very different feedback. People are quite different in the way they answer questions. You do a study, as I saw a few years ago via Facebook, with people who have gynecologists in India and Africa and America, and they’re almost identical, because of those qualifiers—most Indian women don’t have a gynecologist. Most Indian women who are pregnant do not access the internet. But by the time we move through those things, it becomes much more homogenized. So if I want to do research on BMWs, I’m really comfortable doing that in China on the internet. I actually still wouldn’t do interviews about everyday rice on the internet in China because I’m probably not getting into the tier three, four, and five cities.

Lenny:

Interesting. So there’s a lot of places we could go with that, but... so the internet as a homogenizing, level-setting solution across all cultures and countries.

Ray:

For lots and lots of things. I mean, there, it’s following, not leading. The world is homogenizing. If you go across Asia—what Americans would call Asia, not what English people would call Asia, so Indochina, Singapore, Japan, China, and so on—everybody is listening to K-pop and watching Korean TV.

Lenny:

Right. And using TikTok. And...

Ray:

Yeah, absolutely. So we’re getting these unifying features.

Lenny:

So now put on your ESOMAR hat. Right? I mean, ESOMAR is the only truly global trade organization that we have. So I hear that, and my gut reaction is I think we’re losing something as a species. We may be gaining some things as well, but there’s components there. And especially as a researcher, like, I want to understand the localization. I want to understand those components. Is there an ESOMAR perspective on this broader issue of—as we could argue, does it really matter if we’re doing global studies on certain topics? Because the results are going to be the same because of this homogenizing component of online. Yeah, see where I’m going?

Ray:

Yeah, I don’t think it’s quite so much an ESOMAR issue. I think that it goes in some topics way more than others. So iPhone, very homogenized. There are some slightly different characteristics around it. So countries that are not very fond of Korea are more likely to buy iPhones because the big brand leader in Android is Samsung. So there are little bits and pieces, but then there are places within big countries, like within the US, where you’re going to get some of these characteristics moving around as well. But it is only at that level. Pretty much anything that happens upstairs in your house is still very local. So what you buy to put in your bedroom? What you cover yourself with when you sleep at night? What you wash yourself with in the bathroom? Which other things you use when you go to the bathroom—all of those, like, really local. So there is still a lot of variety in the world. But the number of languages in the world is declining. The number of communities in the world is declining. It’s not going to happen in our lifetime that it’s going to disappear altogether. But we do seem to be shifting towards an American culture and a Chinese culture relatively quickly, with lots and lots of places holding out for a bit of difference. Africa for one. India for another. But nevertheless, there is a big, big push towards homogenization. And it’s not something I think you can say is right or wrong. I remember—I’m going to just dive in now with a story. I used to do a lot of work with the folks in Atlanta, Coca-Cola, and I got into an after-work discussion with one of the senior executives. And I was saying, “Is it right that we’re selling Coke to really poor people in the Congo,” let’s say. And he’s saying, “Who are we to say? If somebody chooses to save it up and this is going to be the highlight of their week to have a Coke, we are not the people to say, ‘Well, actually you don’t have enough money. We think you should save that and invest it for education or something like that.’” And you can see the counterargument that actually this is to do with big advertising. But there’s a really strong nugget in there as well.

Lenny:

Yeah, no. Agreed. Agreed. You know, here in the US, we’re a multicultural society. Of course, I think in the UK we aren’t having the same argument. And what that really means is that there’s this melting pot where elements of all these cultures are coming together and changing the macro culture, while we still have, you know, kind of subset components that honor the core originating culture. And I would agree that is certainly happening globally. My kids—all of my kids, big ones and little ones—are huge anime fans. I am not. They didn’t get that from me. Right? So this kind of Japanese—now I am a big Godzilla fan. That’s how that manifests for me [laugh]. But there’s an example. They’ve just imported that element of that culture and really embraced it. And watching anime is just part and parcel of what they think of as simply being an American. So, yeah, I think we’ll see more and more of that as well. So let’s talk about AI now. I want to be conscious of your time because—in case our listeners don’t know, you’re also an ardent runner, and you’ve got a big old race coming up right after we record this. So maybe we can squeeze that in a little bit as well. But let’s talk about AI because I agree. Let’s put it in context, right? This time last year, here came ChatGPT. Right? And I remember that I was on a webinar with Vivek and with Jamin, and it was right about this time. And the question was, “Is it going to be as big as the internet?” And Jamin and I both thought probably. Vivek was like, “Nah.” And I think that, yeah, it is utterly transformative in ways we couldn’t quite envision last year. Now, we certainly see that it is a game changer. And here’s my hypothesis, and you tell me whether you agree. The ease of access to information, particularly the ability to synthesize information, is a huge pragmatic component, combined with the ease of operations—kind of just how you do stuff, right?—and it’s simplifying those. It is revolutionary in those two things. We can access information better, cheaper, faster than we ever could before, and we can use that information to do things in very easy ways that we couldn’t do before. And, you know, it’s almost like the industrial revolution. Right? It’s about efficiency of scale and throughput and those types of things. So that’s my take. What do you think?

Ray:

Yeah. I mean, I remember the discussion with Jamin, and the comment was made that ChatGPT is going to be bigger than the internet, and that was, like, really close. But I don’t think it is. It’s AI—and maybe it’s the stuff that got Sam Altman fired and rehired that’s coming down the wire shortly—because this is, like, one jolt. This is the pre-tremor. We’ve got a lot more to come yet. And there are going to be some challenges as well, and there are going to be some really big scams and some really big problems. But fundamentally, you’re right: cheaper and faster. There are going to be some places where it’s better. That’s phenomenal. And I’ll actually start with one of those where it’s sort of better, which is qual at scale. So we can use these tools to apply conversation analysis to some qual, and then we can say, please do grounded theory, and then do content analysis. Now synthesize those three perspectives and give me your results. And it’s not as good as a human, but it’s way faster. But it can do it with half a million conversations instead of doing it with 10 or 20. Now, will that be better? We don’t know, because maybe you can actually find as much from 10 qualitatively as you can from the half a million. I suspect it will be better. But it takes us away from having to think about surveys all the time, which are, like, a really dumb way of asking questions. Imagine your friend has just come back from their honeymoon. They went to Hawaii. You’re not going to ask them, “Did you do the following,” and give them a long list. You ask them what they did and which bits were good and why they were good. It’s going to be much more along those sorts of lines. And I remember George Terhanian doing a presentation years ago about how, in the 1920s, the psychometricians had a big argument about whether to use open questions or closed questions. They settled on closed questions because they could process the answers with the technology they had in the 1920s. We’ve now reopened that. A lot more stuff needs to be much more open. We’re seeing it in CX: a lot of people now—you collect that NPS score and then you ask why. And almost all the interesting stuff is in the why. And when you see the number and the why contradict each other, the why is usually right. Somebody gives it a three. Why? Oh, the service was really good. Okay, you don’t understand the scale. The one I see a lot—I see a lot of CX data—is somebody gives it a seven. And then why? Oh, everything was really good. It was as I wanted, dah, dah, dah, dah, dah. Because, for them, a seven is a good score. For the poor person whose commission is based on it, it’s terrible. But they think it’s a good score they’ve given. So we’re going to see a lot more use of language; information is going to move that way. And that’s going to be something that will be quite revolutionary and potentially better. We talk about bulk analysis and [unintelligible]. I’ve got a product, ResearchWiseAI. It’ll take in a simple data file, and it will spit out your analysis. Technically, we tell people this is the first cut, and you would then use it to drive your own analysis forward. Honestly, people just take it [laugh] and use that analysis, and it’s not as good. It can do in five minutes what I can do in two hours, but it can’t do at all what I can do in four.
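[Editor's note: a minimal sketch of the "qual at scale" workflow Ray outlines above—running the same open-ended material through several analytic lenses and then synthesizing them. This is not his ResearchWiseAI product; it assumes the OpenAI Python client, and the model name and prompts are purely illustrative.]

```python
# Sketch: apply several qualitative lenses to the same open ends, then synthesize.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

LENSES = ["conversation analysis", "grounded theory", "content analysis"]

def analyze_open_ends(responses: list[str]) -> str:
    corpus = "\n".join(f"- {r}" for r in responses)
    perspectives = []
    for lens in LENSES:
        reply = client.chat.completions.create(
            model="gpt-4o",  # hypothetical model choice
            messages=[
                {"role": "system",
                 "content": f"You are a qualitative researcher applying {lens}."},
                {"role": "user",
                 "content": f"Analyze these open-ended responses using {lens}:\n{corpus}"},
            ],
        )
        perspectives.append(f"{lens}:\n{reply.choices[0].message.content}")

    # Synthesize the three perspectives into one set of findings.
    synthesis = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Synthesize these analyses into one set of findings, "
                       "noting where they agree and disagree:\n\n" + "\n\n".join(perspectives),
        }],
    )
    return synthesis.choices[0].message.content
```

[In practice, the half a million conversations Ray mentions would need sampling, chunking, or a map-then-synthesize pass rather than a single prompt; the sketch only shows the multi-lens-then-synthesize shape of the idea.]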

Lenny:

So is there a concern? We’ve heard this for years with DIY. Right? Like, oh, it’s like giving guns to children. Right? They’re untrained, and I get that. I’ll tell you, here’s my personal concern. It’s behavioral economics. I’m inherently lazy, and I don’t want to get lazier. And I definitely don’t want to get dumber. I mean, I can barely hold my own against someone like you to begin with. So there is a concern with technology like that, that you go to the first easy answer that’s presented versus digging deeper, and that we’ll miss those nuggets, we’ll become dumber—possibly even from an IQ standpoint longer term—and we’ll become lazier. And I don’t know what to do with that, with research as a profession, if that is where we go.

Ray:

So I’m going to break that down into bits and hopefully come back to the right bit. First of all, DIY: I see this as the greatest, greatest boost for DIY because most client-side people who have had no training in research won’t write a good questionnaire. Frankly, most junior researchers won’t write a good questionnaire these days either, because nobody gives them the time and the training. And most questionnaires shouldn’t need to be written from scratch. Most of the questions are already out there; they’ve been developed and tested. So if you create smart DIY, all of a sudden it’s much safer for people on the client side—you know, politicians, parent-teacher associations—to do much better research. And they’re going to get some guidance on how they can do the analysis and what the main stories are. So a lot of people are going to go from not using evidence to using evidence. So that is a plus. And many of the situations that we have are actually not that difficult. If we are testing five new flavors of a soft drink and three of them are rubbish, pretty much any test will tell you that. And two of them could only be separated by a super, super high-quality test. Well, it doesn’t matter, because by the time it goes to market, and they’ve remanufactured it, and they’ve done the marketing campaign, that tiny, tiny difference between the two real products isn’t what makes the difference. So we often don’t need the level of accuracy that we would get with the better stuff. So, in terms of large amounts of things, it’s going to be fine. It’s going to work to a standard that’s fit for purpose. What we need to think about is how new entrants to our industry will learn to challenge this—not people like me, because I know how I’m going to challenge the new data; I’m going to reference back to what I already knew. I think it’s going to be more along the lines of teaching dialectic, where you have a discussion with the AI: well, to what extent are people doing this? And can I generalize that? So it might tell you that men are more likely to like this new product than women. And you say, so do men like it? And it comes back and says, no, only 5% of men like it, but only 4% of women like it. Okay, so actually, let’s stop talking about men versus women. Who are the people who like it? What characteristics have they got? And that’s the sort of conversation. The sorts of conversations that the insights manager would have had with the stakeholder are the sorts of conversations this insights manager needs to be able to have with the AI as we go forward, where you push it and you fact-check it and you say, “Well, how can I triangulate that?” So I did a demo for one of my clients of using ChatGPT. And the first question was, what are the top grocery retailers in Australia? It came back with the four brands. I asked that question because I knew the answer, which is always a good way of doing these things. And then the next question I asked was, “How can I check that?” And it said, “Well, you could look at the Hunter such-and-such a report.” So I looked at the Hunter such-and-such a report, which is a genuine report. It’s available. And I cross-referenced and triangulated. Those are the skills that are going to allow some people to do much better work. So the thing that will save the research industry is the growth in the pie, because the share of the pie is going to go down and down and down.
But I think it’s going to be really hard for any business person in the future to say, ‘‘I made that decision without looking at the evidence, and now it’s gone wrong. It’s all my fault.” I think the use of evidence is just going to go up and up.
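[Editor's note: an illustration of the dialectic Ray describes—converting a relative claim like "men are more likely to like it than women" into absolute levels before repeating it. A minimal sketch of that check, with hypothetical column names and made-up data matching his 5% versus 4% example.]

```python
# Sketch: sanity-check a relative claim by looking at absolute levels (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "gender": ["male"] * 100 + ["female"] * 100,
    "likes_product": [True] * 5 + [False] * 95 + [True] * 4 + [False] * 96,
})

# Liking by gender, as percentages.
by_gender = df.groupby("gender")["likes_product"].mean().mul(100).round(1)
print(by_gender)  # female 4.0, male 5.0 — a real but trivial gap

# The headline that actually matters: hardly anybody likes it.
overall = df["likes_product"].mean() * 100
print(f"Overall liking: {overall:.1f}%")
```

[The same push-and-check pattern applies whether the claim comes from a crosstab or from an AI summary: ask for the underlying numbers, then decide whether the comparison is worth reporting at all.]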

Lenny:

I accept that, and that’s a great clarification. And I think that over the past year, I’ve become relatively convinced that what we think of as qualitative research skills will become more and more important. And I’ve said for years that the role of research is to be the keepers of the why. And you and I have talked about that, you know, back when big data was the big topic. Right? Or when who, what, when, where, and how are all accessible—and now they are, easily. And it gets to the why, that deeper probing, and that’ll open up the tools, the utilization of non-conscious measurement tools and yada, yada, yada. But effectively, it’s about asking good questions and being a good thinker. And I love that you brought up the dialectic. So maybe we’ll start teaching—go back to the Greek philosophers. We’ll start introducing people to Plato and Socrates and [laugh], you know, those things to teach critical thinking. I love that. And I think that that is true. We were technicians for so long. Right? Write a good questionnaire, do the data analysis—those are the pieces that have already been automated. And now it’s getting to what’s the real driver here? What’s the nugget? What’s the true insight, not just the information dump? And I suspect you’re right. The pie will grow, but the number of people—and you predicted this for years, that there would eventually not be an insight function per se. It would be a role, and I think that role will become highly specialized and highly regarded, but there won’t be nearly as many of us doing it.

Ray:

Yeah, no, I think that makes a lot of sense. And we see it to a little extent in UX, where quite a few research people have moved into the UX world. So they’ve taken their research skills, particularly their qualitative skills, but they sit in the UX team. So they interact backwards and forwards, and it becomes just a natural piece, as opposed to the UX team getting research from the insights function or an agency or something like that. These are people who are actually involved, and they’ve got their hands on it and are understanding some of the new learning which comes out of doing research in that way. Because so much research is fitted to a single model, a single way of thinking—the dead white men in Europe back in the 1920s and ‘30s, and to some extent in North America—and we need to have quite new ways of solving things. And one of the big discussions is around synthetic data, where the objective is to replace a participant with a synthetic participant. And that’s a really fascinating area. I think it’s going to grow and grow, but there is a lot of other modeling, too, that people do where you simply say—okay, if people in a marketplace behave like this, and some of the actions are random, let’s start the model, SimCity and those sorts of things, and use that to look at what happens in real marketplaces. They don’t even have to be people as we would know them. They can be very simple things. So one of the great things in nature is—I don’t know if you have it in the US; I think you do—the starling murmurations, where flocks of thousands of birds fly together. And by using modeling, they worked out that each bird only needs to know what the one on its left, the one in front of it, and the one above it are doing. And you can get the whole thing, and it works. And that came from computer modeling, an application of a sort of AI. So, when we talk about AI changing things, it’s not just going to be analyzing data, asking better survey questions, doing qualitative analysis. There are places where it can go, and we can look at really fundamental changes in trying to understand why crypto went like this—it expanded like that, and it changed like that. Can we model that with models of human behavior and get something that is much more useful for making predictions?
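[Editor's note: the murmuration point is essentially the classic boids result—give each agent a few purely local rules about its nearest neighbours and flock-level behaviour emerges. A toy sketch of that kind of agent-based model, with all parameters chosen for illustration rather than taken from any study Ray cites.]

```python
# Toy boids-style flock: each "bird" reacts only to its K nearest neighbours,
# yet coherent flock behaviour emerges. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, K, STEPS = 200, 4, 100          # birds, neighbours each bird watches, time steps
pos = rng.uniform(0, 100, (N, 2))  # positions
vel = rng.normal(0, 1, (N, 2))     # velocities

for _ in range(STEPS):
    # Pairwise distances; each bird only "sees" its K nearest neighbours.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = np.argsort(d, axis=1)[:, :K]

    cohesion = pos[nearest].mean(axis=1) - pos                    # steer toward local centre
    alignment = vel[nearest].mean(axis=1) - vel                   # match local heading
    separation = (pos[:, None, :] - pos[nearest]).mean(axis=1)    # avoid crowding

    vel += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    vel *= 3.0 / np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9)  # constant speed
    pos += vel

print("flock spread:", pos.std(axis=0))  # tends to shrink as the flock coheres
```

[The same logic scales up to the market-behaviour modelling he describes: specify simple rules for individual actors, run the simulation, and see whether the aggregate pattern resembles what happens in the real marketplace.]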

Lenny:

Agreed. I mean, I think we’ve talked about this before. I’m a big fan of science fiction. I think you’ve read your share as well.

Ray:

Hari Seldon, absolutely [laugh].

Lenny:

Hari Seldon, here we are, and psychohistory, right? [laugh] All right, so I want to be conscious of your time as well as the listeners’. We’ve covered a lot, and there’s a lot that’s not going to be covered—one hour doesn’t give us nearly enough time to dive into this—but I do want to get into the ESOMAR perspective, and yours as well, for the year ahead. We’ve talked about all these changes. They are going to be profound, probably faster than mobile. You know, I think we’re not looking at a 10-year horizon now. We’re looking at two, if that. So what do you think that looks like? And what will be the role of ESOMAR and all the trade associations in helping through this transformation? What does that look like for you?

Ray:

So we’ve got the global data quality problem, and all the trade associations and SampleCon are working together to try and resolve that. But that could blow up. So that is one of the real risks—or where that’s going. We’re seeing new legislation come through the European Parliament in terms of regulating AI. That could be a problem. One of the ones that could be a real temporary logjam: it wouldn’t surprise me if the courts in Europe or in the US closed ChatGPT at some point. I think they could look and say, “You have got copyrighted material in there. Remove it.” They would say, “We can’t remove it.” And they say, “Close.” That would set what we’re doing with AI back six months to a year, because it’s all moving forward, and then we would just be tackling it in different ways. There are new regulations and advice coming out from the Insights Association and from the MRS. And from ESOMAR, we’re about to launch the 22 Questions. And the principle there is advising buyers: here are some good questions to be asking vendors—things like, where is the intellectual property? Very straightforward. What experience have you got here? Which tools are you actually using? And do you know how they work? Because most vendors are using various things from the likes of OpenAI, so those vendors don’t know what is inside the black box. They can answer this question and this question, and then at the next point they have to say, “And from here we’re taking it on trust.” Where should the human override be? When can you say this is a bad decision? People point to things like the Robodebt scandal in Australia, where they used computers to work out who had probably been claiming benefits fraudulently and sent them demands for money—in tens of thousands of cases, wrongly. So they were demanding money from people who didn’t have any money. Hundreds of suicides as a consequence. These are the sorts of things that are going to be coming through. But in terms of AI, it’s going to be just steady progress. I don’t think in 12 months we will see a massive difference, because this is the way exponentials work. It grows by one percent in the first year, maybe two percent in the second year, but very quickly it compounds. And I think we’re looking at a three-to-five-year horizon for maybe a quarter of our industry being dependent on AI, which is pretty, pretty big.

Lenny:

Absolutely. So let me qualify my statement of the two years. You and I, we always disagreed on the time frame. I agree with your time frame, and I want to qualify my statement. I think the technology will be there within two years. Then it’s just an—

Ray:

Yes. Oh, definitely.

Lenny:

—adoption question.

Ray:

Yes.

Lenny:

Yes. And brands will be the ones driving that. And there is interest but skepticism on this. We’re going to have to prove these things. And then they have to go through procurement and [laugh] all of that.

Ray:

Well, actually, actually, it’s much more variable, I think. Because I come across a lot of clients who, you know, are not touching it and nowhere near it, others who are doing a really good job. We saw a fantastic presentation in October from Heineken, who’ve got all sorts of things going on. And it kind of reminds me of General Mills when the internet first came in. And they dived in. They did some side by side testing, and they switched really early, eight or nine years earlier than many other companies, and were able to reduce the budget a bit, but massively increase the amount of testing they were doing and got a great advantage in the marketplace. A lot of the early adopters are going to get a really big advantage, and a couple of them will get scammed.

Lenny:

Yep. Yep. Yeah, well, interesting. You know, I think that there are trusted influencers. And I don’t mean people like you and I. I mean, you know, companies that have earned that—

Ray:

Yeah.

Lenny:

—that role and are certainly engaged in helping through that transformation. So, Ray, what did I not ask that you had hoped that I would or should have?

Ray:

I guess the one that people always ask is what should new people joining the industry be focusing on. And the thing that I’m talking to people about now—I talk to quite a lot of new entrants—is you want to be working on the AI stuff. So, in your company, there’s likely to be somebody who will be doing the job of testing the new AI, and there’ll be some people who are keeping the old stuff running. Get as far away from keeping the old stuff running as you can. It costs you $20 to have a ChatGPT-4 account, $20 a month. Frankly, if you’ve got any optimism about your career, and you don’t have a ChatGPT-4 account at work, you should pay $20 out of, you know, coffee money or whatever and have that. That’s really important. And you need to be doing what the machine can’t do. So that’s understanding companies, understanding people, learning how to do the storytelling, the communication, developing your empathy skills. There’s a guy, an American, Rob Volpe, [unintelligible]—

Lenny:

Yep.

Ray:

Yeah, really great on the empathy skills.

Lenny:

Yep.

Ray:

Those sorts of skills are really going to be needed as this technology takes you forward. And don’t imagine that you’re necessarily going to be at a research company. Don’t imagine you’re necessarily going to be in insights. If you’re in this business because you want to understand why people do things, there’s a wonderful future, but you’re going to have to go with it and move around. And I love playing around in the sea, and you’ve really got two choices when the waves come in. You can either duck under them, or you can learn to surf. And I keep trying to learn to surf [laugh].

Lenny:

[laugh] Great analogy. Well, you know, with your wonderful endurance skills, I suspect that you’ll conquer that soon enough. So, plus, all the time you spend in Australia.

Ray:

Oh, yes.

Lenny:

Yeah. All right, Ray. Where can people find you?

Ray:

Yeah, LinkedIn is probably the best place. I think most of the best stuff I read these days, I read because somebody shared it on LinkedIn. And, yeah, I’m, like, super easy. I pretty much always accept a LinkedIn connection—unless they tell me they’re a life coach or they want to tell me how to invest my money—pretty much, I’m going to say yes.

Lenny:

All right. Well, I want to wish you good luck on your race this weekend. For our listeners—I’ve mentioned it twice—describe it because it’s a pretty intense thing. ‘Race’ doesn’t do it justice.

Ray:

It’s called The Spine Race, Britain’s most brutal race. It’s 268 miles. You’re carrying a stove, sleeping bag, shelter, stuff like that. So we’ve got about 10 kilos—what’s that going to be, 20 pounds or something like that—of pack on your back, with food and water to carry. You’ve got to navigate because the course isn’t marked, and it’s through the mountains. At the moment in the UK, it’s 16 hours of dark and 8 of light, so most of the running I do will be with a head torch. I’m hoping to finish it in about six and a half days.

Lenny:

Well, more power to you, my friend [laugh]. See, I always—I call you a mentor, and then I hear things like that and think, I’ll never measure up because, by God, I can’t imagine myself doing that. So.... [laugh]

Ray:

Actually, I’m going to have to share with you a concept that ultra runners talk about a lot, which is type two fun. So type one fun is where you sit down to a chocolate cake and you eat it, and it’s fun while you eat it. Type two fun is hard work, and it’s not fun while you’re doing it. But afterwards you go, oh, you know what?

Lenny:

Yeah.

Ray:

That was good.

Lenny:

Yes. This type two fun, I get it. You know, I bought a farm.

Ray:

Chopping wood.

Lenny:

Yes. Yeah, or anything else. I enjoy it. There is a real sense of accomplishment in doing those things, so I get that.

Ray:

Yeah.

Lenny:

But still running for six days, that’s a whole other thing. So...

Ray:

[laugh] Just call me Forrest.

Lenny:

[laugh] We can go lots of places that I won’t. We will wrap up. Thank you, Ray, so much. Appreciate the time. Good luck on your race.

Ray:

Thanks. Great talking to you, Lenny.

Lenny:

You as well, my friend. And a big shout out to our producer, Natalie, our editor, Big Bad Audio, to our sponsor, and most of all, to our listeners, because without you, Ray and I probably would not have made the time to catch up like we just did. So thank you. And hopefully you’ll get something out of it as well. So that’s it for this edition of the Greenbook podcast. We will be back again soon with another. Take care. Bye-bye.
