123 — The Politics of Polling: A Deep Dive with Clifford Young
Episode 123 • 2nd September 2024 • Greenbook Podcast • Greenbook
Duration: 00:44:09


Shownotes

What should we really make of the political polls that dominate the headlines?

In this episode of the Greenbook Podcast, hosts Lenny Murphy and Karen Lynch sit down with Cliff Young, President of Polling and Societal Trends at Ipsos, to explore the complexities and significance of political polling. The conversation delves into the challenges pollsters face in today’s fragmented media landscape, the methods used to accurately capture public opinion, and the role of polling in modern democracy. Cliff discusses the global trends affecting polling, the ethical responsibilities of pollsters, and how AI is beginning to influence the field.

You can reach out to Clifford on LinkedIn.

Many thanks to Clifford for being our guest. Thanks also to our producer, Natalie Pusch; and our editor, Big Bad Audio.

Transcript

Lenny:

Hello, everybody. It’s Lenny Murphy with another edition of the Greenbook Podcast. Very glad that you have taken time out of your day to spend it with myself and my guests. Today is a special edition. My regular co-host, Karen Lynch, is joining us. Welcome, Karen.

Karen:

Hey… it’s so good to be here. And, Lenny, I just want to say this is such an important time to be having this conversation and to be having this guest on, as we are digging into the topic of political polling. So it’s an incredibly important topic, an incredibly important time in our history. And so excited that you’re about to introduce our guest. And friends, I just want to say very specifically this is a nonpartisan episode. We are not talking politics. We are talking political polling. So, you know that it’s—you know, has the potential to be heated. We are staying out of that. We’re talking methodology. We’re talking the science of polling. We’re talking [laugh] about what everybody needs to know about the method and the means. So, Lenny, take it away.

Lenny:

Thank you, Karen, with that needed disclaimer. And let’s bring on our guest. So, Cliff Young. He’s the president of polling and societal trends at Ipsos. Cliff, welcome.

Cliff:

It’s great to be here. Thanks a lot. I can’t wait. I mean, that’s all we’re talking about these days, right? Polling and the results of polling.

Lenny:

Yes. And for those who don’t know, Cliff is a legend. I followed you for years, and why don’t you kind of talk about your background really quickly for the audience who may not be as familiar, and then we’ll dive into the meat.

Cliff:

Well, I’m a pollster. I consider myself a pollster. That is a thing. It’s not an upholsterer, which at times people thought I was. I had some relatives that wanted me to do their—redo their sofas.

Lenny:

[laugh].

Cliff:

And so—but I’m going to poll. I’m a pollster. And I came by that sort of, like, randomly, because I thought I was going to be a lawyer or a doctor. Like most of us in market research, we didn’t aspire to the profession, but here we are. You know, I studied as an undergrad, social science and economics. And as a graduate student, I studied behavioral science and statistics. I did my PhD, ultimately. And I’ve been doing this since then, since the mid-’90s. I lived in Brazil for ten years. My wife is Brazilian. My kids are Brazilian. I had that Brazilian and Latin American experience in polling. And I’ve been in the US since 2008, and been running Ipsos’s polling operations since then.

Karen:

I just have to say, my son is married to a Brazilian woman, and I’m about to have my first Brazilian grandchild, or she’s about to give birth to my first Brazilian grandchild. So someday we’re just going to talk about that, because I have—

Cliff:

Yeah, we should. I have three.

Karen:

[laugh].

Cliff:

I have three kids, and we produced three Brazilian grandchildren.

Karen:

I love that.

Cliff:

My wife and I. And, you know, listen, it wasn’t well planned, my move to Brazil. It was basically out of love. And so I spent ten years there because of that. But I’ve been—like I said, I’ve been back for a bit as well, in the US.

Lenny:

Very cool. I have no Brazilian connection other than you two, but…

Karen:

[laugh]

Lenny:

[laugh] So tell us the difference between being an upholsterer and a [laugh] and a pollster. Take it from the top, Cliff. For our audience, explain the difference because—well, actually, before you do that, for our audience, here’s my—I also worked in polling early on in my career, and although it is—I consider it adjacent to market research, it is a very unique science and a very unique aspect of the broader market research industry. And I think that’s why it’s important that we have this conversation because traditional, especially commercial research, may not really understand the intricacies that go into social research and polling. So now I’ve set you up. So tell us the difference.

Cliff:

So that’s the softball. Well, first and foremost, a pollster doesn’t work with sofas, and so that’s the first thing. But ultimately, a pollster, I believe, is the linchpin of democracy. We bring voice to people. We take this voice to decision makers. We express it because it’s oh so important in democracies to have public opinion’s voice heard. And we’re seeing that here today. Ultimately, you know, given the unprecedented events in this electoral cycle, I feel privileged, actually, to be here measuring public opinion as we work through the kinks and the ebbs and the flows of this electoral cycle. But ultimately, it’s a noble profession, and it’s one that is critical. Now, we use a lot of the same methods that market researchers at large use. We’re worried about samples. We’re worried about questionnaires, not biasing them, not biasing the samples. We’re worried about good, pithy analyses for our clients, whether it be the media on the one hand or proprietary on the other. But we place special importance on the robustness of the method because once again, as I was saying before, at a certain level, obviously, we are the guardians of public opinion. We’re the ones—besides election day—who hear and see the voice of the people. It’s a very sort of special day, a special moment, not just in the American experience, but in general—election day. Besides that, there are very few signals that decision makers have about what public opinion is thinking at a given moment. Bread riots, protests, those sorts of social spasms that we especially saw before the modern era. But the poll and the pollster today are the modern instrument to bring, you know, public opinion’s voice to those that matter.

Karen:

Can I just ask, Cliff, because one of the thoughts I’m having as you’re talking is, first of all, you know, you’re not only a pollster, but obviously you are, you know, spearheading one of the largest pollsters out there. And I often say to the people who bring up polls to me, is you have to be careful which polls you pay attention to. They must be credible. And Lenny had said something on the pre-call about that, how not all polls are created equally. And I’m curious as to how you would explain to somebody who is not within the industry how you differentiate between the various polls that are out there at a time when it’s really important to pay attention to results and understand them.

Cliff:

Yeah. Well, that’s obviously a great question, and I’m a big believer in averages. If the average wasn’t the most important invention in human history, it’s one of them. I think taking the average of many polls and many pollsters is always the thing to do. But that said, there are some that are better and some that are worse, some that exercise the craft in a robust way, and others that have other sorts of motivations. Any given country has its own professional organization, and those professional organizations have ways to assess the pollsters. In the case of the United States, the American Association of Public Opinion Research has an initiative called the Transparency Initiative, which basically is a good seal of approval, let’s say, that you’re following the methods and the protocols that are necessary to be transparent. And so that’s a way for the consumers of polls to assess pollsters and the polls. Are they part of the Transparency Initiative or not? Like I said, other countries have similar initiatives. But at the end of the day, from an empirical standpoint, it’s tough looking at any given poll because of noise. You’re much better off looking at the average.
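[Editor's note: the averaging logic Cliff describes — any single poll can bounce around within its margin of error, while the average of many polls converges on a central tendency — can be illustrated with a small simulation. This is an illustrative sketch only, not any pollster's actual methodology; the sample sizes and support level are invented for the example.]

```python
import random
import statistics

def simulate_poll(true_support, n=1000, rng=random):
    """Simulate one poll of n respondents, each backing the candidate
    with probability true_support. Returns the observed vote share."""
    return sum(rng.random() < true_support for _ in range(n)) / n

rng = random.Random(42)
true_support = 0.52  # hypothetical true level of support

# Twenty independent polls of the same race.
polls = [simulate_poll(true_support, n=1000, rng=rng) for _ in range(20)]

# Any single poll can miss by a couple of points (sampling "bounce")...
worst_miss = max(abs(p - true_support) for p in polls)
# ...but the polling average sits much closer to the true value.
avg_miss = abs(statistics.mean(polls) - true_support)

print(f"worst single-poll miss: {worst_miss:.3f}")
print(f"polling-average miss:   {avg_miss:.3f}")
```

Two polls a few points apart are usually both within normal sampling error, not genuinely "at odds" — which is why aggregators such as FiveThirtyEight and RealClearPolitics, mentioned just below, report the average rather than any one poll.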

Lenny:

So you’re a fan of FiveThirtyEight, RealClearPolitics, you know, those initiatives that average the polls overall?

Cliff:

Yeah, I think those are—I think they’re part of the evolution of our profession. You know, polling is a profession that a lot of the information is actually public. Indeed, when we talk about market research in general, that is typically not the case. Right? It’s proprietary data for clients specifically. It never sees the light of day. A lot of what we produce is a public good. It produces a public good. Other polling outfits produce public goods, and that allows sort of outfits like FiveThirtyEight to aggregate all that information. And I think that’s great. Indeed, what we’ve seen over time is the proliferation of these aggregators around the world. So any given large country, or at least advanced industrial country, will have their own aggregators. And I think it’s a good thing ultimately.

Lenny:

So let’s get in. Let’s get nerdy here for a minute, right? So…

Karen:

[laugh]

Cliff:

Oh, we—I don’t have to get nerdy.

Lenny:

All right [laugh].

Cliff:

I start nerdy. I’m already there.

Lenny:

We’re gonna embrace it, so we’re gonna just let our freak flags fly.

Karen:

I’m gonna say, Lenny, it doesn’t take you much. You pretty much [laugh]—

Lenny:

It does not. It does not at any given moment.

Cliff:

What you really should have said, Lenny, was, let’s stop being normal [laugh].

Karen:

[laugh].

Lenny:

[laugh] Well, I don’t know because I don’t think I ever started. But [laugh]… All right. So when I think about the differences in polling, right, a few things pop up, right? And one is registered voters versus likely voters, turnout models, obviously, you know, the sample composition, the challenges with reaching certain populations. So kind of talk us through that framework on obviously knowing there is validity to likely voters versus registered voters, and there’s arguments to be said on which is a better model. But just kind of talk us through that thinking, if you would, on how you determine the best approach to create the highest quality poll.

Cliff:

Yeah. So one of the tricky challenges of political polling—and now I’m not talking about public opinion polling. Public opinion polling, you basically want to get a gauge of what the general population of adults is actually thinking. When we say political polling, we’re talking about some subset of the general population that will ultimately vote in the upcoming election. That’s the tricky thing because the first thing you need is a robust sample of the population. And even if you get that, you have to determine who will actually vote. All right? And so that’s one of the biggest challenges that ultimately pollsters have. And by the way, the parallel in market research would be modelers that do a sample, and off the sample they forecast out or predict out sort of some subset of that universe. So that’s a tricky thing, like who’s going to vote, right? Ultimately, that’s a subset of the general universe. And so really what you’re talking about, Lenny, are different ways to think about the universe of interest. Am I talking about the general population? Am I talking about some sort of subset, maybe registered voters? Or within registered voters, am I talking about those who will actually vote? When it comes to prediction in terms of elections, we’re talking about that subset of who is likely to vote, ultimately. And like I was saying, it’s a tricky thing because not only do you need a robust sample, but you have to have a robust model as well. And that’s, as I was saying before, the central challenge of polling and pollsters when it comes to that specific issue. Now, we don’t always look at likely voters because people’s intention to vote is really fuzzy far out, and only solidifies as we get closer to election day. So typically at the beginning of an electoral year, let’s say in January, we will only be looking at the general population. 
And then now, for instance, we’re looking at—Ipsos is looking at, and others are looking at registered voters, and in a little bit we’ll be looking at likely voters, that subset of the subset. Right? But ultimately we have to understand conceptually that we need a robust sample on the one hand, and then a robust model off that robust sample on the other.
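[Editor's note: the "subset of the subset" Cliff describes — general population → registered voters → likely voters — amounts to applying a screen to the raw sample before computing vote shares. The toy model below is purely illustrative; the screen criteria (stated vote intention plus past turnout) are invented for the example and are not Ipsos's actual likely-voter model.]

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    candidate: str    # stated vote choice
    registered: bool  # registered to vote
    intent: int       # self-reported likelihood to vote, 1 (no) to 5 (definitely)
    voted_last: bool  # turned out in the previous election

def likely_voter_share(sample, candidate):
    """Toy likely-voter screen: keep registered respondents who either
    report high intent (>= 4) or have a record of past turnout, then
    compute the candidate's share within that subset."""
    likely = [r for r in sample
              if r.registered and (r.intent >= 4 or r.voted_last)]
    return sum(r.candidate == candidate for r in likely) / len(likely)

sample = [
    Respondent("A", True, 5, True),
    Respondent("B", True, 4, False),
    Respondent("A", True, 2, False),   # registered but unlikely to vote: screened out
    Respondent("B", False, 5, False),  # not registered: screened out
    Respondent("A", True, 3, True),    # soft intent, but a past voter: kept
]

print(f"Candidate A among likely voters: {likely_voter_share(sample, 'A'):.0%}")
```

The hard part in practice, as Cliff notes, is not the filtering itself but building a robust model of *who* belongs in that likely-voter subset, since stated intention is fuzzy until close to election day.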

Lenny:

Now for the robust samples, let’s talk about that. Just the practicalities, methodologically, of engaging with people in a very fragmented ecosystem of how people engage. Right? With technology and the—I mean, I know you’ve been around as long as I have, and, you know, the advent of online versus the old school of, you know, pure telephone, and I remember the pain of that debate. Well, no, we can’t use an online sample because it’s not representative. But yet, the reality is most people are more—now more engaged via online than telephone. So just kind of talk about that piece of things and how we manage this very complex and fragmented communication channel with the voting population.

Cliff:

There are a couple of things going on. I think the first thing is more philosophical. I would argue there’s no such thing as a bad sample. It’s just, like, what do you want to use that sample for? Like, what are the objectives? So, you know, I need a super robust representative sample if I’m really worried about a point estimate. And indeed, you know, the percent who vote for one candidate versus another is that sort of point estimate that has to be extremely precise. If I want to tell a client that it’s getting worse or getting better or getting bigger or getting smaller, my sample could be a little bit fuzzier. Right? And so, you know, it really is a fit-for-purpose issue. It really depends on sort of the objective. But going back, ultimately, yes, we’re in a difficult world today. The ways in which people interact with each other, whether by communicating, speaking, or in other ways, or engaging in person, have become fractured and are increasingly changing. And our methods are increasingly changing as well. Indeed, there was a study done by Pew that shows that never before in the modern era, or at least since they’ve been tracking it, have so many polling firms used so many different methodologies to get at the population of interest. Right? And I just think we need to be eclectic. It’s okay to be eclectic. We shouldn’t be absolutists when it comes to methods. At Ipsos, we use a variety of methods depending on, like, what the actual objective is. And so, yeah, we’re in a moment of a high degree of fracturing or heterogeneity, and we should be eclectic in the way we approach it. Ultimately, whatever we do, the sample has to be robust. Right?

Karen:

I have to ask because, really, it wouldn’t be a conversation with Lenny and I if we didn’t talk about AI these days. So I’m sitting here thinking. We’re talking about trends and methodologies and being eclectic and doing different things, and AI, just in general, is, you know, a heated topic right now, but how are you integrating, or are you integrating some AI thinking into what you’re doing at Ipsos specifically, but just in the world of a pollster? Because that’s, I’m sure, complicated right now.

Cliff:

Yeah. We’ve been applying it more as a data reduction method, where we have long press releases or write-ups, and we’re trying to, like, reduce it into the key points. There’s a goofiness there. Right? We always have to edit it. You can’t just—you know, [laugh] you can’t just send it off. Right? But it does help. I think that’s important. I mean, there’s a lot of talk about synthetic data and synthetic consumers, synthetic voters. I think everyone has to understand, fundamentally speaking, we’re talking about models with data together. Right? And, you know, the output is only good as the data on the one hand and the model on the other. I don’t think we’re there in any place where, you know, it works in a robust way, but we’re playing with those sorts of things here at Ipsos. So there’s a lot of experimentation, especially with text and content, but also thinking about synthetic consumers and voters on the other hand.

Lenny:

You know, when people ask me—this is my personal life, right? Because they know what I do for a living—why do polls change? Like, well, it’s similar to an ad test, right, I mean, with the news cycles that are designed to move opinion. That is what they do. Therefore, polling is the mechanism to understand the impact of that and measure that in as close to real time as we can get. Because the news cycles seem to move so fast now—and you said it earlier, unprecedented cycle. Right? I mean, there’s been a whole lot of—

Cliff:

Yeah, I mean, it’s just all over the place.

Lenny:

Yeah. So how has that changed? My perception is that that phenomenon is growing over time. Right? That it’s—there used to be some level of stability and predictability just as a consumer of news, where now it just seems like every day, [laugh] you know, like, oh, what’s that? Okay, well, that’s new. And what kind of challenges has that created for you for pollsters who are just staying abreast of such a rapidly changing environment?

Cliff:

Well, we do—this electoral cycle, we do have a lot of change. We have a lot of events, unprecedented events. And obviously, they have an impact on public opinion and on polling, more specifically. But, you know, I would say for the most part, I would argue that things don’t change much, like most everything is constant or near constant. And the things that change change very slowly. So, like, most of the long-term, profound change is slow moving. It’s not fast moving. It’s not us changing our opinions. It’s actually populations changing their profile, so older people dying off and younger people coming into the population. So we have to keep that in mind. Now, that said, when people look around, your average person kind of looks around and looks at the polling. Yeah. There are a lot of things going on that could signal to them that things are moving or fuzzy, right, at best. And so one of them is just that they just took two different polls that have their margins of error, and they’re kind of in different places. But if you take the average of all the polls, you know, there’s a central tendency that makes sense. Right? So that’s a lot of, like, the false positives that happen out there. Two different polls that look like they’re at odds, but they’re really not at odds just because there’s sample bounce. The other thing, and this is what you mentioned, Lenny, events have impacts. Most events don’t have long-term impacts. Most events have very short-window impacts on public opinion. Right? We know that based upon actually an assessment of events.

Lenny:

The bump.

Cliff:

The bump. The bump is a bump because the bump goes up and the bump comes down. Right? Yeah. That can create this notion that sort of public opinion is volatile. But again, like for the most part, across most domains, over time, public opinion is very stable. Or if it moves, it moves in directions that make sense. Right? And I think we have to understand that as analysts, that most of the quick movements we see are going to be ephemeral at best, and they’ll probably revert back to status quo ante or regress to the mean quickly after the event.

Lenny:

So even in such a polarized and tight race, right, I mean, it seems like we function in the margins, you know, to a great extent now. I would think that those, because we’re functioning in the margins, those one or two percent differences can have a profound effect, where everything else is fairly normal. Is that—am I thinking about that? I would—what you just said, I would think I’m thinking about that incorrectly. So what’s your take on that?

Cliff:

No, but there are two different things. One thing is sort of like the polls bouncing around. Right? And so I was giving you an explanation of that. So some of it is just margin of error. Some of it, like, events have impacts, but most events don’t have long-term impacts. Mostly we don’t change our opinions or behaviors, but if we do, it’s usually slow moving. That’s different than, obviously, what will decide this election. This election could be decided on a few hundred thousand votes in three states. Really, at the end of the day, I know we’re not going to get into specifics of this election, but there are probably only three states—in the Midwest—that really matter. Yeah. So those sorts of things are important. And so both of the campaigns here in the US today are working very hard to gain those few points that will get them across the finish line. But in my mind, those are a little bit different. Those two—they’re different dimensions. Right?

Karen:

Cliff, you mentioned, you know, kind of some of these trends, and obviously before we were talking about Brazil. And what comes to mind was, you know, not during our current election cycle, but they came off of one not that long ago, which was equally contentious. I’m wondering if you are seeing similar patterns in international markets, and I’m assuming methodologies are similar and, you know, the way to go about the sampling is similar. But are you finding any key differences in polling, you know, in the Latin American region versus, you know, the US versus over in Europe? Are there global differences we should all be mindful of?

Cliff:

No. I mean, obviously, depending on the market, you’re going to use different methodologies. You can’t use online in India, as an example, just as an extreme example. But I would make the argument that our politics today is global, and those politics affect our methods. And so what do I mean by that? If you look across the board around the world, and we’ve done this at Ipsos, there’s widespread sort of populist sentiment, belief that the system is broken, the need to sort of, like, you know, take the system back. Whoa, that sounds familiar, right? But that’s everywhere, right? Belief that parties and politicians no longer work for the average person. And so there’s this sort of syndrome of attitudes that lend themselves to anti-establishment, populist-oriented candidates and governments. Right? But they also affect the method because they’re attracting, or they have attracted, individuals who heretofore have never participated in politics. And so you get your individuals that are in the basement [laugh] around America and around the world that never leave it, that are playing, you know, video games—and no criticism of video games, but I just want to give a mental sort of image of people that are, like, disconnected, right, in part from society, who never participate in polls or surveys, who don’t participate in anything, typically, who are off the grid. These individuals have been attracted by strong anti-establishment brands like Trump and Bolsonaro in Brazil, and they are wreaking havoc on our methods. Because our methods never needed to measure them before because they didn’t vote, but now they’re voting and we have to measure them. And so what I would say is politics has changed. We’re in a context—and this is for the medium to long term—we’re in a context of heightened uncertainty and anti-establishment sentiment, and that context affects our method.

Lenny:

Yeah, that’s—I so appreciate you saying that. I remember at the end of the last election, and we saw this immediate fragmentation of social media and the development of alternative platforms that were basically echo chambers based on specific populations. And my first thought was, well, there goes getting an integrated view of the population because now, you know, it’s just that much harder now to engage with this new and emerging group and regardless of political spectrum that was occurring across the board. And I don’t think it’s gotten any better [laugh]. So hats off to you. I know you have a tough job in trying to navigate through all of that. And you’ve kind of hinted at this, or maybe you haven’t, but I imagine that Ipsos has—you have such longitudinal data, where you have such a breadth of data that you have looked at over the years. Is there something that you could point to, or have you done the analysis to look and say, you know, we were kind of running along, just kind of normal, and then, boom, here came this shift that you’re describing now that, you know, is playing havoc with our ability to actually accurately understand what may happen because of this new political environment we live in?

Cliff:

Well, I think you’ve pointed it out. I would say it is the most profound happening in the last generation or so. And it is impacting politics on the one hand and our ability to capture the voice of the people on the other. And we’re all struggling with it. We’re struggling with it everywhere. Indeed, to be quite frank, like, you know, we’re doing the US elections today. You know, we covered the Brazilian election as well. We did the Turkish election. We did the European elections. We did the UK elections. And, like, we did the Indonesian elections. I mean, we were all over the place—and in India as well. And we’re confronting the same thing over and over again. So, like, heightened uncertainty, heightened systemic distrust, a polarized environment, the inability to come to consensus, complications with governance. You know, obviously, there’s variability across countries, but it is a zeitgeist today in the world that, like, again, makes it complicated, obviously, from a method standpoint, and more specifically from a point estimate standpoint, because you might be missing some people. You might miss by a point or two, Lenny, as you were saying before, and that’s so important. Right? But what I would also say, I think that the polling industry has done a very good job of measuring this change, of understanding and anticipating what is happening and will happen. I’ll give you an example, just a very simple example. So in 2015, I was sitting here in my DC office with the team, trying to understand why Trump didn’t implode. He was just doing his thing all over the place, and he wasn’t declining in the polls; he was going up. And we basically started taking things from his speech or speeches and we started testing them. And then we realized, man, there’s really something here. There’s a very strong nativist sentiment in America. Then we went a step further, and we created an index. 
We created a measurement, like an official measure—by the way, it’s been peer reviewed. It’s in peer reviewed articles now. And because we’re at Ipsos, we’re fortunate to be at a place like Ipsos. We took that index, and what do we do? We start tracking it globally, and what do we find? That it’s everywhere.

Lenny:

Brexit, all those things.

Cliff:

And this is just—it’s everywhere, yes. And it’s not just an Ipsos thing. We were just fortunate to be at an Ipsos to be able to do that. Other pollsters in other places are doing the same sort of thing. And so I think as an industry, we’ve done a great job. Maybe we’ve had a challenge on the election side because of the very reason that we have, you know, individuals who are in holes and stuff that only come out of them when they’re really attracted to a given political brand. But setting that aside, I think we’ve done a wonderful job of telling the story of these changes and what they ultimately mean for all of us.

Karen:

One of the things that has me, you know, thinking here is, you know, the idea that the polls come out, and there’s not a lot of that understanding of why, when a consumer would see a poll publicized on a television channel or on the Internet or whatever, showing up in a newsfeed, somehow or another. You know, there isn’t a lot of that context of why. Cliff, just for background, my background is qualitative. So I’m always thinking about, you know, the qualitative response to polls and how there’s discussion, and I know that there’s a misunderstanding in the world when they show these focus groups that aren’t focus groups, talking about how people vote and all of that. So how does Ipsos, and in general, just kind of another credible pollster, get to some of these whys and then share that information out in a way that’s credible?

Cliff:

Yeah, I think that’s a really, really good question, and let me step back a bit and, you know, talk about the pollster. Because 98 percent of the time, when we talk about the pollster, we’re talking about the craft of polling, the technical aspect of polling, you know, the importance of minimizing bias and error to get that kind of point estimate that everyone can sort of believe in and that does a good job of predicting, ultimately, the election results. Elections are our standard, right? They’re the rubric by which not just polling and pollsters, but the whole industry is assessed. And that’s why at Ipsos we take great care in that, because we understand it probably has an outsized impact on sort of people’s perception of the industry at large. But there are other activities of the pollster. And so, you know, one activity, not to go into so much detail, is to convince public opinion of stuff, is to push the needle. So your political pollsters and the campaigns are trying to do that. They’re trying to convince people to vote for Harris on the one hand or for Trump on the other. So the spin doctor sort of persona is our persona as well, not just the data scientist. But the other is the forecaster, the fortune teller, like, to say something about what will happen. And I would say that sort of persona, that sort of activity of the pollster, a lot of it is about context and understanding the broader picture. We have to understand history, what happened in the past, to predict the future. We have to understand what happens around the world. And there we’re using both quantitative data and qualitative data. On the one hand, the quantitative data might be aggregate databases of elections around the world, as an example. The qualitative might be really going deep into the calculus of voters to understand what the true motivators for their vote really are. 
I’ll give you a data point, very interesting data point based upon a data set. We know that the candidate that’s strongest on the main issue wins, on average, about 85 percent of the time. But the critical point is you got to know what the issues are. You got to know which candidate is stronger or the strongest on each of those issues. And you have to track that over time because it can change, right? And so, you know, again, sort of those three personas. One is the data scientists. We mostly talk about that. But you have the spin doctor. Let’s set that aside. And the fortune teller, let’s say, with the crystal ball context is preeminent. And how do we communicate that as Ipsos or communicate that to the market or to clients? I would say that 80 percent of my job is sort of explaining to the markets, you know, on media on TV or radio, or to clients sort of like that context, so they can situate themselves inside that context. So we spend a lot of time doing that.

Karen:

You know, I think about the people that have come to me and shared a poll, and I’m like, well, no, that’s just marketing. That’s not the same.

Cliff:

[laugh].

Karen:

And, you know, and I could get myself crazy talking about this to people who don’t understand. It’s almost unfair to human beings who don’t have a background in these types of methodologies. So I wonder what the—I wonder what the public service announcement is to people who don’t understand all of those nuances of polling, or if there is one.

Cliff:

Yeah, I mean, in a nutshell, I mean, I think that we should understand that we are much more than that point estimate on election day or that point estimate compared to election day. And I think the most important thing we bring to the table—I think our most important, ultimately, you know, role actually is, is to provide that context so decision makers and citizens, more generally, can understand things. And I think that gets lost a lot. Now, in my specific case, I try to do that all the time. I do that in my writing. I do that in my media engagements by trying to provide that context. But we do that all the time with our clients. Right? It’s much less about that point estimate, much more about that context in general. But I think what happens is there’s a centrifugal force that forces us as a marketing profession to talk about the accuracy of polls. Obviously, that goes to credibility. It’s important. But that’s so much less important, I believe, than many of the other things we actually do, most important of which is providing context.

Lenny:

So I love that, Karen, the example between, you know, this is a poll, this is a marketing effort, which obviously campaigns do that, so… Right? And they want to influence things. But it brings up kind of, you know, an ethical framework that’s part of this conversation of how do we educate the public to discern, right, to be able to kind of think critically and determine, you know, what actually is a high-quality poll like from Ipsos versus, you know, a marketing effort, a PR stunt. So what do you think, Cliff? Is there something that you thought, we really need to do this? And we joked before the show of we need to have a documentary about polling. What do we do?

Cliff:

We need a series on HBO.

Karen:

[laugh].

Lenny:

I’m in. Let’s—

Cliff:

With love interests and sort of probably some death or murder.

Lenny:

[laugh].

Cliff:

Yeah. Yeah.

Lenny:

Yeah. All right. We’ll have to pursue that a little bit more.

Cliff:

Yeah.

Lenny:

But how do we approach this? Because I think it’s wrapped up in this whole idea of distrust. Right? That is part of the zeitgeist, where we are now is just a distrust of the establishment. Right? For whatever reason.

Karen:

Well, but also of, you know, data quality issues and data integrity.

Lenny:

Sure. Sure.

Karen:

There’s distrust of some of other things, too. Like, I don’t think it’s just the establishment we’re talking about. We’re talking about—

Lenny:

Right. It’s a distrust of the world.

Karen:

—yeah, a time where credibility is of the utmost importance because there have been things, factors at play, that lead people to wonder and question.

Cliff:

Yeah, I mean, that’s a difficult one. Right? So some countries lean very hard on regulation and legislation. Right? The US doesn’t. Right? We’re a much more libertarian-oriented society. We don’t like that. We don’t like the government in all of our stuff. It’s about the marketplace of ideas. You know, I think as a profession, we can only control what we can control. And what is that? Well, you know, our professional organization has a way to say whether you’re doing a good job or not. If you’re a polling outfit that doesn’t have that seal of approval, buyer beware. Not that everyone knows that. And I think we can only do what we can. We can control that. And we can control doing a good job with our own polls, being, you know, as accurate as possible. We can go out into the marketplace and explain them and provide context. I mean, anyone who’s ever sat through an hour’s worth of C-SPAN calls knows that there’s a lot of distrust of polling out there. Right? But we need to go out there and take our time to explain things to people. Now, can Cliff Young or Lenny or Karen, can any one of us, you know, have a major impact? No, but as a profession, we should go out there and make our case about why polls are ultimately important. I think that’s all we can do. When you look at pollsters as a profession, you know, it’s not terrible. We’re not awesome. We’re not as bad as politicians.

Lenny:

[laugh].

Cliff:

You know, we’re not there yet, but we’re kind of in the middle. And does that hurt us? Probably hurts us. But ultimately, you know, we can only control what we can control. And those are the things I think we can control.

Karen:

And I’m sitting here thinking like, wow, there’s probably listeners who are insights professionals who want to get into political polling, that maybe they’re finding their voice or, you know, a younger generation that is certainly coming into their professional careers with a lot more passion for this sort of thing than I had when I was in my 20s, early in my career. So what do you say to somebody who might actually be thinking, is this a hat I want to try on? Like, do I want to shift from insights into this space?

Cliff:

Well, they can reach out to me, and I can have a conversation with them. I can’t promise a job, but I can definitely talk about what it means.

Karen:

What makes somebody successful in that line of work?

Cliff:

Very—like, I would say a very eclectic set of skills. I would say a very good pollster is one that understands society, so a sociologist, right; one that understands politics, so a political scientist; but also sort of a psychologist or a cognitive psychologist, one who understands how information is processed in order to optimize messaging. And lastly, a statistician, both on the design side, you know, understanding samples as an example, and in terms of forecasting. And so it’s a very, very eclectic, multidisciplinary discipline. And indeed, there’s no single source up to now, right, that one could go to, to kind of glean all that. But being very multidisciplinary and eclectic, I think, is important.

Lenny:

Maybe Ipsos should launch the Cliff Young program for multidisciplinary training for future pollsters.

Cliff:

I do have a book coming out on that issue.

Lenny:

Oh, well, all right.

Cliff:

Okay. So, I mean, I was making a sort of very sort of a sly plug for it right there.

Karen:

[laugh].

Lenny:

[laugh].

Cliff:

No, it’s called—I have a book coming out. It’s coming out. It’s on Amazon already. It’s called Polls, Pollsters, and Public Opinion: A Guide for Decision-Makers. By the way, it’s based on a course that I’ve taught for the last 15 years at Johns Hopkins to graduate students, but it does exactly what I lay out. It’s a synthetic view of the polling profession. It looks at those three activities, the data scientist, the fortune teller, and the spin doctor, and kind of brings together those different disciplines in one easy place. It’s Cambridge Press, part of, I think, the social science methodological series. And so, you know, you could take my course. I mean, you could do that too. But no, no, you know, seriously speaking, it’s a book that synthesizes those different disciplines.

Lenny:

Okay, well, we’ll try and see if we can include a link when we release this. And I know what I’m doing in a few minutes. I’m going to order that.

Karen:

[laugh] I know. Lenny and I both open our Amazon browser tab.

Lenny:

That’s right. Absolutely. The—I mean, for me, I just—everything we’ve talked about, and especially the zeitgeist, does come down to discernment and trust, and it’s hard to determine that. Absolutely agree that’s the vital role of polling and research as a whole, right, is to provide a source of truth that is just so missing in so many ways. And I think it’s important for folks to understand. Is it perfect? Hell, no. Of course it’s not. Nothing is. For all the reasons we talked about—there are complexities. But for us to live effectively just as humans, you know, [laugh] as private citizens, as well as business, et cetera, et cetera, we need to have a north star of truth to be able to try and understand where we go from here. And I really appreciate all the work that you and your colleagues continue to do to try and deliver that. And people need to understand that so they can factor it into their lives.

Cliff:

That’s great. Thanks.

Lenny:

Yeah, you’re welcome.

Karen:

Cliff, is there anything that Lenny and I did not ask you during our time together that you wish we had?

Cliff:

Well, you didn’t—you didn’t touch on the elections.

Karen:

[laugh].

Cliff:

And so that was like a no-fly zone. I understand that. No, I think so. You know, I would just end it with this and say that, yeah, a pollster needs to be eclectically trained, as I was saying before. That’s important. And obviously, across the entire research profession, it’s about curiosity. Right? Like, one has to be curious, and that has to be the driving force. But what I can say is that the most pleasurable moments of my career, and I’ve had a number of these, though they’re very rare, are when I have data in my hand that no one else in the world has, and I know that when they have it, it will change things.

Karen:

That’s cool to think about right there.

Cliff:

Yeah. And I would say that that’s what polling and pollsters can do. It doesn’t happen all the time. It’s very, very rare. But, you know, it’s one of those things where, when you have that sort of information, you understand the weight of the moment. And it’s really that, together with the kind of more noble orientation towards bringing voice to people. You know, those critical moments, that’s what really gets me out of bed in the morning to do what I do.

Lenny:

Understood. I think, Karen, we could both probably point to a few moments like that in our careers. We thought, this is—

Karen:

For sure.

Lenny:

—this is important, and feel—

Cliff:

Yeah.

Lenny:

—blessed, grateful, whatever, right, and feel that sense of responsibility. And that is a big part of why we do what we do as well. So—and this conversation may be part of that as well. I think this is—I love all of our podcasts, all of our guests. This one feels very timely and important for the time. So thank you. Thank you, Cliff.

Cliff:

Well, thank you so much. It was great.

Lenny:

Now we’ll come back maybe in January and do a debrief [laugh].

Karen:

[laugh] Do a debrief…

Cliff:

Oh, yeah. See how we did right? Yeah.

Lenny:

A post-mortem. So [laugh]…

Cliff:

Yeah. Okay.

Karen:

Oh, no… We’ll see how all of that goes. But also, I think, Cliff—is it all right that I—I don’t know if you want to share what it actually is, but you share some amazing content on LinkedIn personally that is just informative, and it’s very nonpartisan. But, hey… here is something public that, you know, we can publicly share, that I’ve learned, and isn’t it cool? So that does come out on your LinkedIn feed. So I encourage people to follow, if you’re open to that.

Cliff:

Oh, that’d be great. I would appreciate that. Yeah. We try to release things that are instructive about what’s going on in the world—and insightful, right? It’s not just about the data point. It’s about the context. I think we’ve got the context—you know, we could always improve it, but I think we do a pretty good job at Ipsos at providing it.

Karen:

Yeah. Yeah. Well, and one of those things also, again, is public opinion, when you’re able to put it out. So much of what we do is proprietary, like we said in the beginning of this call. We just can’t get a lot of it out there. But in certain fields, you can. So if you’re somebody who really wants to [laugh] share what you learn in the studies that you do, consider going into this. Super cool.

Cliff:

At least some of it, right? I mean, let’s be like—

Karen:

Oh, yeah.

Lenny:

Yeah. We do have paying clients.

Karen:

I think you do. And that is pretty buttoned up [laugh].

Cliff:

Yeah, yeah.

Lenny:

Right.

Karen:

I’m sure. I’m sure.

Lenny:

So, Cliff, where can folks find you? Obviously on LinkedIn. And it’s Clifford Young for those that are—

Cliff:

Yeah, it’s Clifford Young. Just put Clifford Young Ipsos. I’m on Twitter as well. I think it’s Cliff A. Young. Those are the two primary places. Listen, my email at Ipsos is clifford.young@ipsos.com. If anyone wants to talk about the polling profession, I’d be more than happy to. And so I don’t know if I just opened up the floodgates.

Lenny:

[laugh].

Cliff:

But the floodgates are pretty much open already, so…

Karen:

Pretty much. Pretty much. Right?

Cliff:

Yeah.

Karen:

Well, that was generous and kind of you. Thank you so much for joining us on this show.

Cliff:

Well, thank you.

Karen:

I know I can probably speak for both Lenny and I to say that we’re excited about this one and happy to be speaking to you. So thank you. Thank you.

Cliff:

Yeah. Well, thank you.

Lenny:

Karen, why don’t you end this?

Karen:

Why don’t I wrap? Why don’t we shut this one down [laugh]?

Lenny:

Yeah. End this [laugh].

Karen:

Before it unravels. Thank you so much. So really, Lenny, thanks for the co-conversation, too; and Cliff; but also to our producer, Natalie Pusch, who set everything up and helped us with the brief, too, thank you, Natalie; our editor, Big Bad Audio; and of course, to all of our listeners, we are so grateful that you tune into us week after week. We hope you found some value in this one and look forward to the weeks ahead as we wait and see what happens in the world of polling. I know I’ll approach it with different eyes. I’m sure everybody listening will as well. So thank you, all.
