The Neuroscience of Decision-Making: Ethics and Elections with Michelle Nigella, PhD | Ep 43.
Episode 43 • 30th October 2024 • Greg Ballard Adaptive Executive Podcast • Greg Ballard

Shownotes

Podcast Summary:

  • Topic: The podcast discusses behavioral neuroscience, focusing on implicit biases and their influence on consumer and political decision-making. Key areas include how these biases affect voting behavior, market research, and ethical considerations in using neuroscience in marketing and political campaigns.
  • Guest: Michelle Nigella, PhD in Behavioral Neuroscience: specializes in consumer research, with insights into political behavior through neuroscience methods like implicit association testing.

Discussion Points:

  • Implicit Biases: Michelle explains how implicit association tests reveal unconscious biases, affecting perceptions in both consumer choices and political preferences.
  • 2016 Election Analysis: They reference data from the 2016 election where implicit biases against women in leadership were noted, particularly among Republican women, influencing voting behavior.
  • Application in Campaigns: Discussion on how campaigns might use such data for targeting voters or shaping campaign messages.
  • Neuroscience in Market Research: Exploration of how tools like eye tracking and biometric responses are used to predict consumer behavior, though not as a definitive predictor of market success.
  • Ethical Considerations: Privacy, consent, and the influence of research on decision-making are discussed, highlighting the ethical boundaries and public concerns about manipulation.

PODCAST INFO

Leave us a 5-Star Review on Podchaser (If we are not 5 stars yet, email us and let us know where we need to improve.)

Think you'd be a great guest on the show? Apply HERE.

Transcripts

::

Hello and welcome to the Adaptive Executive Podcast, where we meet with senior executives and discuss how to keep yourself and your organization adaptive and your employees engaged. My name is Greg Ballard, founder and owner of 5C Consulting, and I am your host. If you'd like to be considered as a guest for this podcast, you can apply on our website at 5c dot consulting; look for the word podcast. For now, let's dive into the show. Hey, and welcome to the Adaptive Executive. I'm your host, Greg Ballard, and with me for the second time we have Michelle Nigella. She's returning to us, and we're gonna have a little conversation on behavior and elections. So, Michelle, welcome back.

 

::

Thanks for having me, Greg.

 

::

Absolutely. So for those maybe tuning in for the first time, give us a little bit of background on your profession and what you geek out on every day.

 

::

Sure. So my name is Michelle Nigella, and I have a PhD in behavioral neuroscience. I've been working in the field of consumer research, with some dabbling in political science research as well, really focusing on the psychology and neuroscience behind why people do what they do. You know, what are the drivers that affect our decision-making? And so I think this falls really squarely in the space that we're talking about today.

 

::

Yeah, so we're going to talk about a few things. We'll talk about some data that you looked at in 2016 with Trump and Hillary, and then I'd love to just kind of unpack how polls work in general, and then maybe we can get into a little bit of the things we're seeing in the messaging in the current campaign, and how that may be driving some behavior. Sound like an interesting conversation?

 

::

Yeah, absolutely.

 

::

So before we hit record, you were telling me a little bit of information that you pulled from the 2016 cycle with, you know, Trump and Hillary. What was some of that data, and what might be relevant today regarding that data?

 

::

Yeah, so I've been doing a lot of work applying something called implicit association testing in the consumer space, to understand the perceptions and biases that people have when they're looking at brands. Implicit association testing is something that comes from psychological academic research; it's a way of looking at the speed of someone's response to get at what their implicit biases are. And implicit biases are these things that we're not aware of, but that influence how we perceive something. So it's usually all the isms, right? Ageism, sexism, anything you can think of that you could add "ism" to. Being able to see that someone responds, say, slower, that they're having to take a second thought about something, sort of reveals an implicit bias they might have. So the example is: if you were to see the name Jennifer come up on the screen, you would very easily say, oh, that's a female name, right? You could quickly say, yes, I agree Jennifer is female. But if the name Taylor came up, well, that's a little more ambiguous. It could be a male name or a female name, and so it might take you a little bit longer to respond. That delay in response reveals a bias you might have. So you have a stronger association for Jennifer being female than you do for the name Taylor.
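To make the reaction-time idea concrete, here is a minimal sketch of how a response-speed measure can be turned into an association score. It is an illustration only, not Project Implicit's actual scoring algorithm; the trial data and the d_score helper are made up for the example.

```python
# Simplified sketch of IAT-style scoring: slower responses on "mismatched"
# pairings suggest a weaker association. Illustrative only; not the real
# Project Implicit algorithm. The trial data below is invented.
from statistics import mean, stdev

# Each trial: (condition, reaction time in milliseconds).
# "congruent"   = pairing that matches the hypothesized association
#                 (e.g., "Jennifer" + "female name")
# "incongruent" = the mismatched pairing (e.g., "Taylor" + "female name")
trials = [
    ("congruent", 512), ("congruent", 498), ("congruent", 530),
    ("incongruent", 645), ("incongruent", 688), ("incongruent", 612),
]

def d_score(trials):
    """Mean slowdown on incongruent pairings, scaled by overall variability."""
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return (mean(incongruent) - mean(congruent)) / stdev(congruent + incongruent)

print(f"Association score: {d_score(trials):.2f}")  # larger = stronger association
```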

 

::

Click on that for a second, because it's really interesting. Really interesting. If I slow my thinking down, I'm biased? Is that it?

 

::

No, it's more about how closely things are linked. The more you associate something, the easier it is. Think about your brain and the ideas in your brain as being connected nodes in a network of thoughts, right? So the place where Jennifer is and the place where you have what a woman looks like, those things are tightly related. They're closer, right? So the response of putting them together is faster, whereas something might be further apart: let's say Taylor and being a female name. Or it could be something more in the isms, right? So, homemaker Jennifer, or lead engineer Jennifer. It might take someone longer to respond that Jennifer is a lead engineer than it does for them to say she's a homemaker. It's not to say that they don't think Jennifer could be an engineer or a lead engineer. It's just that they don't have as strong of an association for those two ideas together, and it's probably because of what they've been exposed to. So if you've spent more time meeting people not named Jennifer, or men, let's say, in an engineering leadership role, then you're going to be slower to think that Jennifer is an engineer over a homemaker or a teacher or something that's more traditionally female.

 

::

So I want to keep moving the conversation forward, but I think my initial reaction is that using the speed of response as a bias test seems weak to me at this point. Maybe there's more to it. Maybe there's a check. But that initial "if you have to think a little longer" thing, I'm not convinced that that's revealing true bias.

 

::

It is a little more complicated than that. I've simplified it for the purposes of the conversation, but there's a whole series that the test goes through to get to that point. It's essentially speed of reaction as a proxy for strength of connection, but there's a whole series to the way the study is designed, right? And there's a lot of evidence for this, too. You can go to Project Implicit on the Harvard website to see it; if you just Google Project Implicit, you'll see all the research that's been done on this. It's very well established. Not to say that there isn't some controversy, because there are people that say it's not reliable. One might argue that any of the measures in psychology aren't very reliable, simply because people change a lot, so there's a lot of variability in the data. You learn every day, and your mood changes every day, so any psychological measure tends to have some variability in it. It can be unreliable, but that's because humans are unreliable. But it is very reliable in the sense that it has been shown to be able to expose these biases. A different question is, can people change their biases? Are these things that become inherent? Can you learn? You may recall that even during that time, in 2016, when a lot of this work was coming out more in the political space, people were also doing a lot of implicit bias training, where they were trying to get rid of people's prejudice, and there is some controversy over whether that's even possible.

 

::

Well, so this is where it gets interesting for me, because we have consciousness, we have unconsciousness, and our unconscious drives a lot of our behavior. Can our unconscious be biased? Well, I think, no, 100% it's biased. I'd be much quicker to say our unconscious can be ignorant, or uninformed, or responding to things that are not there, you know, due to past trauma. That I can get to very, very quickly. But when I hear the term bias, and obviously terms and definitions are a little fungible lately, it denotes something intentional. It's almost associated with prejudice. And I think those words should be different. I just don't think they are different in our modern zeitgeist and vernacular.

 

::

From the psychology standpoint, they are different, right? So when I say bias, I don't necessarily mean that the people that associate the name Jennifer with being female are wrong. That comes from learning; it's how we adapt to the world around us. When you talk about the non-conscious versus the conscious, some people call it System 1 versus System 2. If you look at Kahneman's Nobel Prize-winning work, or the book he did looking at systematic thinking, what you actually find is that, first of all, it's a continuum. It's not black and white that one thing is conscious and the other unconscious. But maybe more importantly, the unconscious side, the System 1 side, is shortcuts, and shortcuts are inherently faulty, right? Often, when we try to take a shortcut, or try to make the easy decision, the fastest decision, it's prone to error, and that's what we do have to be aware of. So yes, these things are built on experiences we may have had or things we may have learned, but they can be wrong. For example, the color blue has been found to denote trustworthiness. Does that mean that every time somebody is wearing blue, I should trust them? No. But it can bias your perception, and so this is something we have to be aware of. Can you train yourself to be aware of these things? Some people say yes.

 

::

So I wholly subscribe to a growth mindset. So coming back to biases: can we grow out of them? I would say I think so, and I think maybe there need to be different categories, because there are probably some where yes, and others where maybe not, and that depends on where they were constructed, how they were formed, what emotions are connected to them, and whether they're rooted in deep childhood, right? Because that has, I think, a lot of anchoring for us. But what I want to come back to: we're talking about biases and implicit biases, and some of the studies are showing this. I had another thought that I wanted to introduce, but it escaped me; if it comes back up, I'll interject it. So some of these studies were saying we can see some implicit bias, we can track what people are making a judgment on, and how that might influence their behavior. So let's step back into that, and what these studies from 2016 are showing us.

 

::

Yeah, so back in 2016 I was working for a research company called HCD, which has a history of nearly 30 years doing work in the political space and asking questions of voters that may be a little different from your traditional polling questions, really trying to get at the psychological influences that people might have. And so, in working with 538 at the time, we decided to use implicit association testing to see what was going on with voters. This was with Republicans who planned to vote for Trump, undecideds, Democrats who were thinking about voting for Hillary. You know, really a mixed bag, trying to get as many different types of people as possible. And not a lot actually came out of it initially, but what we did find as a strong signal there was that Republican women had a stronger bias against women in leadership. So when we think about those isms, they were particularly biased against, and had more of a delay in, matching women with leadership roles. And this is really interesting, because they also planned on voting for Trump, as opposed to Hillary. Now, they were Republican women, but what was interesting is that they had a stronger response than the men. That is why it stood out as being interesting.

 

::

Let's turn this into what might be happening today, and how messaging in a campaign might take this behavioral science data and utilize it. We see this potential implicit bias with Republican females, and I don't know if it's towards or away from, right? Either towards a male leader or away from a female leader. How does a campaign take that data and convert it into a message?

 

::

Yeah, so one thing might be through targeting, right? In the world of implicit bias and implicit bias training, it is maybe about more exposure. Being able to see women in more of a leadership light might help tip that. But it also may indicate who you maybe shouldn't spend on. Like, if the bias is strong enough, maybe it's not something you can actually change, right? Maybe they will never be able to see a female president. And so these are things that help campaigns in targeting where their money should go. Maybe they actually get a better tipping point with the men in that case, because the men didn't have as strong of a bias against women in leadership. So I think it depends where the campaign actually wants to take it. Now, that does bring up the issue of interpretation of any of these psychological results, and that's really the core problem with any of this: how do you interpret it to make it actionable? That's the key ingredient, right, to be able to figure out what to do. So yes, you see this bias, but what's going to be most effective? Well, it might be that you have to do further testing. If we were to expose people more to this situation, would we be able to change their perceptions? Is there a way, maybe diving into some more behavioral science and decision science research, particularly within communications, that we can shift people's thinking? And that should drive how we construct our communications.

 

::

So I want to come back to the study for a second and just poke around, because, you know, if we're going to take a study and then make decisions based on it, we should look at how it's structured and ask: how plausible is this study? Is it relevant? So in this particular study, can you give us a little bit about the structure, how it was performed, and over what period of time?

 

::

Yeah, so we took a group of voters, and we had them run through this exercise where they were being exposed to men or women and different types of jobs and positions, to really see if there was some sort of implicit as well as explicit gender bias, because that was a real sticking point within the 2016 election. Now, measuring implicit bias can be tricky, right? Because it is unconscious, you can't very easily just ask how people feel. So using this sort of indirect measure of reaction time is really how it works better, to be able to get that response from people, because nobody wants to say "I'm biased," right? They're not going to tell you that. If you were to just ask them, "Oh, do you have a bias against women?" No, they're not going to say that. But if you can show, in a repeated way, that people display a pattern of delayed responses in matching these two items together, then you can show that there is this trait of implicit bias.

 

::

Yeah, so I have a bunch of questions about this. What was the size of the study? How many participants were in it?

 

::

It was relatively small when you think about it in terms of how polling typically goes, but it was 500 likely voters, trying to get about 100 people for each of the four groups, so Clinton and Trump supporters of both genders, and really modeling it after how those academic studies are done, like if you were to go to the Harvard site for Project Implicit. So that's really how it came about. And it was done online.
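As a rough illustration of how a four-group design like this might be analyzed, here is a hypothetical sketch comparing simulated bias scores between two of the groups. The numbers are invented, and this is not the actual HCD / 538 analysis.

```python
# Hypothetical sketch of a group comparison for the design described above:
# ~100 likely voters in each of four groups (Clinton/Trump supporters of
# both genders). All scores here are simulated, not real data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulated IAT-style scores; positive = slower to pair women with leadership.
groups = {
    "trump_women":   rng.normal(0.45, 0.30, 100),
    "trump_men":     rng.normal(0.30, 0.30, 100),
    "clinton_women": rng.normal(0.15, 0.30, 100),
    "clinton_men":   rng.normal(0.20, 0.30, 100),
}

# The episode's finding was that Republican women showed a stronger bias
# than Republican men; a two-sample t-test is one simple way to check
# whether such a group difference is statistically reliable.
t, p = ttest_ind(groups["trump_women"], groups["trump_men"])
print(f"t = {t:.2f}, p = {p:.4f}")
```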

 

::

Okay. And so, I don't know if clusters is the right word, but maybe clusters around females that would be pro-Trump, males that would be pro-Trump, females that might be pro-Hillary, males that would be pro-Hillary?

 

::

Yeah, so I think we first looked for people that were planning on voting, then whether they were Republican or Democrat, finding out if they were Clinton or Trump supporters, and what their gender was.

 

::

Okay, gotcha. So that was the way it was structured, and it was done online. Was this done over a short period of time, or over a longer period of time?

 

::

Yeah, a pretty short period of time. Whenever you run these studies, you have recruiting efforts to get people to participate, and that can usually be done within maybe two or three days.

 

::

And so, okay, we've got the participants structured out. Now, what was the framing of the questions? Was it text questions? Were there visual images? Because here's what I'm thinking about in a study like this: there are so many multiple factors, right, that come into deciding. So let me put on a leadership hat for a second. If I'm in an organization and I've got an open seat on my team, and it's going to be at an executive level, I need to fill that seat. Now, say I'm approaching this as intentionally, as consciously as I can, to be fair between genders and races, and I'm trying to be as open-handed as I can about that consciously, right? But maybe I have an unconscious thing that I don't know about, so we'll just put that on the table. Then I have to look at the candidates that are, you know, that are here. I have to say, we have a policy where we're always going to look to fill somebody at that level from an internal perspective. And I have a fairly populous group, a fairly diverse community. So there are so many other things that are going to come into the calculus of deciding that are beyond the identity pieces. Many of them could be their personality and how well they relate to the folks on the team. Obviously, competency is going to come into it, their experience and their education. And then, what's not often thought about: what is their future path, what is their future desire, and what skills do they need to develop to get there? And so if, in this example, I go through my process and I select Candidate B, who happens to be female, happens to be white, happens to be, you know, we list out all the different identity factors that could look biased for any particular reason...

 

::

Well, that's not how the studies are run, though. So that's a different...

 

::

Different than how the studies are run, yeah. What I'm saying is a decision-maker coming in and making a decision. So when you're in a study, for example, if I'm looking at a woman potentially being a governor, or a senior government leader, am I saying I'd rather not have her be a female because she's a female? Or am I saying, not this particular female because she never graduated college, she's never run a business, she's actually never held office?

 

::

100% right. Or maybe you didn't like a policy she was involved in, any number of things. Maybe she said something on TV you didn't agree with. And all of that is absolutely true. So when we look at these biases, it's an interesting factor, but kind of like what I was saying before: how do you interpret that for your campaign? You know, we were not testing specifically Trump or specifically Clinton. We were looking at the implicit bias that these voters had in general, right? We weren't testing specifically that, if that makes sense. When you run consumer research, or when you're doing campaign research in particular, you would want to look at all those different factors and think about the specifics of that candidate. But when you're doing more of an academic study, just trying to explore maybe one aspect that you can then do further research on, starting here is an interesting point. It's like: okay, we're actually finding here that the women had a stronger bias response to other women. That's interesting. It deserves exploring more. It does not mean that they would be against Clinton in particular; that's not what we were studying. It doesn't mean we know how they would react to hearing new information; that's not what it means either. It's just that there is this trait about them that we were able to uncover. Does it say something about how they might vote? Potentially. It shows something that's going to be a factor in their decision-making. Because if they're not viewing women as being in leadership positions, are they going to have a harder time voting for a woman as president? Potentially.

 

::

So we've got to use the right language around this. This particular study gives us a little slice of something that might be worth looking at further, yes, but it's not something to put a whole lot of weight on in a decision; that's what I'm hearing you say. And I think sometimes, and I'm not saying this study in particular, a lot of times we get studies like this that are foundational, I don't know what the right technical term is, but kind of these early insights that have the potential of being key factors, and they get taken, they get labeled "science-based evidence says X," and that gets pushed out into the media, into social conversations, and people start making decisions and taking action based on these studies, without getting some real context around, hey, this is the scientific process. The scientific process is really about falsification, and it's about continuously learning, and we have to recognize where we are in the process. I think this particular study that you're highlighting is really the beginning. It's the beginning that gives you insight into what to look at next. And, say, you take a dozen of these studies, and you begin to see these things all intertwine. Well, now you have the makings of a clear hypothesis, a clear theory that you continue to build on.

 

::

And I think what you're talking about there is a bias that people have in general when they hear something that's science-oriented. Was this a sound scientific study? Yes. But how does science work? It's an iterative process. A lot of people think, especially when I come in and I'm working with clients on questions about their brand or their product or whatever it might be, that science, neuroscience, psychology, is going to be some sort of silver bullet that's going to give them all the answers. Not necessarily. You have to do multiple studies. You have to look at things in multiple ways to really understand how people are making their decisions, because humans are complicated. There are so many things going on with them, and each person is different. My decision-making process is going to be quite different from yours, so understanding all the inputs that go into that experience is important. Is something like an implicit bias against a gender important? Absolutely. But it is just one piece of the puzzle that could be influenced by lots of other things as well.

 

::

Yes. All right, so here's a question in real time, and I'd just like to see what your reaction is: is true objectivity possible?

 

::

No. And I'll tell you why I say that, because this is a question that I actually get from clients all the time, especially since I do psychology and neuroscience work and it's very academically grounded. Often they think that coming in with some sort of neuroscience tool that's measuring the unconscious is going to get them some sort of unbiased measure. That's never true. Is the tool itself unbiased? Yes. EEG head caps on people are measuring electrical activity. But if a human has touched any part of the process of using that tool, it is inherently biased: from designing the study, to wiring a person up, to interpreting the results. Every step of that is influenced by a human, and biased. I can, obviously, design the study at the very beginning in a way that's going to give me the result I want. I can do the analytics in a way that favors the result I want to see. And I may not even be aware that I'm doing it. But because a human has touched it, it is biased. And that's a warning I give to people anytime they think that neuroscience is going to be some sort of silver bullet that solves all their problems and reads the mind. I'm like, well, did a human touch it? Because now it's biased.

 

::

So the next question is: does the scientific process eliminate subjectivity?

 

::

It's supposed to, right? The way it works is, again, being iterative. You should be able to repeat the study. You should be able to find other corroborating evidence; you should be able to use a different tool and see something that aligns with it. So building, as you said before, a series of studies, a bundle of evidence that all leads to the same conclusion, gives you more support. But the other side of it is that you also need critique. You need critique from other scientists. You need critique from the public, to point out that, oh, you know what, that part of your study is biased, or you didn't analyze this right, what if you analyzed it this different way? And that's what the whole scientific process is about. We do multiple studies. We think about the statistical power of our studies, the strength: how many people are required to be in a study to make it representative. We publish it so that other people can see it and critique it and complain about it. The publication process involves people having to review your work, and it can take years of going back and forth with these other experts about why you used this tool, why you came to this conclusion, why you designed it this way, why you settled on this number of people in your study. All of that has to be done for something to be more established.

 

::

Yes, yes.

 

::

And I will say, with that, that when you look at political research or consumer research, often there isn't time or space to do all of that. A lot of times you're running a one-off study and trying to run with the results, because your time frame is this small, right, to put out a new political ad or to put out your product. So you have to rely on maybe one or two studies to give you the direction. The question then becomes not "have I gone through the whole scientific process?" but "have I been cautious enough in my design? Do I have strong enough statistical evidence to say that this should predict behavior?" That's where a lot of people end up relying on things and really having to think about the statistical power of their study, or the design, something like that.

 

::

Yeah. And I think I'd even push it a little further. There may be people predisposed to take these surveys more than others, and therefore the population you're surveying is actually a subset and may not be fully representative. Because there are people that just don't do surveys; it's just not their thing. And there are people that are inclined to do them, for whatever reasons; maybe there are some specific motivations that they have.

 

::

Well, most people are paid to be in surveys, right? They're paid.

 

::

Yeah. So even looking at the population. So, a while back, I did an intro to statistics, and I was really fascinated. I don't know how this works in your field, but a statistical amount is 30 people. If you have 30 people, you can say you have statistical significance. And I was like, that's it?

::

I'll say it depends. I think yes, if you're trying to assess a larger population, you need a larger, you know, pool of people. And so this can

 

::

be very dependent on the type of tool you're using, the type of survey questions, the number of survey questions. But you can do a statistical power analysis to see if you have enough people. The magical number in quant research, if you're doing a survey, is usually 200. Again, that's very dependent on how variable your population is. If you're saying general pop, you probably need more than that, maybe a couple thousand. If you have a certain age range, or you're looking at a certain country, you can start to have a smaller population. If you also have a tool that's very reliable and very narrowly focused, you can go with a slightly smaller population. That's when you start getting into, like, your 30. If you're doing an EEG test, usually you do multiple repetitions of that measurement, so even though you're only measuring 30 people, you actually end up with maybe 5,000 measurements, because you've repeated things so much and you're looking at different aspects. So that gives you your statistical power.
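For readers who want to see what the power-analysis step mentioned here actually looks like, this is a minimal sketch using statsmodels. The effect size is an assumed value for illustration, not a number from any study discussed in the episode.

```python
# Sketch of a statistical power analysis: how many participants per group
# are needed to detect an assumed effect? The effect size of 0.4 is an
# assumption for illustration only.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,  # assumed standardized group difference (Cohen's d)
    alpha=0.05,       # acceptable false-positive rate
    power=0.8,        # desired chance of detecting a real effect
)
print(f"Participants needed per group: {n_per_group:.0f}")
# Smaller expected effects or noisier populations push this number up,
# which is why "general pop" surveys need far more respondents than a
# narrow, highly reliable instrument does.
```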

 

::

Gotcha. Yeah, that's really fascinating. This whole discussion is fascinating, and I think there is something here, right? I mean, you're in this career; there's a whole industry around consumer behavior analysis. So I'm going to make the presumption that it's proven itself from a business perspective, that there is a business value proposition in doing this work. And I'm curious, maybe you have some examples of projects you've been on, or projects you know from your field, where a study led to decisions and there was just a huge success. Are there any notable consumer behavior projects that come to mind?

 

::

Yeah, definitely. It's just about everything that you see out in the market right now. When it comes to packaging, I think just about everybody's using eye tracking now to help understand what is attracting visual attention, better than just asking people, right? Using these psychological principles of what drives attention, and thinking about the emotions involved in being loyal to brands or loyal to products, has been really quite huge as well. So you're seeing this in just about everything that's out there these days. It's become a huge part of product development and branding and marketing. If you were to look at Universal Studios, they actually have their own biometrics lab where they run ad testing, and they do that on site at Universal Studios. They have a number of people just walking by all the time that they recruit to watch ads, and then they test them using these biometrics to see what their responses are. So it's just becoming more and more part of the everyday R&D and development process. Just about anything you have in your house that is a consumable good has probably been tested in some way by this stuff.

 

::

So do we have free agency, or have things gotten so sophisticated that the corporations are controlling us through these means?

 

::

It's a great question, because there is a field of this called neuromarketing, and I personally hate the word neuromarketing. It's supposed to be the combination of neuroscience and market research, but what's kind of been embedded in it is a lot of what I call pseudoscience, or neuromyths. One of them is the idea that there's this "buy button" in the brain, right? That there's some sort of trigger you can push: oh, if I make the packaging blue, they're going to buy it more. And that's just not true. There are too many variables out there. Humans are too complicated. There's no buy button in the brain; I can't force you to do anything. There is this whole idea that comes from, again, Kahneman and behavioral science and behavioral economics, about nudging. Are there different ways that we can sort of unconsciously or subconsciously cue you to perceive something in a different way? Potentially. But ultimately you make a conscious decision. So when I was saying earlier that a lot of people try to divide conscious and unconscious completely, they're wrong, because it's actually a continuum where we ebb and flow on both sides. We sometimes do things non-consciously, and sometimes we do things consciously. Ultimately, any behavior we do is a conscious effort; we decide to do it. So there is always going to be free agency. Now, there are governments that are concerned that using central nervous system testing, something like EEG or fMRI that's measuring the brain, and incorporating that into your market research is inherently unethical, because now you're maybe invading privacy, or you're doing something to influence people. I would argue that the field of advertising has always tried to influence people's thinking, whether it is through lies told in an ad about, you know, smoking being healthy for you, right? If people are exposed to that messaging long enough, is it going to influence their decisions? Potentially. But you cannot do that in a neuroscience way to influence people now.

 

::

Okay, so I guess this would be the follow-up question to that, because all this stuff is out there: does the technology exist where somebody could do that level of research, then target a specific market and almost guarantee a behavioral outcome?

 

::

Yeah, that's the million-dollar question. Can people target and segment people to try to come up with the best way to get them to buy something? Yes. But no one's come up with an algorithm to actually predict success in the market, and I would bolster that by looking at food. If you look at new food product introductions, there's an 85 to 90% failure rate. So even with all the research that's being done, the majority of products fail in market. I'm going to go with no on that. Could they get better at predicting behavior? They are certainly trying. Using AI algorithms to put together large data sets from historical data as well as current data, you can start to predict. Eye tracking, for example: they've been able to predict much better now what we're going to look at.

 

::

I think we all understand eye tracking, but maybe you can just speak to the technology. And can it be deployed in a way that we're not aware eye tracking is being used?

 

::

So it is literally recording the eye movements. Like, I'm looking at you on my screen right now, and it's going to be looking at my pupils and all the little micro-movements they're doing around the screen. Am I focusing on you? Am I being distracted by an email coming up? Whatever it might be, my eyes are going all over the place, right? So it's tracking those movements and looking at how long I look at things, the order I look at things, et cetera. Could you do that without somebody being aware? You do have to have a camera on them, and it has to be high enough quality that you can track those minute little movements. As far as research goes, we have GDPR out there in Europe, and in the US, especially California, there are a lot of laws around measuring people. So no, you can't do that to people unwillingly.

 

::

Okay, so eye tracking, when you're talking about it, is done in a closed study with, in a closed study with willing participants?

 

::

Willing participants, yes, who are being exposed to something that's pretty controlled, like looking at a package prototype, let's say for a food product, and looking at where they look on that picture. Or maybe it's an actual product they're holding in their hand, so you can see where they're looking. Are they spending more time on the logo, the cap, the small writing on the back, whatever it might be? What is attracting the most attention? What's the order they go in? How many times do they go back to look at that one thing? All those things can be recorded to get a better understanding of their interaction with the product. And AI has been used more recently to predict those movements, so that now you can use maybe a smaller set of people. Instead of measuring 100 people, maybe you can measure 20, and then you make a prediction based on that. In some studies, they've actually just taken previous data and then fed a new prototype into the algorithm so that it could estimate where it believes people would look, and that's actually been shown to be pretty accurate as well. So, where was this question going? It was kind of like, can we predict what people are going to do with AI? To some degree. It has been done, to some degree, fairly reliably in eye tracking. But I would say that's something you could probably get from heuristics anyway. Good designers would know that if I design something a certain way, it's going to drive attention.
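The metrics described here (dwell time, viewing order, revisits) are straightforward to compute once fixations are mapped to areas of interest (AOIs). Here is a hypothetical sketch; the fixation log and AOI names are made up for illustration.

```python
# Hypothetical sketch of basic eye-tracking metrics from a fixation log.
# Each fixation: (area of interest the gaze landed on, duration in ms).
from collections import Counter

fixations = [
    ("logo", 320), ("claim_text", 180), ("logo", 250),
    ("nutrition_panel", 400), ("logo", 150), ("claim_text", 90),
]

# Total dwell time per AOI: a rough proxy for what attracts attention.
dwell = Counter()
for aoi, duration in fixations:
    dwell[aoi] += duration

# Order of first fixation: which elements draw the eye first.
first_order = list(dict.fromkeys(aoi for aoi, _ in fixations))

# Revisits: how many times the gaze returns to an AOI after looking away.
revisits = {aoi: n - 1 for aoi, n in Counter(a for a, _ in fixations).items()}

print("Dwell time (ms):", dict(dwell))
print("First-fixation order:", first_order)
print("Revisits:", revisits)
```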

 

::

Heuristics? Define that, or express that.

 

::

Yeah, heuristics are the things we use to cheat a little bit, right? Heuristics are when we know something and we can apply it to something else. Shortcuts that you may use in making a decision. Like knowing that a package has a grainy feel to it, and maybe a little duller color, you might just assume that it's more natural, right? We're using that heuristic of what the packaging looks like to influence what we believe it's made of, or how we believe it's going to perform, something like that. So they're these little cheat codes that we use to understand the world around us.

 

::

Got it. So one thing that's coming up for me, and I'd love to spend a minute on this, is that we're really talking about the technology getting to a certain point, from a marketing perspective. Now, yes, not every product succeeds, but I don't know if that's the right test. I think the test is the marketing products that use these technologies: looking at them specifically, and not the others, or looking at them in contrast. Do they perform even better? It takes a tremendous amount of resources to bring this consumer marketing and consumer research to bear. But really, what I want to click on is the ethics. Where are the current industry ethics when it comes to doing studies that have the potential to significantly influence the market, or consumers? How are ethics applied, and have they moved over time? Are they moving? Because, you know, the way the world works is we're always going to push as far as we can, up to the line. We usually break it, right? We usually break the line, then there's some pain and suffering that has to happen, and then, if you get the right regulations, you can pull back. So I'm curious: what are some of the ethical principles that scientists in your field are currently following?

 

::

The first one I always follow is privacy: making sure that people are protected from being identifiable in any way from any of the measures. So if I'm looking at heart rate or brain activity or eye tracking, or whatever it might be, I'm in no way able to identify them by their data. That's the first step in ethics, protecting the participant. Another way to protect the participant is, of course, in any sort of health way. This usually doesn't involve the actual tools being used in this field to measure things, but more the products people are being exposed to. If it's a food, a fragrance, a drug, whatever it might be, making sure there's nothing harmful in the study. Most, I'd say all, research has to go through some sort of internal ethics board, where everybody has reviewed the protocol to make sure it's not violating any of the established ethics standards that are out there. When it comes to psychological research, or research with human participants, there are already very strict and clear rules and regulations. Some require that you get what's called IRB approval, an internal review board. It's called an internal review board, but it's external to your company: someone who comes in and reviews the protocol to make sure it's safe, protects privacy, and all of that. Then the question of whether you're influencing people is a slightly different question. Like I said, in Europe they have been concerned about central nervous system measurements being used to influence people's decision-making, and the point I was trying to make is that none of these tools can actually do that. Even with the algorithms I was talking about for predicting where people are going to look on a package, that ultimately doesn't change your behavior. Ultimately, you're making a conscious behavior. There's no buy button in the brain. There's no way for me to influence you any more than I could without using EEG. A good designer is going to make it intuitive to open the door by pulling: maybe in the way the handle is on the door, or maybe they put the word "pull" on it, right? That's going to influence your behavior, but they're not forcing you to pull or push. You ultimately make that decision. Or sometimes you don't make that decision; you just use some sort of internal heuristic again, and perhaps you're wrong, or it's a bad design, or you weren't paying attention, whatever it might be. Ultimately, you make that decision, and it's not the designer causing it. Can they nudge you in a direction? Potentially, by putting "pull" on the door, or having a handle that really only has one purpose, pulling, not pushing. But does that make you do something you didn't want to do? The answer is no.

 

::

Yeah. So I think the approach might be slightly different than "can I persuade you to buy?" I think it might be: who's predisposed to buy in X condition? I find those predisposed to buy in X condition and then nudge them in the way they already want to go. So from a marketing perspective, it's not "how do I convince you?" It's "how do I find people that already want it, that are already ready?"

 

::

And that has been around since the dawn of making products, right? Trying to make products tastier, putting them in attractive packaging, that has existed with or without psychological or neuroscientific research. So the question of ethics about making cereal boxes more appealing to children is not really a neuroscience or psychology problem so much as it is just a marketing problem in general. And a lot of times those things do get regulated, right? There are rules around tobacco, how you can advertise tobacco now. Were those things at one point advertised in a way that was attractive to children? Yes. So, what you were saying before about pushing the boundaries of ethics: has that happened before? Yes. This is why we have an FTC. This is why we have ASTM standards around how claims can be made. All those sorts of things have checks and balances that exist out there. Can things be pushed? Potentially, yeah.

 

::

And I'm thinking: we look at the beginning of advertising here in the West, and then we look at it today, and we say, okay, we have social media; we didn't have that then. We have AI; we didn't have that then. We're gonna step into quantum, maybe not at the consumer level, but we're gonna step into quantum. And as all these things come together, the ability for scope and speed, velocity and reach, is just all going exponentially through the roof. And when we take these capabilities: okay, now do I have the ability to find a predisposed market quickly? Do I have the ability to message them in a certain way? So I'm always going to land on: we have freedom of choice. That's just what I believe. We have that. However, I do think we also have the ability to stack the deck in the house's favor in consumerism, so we're going to be able to have a lot of influence, and I think these technologies are just increasing that capability, increasing the speed and scope of it. So it's not just kids walking down the cereal aisle in the local grocery store; it's anybody on YouTube, X, TikTok, consuming media. So, last question on the ethics component. Earlier you'd mentioned that there are some countries that prohibit taking, what is it, the biological data?

 

::

Central nervous system, brain data, yeah.

 

::

Are there any other things that are currently, you know, on that prohibited side, that marketers and researchers are not permitted to use in marketing?

 

::

The first big one is identifying information. When it comes to targeting individual people, that is something we do not do. I'm not going to go out there and do research specifically on you, and then tell, you know, Heinz that they should target you with this type of ketchup, right? So it's not quite at that Minority Report level, right? But are you already getting some of that from the social media that's out there, targeting ads to you based on your activity on Facebook or TikTok or whatever it might be? Yes. And for the most part, currently, it's pretty bad, right? It's not very good at targeting you. But is there some of that out there? Yes. Are there active lawsuits against it? Yes. Is it a topic of conversation within regulation? Yes.

 

::

Well, that's difficult. To me, there's a conversation there, because in one regard you're adding business value, right? You're saying, hey, if you're consuming, you're gonna have ads, or you're gonna pay a fee. So if you don't want ads, at least let us send you things that you could be interested in. And logically, that makes sense, right? Show me things that I might actually like and give me the option, as opposed to sending me a bunch of irrelevant junk that I don't care about. But isn't that the...

 

::

Theory, yeah. When you get a Facebook ad for, like, "women in their 40s love these shoes," I get those, and I'm like, oh God, the algorithm.

 

::

Right, right. Now, that's the creepy side. The creepy side is: how do you know what I know? How do you know what I like? And I have people tell me, "I had a conversation about this kind of thing, this product, and next thing you know, it's an ad." And you're like, is it listening to you, right? And so that's where the creep effect comes in. In order to provide this service, we've got to be creepy. And I think the market is just like, ah, I don't know what to do with this.

 

::

Yeah, this balance between giving you what you want and invading your privacy. It is a balance, right? And what's interesting, though, is that giving people what they like is an interesting question, because liking does not predict purchase. That is one of the biggest things within market research. It's an entire industry built on measuring whether or not people like a prototype, and liking does not predict success in market. That is why you have people scrambling to look at these other inputs that help influence our decision-making. The example I love to give is: have you ever used Listerine to wash your mouth? Yeah. Do you like the taste of it? No. Nobody likes the taste of Listerine. It burns. It's better than it used to be, but it burns. But the thing is, if I were to run a taste test on Listerine, nobody's going to like it. They're not going to give it high scores on taste, but they're going to give it high scores on believing it works, because if it doesn't burn, it's not working, right? So there are these different perceptions and belief systems we get from different cues. When we get that burn, we believe it's a good product, that it's effective. Do we like it? No. Do we like that it works? Yes. These are different questions. And that's why we're trying to assess all these other inputs into people's decision-making.

 

::

I have one thought for anybody listening, if you get the chance: I think toothpaste and diaper cream should come in different types of tubes. I'm just saying, they both live in the bathroom, and if you grab the one you didn't need, it's not nice. That's all I'm saying.

 

::

Absolutely.

 

::

This has been a fantastic conversation. I think we could keep going for a while, but I want to start wrapping up a little bit. Really interesting insights, really interesting conversation on how data is used to potentially help a marketing campaign, whether it's for a company or for a political campaign. I don't know if you have any current studies, or if you have any insight into the current election that you can talk about. I mean, are there any predictions on which way this is going to go, from the neuroscientist?

 

::

Ah, yeah. So I know some people that have been doing some research. Paul Bolls, out in, where is he, Oregon? I think he just did some research measuring people's biometrics while they were watching the debates, and he's still analyzing the data. So the jury's still out; right now I don't know what the answer is just yet, but I think that's something worth looking at. I've done similar studies in the past where we were able to show, I think it was during the Biden and Trump debates of yesteryear, differences in responses between people who were planning to vote for Trump or for Biden, and amongst undecided voters. And I think you're going to see something similar along those lines. What they get excited about when there's a big moment may indicate how successful the debate was for one team or the other, by how excited it's getting their own team, or how excited it's getting the opposite team, right? So you can start to gauge those sorts of things and have a little bit of a track on understanding. But ultimately, analyzing those things, you can probably also guess who's going to react to what, right? Trump supporters are going to be very excited when he makes a dig, and whenever a person is cheering for their own team, they're going to be very excited.

 

::

So I'm curious what's driving the decision-making, right? Because, unfortunately, I feel like our system creates a single-issue voting mechanism, where I'm going to identify, and I'm oversimplifying this, absolutely, but I'm going to identify the issue that I think is the biggest, and then I'm going to pick the candidate that backs it, and all the other issues I care about I'm going to have to put on the sacrificial altar, essentially, unless a few of them agree. So for example, for this conversation, maybe it's the economy. If the economy is a really big deal for me, or my number one thing, then you're probably going to go with Trump. If social justice, you know, how we treat each other, is my big number one thing, then we're probably going to go with Kamala. And then all the other things fall under that, right?

 

::

We have to prioritize, yeah. I mean, when you have a two-party system like that, you have to prioritize.

 

::

So I'm curious, are there any studies going on about what the number one issue is for people?

 

::

Constantly. And if you look at the questions people are asking in all the different polls that are out there, I feel like they're always finding new drivers. Honestly, I do feel in this election cycle that it's moving faster than in previous ones: oh, it's all going to be decided on abortion; oh, it's all going to be decided on the economy. It seems to go in cycles as to what is the hot topic of the moment. So it would maybe depend on what's the hot topic on election day. Or are people already decided? I mean, again, going with that implicit bias. When it comes to the economy, I think if you were to ask a Trump supporter versus a Kamala supporter, you're going to get very different perspectives on how they justify their answer. They could both say the economy is the deciding factor for me, give you two different answers as to who they're voting for, and be able to justify it, right? And that's because there's just a strong cognitive dissonance that people can have, maybe particularly in more recent elections, where there has been more of really taking a stand and not changing your mind, nothing-can-change-your-mind sort of thing. So people have gotten very good at this cognitive dissonance, where they can justify any of the decisions that they're making.

 

::

Yeah, it's become more about ideological lines. I'm curious, do we know the size of the undecided market right now, the undecided voter pool? Like, is it five?

 

::

I'm not sure. I feel like that's been changing a lot lately too. And, you know, when you ask people, they're like, how can there even be undecided people? But there always are.

 

::

They say they're undecided. Yeah, that's how they answer, right? They may know, and we just can't figure that out.

 

::

But it's really hard to estimate, and they are estimations, right? So when you talk about any of these polls, and we were talking about sample sizes, how do you make predictions? Because that's what they are. They're asking maybe 1,000 people, 2,000 people, and making an estimate for millions of people. And so: how did they word that question? How many questions were there? Are they limiting their interpretation to just this one question when there were maybe ten questions asked? Is it some sort of combination of questions that gives you a metric to better estimate? Because if I ask someone if they're undecided, is that different from some other ways I could ask the question? I think that's really important to take into account here. So I don't think there is an accurate measure currently.
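The point about estimating millions of voters from a sample of one or two thousand comes down to the standard margin of error for a random sample. Here is a quick sketch, assuming simple random sampling, which, as the conversation notes, self-selected survey panels rarely are:

```python
# Margin of error (95% confidence) for an observed proportion p from a
# simple random sample of n respondents. Assumes truly random sampling,
# which self-selected survey panels generally violate.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(f"n={n}: +/- {margin_of_error(0.5, n):.1%}")  # p=0.5 is the worst case
# n=500: +/- 4.4%   n=1000: +/- 3.1%   n=2000: +/- 2.2%
```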

 

::

Okay. Do we know if undecideds are more likely to vote, or just not vote?

 

::

I think that's a really good question as well. From what I've been hearing, there's a lot of apathy, a lot more apathy than I think there has been previously. So I think some people have even decided, and have decided to abstain from voting. Maybe they would have leaned one way or the other, but they're withholding their vote because of maybe some priority issue. Maybe it's Palestine, right? So I think there's a different factor in here, which might actually be using a non-vote as a form of protest.

 

::

That's a good point. So they're not in the undecided category; they'll be in the abstain category.

 

::

Decidedly undecided.

 

::

All right, we could keep going for a while, but we're gonna wrap up. So, Michelle Nigella, I really appreciate you coming on. This has been a fantastic conversation. If folks wanted to connect with you, follow up with you, or look at your research, where would they go?

 

::

You can always find me on LinkedIn. You can Google me, or find my company at nerdoscientist.com.

 

::

Love it. All right, Michelle Nigella, thank you so much.

 

::

Thank you, Greg.

 

::

Thank you for joining us on the Adaptive Executive Podcast. We hope you enjoyed the show. You can follow us on LinkedIn and subscribe to our mailing list. Again, my name is Greg Ballard, and thank you for listening.
