IL41: They’re Not Just Reading You... They’re Rewriting You ft. Sandra Matz
3rd September 2025 • Top Traders Unplugged • Niels Kaastrup-Larsen


Shownotes

What if knowing you isn’t the end goal... but shaping you is? In this episode, Kevin Coldiron speaks with Columbia professor Sandra Matz about how algorithms trained on our clicks, searches, and faces don’t just predict our behavior - they influence it. They unpack how personalization narrows possibility, why convenience can come at the cost of resilience, and what happens when machines learn to mirror us better than we mirror each other. From the false promise of data consent to the quiet collapse of complexity, this is a conversation about power, psychology, and the systems quietly remaking the human experience.



-----

50 YEARS OF TREND FOLLOWING BOOK AND BEHIND-THE-SCENES VIDEO FOR ACCREDITED INVESTORS - CLICK HERE

-----


Follow Niels on Twitter, LinkedIn, YouTube or via the TTU website.

IT's TRUE – most CIOs read 50+ books each year – get your FREE copy of the Ultimate Guide to the Best Investment Books ever written here.

And you can get a free copy of my latest book “Ten Reasons to Add Trend Following to Your Portfolio” here.

Learn more about the Trend Barometer here.

Send your questions to info@toptradersunplugged.com

And please share this episode with a like-minded friend and leave an honest Rating & Review on iTunes or Spotify so more people can discover the podcast.

Follow Kevin on Substack & read his book.

Follow Sandra on LinkedIn and read her book.

Episode Timestamps:

02:13 - Introduction to Sandra Matz

08:22 - How data is a window into our psychology

12:13 - What is the "right" benchmark?

14:12 - How algorithms learn to understand who you are

19:11 - Do algorithms care about your feelings?

22:16 - The "basic bitch" effect

25:10 - Computers can learn your personality from a picture of your face

32:01 - How the power of algorithms can be used in a positive way

40:17 - A framework for avoiding the negatives of collecting personal data

44:06 - Solving the complex challenges of cookies (not the ones you eat)

51:14 - The rise of data unions and cooperatives

57:01 - How Matz protects her own data



Copyright © 2025 – CMC AG – All Rights Reserved

----

PLUS: Whenever you're ready... here are 3 ways I can help you in your investment journey:

1. eBooks that cover key topics that you need to know about

In my eBooks, I put together some key discoveries and things I have learnt during the more than 3 decades I have worked in the Trend Following industry, which I hope you will find useful. Click Here

2. Daily Trend Barometer and Market Score

One of the things I’m really proud of is the fact that I have managed to publish the Trend Barometer and Market Score each day for more than a decade... as these tools are really good at describing the environment for trend following managers, as well as giving insights into the general positioning of a trend following strategy! Click Here

3. Other Resources that can help you

And if you are hungry for more useful resources from the trend following world...check out some precious resources that I have found over the years to be really valuable. Click Here

Privacy Policy

Disclaimer

Transcripts

Sandra:

If that's the only thing that kids interact with, they're going to lose the ability to deal with a kid in the playground who pushes them over and is not going to have the argument in a very nice and kind way. And we've gone through this. We've experienced the messy world and the arguments, the conflict, the tension, the emotions. The next generation, which interacts with chatbots a lot more than with other human beings, is just going to lose this ability to argue with someone, get into a fight with someone, and still come out somewhat okay on the other side.

Intro:

Imagine spending an hour with the world's greatest traders. Imagine learning from their experiences, their successes and their failures. Imagine no more.

Welcome to Top Traders Unplugged, the place where you can learn from the best hedge fund managers in the world so you can take your manager due diligence or investment career to the next level.

Before we begin today's conversation, remember to keep two things in mind. All the discussion we'll have about investment performance is about the past. And past performance does not guarantee or even imply anything about future performance. Also, understand that there's a significant risk of financial loss with all investment strategies and you need to request and understand the specific risks from the investment manager about their product before you make investment decisions.

Here's your host, veteran hedge fund manager Niels Kaastrup-Larsen.

Niels:

For me, the best part of my podcasting journey has been the opportunity to speak to a huge range of extraordinary people from all around the world. In this series, I have invited one of them, namely Kevin Coldiron, to host a series of in-depth conversations to help uncover and explain new ideas to make you a better investor.

In the series, Kevin will be speaking to authors of new books and research papers to better understand the global economy and the dynamics that shape it so that we can all successfully navigate the challenges within it.

And with that, please welcome Kevin Coldiron.

Kevin:

Okay, thanks Niels, and welcome everyone to the Ideas Lab podcast. Our guest today is Dr. Sandra Matz. She is a professor at Columbia Business School and an expert in the hidden relationships between our digital footprints and our inner mental lives. She's here to explain how these footprints allow algorithms to do shockingly accurate psychological targeting on all of us.

Now, obviously that's dangerous in the wrong hands, but it's also potentially a huge resource for improvement in health and wellbeing. She's written a fascinating new book called Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior, that explains both the risks and the opportunities. And it's a topic that's of deep importance to all of us. And I'm very excited to have this conversation.

Dr. Matz, thanks for joining us, and welcome to the show.

Sandra:

Thanks so much for having me, Kevin. I'm excited.

Kevin:

Okay, so you grew up in a rural village of 500 people in southern Germany.

And it sounds like the experience of growing up in an environment where basically everyone knows your business has been pretty formative in framing the research you've done in your career. So, I was curious, could you just start by telling us a little bit about what it was like growing up in that small village, and how that experience shaped what eventually became your research focus?

Sandra:

Absolutely. And it's funny, because for years I didn't make the connection between my upbringing and the research that I was doing - not until recently, when I started thinking about it in the context of the book. So, I'll get there in a second.

But, yeah, so I grew up in this really tiny village, 500 people, somewhere in the southwest corner of Germany. And as you already mentioned, the experience there was very much shaped by the 499 other inhabitants of that small town, because they were in your business every day. So, they knew exactly what you were doing on the weekend, which music you were into, who you were dating.

Some of it was because they were interacting with you directly. Some of that was just them observing you going through your experiences, putting your bumper sticker on the car, you constantly running to the bus in the morning. And in a way, what that allowed my neighbors to do was not just observe who I was as a person, but also, in a way, interfere with my life's choices.

So, you can imagine that my neighbors were not just in the business of trying to figure out who I was dating, but they were also trying to influence who I was dating. And the more that they knew about what I wanted to do in life, what my fears were, dreams were, hopes were, aspirations, the easier it became for them to do that.

And so, I think about this experience of growing up in the village as an analogy to the work that I now do on how algorithms and computers can turn our digital footprints, if you want, into pretty accurate descriptions of who we are when it comes to our psychology, and then ultimately into prescriptions for what we should buy, which music we should listen to, potentially which jobs we should pick. And it's very similar in both the upside and the downside.

So, the experience of growing up in the village was, in a way, shaped by the feeling that there was someone there at all times who truly understood me. So, that was the upside in that there was someone who understood what I wanted out of life and could support me when it came to making these big decisions of what do you do after school? Like where do you go? Do you travel the world? Do you not? So having someone there who really understood what I wanted was extremely valuable.

But also on the downside, that meant that someone just constantly was poking around in my private life in ways that I didn't appreciate and was trying to meddle with it in ways that I didn't have any control over and really didn't appreciate in the moment. And the same, in my opinion, is true with data. And I'm sure that we're going to dive much deeper.

But the moment that someone gets a sense of who you are and your preferences, needs, and motivations, and can use that to change your behavior, that creates a lot of upside, but it also creates a lot of obvious downside.

Kevin:

And so, you said you didn't really have your kind of ‘aha moment’ until a few years ago. What was it? Was it one of those, you know, just you're walking along in the park and all of a sudden, bing, it came to you, or what? How did that happen?

Sandra:

So, it just happened in a conversation with a friend. I went back home for Christmas, and he asked me about my research, and I was trying to explain to him what I'm doing with data and how it's like an amazing opportunity, but also this pretty severe ethical challenge when you think about the impact on individuals, on society.

And I was just kind of telling him how hard it is for me sometimes to live with this tension, because I obviously truly believe in the upside. And I talk to companies about how they could use data to make their products better, to really serve their customers in a way that creates value for them, not just profits. But at the same time, I also understand and see how the research that I'm contributing to could actually be abused to exploit individuals and undermine some of the core values that we have as a society. And then we just started talking about, you know what? This is actually the same thing that happened to us growing up in the village.

There was something beautiful about being seen, but also something horrendously annoying about being seen by our neighbors. So, it was in this conversation with my friend that I had this ‘aha moment’.

Of course, it's not exactly the same. And a lot of my thinking since then has been spent on what the differences are, and how we mitigate the risks that maybe weren't there in the village but that we do see now with data. But that was the moment.

Kevin:

That's really cool. Yeah. And we're going to talk about some of those differences as we get into it. Maybe we can start with the fact that your book is split into three sections.

In the first section, you talk about how data is a window into our psychology. And I wanted to maybe talk about some of those examples. One of them is that you talk about Facebook, and you say, with just a few hundred likes, Facebook knows you better than your spouse. So, my question is… Two questions. I mean, one, I'm kind of curious if you can explain how they do that. But also, what does ‘know you’ mean in that context?

Sandra:

It's such a good question. And it's funny, because the question that pops up, in my mind immediately, is, how bad are our spouses at understanding who we are? If that's the comparison, is it just that the computer and the algorithm is really good or do our spouses really suck at making those predictions? And, you know, it's a little bit of both.

But when I think about how it could ever be that an algorithm is just as good as our spouse at knowing who we are (and I'm going to say more about what ‘knowing’ means), I always think of Google.

With Google, you type questions into that seemingly anonymous search bar - questions that you don't feel comfortable asking even your closest friends or your spouse. So, the idea that there could be an entity that, just by observing our data, can make predictions more accurate than those of the people around us, who also know us pretty well, actually starts to make sense. Google, for me, is always the example that people can relate to most easily.

Now, with the study that you mentioned, the way that we typically capture the accuracy of these models - how much do they know about you, how good are they at capturing who you are on these psychological dimensions - is by comparison with how you describe yourself. So, we have you complete a personality questionnaire, and we ask you questions like: ‘I'm the life of the party’, to what extent do you agree with this? ‘I make a mess of things’, to what extent do you agree with this?

So, we get your self-perception, the way that you think of yourself when it comes to personality traits, and then we have an algorithm sift through your data. The way that we train the algorithms is essentially to give them access to data from thousands of people, so they can see, over time: okay, if you follow the Facebook page of Lady Gaga, maybe, on average, people who do that are more extroverted. Or if you follow the page of CNN, maybe that makes you a little bit more conscientious and open-minded than the average person.

So, they kind of play this Sherlock Holmes game for many, many of these traces and for many, many people. And now, because they understand that average people who like CNN are more conscientious, and average people who like Lady Gaga are more extroverted, when they see your profile they can put the puzzle pieces together and say: given everything that I know about you, and everything I know about everybody else in this space, it seems that you might be more extroverted, more conscientious, more neurotic, and so on.

And then it's really a comparison: here's what the computer predicted in terms of your Big Five personality traits, or some of these other dimensions, and here is how you describe yourself. How much overlap is there? Do you agree that you're more extroverted than the average person? Do you agree that you're more neurotic, or are there discrepancies? So that's how we think about how well the computer knows you: can a computer or an algorithm replicate what you would tell us in a questionnaire?
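To make that pipeline concrete, here is a minimal sketch of the likes-to-traits setup Sandra describes. Everything in it is invented for illustration - the pages, the data, and the choice of a ridge regression - but the logic mirrors her description: fit a model on likes against questionnaire scores, then judge it by how well its predictions correlate with self-reports.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Rows = people, columns = Facebook pages (1 = liked). Real studies use
# thousands of people and tens of thousands of pages; this is a toy version.
pages = ["Lady Gaga", "CNN", "Hiking", "Chess"]
X = rng.integers(0, 2, size=(1000, len(pages))).astype(float)

# Self-reported extroversion from a questionnaire (the training target).
# Fabricated here with a built-in link to the first page, plus noise.
y = 0.8 * X[:, 0] + rng.normal(0, 1.0, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)

# "Accuracy" in this literature is usually the correlation between the
# model's prediction and what the person says about themselves.
r, _ = pearsonr(model.predict(X_test), y_test)
print(f"prediction vs. self-report: r = {r:.2f}")
```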

Kevin:

That's interesting, because it gets into deeper philosophical questions: which is the right benchmark? You know, is it what you say you are, or what the computer says you are?

Sandra:

Yeah, it's a fascinating question. Or is it what other people say you are? So, in this case we're pitting the computer against others in terms of trying to replicate what you would say on a questionnaire. But maybe it's the other people in your life who actually have a much better read.

What you can do to try and disentangle some of that, or at least see who has the right answer to ‘who is Kevin’, is to test how well your self-reports, the reports of other people in your environment, and the computer-based predictions each predict other life outcomes - other stuff that we know about you, such as your life satisfaction, or which profession you choose.

And then you can see, okay, if the computer says you're extroverted, are you more likely to be a salesperson? Or if you say you're extroverted, are you more likely to be a salesperson? What we know is that even though the computer doesn't fully replicate your self-reports, it can still be just as good at predicting these life outcomes.

Which kind of goes back to your question of, yeah, it might know something about you that you either don't know yourself or you're not willing to disclose in a questionnaire. So oftentimes, especially if you combine the two, you actually get an even more accurate reading.

So, if I take your self-report, which has this more subjective quality - maybe there's some information in there that really just has to come from you, because we can't observe it in data - and then add the prediction piece from the computer, the computer-based prediction of your extroversion, those two together are even better at predicting which job you're going to choose, just because we're tapping into different parts of who you are.
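And a toy version of that incremental-validity point - a noisy self-report plus a noisy computer prediction forecasting an outcome better than either alone - assuming, purely for illustration, that both are imperfect readings of the same underlying trait:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000

# One latent trait drives everything; both measurements see it with noise.
trait = rng.normal(size=n)
self_report = trait + rng.normal(0, 0.7, n)    # noisy questionnaire answer
computer_pred = trait + rng.normal(0, 0.7, n)  # noisy footprint-based prediction
outcome = trait + rng.normal(0, 1.0, n)        # e.g. fit with a sales job

X_self = self_report.reshape(-1, 1)
X_both = np.column_stack([self_report, computer_pred])

r2_self = LinearRegression().fit(X_self, outcome).score(X_self, outcome)
r2_both = LinearRegression().fit(X_both, outcome).score(X_both, outcome)
print(f"R^2 self-report only: {r2_self:.2f} | plus prediction: {r2_both:.2f}")
```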

Kevin:

I gotcha. I gotcha.

Okay, well, I took your test, and you have a section later in the book where you say, okay, we have personality types or traits, that's kind of, if you will, our baseline. But we don't always behave that way.

You're an introvert. You say in the book that you like to go dancing. I'm apparently someone who gets a lot of energy from social situations. But I quite happily spent last Sunday alone in a dark room watching the French Open for six hours, by myself.

So, the point is, we sometimes behave in ways that are different from our baseline personality trait. But you say, actually, computers can tell when you're behaving differently. They can tell what state you're in. How is that possible?

Sandra:

Yeah, so for me, I mean, on some level, it's actually one of the most interesting parts when it comes to recent developments in personality psychology. And also, really, is one of the most interesting parts when it comes to a computer's ability to really understand who you are at any given point in time.

I think what personality psychologists have realized over time came out of their long conversation with social psychologists, who were always like, “Well, your behavior is determined by the situation and nothing else. Forget about this idea that you come with certain personality traits and a certain genetic makeup that is also shaped by your upbringing; you're just a blank slate, and your behavior just depends on the situation.”

And the social psychologists were always in conversation, almost like in a friendly fight, with personality psychologists who insisted, no, there's something about Kevin that makes him behave consistently across different situations. And the two have, in a way, come to an agreement: there is something core to Kevin's personality, a general tendency to behave maybe somewhat more extrovertedly. That's a preference for behavior that you're partially born with and that you partially grow into as you're being raised as a kid.

But we also know that we're not always the same - not in the sense that we're hypocritical and flip-flop around completely randomly, but in the sense that who we are at the core interacts with our environment. So, both of us might spend time in a social setting with friends, at a bar, at a club; for me, it's the classroom. Even though I think of myself as more introverted as a tendency, I can totally step it up in the classroom. Especially in an MBA classroom, you have to be somewhat entertaining; that's the main thing you have to do there. So, depending on the situation, both of us can be somewhat more extroverted but also more introverted.

You just mentioned sitting at home by yourself. Yeah, you probably feel a little bit more quiet, as opposed to more outgoing and social, because that's what the situation dictates. And the interesting part - and this is what the social psychologists and personality psychologists agreed on - is that those deviations are not completely random.

There's a certain system to whether we feel more extroverted compared to our baseline or more introverted. If there's other people around and the situation is social, yeah, maybe we feel more extroverted. If the situation is somewhat more kind of quiet and reserved, there's no one around, we would probably both feel more introverted.

So, we can make educated guesses about whether you might be moving up from your average or moving down. And that's what computers can do as well. They can get a sense of: well, generally speaking, Kevin seems to be rather extroverted. But based on, let's say, the data that gets captured by your smartphone, I see that currently he's sitting at home. He hasn't left the house. There's not really any ambient sound going on other than the TV playing. And there are no other people, because we can see that there are no phones showing up in the same location at the same time. So, it seems like he's in a more quiet spot. So now, let me adjust my prediction and say: yeah, he's generally extroverted, but right now, given everything I know about the context, he's probably feeling a little bit more introverted than usual.

And that is a really interesting insight just into who you are in the moment but could also be incredibly helpful and valuable when it comes to figuring out, like, what advertisements to show you. If I'm a marketer, and I'm like, oh, Kevin is really usually extroverted, but currently he's in a somewhat introverted situation. Do I now, maybe want to show him stuff that brings him back to his baseline level of extroversion? Or maybe now is not a good time because he's not thinking of himself as an extrovert.

So, there are all of these fun dynamics that relate not just to understanding, but also, potentially, to the second step: influence.
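A bare-bones sketch of that trait-plus-state adjustment; the context signals and weights here are invented (a real system would learn them from sensor data rather than hard-code them):

```python
# Invented context signals and hand-set weights, purely for illustration.
def state_extroversion(baseline: float, at_home: bool,
                       ambient_noise_db: float, nearby_phones: int) -> float:
    """Adjust a trait-level estimate toward the current situation."""
    state = baseline
    if at_home:
        state -= 0.3                      # private, quiet setting
    if ambient_noise_db < 40:
        state -= 0.2                      # not much happening around them
    state += 0.1 * min(nearby_phones, 5)  # other people present -> more social
    return state

# Kevin's Sunday: extroverted baseline (0.8 on a z-scale), home alone, TV on.
print(state_extroversion(0.8, at_home=True, ambient_noise_db=35,
                         nearby_phones=0))  # ~0.3: more introverted right now
```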

Kevin:

Yeah, and that's an interesting question because I've heard people describe, not that these algorithms, at least yet, have free will, but in some sense, they want to operate in a world that doesn't change much because then that enhances predictability. So, they want me to always be an extrovert. So, if I'm displaying, I don't know, introverted characteristics online, or through my phone, or whatever, do they then say, okay, let's send them the introvert ads, or do they force me back to the extroversion?

I mean, again, I'm kind of anthropomorphizing these algorithms, but do they care what state you're in? Or do they just want to be able to identify the state, feed you the ads in that state, or are they kind of just trying to nudge you to kind of always staying the same?

Sandra:

That’s such a great question. I've been thinking about this question for the last couple of weeks a lot, because the way that I'm currently thinking about it is that in a way, algorithms are not optimized to take risks. What they do is, as you said, if they figure out that you're extroverted, the least risky option for the next ad is to show something extroverted.

And unless you optimize them for some kind of serendipitous exploration, or some kind of exploration based on their understanding of what you might be going through in the moment, they're not going to do that. And the concern that I probably share with you, based on how I interpret your question, is that they would actually, over time, just reduce your complexity by constantly optimizing for your average.

So, it's like: here's who I think you are, because I don't want to take risks. What I'm optimizing for is showing you something that I can reasonably believe, based on everything that I've observed before, you're going to like, or at least not hate. Something that, on average, you actually respond to pretty well.

And that's what they're optimizing for, unless we actually tell them to optimize for something else - namely, how do we keep you complex? And keeping you complex in a smart way would actually mean tapping into this context.

It would be saying: okay, instead of just throwing darts and serendipitously showing content that comes from all over the place, which most likely you're not going to enjoy because it doesn't fit your general profile, there seems to be a window of opportunity to show you something that's more introverted, because right now you seem to be in a slightly more introverted situation. So now is a good time to keep pushing those boundaries and keeping you complex in the experiences that you have.

So, I 100% agree, and it's such an interesting question, because these agents are not trained for that. They're not optimized to take risks, and they're certainly not optimized, currently, to keep you complex given the situation that you're in.

Kevin:

So, then I guess kind of going back to sort of the implications of this, there is a risk that the more that we're influenced by these algorithms, the less we kind of change our state, the less complex we become as people.

Sandra:

I think of it as - well, my current working title for the paper is the Basic Bitch Effect, because it all makes us more similar and it all makes us shallow. We're always the same person. It's not just that we shrink as individuals; we also come to look more similar over time, because it's pulling us all toward the average of the population. So, who knows if that's going to fly as a paper title, but it gets people's attention.

Kevin:

Well, it's interesting, and not to get too grumpy-old-mannish about it, but I do find that if you spend a lot of time online, in that kind of noncomplex state, and then you go out into the physical world, all of a sudden you're buffeted by things that aren't being controlled by an algorithm, and your mood can change very quickly. And I'm wondering whether part of that is because you've gone from this almost forced, simple state to having to deal with more complex stuff.

Sandra:

Yeah. And I think sometimes that can be incredibly liberating. It's like, okay, now I actually did find a movie just walking past a movie theater that I otherwise would have never seen. Or maybe I found this coffee shop and restaurant because I wasn't using Google Maps.

My concern, mostly for the next generation, is that it's going to be harder for them to deal with the complexity and messiness of the real world. If you think about the conversations that we have with algorithms, they're, first of all, customized to you. So, they're much more likely to speak your language. They're also much more likely to do it in a very nice and constructive way.

If you have an argument with an algorithm about something, yeah, it might argue for the other side, it could do that, but it's still going to do it in a very nice, constructive, and polite way, because that's what it was trained for. Those are the guardrails that companies put in place, which makes sense, but it also means that if that's the only thing that kids interact with, they're going to lose the ability to deal with a kid in the playground who pushes them over and is not going to have the argument in a very nice and kind way.

And we've gone through this. We've experienced the messy world and the arguments, the conflict, the tension, the emotions. But my concern is that the next generation, which interacts with chatbots a lot more than with other human beings, is just going to lose this ability to argue with someone, get into a fight with someone, and still come out somewhat okay on the other side.

Kevin:

We talked a little bit about how your digital footprint can be captured, and I think that's intuitive to a lot of us, certainly when we talk about Facebook or Google searches. But there's other stuff in the book that was even more shocking, particularly when thinking about images versus language.

And you quote some research saying that computers can accurately predict your personality, your sexual orientation, even your political ideology, just from your face, which was pretty shocking. And I think you were actually skeptical of that work when it first came out, but you're no longer as skeptical. So, can you explain that?

Sandra:

Yeah, I’m still skeptical, I just wouldn't rule it out. So, I think the interesting part about images (they could be pictures, they could be videos) is that they come with very specific ethical challenges, because you can leave your phone at home and you don't have to post on social media, but the moment that we can make those predictions from something like your face, or just an image of you, anyone can do it. Anytime that I get a picture of you - whether it's a picture that you posted, or you're just walking in the background of someone else's picture that they upload - with facial recognition, I can immediately tag you in that picture.

So, the reason why I'm very interested in and intrigued by this research is simply the implications. But the main idea behind it is that when you post pictures, some of that signal comes from grooming. Some of it comes from extroverts, for example, being more likely to put in blue contact lenses. When you look at the average image of an extroverted woman, in this case, her hair looks blonder, which probably means extroverts dye their hair more, because there's no genetic reason they should be blonder on average. Their eyes also look bluer. They also seem to be much better at taking pictures, because you usually can't see the nostrils; they've probably figured out that shooting the duck face from above makes the face look slimmer. Introverts, on the other hand, just don't seem to care that much. So, you do see the nostrils, and usually there's the outline of glasses.

So again, if you combine this with what we know - that introverts are typically more inclined to read, and so on - it actually probably makes sense at some level. So, some of it is just the way that you groom yourself, the way that you go through life, and the activities that you engage in.

Now, the part that really is somewhat concerning, and where a lot of people are skeptical about what the research actually shows, is looking at actual facial features. So, strip away all of the grooming - beards, hair, makeup - and just look at the features of the face itself. Could those be predictive of personality traits or other dispositions?

And I remember the first papers coming out from a good friend of mine, Michal Kosinski, who was a pioneer in the space and whom I respect very much because he knows what he's doing. And I remember him publishing this stuff and thinking, there's just no way this can be true.

That sounds like pseudoscience. You know, we've been through this in earlier centuries, when people tried to claim that different facial features relate to certain character traits, and it's always been debunked. So, I was very, very skeptical going in. And I remember him giving a talk and saying, look, I understand that you're skeptical. This is what we observe in the data, and I'm happy for people to replicate this, but let me give you at least a few reasons why, theoretically, this could be the case. And the reasons, for me, were actually so compelling that I thought, you know, let's think about it in a somewhat more open-minded way.

And some of the reasons were, essentially - take extroversion. You can imagine that if you are a really beautiful kid, with a super symmetric face, rosy cheeks, and so on, the chances are that your environment is going to respond to you in a positive way. You're constantly getting smiles, constantly being told how beautiful you are. So, you get a lot of very positive social feedback.

Now, the likelihood that you might also turn out a little bit more social and extroverted and trusting in other people could actually go up. And there's research showing that this is the case.

If you have a somewhat attractive physical appearance, you seem to become more extroverted, just because you get a lot of positive social feedback. Another pathway could be hormones. We know that certain hormones, like testosterone, very much influence our behavior and the way that we show up - essentially, they're related to being somewhat more aggressive and assertive. But we also know that hormones shape our facial features.

So, there might just be certain parts of biology that determine both behavior and the way that our faces look. And if you take some of these pointers - there really are pathways by which this might play out - then it's also conceivable that computers, just because they can take in so much information, might pick up on subtle cues that we, as humans, dismiss.

Kevin:

That's a good explanation. I mean, I could see how that might feed into personality characteristics, although I would have to be an exception, in that instance, to symmetric features leading to extroversion.

Sandra:

On average.

Kevin:

On average, right. There are always outliers. But what about political views or sexual orientation? Political views I could see being more environmentally shaped, but sexual orientation, not so much.

Sandra:

Well, sexual orientation… So, again, with the sexual orientation one, there were some arguments about to what extent it is grooming and to what extent it is just facial features. And here again - and I'm not making that argument, I'm channeling what Michal would potentially be saying - there are still arguments that different hormones are at play. So, some of the things that make you, as a man, somewhat more feminine might also influence your facial features. I think there the biological pathways are probably the more likely ones.

Kevin:

I see.

Sandra:

But again, in this case, I think the verdict - in terms of what might be driving some of these predictions, and to what extent they hold when you fully control for everything else - is really tricky. You can control for some of it, but then, well, what do your eyebrows look like? There are a few things that are very, very hard to control for. So, I think it's a trickier question there than with personality.

Kevin:

Okay, okay, well, thanks for that.

Well, so let's talk about some of the implications of this ability of these algorithms to know who we are to a shocking extent. I mean, the first things that spring to mind for all of us are the downsides.

But there are also lots of potential upsides. And as you were talking, we were describing how, say, an extroverted person might all of a sudden display a lot of introverted characteristics. I mean, you could imagine a doctor having that data and saying, well, hey, if I look at this footprint, it looks to me like maybe you're depressed.

Or if you're talking to a therapist and you're giving them your story, they could be looking at your footprint and thinking, I'm not so sure that's what you're telling me.

Sandra:

Yeah.

Kevin:

Can you explain? So, how do you imagine some of this being used in a positive way?

Sandra:

Yeah, so, I 100% agree that it's very easy to see the downsides, both in terms of privacy and in terms of our loss of agency and self-determination. But for me, it really comes back to this idea of the village.

In the village, the fact that other people knew me was the only way that they could provide the best advice ever. It was the only way that they could provide support that was exactly what I needed at a certain point in time.

So, when it comes to psychological targeting at scale, there are many different contexts. One is helping people accomplish goals that they've set for themselves but are having a hard time implementing. Savings is a classic example: it helps if I can understand what really motivates you, what your needs are.

Let's say you're somewhat agreeable - the type of person who really cares about their loved ones. Well, maybe convincing you to save is not going to work if I just tell you to put some money in the bank so that it sits there, or to get ahead in life and gain a competitive advantage over the people around you. What you really want to hear is that saving actually allows you to protect your loved ones, in the here and now and in the future.

So, speaking people's language, tapping into their motivations, is oftentimes a way in which we can make these difficult behaviors easier. Saving is difficult because it means giving up something in the here and now - the extra gadget you wanted, the PlayStation, the watch you've been eyeing for a while - to put some money in the bank that you may or may not need for a rainy day. Framing it in your own language just makes it easier to accomplish the goal. That's one study that we've run.
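Mechanically, that kind of personality-matched messaging can be as simple as a lookup from the predicted dominant motive to a message frame. A hypothetical sketch - the trait labels and the copy are invented, not taken from the study Sandra mentions:

```python
# Invented trait labels and message copy, purely for illustration.
SAVINGS_FRAMES = {
    "agreeable":     "Saving lets you protect the people you love, now and later.",
    "competitive":   "Saving puts you ahead of everyone else around you.",
    "conscientious": "A simple plan: set aside 5% automatically every month.",
}

def pick_message(predicted_traits: dict[str, float]) -> str:
    # Use whichever motive the model scores highest for this person.
    dominant = max(predicted_traits, key=predicted_traits.get)
    return SAVINGS_FRAMES.get(dominant, "Put some money aside for a rainy day.")

print(pick_message({"agreeable": 0.9, "competitive": 0.2, "conscientious": 0.5}))
# -> the loved-ones framing, because "agreeable" dominates this profile
```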

For me, the context that you mentioned, mental health, is probably (besides education) the most promising one. Just because the baselines, currently, when it comes to diagnosing or treating something like depression, are so bad.

If you think about diagnostics in the context of depression: you have to be doing pretty poorly to go out and find a therapist who then diagnoses you with depression. And you have to actively reach out, which, in the context of depression, is extremely hard, because one of the hallmarks of depression is that you turn inwards a lot more and don't interact with your social environment as much.

So, one way you could imagine reinventing diagnostics, or at least building early warning systems, is to ask: is there any way that we can passively see that your behavior starts to deviate from your typical baseline? And phones, in a way, are the perfect gadget for that; any type of wearable is made for that.

You can see, for example (and this is real research that we've done in my lab), based on your smartphone sensing data: maybe you're not leaving your house as much as you typically do, tapping into your GPS records. Or there's much less physical activity. Or you're not making or taking as many calls as you typically do.

Again, it might be nothing. Maybe you're just on vacation and having a great, relaxed time. But why don't we use it almost like a smoke alarm that says, hey, something seems to be off - and try to catch you early?

And if we do catch you early and see some deviations from your baseline, we can say: why don't you try to reach out to someone right now and get some support? Or, if you're someone who knows they have a history of depression, you could nominate someone that you trust and love to get these early alerts too - alerts that say, hey, it might be nothing, maybe I'm having a blast on vacation, but reach out to me to check in and see if you can support me.

So, we don't have to wait until you enter this valley of depression that is really hard to get out of. But we catch you early and we try to supply you with the support that you need, which is the second part where AI and an understanding of who you are and how you operate actually comes in extremely handy when it comes to treatment.
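A minimal sketch of that smoke-alarm idea, assuming a single passively sensed metric (here, invented daily travel distances from GPS) and an arbitrary alert threshold; a real early-warning system would combine many such signals:

```python
import statistics

def deviation_alert(daily_values: list[float], window: int = 28,
                    z_threshold: float = -2.0) -> bool:
    """Alert if the latest day is far below this person's own baseline."""
    baseline, today = daily_values[-(window + 1):-1], daily_values[-1]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
    return (today - mu) / sigma < z_threshold

# e.g. kilometres travelled per day: a stable month, then a sharp drop.
distance_km = [5.0, 6.1, 4.8, 5.5, 5.2, 6.0, 4.9] * 4 + [0.3]
print(deviation_alert(distance_km))  # True -> "might be nothing, but check in"
```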

So, think of how Amazon, for example, makes recommendations about what you should buy next. You could imagine the same principle being applied to therapy: we know that not everybody responds to all treatments in exactly the same way. Maybe there's a treatment that works better for you.

For Kevin, all I need to do is send him out into nature, and that's probably going to help him recover more quickly. For Sandra, that doesn't work at all. She doesn't care about nature. She really needs to be surrounded by people that she loves. So, for her, that is a treatment that is much better and much more effective.

So, the same way that Amazon recommends the best product for you, we could say: based on everything we know about you, and everything we know about other people, this is the type of treatment that you might respond to best. Or, building on AI and these large language models, can we offer some kind of conversation and therapy to people who otherwise can't afford it or don't have access?

So, I'm not saying we should be replacing therapists with AI that can take your data and then customize its therapy to you. But there are so many people who currently can't afford therapy or don't have access. So much so that, I think, for every 100,000 people worldwide looking for a therapist or for help, there are 13 human professionals. So, there's this huge, huge gap in terms of supply and demand - obviously not evenly distributed.

If you live on the upper east side in Manhattan, you're not going to have a problem finding a therapist. There are plenty. You can find a therapist for your dog. But that's certainly not true for other parts of the world.

So, in those cases, you could have an AI that not only can read up on the latest science - here's a paper that got published six months ago describing a more effective way of treating PTSD or depression - but can also learn, over time, what is best for you and how to communicate with you in the most effective way.

Kevin:

Yeah, yeah, I hadn't thought about that application. That makes a lot of sense. As you were talking, I was thinking of an Oura Ring with a lot more functionality.

The Oura Ring, for those of you who don't know, is just a little device you wear on your finger, and it tells you, hey, you didn't move much today, or you didn't sleep that well last night, stuff like that. And that's helpful - you can adjust your behavior - but you're describing something much richer.

Sandra:

Yeah, and I think it's a great example. They are currently implementing AI coaches. So, tracking is, to some extent, valuable, because we want to see how well we sleep and what our physical activity levels are. But what most of us want in the end (and this, again, is what the neighbors were good at) is advice. Not just mirroring back to you what your life looks like, but telling you what you could do to make it better.

Kevin:

Okay, well, that's a good way, I think, maybe to pivot to the last third of your book, which is talking about how to make data work for us. And you say that you focus mainly on principles, not on specific recommendations, but there are some kind of general ideas you've got that I think are worth exploring.

And the first thing you talk about - and I think this is getting into how you avoid some of the negative consequences of psychological targeting - is an opt-in versus opt-out framework for collecting personal data. So, maybe you can explain how that works now, and how it might work in a better setup?

Sandra:

Yeah, absolutely. And as you said, this is mostly about protecting people from the most egregious abuses. The way that it's currently set up, in most cases, your data is just being tracked continuously once you've consented to the original terms and conditions that, obviously, nobody reads, nobody has the time to read.

So, by signing up to a product, most of the time you let these products grab as much data as they want and do with it whatever they want. They can use it to make their products better, that's true, but they can also sell it on to third parties, and you have no control over that.

So, you could opt out. In some cases, at least, you can say, here's the data that I don't want you to track, and I still want to use the product without the tracking - which is sometimes possible, not always. But the burden of opting out is on you. And we all know that we're lazy. As a human species, the last thing that we want to do is go through all of the products and opt out of every single one of them after reading the terms and conditions really carefully, because we now understand what might happen with our data.

There's just no way that we're going to do that, because we all only have 24 hours in a day, and hopefully we have better things to do than opting out of all of these data-tracking policies for all of the products and services that we're using. So, the switch to opt-in essentially makes use of your laziness as a superpower (that's how I think about it).

If it's true that we need a good reason to give someone our data - otherwise laziness takes over and we just don't opt in - that means a company now has to convince me that by using my data, they're actually making the product so much better that I say, okay, you know what, I should change the tracking settings so that they can actually use my data. And in many cases that might be true.

If you think of YouTube, YouTube has an option to just get rid of your behavioral history entirely and just kind of start with a blank slate every single time that you open it. So, you don't get the recommendations for which videos to watch. And on some level it's extremely annoying.

There are so many times I have attempted this experiment where, on my desktop, I keep the recommendations, so it knows which videos I've watched before, and then on my phone I switch them off just to get a comparison. It's so annoying.

My son is crying, and I want to pull up a quick video for him to watch, and I can't find it because there's no history. So, in this case, YouTube would probably convince me: actually, the value it's adding is so good that I'll actively opt into some of the data tracking. But it would really put the onus on companies to say, I am going to offer a product that's so much superior with data tracking that you're willing to accept it. And in the absence of you taking action, your data is protected.

Kevin:

I've got two questions here. Well, the main question is: wouldn't that just put us into a world kind of like now with cookies, where you go to a website, it asks you to accept all cookies or whatever, and you just end up accepting with no real idea of what that's doing, mainly because you want to use the product?

And so, I can imagine opening up my Gmail and being asked, do you want to opt in to allow data collection? And, well, I really need to use my email, so, yes. Or am I oversimplifying? I mean, how would you avoid the situation where they just withhold the service completely if you don't opt in?

Sandra:

I think, I mean, perfect question because I 100% agree with you, right? If we're doing it the same way that we're doing it with cookies and you can't use the service unless you say yes, then most people are going to say yes.

So, we can talk about in a second what I actually think is a smart way of dealing with the idea that you do want the service and the convenience and the personalization, but you don't want to give your data in the first place.

Now, the one thing that the companies and websites that take cookies seriously, and do want to support you in making the right choice, do is make the most obvious selection "accept only the necessary cookies".

In most cases, the websites that try to grab as much data as possible, under the regulation that requires you to accept cookies, make the option of accepting all of them the most salient one. You can go in and untick the boxes, but nobody does that. And the button that is red and blinking is the one that says: just accept all, and I'm going to share my data anyway.

There are a few websites - and I always appreciate them when I see them - where the red blinking button, the most obvious one, is actually the one that says, I accept only the necessary cookies. So, in this case, it's not more work for you to protect your privacy. It's about making it easy for people to accept only the necessary cookies.

And the same could be true for data. So, it could be that the mandate is that, in addition to you having to accept the cookies, the option made most salient is the one that protects your data. Then you get around some of that issue.

Again, it could be that we're still not able to use the services without giving our data. That becomes another regulatory question. And it also becomes a bit of a competitive question: can these companies that require you to give away your data still operate if they have competitors who don't?

And here's where, I think, actually the ideal solution lies, even kind of coming back to my YouTube example. What I said is that YouTube might be able to convince me to give my data just because I really want this recommendation, because it's annoying to start from scratch every single time. In an ideal world, I would actually get those recommendations without having to give my data to YouTube.

And that sounds like, well, how would that ever be possible, because you need the data to make the recommendations? That's no longer true with the technology that we have. What YouTube can do is make use of the fact that your phone is essentially a supercomputer. Your phone is like a million times more powerful than the computers that we used to launch rockets into space.

And what they can do is, instead of me sending my data to YouTube and them making the recommendations there, they can send their intelligence - their recommendation model - directly to my phone, where it's updated locally based on my viewing history.

All of my data just sits on my phone. It never leaves the phone. YouTube just sends me the algorithm to run locally and make recommendations: okay, based on what you've been viewing before, this is the video that you probably want to show your son when he's not falling asleep.

So, it makes the same recommendations, gives me the same convenience, gives me the same personalization, but YouTube doesn't have to see the data. Now, we still want to make sure that everybody benefits: YouTube's algorithm should get better over time, and for that, my data is actually helpful. So, instead of sending my data to YouTube, I can still send back an updated version of the intelligence.

So, what YouTube gets is: here's how I want you to tweak your algorithm to make it a bit better. But that's just the intelligence that I'm sending, and I'm not sending you my data.

And for me, that's a total game changer, because now I can say, hey, I get exactly the same benefits, but without the downside of you having my data. And that solves so many problems for consumers, because, in a way, I can now have it all - in a way that I never could in the village.

In the village, I could never get the support from my villagers if they didn't get to see my data, if you want - if they didn't observe who I was and what I wanted. Now, in this world of technology, we actually can have both.

And I would argue that it's also good practice for companies to do that. That's always the question that I get, why would companies agree to that? Don't they just all want to collect that data? I don't think so.

So, unless you're in the business of selling data, in which case you probably would not want to go with that strategy, you're much better off providing the same service, convenience, and personalization without sitting on a pile of gold - the collected personal data - that you now have to protect.

If you look at the number of data breaches, and the costs they impose on companies, both have gone up rapidly over the last couple of years. So, holding all that data is a huge financial risk for companies. And it's also a reputational risk.

If a data breach gets out and people know about it, that's a reputational hit that you're taking. And on the other side, if you can be the company that says, hey, we offer exactly the same kind of product as our competitor, but we don't actually need to see your data - your data is protected - people might actually switch to you, because they get everything they want without the risk of their data being abused.

Kevin:

Yeah, that seems to make a lot of sense. I mean, can you just explain, I just want to make sure I understand what gets sent back from the phone to the company in that situation.

So, it's not all of your data. It's a, what, an anonymized version? Or is it just kind of reduced to a set of, I don't know, factors as opposed to all the specifics? How does that work?

Sandra:

It's not the data itself. It's essentially tweaks to the model. Take a regression analysis. A regression analysis essentially tells you how certain inputs - certain variables - are associated with an output. And what we get is a coefficient: it tells us, if X goes up by one, here's what happens to the outcome variable.

So, what I'm doing here is essentially sending you updated coefficients. I'm telling you, here's how I want you to update the model, but you don't learn anything about the underlying data.
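In code, that exchange might look like the following sketch of one federated round for a simple linear model. The data, dimensions, and learning rate are all invented; real federated-learning deployments add secure aggregation and often differential-privacy noise on top:

```python
import numpy as np

def local_update(global_w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, epochs: int = 5) -> np.ndarray:
    """Runs on the phone. Returns only the coefficient delta, never X or y."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w - global_w                    # the only thing sent over the network

rng = np.random.default_rng(2)
global_w = np.zeros(3)

# Each device computes a delta on its private local data...
deltas = []
for _ in range(10):
    X = rng.normal(size=(50, 3))           # local features (stay on device)
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 50)
    deltas.append(local_update(global_w, X, y))

# ...and the server improves the shared model by averaging the deltas.
global_w += np.mean(deltas, axis=0)
print(global_w)  # moves toward the true coefficients without seeing any data
```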

Kevin:

Gotcha. Okay, that's a great explanation. Thank you for that.

Another idea that you talk about in the book is this notion of a data co-op or a data union - people banding together to control and manage their data. We had Rana Foroohar on the show a couple of years ago, and she talked about data unions in her book.

And I got very excited about it, and I tried to find one and join one, but I didn't have any luck. So, could you maybe just explain the concept again, and then are there practical steps we can take now to join one of these co-ops, these data co-ops?

Sandra:

Yeah, it's a great question. The idea behind data co-ops is essentially: how do you not just alleviate some of the risks associated with your data being out there, but actually maximize the utility and value that each and every one of us can get out of the data that we generate? They are member-owned entities of people who have a shared interest in using their data in a certain way.

So, you could imagine expecting moms - which is my go-to example these days - who want to pool their data to understand: what should I be doing, based on my genetics, my medical history, my lifestyle, my environment, to make sure that I'm healthy and the baby is healthy?

Now, I don't want this data to go to a pharma company because I don't trust them, but I would happily pool my data with the data from other expecting moms in what is, essentially, an entity that has fiduciary responsibilities to its members.

So, it's legally obligated (the same way that financial institutions are legally obligated to act in the best interest of their customers) to help me make the most of my data. In this case, we could figure out, based on these different trajectories, medical histories, and genetics, what a specific woman should be doing at a specific stage of her pregnancy.

Now, the hard part - and I think this is why we don't see more of these data co-ops, because on a conceptual level they make a lot of sense - is that they're not easy to set up. It takes coordination: in this case, hundreds if not thousands of women saying, okay, let's get together and start one of these entities. Or it takes a visionary who says, okay, I'm the person who is going to put in all of this effort and get other women behind it. And most of the time they are run as nonprofits.

So, it's not that you establish one of them and then make a lot of money running it. That comes back to this idea that they're member-owned and want to create value for their members, not necessarily for the entity. But there are a couple of examples of existing data co-ops that I think are very compelling.

My favorite one is one in Switzerland that operates in the healthcare space; it's called MIDATA. And they have different problems that they tackle, but one of them is essentially understanding and better treating multiple sclerosis, which is one of these diseases that is so poorly understood because it's determined by anything from your genetics to your lifestyle to your medical history.

And what they do is they essentially have patients who are part of the co-op, but then also non-patients, because you need a comparison group to see how symptoms track in healthy individuals versus individuals suffering from MS. And the benefit that members of this co-op get is that, by sharing their data with the co-op, they not only contribute to a better understanding of the disease itself, they also benefit directly, because the data co-op now has access to the symptoms and the treatments of thousands of people.

And now, the same way that Amazon can say, hey, here are the products that you might respond to most positively, the co-op can say, well, we've seen other patients with symptom trajectories similar to yours. It communicates directly with that patient's doctor to say, hey, based on everything that we know about your patient and everybody else in our data set, why don't you try treatment X? Because other patients who've had similar trajectories responded really positively to that treatment as opposed to something else. And then the doctor reports back to say whether it was actually helpful in the end. And that's a completely different model.
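The matching Sandra describes, finding patients whose trajectories look like yours and surfacing what helped them, is essentially nearest-neighbor recommendation. A toy sketch under that assumption, with entirely made-up trajectories and treatment labels:

```python
import numpy as np

# Toy sketch: suggest a treatment by finding the co-op members whose
# symptom trajectories are closest to the new patient's.
# All trajectories and treatment labels below are fabricated.

members = {
    # member id: (symptom scores over 4 checkups, treatment that helped)
    "A": (np.array([3, 4, 5, 6]), "treatment_X"),
    "B": (np.array([3, 3, 4, 5]), "treatment_X"),
    "C": (np.array([7, 6, 4, 2]), "treatment_Y"),
}

def suggest(trajectory, k=2):
    """Return the treatment most common among the k nearest members."""
    ranked = sorted(
        members.values(),
        key=lambda m: np.linalg.norm(m[0] - trajectory),
    )
    nearest = [treatment for _, treatment in ranked[:k]]
    return max(set(nearest), key=nearest.count)

# A new patient whose symptoms track members A and B most closely:
print(suggest(np.array([3, 4, 4, 6])))  # -> "treatment_X"
```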

Usually what happens is, if you're one of these people who has a disease that's poorly understood or still pretty rare, your best hope is to give your data to a pharma company. Best case scenario, they develop a drug that you can then pay millions of dollars for so that you benefit. It's absolutely crazy.

In most cases, you don't benefit at all, because either you don't have access to the drug or it doesn't come in time for you to reap the benefits. Data co-ops completely flip that on its head, because you can benefit immediately. They're just harder to set up.

So, I think that's one of the reasons why we don't see them as much yet. But there's a world in which existing entities could take on that role.

So, I think the most promising solution, which I've heard Sandy Pentland at MIT argue for, is: why don't credit unions play some of this role? Because they're already trusted entities that organize some of your data. This could be one of the entities that actually helps us facilitate the processing of our data.

And again, because they're legally obligated to act in your best interest, they might also be the entities that are pushing for these technologies like federated learning, where they don't, themselves, want to hold the data, but they want to facilitate the exchange. So, they help you connect your data to some of the providers that you want to share it with. And then it's just like a trading of intelligence as opposed to trading of data.

Kevin:

So, it sounds like you would characterize where we are now as in the very, very early stages of that. Yeah.

I'm curious because we're kind of bumping up against our time limit now. What online tools do you use? How do you protect your privacy and get the most out of your personal data?

Sandra:

It's such a good question. And I think just looking at myself is one of the reasons why I've become a lot less optimistic about just putting people in charge.

Like the cookies… I think about this all the time and I'm very much concerned about the privacy risk, and my loss of self-determination, and still I can't keep up with it. Even though I understand the space fairly well, I don't have the time and I don't have the energy to manage it properly.

The one part where I might be a little bit more mindful than other people is the phone. Just because the phone is like this person looking over your shoulder 24/7. It knows exactly where you go, who you meet, and so on. And we mindlessly accept all of these requests from apps that we download.

You have a weather app that wants to tap into your microphone, your GPS, and your photo gallery. And you're like, you clearly don't need access to my photo gallery to tell me what the weather's going to be like in New York tomorrow. So, with that, I'm a little bit more mindful.

But generally speaking, just observing my own failure is why I kind of advocate for these technologies that just make it easy for people to do the right thing.

Kevin:

Could we have AI bots that obfuscate who we are? So, like, I have a personal bot that, when my data gets sent out, takes it and throws a whole bunch of random stuff in there, so that all of a sudden I just look like, I don't know, a random data generator. Is that possible?

Sandra:

We could. I just don't think it's the ideal solution, because then you don't get the upside. I don't want you to live in this world where you have to choose: I can trick the algorithm so it's not going to be able to figure out who I am and what I want, but then I also don't get what I want.

That's the terrible YouTube example, where you're just entirely lost. There's so much information, so many products out there, that you need some kind of filtering. So, ideally, I'd figure out a way where you get the personalization without the risk of having your data out there.
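For the curious, here is a toy sketch of what Kevin's obfuscation bot could look like, padding real browsing events with random decoys; the categories and noise ratio are hypothetical. It also makes Sandra's objection concrete: the decoys that hide you from a tracker equally blind any recommender trying to serve you.

```python
import random

# Toy sketch of an "obfuscation bot": mix a real event stream with
# random decoy events so an observer can't tell which interests are
# genuine. All categories and the noise ratio are hypothetical.

CATEGORIES = ["finance", "gardening", "esports", "opera",
              "crossfit", "astrology", "woodworking"]

def obfuscate(real_events, noise_ratio=3, seed=None):
    """Pad each real event with `noise_ratio` random decoys, shuffled in."""
    rng = random.Random(seed)
    decoys = [rng.choice(CATEGORIES)
              for _ in range(noise_ratio * len(real_events))]
    mixed = real_events + decoys
    rng.shuffle(mixed)
    return mixed

real = ["finance", "finance", "opera"]
print(obfuscate(real, seed=42))
# The tracker now sees mostly noise - but so does any recommender
# trying to personalize for you, which is exactly the trade-off
# Sandra points out.
```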

Kevin:

Gotcha. Okay, well, I think that's a good place to leave it for today. Sandra, thanks so much for writing the book and for taking the time to share your ideas with us. Without question, this is a topic that impacts everyone listening. So, thanks for joining us today.

Sandra:

Thanks so much for the fun conversation.

Kevin:

Okay, well, the book is called Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior. So, please make sure to go get a copy and to follow Sandra's work, because I think you can tell that not only are these very important ideas, they're also not being discussed enough in mainstream media.

So, for all of us here at Top Traders Unplugged, thanks for listening and we'll see you next time.

Ending:

Thanks for listening to Top Traders Unplugged. If you feel you learned something of value from today's episode, the best way to stay updated is to go on over to iTunes and subscribe to the show so that you'll be sure to get all the new episodes as they're released. We have some amazing guests lined up for you and to ensure our show continues to grow, please leave us an honest rating and review in iTunes. It only takes a minute and it's the best way to show us you love the podcast. We'll see you next time on Top Traders Unplugged.
