In this episode, host Angeline Corvaglia explores the concept of parasocial AI with Sonia Tiwari, an AI consultant and parasocial learning researcher. They delve into the emotional bonds people form with AI characters and chatbots, discussing the origins and implications of these one-sided relationships. The conversation covers how AI chatbots can be designed to appeal to users, the potential mental health risks, and the need for responsible usage of AI, especially among children and teenagers. They emphasize joint media engagement and the importance of caregiver and educator awareness to mitigate risks.
00:00 Introduction and Guest Introduction
01:03 Understanding Parasocial Relationships
03:40 Parasocial Interactions and AI
07:50 Designing AI Characters
13:06 Ethical Concerns and Safeguards
15:36 Practical Advice for Parents and Educators
20:59 Conclusion and Final Thoughts
Special thanks to Sonia Tiwari for taking the time to be a part of this episode!
Follow Sonia on LinkedIn https://www.linkedin.com/in/soniastic/
Episode sponsored by Data Girl and Friends
They are on a mission to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship, inspiring the next generation to navigate the digital world confidently and responsibly through fun, engaging, and informative content that empowers young digital citizens and their families.
Find out more at: https://data-girl-and-friends.com/
List of Data Girl and Friends materials about some of the topics covered:
Contact us for any questions or comments: https://digi-dominoes.com/contact-us/
So I'm really looking forward to finding out more from an expert on this. So, Sonia, can you tell us more about yourself? Thank you for coming and being here. I'm so excited. So tell us more about yourself, and then you can maybe jump right into what [00:01:00] parasocial even is, because I think a lot of people don't know that.
You start thinking as if, oh, Harry is my friend. And so when something bad happens to Harry, you feel emotionally invested in the story: oh, how is he going to recover from it? Or, you know, when Dumbledore dies, you feel as if your own teacher, a loved mentor, passed away. That kind of emotional connection was my first introduction, in a good way: you feel emotionally attached to characters, and it's one-sided.
It's not bad in and of itself. It's just, it can be. Anything can be weaponized, [00:02:30] right? Like social media: there are environmentalists who are using it for climate education, and there are really toxic cosmetic companies trying to tell teenagers to change everything about themselves. So you can leverage any media for any purpose.
Liz Lemon is like literally my best friend. And so that's the foundation of parasocial relationships: there's some kind of shared experience, whether it's grief, happiness, just the location, or characters who might be your [00:03:30] age. You find some common ground for an emotional connection, and then feel attached to a fictional character who cannot talk back.
It actually started in, like, the 1950s, with television. And parasocial interaction was more in terms of AI, or like a fictional conversational agent. The common thing is that in both cases it is a one-sided relationship, but in a parasocial interaction there is, like, a simulated second side as well. In essence, it's still one-sided, right? [00:04:30] Because you're feeding in the information, and like an echo chamber, it's relaying what you want to hear.
And there's even, like, a video of a guy on television making some pauses and saying positive things: it's their loss, you're so amazing. But in essence, chatbots are doing a better job of just that: simulating that you're being heard, that you're being valued. And it's a [00:05:30] systemic problem right now, right?
So this becomes, like: why not just talk to someone, even if it's a fictional chatbot, that would make you feel comfortable about whatever you're going through? And so now parasocial interactions have taken the mental health issue to a whole new level, which is what we learned in that news story we discussed a while ago, right?
Yes, this is a chatbot. Even so, this is next level. How do they manage to make people get emotionally attached even though they know that it's not real? Is that something that's built into it? How does that work? Yeah, so some of it is by design. Like I said, anything can be weaponized, right? So good character design principles: one part is visual.
In terms of designing something from scratch, you can use what we learned in character design school: look for an appealing backstory, appealing colors and proportions, accessories, the character doing small friendly things; for example, [00:09:00] V AI, the animated chatbot character aimed at kids.
So psychologically, when you see something vulnerable, you automatically place it at the top of your priorities of things to take care of. Like a baby: if you see a baby is about to fall, your instincts, whether or not you're the parent, are triggered to save whatever is vulnerable. So that's why cute characters have a certain appeal for kids, because they trigger this nurturing instinct. [00:10:00] That's the psychology behind it. And in Japan, even signs of danger, floods, or volcanoes erupting have some cute illustration along with the sign, because it will capture the attention of people looking at it.
So the same concept applies. Going back to that earlier example: social media can be used by a climate activist for doing something good, and social media can be used by a cosmetic company to sell crap to a bunch of teenagers. And this is just an example of one type of character for a single age group.
For teenagers, as in that news story, it was a Game of Thrones character. So there is a character for everyone. It's [00:11:30] kind of like spotting your vulnerabilities and tailoring its entire world just to address and attack that one vulnerability. Yeah. So if an animated chatbot ever got to me, it would be a cute one. Some teenager who is, you know, on that verge of adulthood might be attracted to the Game of Thrones character. So we all have one weakness. I think a good way to help people [00:12:00] understand is this: the difference between now and, I'd say, 20 years ago is that there's so much data about every person out there, basically every second that goes by we're creating more data, and AI is getting better at analyzing that data.
Do other things, stay focused on a purpose. We're trying to solve a complex systemic problem with simplistic solutions. By saying, well, what were the parents doing, the parents should pay more attention, that's a simplistic solution to a systemic problem, right? Parents are not even aware.
School systems are not aware. I was one of the ones that took fewer risks. I was afraid to do what the other people were doing. Still, there are things I'm glad my parents don't know I did, because [00:14:30] it's just normal. Teenagers are going to try things out. And, as you say, I was paying attention to it, and it took me completely by surprise.
Yeah, thank you. I also don't know whether I should tell people to experiment with it or not, because then you open it up to privacy issues and who knows what. I love what you did, right? You're an adult; you simply modified your voice to simulate a child's conversation. And [00:15:30] I think more caregivers and educators should do that before they hand it over to the kids.
Have a purpose in mind: if you are going to use, uh, an AI chatbot to write a history report, do it with a class, with your teachers. Ask questions about things you learned in a textbook that you would like to hear responses on. So: some kind of context, some kind of purpose, and some kind of community, a trusted adult who is constantly monitoring this whole thing.
And so what you're doing is like red teaming on your own. That's the kind of effort we will all have to make before we, [00:17:00] you know, give it to children unsupervised, on their own. It actually doesn't take more than five minutes to be pretty surprised. Um, I was thinking of one and I'm going to test it out. And I know it's going to work, too.
And I'm imagining a teenager who's got a small obsession and crush on someone on the other side of the classroom. They can take that person's image. If there's any recording of that person's voice, they can take that person's voice. They can [00:18:00] create a backstory, and basically, with no problems, they can just create a copy of that person.
These tools are much more powerful than people realize. They'll probably try it anyway, but at least they'll have in the back of their minds how dangerous this is, and once it feels like it's maybe getting out of control, the chances are higher that they'll come to you, right? Right. I actually played the one where you talked about the child saying, [00:19:00] well, I saw this movie, The Wild Robot, and the robot was a bird's mom.
Why can't you be my mom? I played that video for my son. And I also shared the news story about the teenager and Character AI, because he has been observing secondhand, listening to me research and talk about AI. I have demonstrated some of the math tutorials with, um, ChatGPT Omni, with the video and voice.
And I love what I heard in the interview with the mother of that child who was a victim of Character AI. She said that, [00:20:00] as parents, we usually warn teenagers: don't talk to strangers online, don't give your personal information to any strangers. But we don't realize that that stranger can take the form of a chatbot of the child's own creation.
That's really important, because if we're not aware of something ourselves, how can we advise someone else? That is a huge gap in that kind of training. There is. There's a huge gap, and [00:21:00] unfortunately we're almost out of time, but I do want to give some kind of hope to people who might be wondering, how am I going to start?
It doesn't take very much time. It usually takes half an hour max to understand how this thing feels. And then you can have conversations about it, and that's already the beginning; then slowly you can maybe even learn about it together. But just knowing what they're doing and talking to them about it is a huge, huge first step.
I'll actually just conclude with one beautiful thought that I learned from my yoga teachers: whenever something new happens, we need to respond, not react. There's so much reaction going on: oh, AI is doing this, AI is doing that. But a response is a more [00:22:30] thoughtful, inspired action: yes, it's here; what do we do from here? And the action items that you discussed, test it out before you share it, have a safe environment to try things with a purpose and a trusted adult, those kinds of things, that's a response. A reaction is just, man, this world is going to end, nothing's going to work out. Reaction doesn't take us anywhere other than panic.
That is the perfect way to close. Please let us know what you think about what we're talking about in this episode and the others. Check out more about us and subscribe at digi-dominoes.com. Thank you so much for listening. I'd also like to thank our sponsor, Data Girl and Friends. Their mission is to build awareness and foster critical thinking about data privacy, AI, online safety, and digital citizenship through fun, engaging, and informative content.