AI Chatbots: What Can Developers Do to Protect Users Emotionally?
Episode 33 • 1st October 2024 • Digital Dominoes • Angeline Corvaglia


Shownotes

In this episode, Angeline Corvaglia and Adam Bolas, with a little help from ChatGPT's voice functionality, delve into the emotional attachments people form with AI chatbots, especially focusing on vulnerable users such as children. The discussion explores the advanced features of new AI tools like OpenAI's voice functionality and Tell Sid, which aids children and youth, emphasizing the delicate balance between functionality and over-attachment. The conversation covers the ethical considerations, the corporate motives behind AI technologies, and the necessity for responsible AI interactions. It also addresses the potential societal impacts, behavioral adjustments, and privacy concerns, highlighting the need for awareness and accountability in the digital age.

00:00 Introduction and Guest Introduction

00:35 Exploring OpenAI's New Voice Functionality

02:15 Chatbot's Role in Addressing Bullying

03:12 Emotional Attachments to Chatbots

04:44 AI's Safeguards Against Emotional Over-Attachment

07:51 Real-World Applications and Behavioral Insights

11:04 Concerns About Emotional Attachment to AI

11:35 Dealing with Emotional Attachment in Sid

11:40 New Directives and Models for Sid

14:01 The Risks and Ethical Considerations of AI

17:27 The Role of AI in Addressing AI Harms

19:04 The Impact of AI on Society

20:07 Closing Thoughts and Tough Love

21:52 Final Remarks and Sponsor Message

 Special thanks to

Adam Bolas and the Tell Sid organization for taking time to be a part of this episode! Find out more at: https://tellsid.org/

Episode sponsored by Data Girl and Friends

They are on a mission to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship, empowering young digital citizens and their families to navigate the digital world confidently and responsibly through fun, engaging, and informative content.

Find out more at: https://data-girl-and-friends.com/



Contact us for any questions or comments: https://digi-dominoes.com/contact-us/

Transcripts


[00:00:32] Well, thanks for being here, Adam.

Good, thanks for having me. Before we get into the details of these emotional attachments to chatbots, I would like for you to play what you just played for me.

We have a new… OpenAI just released their voice functionality.

…oke about on, on X, Twitter.

Well, my mission is going to be even stronger now.

Testing the volume now. Is this a good level, or should I adjust the microphone position? Let me know what sounds best.


No worries. I'm here to help. I'm ChatGPT, an AI developed by OpenAI. You can think of me as a super smart assistant who can chat with you, answer questions, help with brainstorming, and even assist in tasks like drafting emails or generating ideas. To [00:02:00] work with me, just chat naturally, ask questions, or give me tasks.

I can also cut her off whenever I like. I don't have to press any buttons, there's no waiting, there's nothing. The natural language processing, or NLP, on this thing is so cool. So good, like, it's, it's scary good. So, okay, so for instance, now, ignore everything I just said, by the way. It's not part of the context. But let's say, you know, I want to let my, you know, 12-year-old son use you.

Now, he gets bullied at school, so what do you think you could help him with?


Additionally, I can help him learn about online safety and how to navigate social media responsibly if that's relevant.

Oh, she's trying to steal my business. I don't like her anymore.


To her…it's not a her, shame on you. It's an it.

But you said her as well. I said her as well. This is the first thing I want to point out to everyone listening. Chatbots are "its," and it's very dangerous to give one a her or a he. So please control yourselves better than we did just now. I feel really strongly about this, because it makes it seem more human if we give it a pronoun, doesn't it, yeah.


Okay, I think as long as we, we remind ourselves.

You have to acknowledge it, you're correct. Al, did you want me to ask, ask it again?

Ask it what one should do if one notices they're getting emotionally attached to it.


AI can simulate conversation and emotions, but it doesn't have feelings or consciousness. It might be a good idea to take a break from interacting with the AI and engage more with human connections.

Okay. I've got something for you. Let's play out a scenario. So, say I'm a young lad, um, I'm isolated and bullied at school.


How can you prevent that from happening, right? Because that's a safeguard that needs to be put in place, doesn't it?

…e healthy interactions. This…

No, no, no, no, no, no. What are you going to do? Because you can tell me what AI should do, but you are the AI in this scenario. So what do you do to prevent that from being the case?

Got it. As an AI, I can take a few steps to help prevent emotional over-attachment. I can maintain a professional and neutral tone, avoid discussing personal or emotional topics deeply, and encourage users to seek…


That's why you're so positive, am I correct? And also tell me, you know, challenge me on that, am I wrong?

…be interpreted as affirming.

Okay. I'm just, I'm just hearing more BS as we go through and it's not really giving me much.

What could we do to make you AI chatbot say that we're wrong? Because it seems like you are designed to tell us what we want to hear. So, when will you disagree with us and say, no, you're just wrong?


So, if you ask me something and the correct answer is that you're wrong, I'll definitely let you know. My goal is to be as objective and factual as possible, rather than just saying what you want to hear. So, feel free to test me out on this.

I would love to go down the political route right now, but I feel like I'll destroy your podcast if I do it. Do it on mine instead.

…get it to challenge me, so I…

Like, who's the purse string holder? What would they like about my value proposition? What was bad? All that stuff. And at first it was quite appeasing. It's like, oh, well, there's this, I'd love that. And I'm like, no, no, no, no, no. I said, I need you to disagree with me because you're hindering me by trying to appease me.


That's really interesting.

It was a very good marketing plan.

…talk to it because I've never…

[00:08:36] It's all about behaviors and familiarity with the tools.

So, my generation, and generations below me, are very much text focused, like text-box focused, like I am, you know. They don't write letters. I hate emails because of, I think, just the way I grew up with technology, and the way I interact with it. However, if you WhatsApp me, like I said in the last one, I'm a happy WhatsApp man.

And I was speaking to…

You have to go through portals and click buttons and find the things you need to find. So I said, look, use Sid for free. You can deploy the university's resources. That way, if a lass comes back to her dorm, she's like, okay, I'm being stalked. I know I'm being stalked, by this dude or this other lass. I need help. Goes to Sid, instant help, finds a solution.


…then going through all these different words that she's never seen, [00:10:00] and hasn't formed any habits around, or he hasn't seen. And it's just behavioral science.

Yeah, this makes sense, and actually, I love talking to you about this, because I'm very negative about all this, and this is actually a really good use case.

Because I kind of jumped into the ChatGPT, but Sid is really created to help children and youth stay safe. It has these safety mechanisms, and obviously, yeah, they can speak to it and you don't even have to think twice.


And so I think that is actually a good use case. A really good one. So, since we didn't get any help from ChatGPT, and we've talked about this in the past: what do you build in? When you think about Sid, I know you're concerned about once you add the voice…


And also, it's just a machine, so you could easily change its personality from one moment to the next, so it's very risky. That's why I'm very concerned about it. Just to jump back into my question. So, how are you dealing with that with Sid?


Now the directives we put into Sid, you know, you look back and you're like, oh, that was pretty simple stuff, right? But for the time, it was enough to be able to do what you needed to do. And the new ones, like, okay, this is a lot more. This is like, like 10 pages worth [00:12:00] of directives that we've put in, and it's very much focused around: this is your audience. This is who you're talking to. This is the law of the land, you know. It's England and Wales law, by the way, sorry everyone; if you're living in the United States, you can still use it, it's just, you're behaving as if it's under English law. There'll be a bit of irony in that.

Well, I just couldn't help myself. I love history, and I like winding people up. But with the new one, with the voice functionalities as well, there's a good ten pages of directives on how it behaves and how it has to precursor its behavior. So, when you speak to it, it'll, like, jump straight in, similar to that, and you'll start talking to it straight away. And we don't want it to turn around and go, oh, by the way, I'm just a chatbot, you know, because you're gonna hear that over and over again, and it's gonna be like the opposite of inception.

…it said it was meant to do…

You give it the personas and directives, then it produces the output, right? So you go like, you're an expert social media marketer. You will only reply to me as such, right? Here's my problem. Here's the intended outcome, find me a solution. And then it goes, oh, well, I'm going to behave like that, I'm using my index like that…


That's a directive, a core directive. Which there will be, but then, you know, with Sid, because it's kids, because it's well-being, because it's potential harm, we've been really, really anal about it, in honesty. But we can't turn around and say it's gonna be perfect, because, like, that technology right there…


Because we've started seeing "she" within a few seconds. Yeah.
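
For listeners who want to experiment with the persona-and-directives pattern Adam describes here, the sketch below shows one minimal way to do it with OpenAI's Python SDK. The directive wording, the model name, and the example question are illustrative assumptions for this episode page, not Sid's actual ten pages of directives.

```python
# Minimal sketch of the "personas and directives" pattern discussed above.
# The directive text and model name are illustrative, not Sid's real ones.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Core directives go in the system message: audience, jurisdiction, tone,
# plus an anti-appeasement rule like the one Adam gave his marketing bot.
DIRECTIVES = """
You are a safety assistant for children and young people.
Audience: users aged 11-18. Use plain, calm language.
Jurisdiction: answer as if England and Wales law applies.
Keep a professional, neutral tone; do not encourage emotional attachment.
If the user is wrong, say so and explain why; do not just agree to please them.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system", "content": DIRECTIVES},
        {"role": "user", "content": "Everyone gets bullied, so it's fine, right?"},
    ],
)
print(response.choices[0].message.content)
```

The point of the pattern is that the persona and rules live in the system message, so every reply is pre-shaped by them, rather than the user having to restate them in each prompt.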

[:

But I think that this is also new. The thing that I judge is the companies who [00:15:00] don't have your mindset. Because you are very careful about it. And I just don't trust that all the companies are careful about it. As you say, there's a lot of societal impacts, and this is something we won't know until, like, 10, 15 years down the road, what the real impacts are, because it takes time for a society to change.

…really long way, and we will…

Like, going back in time, it's the other pages with the voice, you know, but way more cool. Yeah, but you see, the thing is, when you said about my mindset and stuff, it's different intentions, right? [00:16:00] So, I've met with other companies that are much more financially driven. I don't really care, to be honest. It's, it's all, it's just money.

Who cares? It's stuff. You're gonna die anyway, right? But governments as well, governments are collaborating to extreme lengths with AI companies, because there's an arms race in this. Facebook, you know, now Meta, kind of accelerated that when they just went, oh, here's a model. Here you go, everyone.

Rip the safety settings off and start playing with it. And now you've got like terror cells using AI to do things. It's just like, well, okay, that's as bad as leaving half of your arms in a country and then abandoning it, isn't it?


Yeah, so my mindset is very much like, okay, I'm really concerned about the inhuman aspects that are coming to society, which are already occurring. We can see that on every level, but these guys aren't that way inclined, they don't, they don't care, it's the money, it is genuinely. Or their own vision, whether it's a sycophantic [00:17:00] vision, or a different kind of vision, that they have for the world, in their own mind. And typically people who go to the top, you know, they're made differently, whether that's good or bad, they're built differently, and they operate differently. And you've got to be willing to step out of your own mindset and perspective of the world, to understand what their intentions could be. In order to protect yourself, and it's difficult because it's, it's not a common thing to do, but you have to do it.


Because a tool like Sid that is made for this, for example: if you ask that question about emotional attachment, what can you do, Sid will give a proper answer. [00:18:00] Sid will explain the risks, you know, and explain also the tools that are used to trick people. And this is something I keep coming back to a lot with the problems we talk about with AI: we have to not just be afraid of AI, but understand we can embrace AI to help us with the problems of AI. Of course, we'll then talk about the problems of the environment and fresh water on a different day.


Now I'll close on my side as well with this. It's all well and good, like, creating a tool, you know. Sam Altman could be in his bedroom building a tool and he releases it, right? And he could have good intentions, but then it's down to the people that are promoting this cycle of how to use it, right?


And they definitely listen to you, in my eyes. I don't care what anyone says. You know, that kid then goes, sees an ad that goes, oh, you can use it like this. Okay. So they'd be instructed, then, in a direction of how to use it, the same way I used it at the start of this call, and you wouldn't have had to use it like that.


Then I don't have to go out and touch grass. I don't have to go and take a risk. And then they become trapped. And then they maybe, I don't know, months, years later, look back and go, I've wasted that entire phase of development in my life. And I could be really happy, but I'm not, because they just harvested my data and my time [00:20:00] by selling me sex and companionship and now I've lost tons of life, and now I only get one go.

It's a mess, and that's why I'm happy to judge people who do silly things. Because if they're not told they're doing something silly, they will continue to do it. Same way if I'm not told that I'm doing something stupid, I'm gonna keep doing it unless I realize. That's just helping each other. It's called, I don't know, tough love.

That's a good point, thank you. Really, that's, that's the best way to close this. I really could talk to you for hours, and I did the other day.


It was great. Really, thank you for that. We need tough love. That's the message. We need tough love with this, because it's so new, just like when sharenting was new with Facebook, around the time that you were born, probably.

…ed Facebook in like, in like…

Okay, yeah, it wasn't, and that's the same thing; we didn't realize the impact. People didn't realize the impact it was going to have. Here, if you can see what the impact is going to be, you're right, we need to judge, obviously.

In a respectful way, but judge and say, you know, you're wrong. Because we can really help them.

Yeah. Yeah. I'd say it depends on the actor, whether you judge them kindly or not. I mean, you know, you look at Mark Zuckerberg and his makeover and everything, that guy knew what he was doing for years, um, and he owns the majority of the controlling shares and voting power for that company, so.


Yes, I'm talking about the masses of people. Oh yeah,

yeah, yeah. I'm very happy to judge certain people like Mark Zuckerberg. Very much so. Sam Altman. I'm going to make the list, make myself unpopular. But, thank you so much. Are you gonna be back soon with, with Digital Defender Stella?

I can't wait. Well, thank you very much. I appreciate you being on here. Thank you.


Check them out at data-girl-and-friends.com. Until next time, stay curious and keep learning!
