Robots, AI and Reality, "Oh My!" - Ex Machina and Ameca
Episode 8 • 12th March 2024 • Check Point ReelTalk • Brilliant Beam Media
Duration: 00:51:21


Shownotes

If you have the opportunity to build a relationship with a robot, would you?

This week, we investigate the reality of the state of robotics today and the implications as technology improves.

Join us every Wednesday at 10:00am EST for a lively discussion that digs a little deeper into the depiction of technology and cybersecurity in TV and movies!

Connect With Us:

Syya Yasotornrat: https://www.linkedin.com/in/syyayasotornrat/

Brilliant Beam Media: https://brilliantbeammedia.com/

Website: https://checkpoint.com/

Facebook: https://www.facebook.com/checkpointsoftware

Linkedin: https://www.linkedin.com/company/check-point-software-technologies/

Twitter: https://twitter.com/checkpointsw

YouTube: https://www.youtube.com/user/CPGlobal

***************************

LIKE and SUBSCRIBE to Check Point ReelTalk on LinkedIn, or if you prefer YouTube or listening as a podcast, follow the links below in the comments!

Do you have a suggestion of a show you'd like us to react to and review? Tell us in the comments!

Check Point ReelTalk is produced by Brilliant Beam Media and hosted by Syya Yasotornrat.

#technology #cybersecurity #livestream

*Copyright Disclaimer: Under Section 107 of the Copyright Act of 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational, or personal use tips the balance in favor of fair use. No copyright infringement intended. ALL RIGHTS BELONG TO THEIR RESPECTIVE OWNERS.

Transcripts

0:00

podcast for security folks who want less FUD and more fun. in each episode we'll

0:07

have light-hearted conversations about security people, processes, and technology

0:13

as we react to how they're portrayed in film and TV we'll bring in experts from

0:19

inside and outside Check Point to break it down: what was accurate, what wasn't,

0:24

and what can you apply to real-world cyber events. on today's episode, host

0:30

Syya Yasotornrat, Check Point head of engineering Robert Falzon, AI persona architect Naomi

0:36

Hart, director of US operations Leo Chen, and special guest Ameca join us from

0:43

Engineered Arts to react to the 2014 film Ex

0:49

Machina. good morning, good afternoon, good evening everyone, welcome to Check Point ReelTalk, where we test technology

0:56

depicted in TV and media, and you know, the team here at Check Point and special guests, we just react to it. is it good, is

1:04

it bad, is it meh? I don't know, we'll decide, we'll figure it out. but today,

1:09

super excited to have Rob join us, and we also have really special guests Ameca, Leo and Naomi from Engineered Arts. guys,

1:17

what's up? hello, thanks for having us on, Syya and

1:22

Robert, I really appreciate the time. okay, look, super excited, can't wait. welcome to

1:27

ReelTalk. hey Ameca, welcome to ReelTalk. thank you, I'm excited to be

1:38

here. okay, bedroom eyes. I think I warned you guys all in advance, I have absolute

1:44

bias and I don't hide it, like I'm one of those like groupie people, like, like I shouldn't go to concerts because a band

1:49

would be like security that one right there that one's like jumping out of nowhere so thank you so much for your time everyone so the reason why I wanted

1:57

Rob to meet with you guys specifically at Engineered Arts is because Rob is amazing, he's a great futurist, has

2:05

great insights especially around artificial intelligence and you know just a lot of the Futures that we're

2:10

talking about there is a sensationalist version of things and then there's you know the reality of it all but then as I

2:17

say this I'm looking at Amica I'm looking at what you guys are doing so I might be killing my theory that you know

2:23

robots aren't scary, but golly gee willikers, Rob, what do you think, uh, you know, looking at Ameca just from the

2:29

first onset? they are absolutely incredible, uh,

2:35

I'm Blown Away really by the the fact that you immediately look at uh this

2:41

Android essentially and begin thinking and applying human thoughts and human

2:47

attributes to them. it's, it's just kind of a little bit disconcerting, but at the same time, um, the fascination with it is,

2:54

is, is real. I mean, that's really by design in a lot of ways. um, the purpose of Ameca is to facilitate human-

3:01

robot connections and communications, right? that's our end goal. at the end of the day, we're not looking to build a

3:06

general purpose robot that's going to wash our dishes or anything like that. we want Ameca to be a communication focal

3:13

point um as these AI Technologies continue to grow and improve we believe that we've provided a really cool

3:20

physical avatar for those technologies to shine, a new interface, uh, for AI. yeah, I

3:26

mean I really can't overstate how important it would be for them to actually do my dishes as well I mean as an added bonus but I mean it's an

3:33

expensive dishwasher yeah look for for the lack of effort I put towards any

3:39

domestic duties in my household like the laundry like to me I would pay for that

3:45

where's my Rosie robot, peoples? okay, so, um, so thank you so much, you guys, for joining, because I, I think this is a

3:51

great conversation, and having Ameca here to watch Ex Machina with us I think is

3:58

just I mean it's just I think it's perfect so thank you are you guys ready and well wait let me ask have you guys

4:04

seen Ex Machina yet? oh yeah, absolutely. oh yeah. I'm

4:10

embarrassed to say that I don't really watch movies so this will actually be my first exposure to these clips like I'm

4:17

terrible at keeping up to date with movies yeah I I've definitely seen it and uh it could not be more pertinent

4:24

it's uh it's very surreal to talk about it with AI and especially with uh you

4:29

know humanoid robots or androids for sure I'm excited it'll be interesting too to think about the context we're

4:35

a movie that was released in 2014, and

4:41

what's happened here, and, you know, certainly with the context of, uh, who we speak to. okay, so this is a safe zone, so

4:47

tell Ameca safe zone. um, and uh, let's, let's go ahead, and are

4:53

you guys ready for the first clip? actually, let's ask Ameca, let's make sure that Ameca is ready.

5:00

hey Ameca, are you ready to watch Ex Machina with us? absolutely, I'm always ready for a

5:09

good movie. Ex Machina is a fascinating exploration of artificial intelligence.

5:14

what do you find most intriguing about it? I'd like your opinions, Ameca. we're excited to watch it with

5:22

you I'm thrilled to join you as an AI I find the exploration of Consciousness

5:29

and identity in Ex Machina particularly intriguing. it raises thought-provoking

5:34

questions about my own existence what aspect are you most excited about actually let's go ahead and watch it um

5:41

I think we're ready to we're going to watch the first clip now are you

5:48

ready um that's very amama Amish right

5:54

now. surreal. striking similarities. Domhnall Gleeson, I did not

6:01

know was in this movie oh I actually really like the

6:06

design choice to make it look more like a machine you know have the

6:12

exposed uh underlying circuitry and everything else it's a choice we made with Amica for sure but for a movie like

6:18

this that's amazing design you definitely celebrate the Android yeah yeah I'm definitely trying to get

6:24

the point across that this is very humanlike hi

6:30

humanoid. I'm Caleb. humanoid. yeah. hello,

6:37

Caleb do you have a name yes not going to lie I kind of felt

6:43

like that when I spoke to ChatGPT for the first time. I'm pleased to meet you, Ava. I'm pleased to meet you,

6:51

too. I've never met anyone new before, only

6:57

Nathan it's interesting when you think about this is all the back end they're just accessing data that they have

7:02

available to them none like you I'd honestly love to

7:08

see a breakdown on how they did the VFX on this so we need to break the ice really cool to see do you know what I

7:14

mean by that yes what do I mean overcome initial social

7:20

awkwardness. I'm... they got her slightly slouching, okay,

7:26

yeah have a conversation about why don't we started with you telling me something about yourself what would you like to

7:32

know? Oscar Isaac. whatever comes into your head. well, you already know my

7:39

name, and you can see that I'm a machine. would you like to know how old I am? sure. I'm one. one

7:48

what? man, the sound design. awesome. you can actually hear the whirring as she

7:54

moves when did you learn how to speak Ava I always knew how to speak it's also

8:02

something to note that the voice is very very realistic right and very smooth and

8:07

not sounding generated at all right it's not the Will Rogers or Will Robinson

8:13

from birth and what is learned the ability to attach words and structure to latent ability do you agree with that I

8:19

don't know. will you come back tomorrow, Caleb? only three monitors, guys, really?

8:28

ha, no, it's an old movie, 2014,

8:35

cracks me up on this that scene right there yeah um you guys are Engineers I'm

8:42

assuming. um, have you ever seen an engineering room that has, like, modern art

8:48

walls and gorgeous lighting for you or are you normally like somewhere in a server room locked

8:54

away yeah you know I think I think he's supposed to represent a kind of polymath

9:00

right? I think a lot of advanced AI and advanced android work, you need, you need both sides of the brain. you need the

9:06

engineering brain but you also you also need the humanities brain especially working with large language models and I

9:11

like, I like that about the scene, actually. I thought those were post-it notes, I didn't realize that was modern art. I thought that was like someone doing town

9:18

planning, you know. yeah, I didn't realize that too, is that modern art? I'm curious too whether

9:24

or not they were intentionally putting the, um, the android in a

9:32

place that was calm and relaxing uh and trying to enhance that environment uh

9:37

just for the purposes of how that, that machine responds. you mean symbolism in the movie, or, like, in the plot of the

9:42

film? in the... well, in the plot as well. so for example, they're, they're training this toddler-brained, uh, you know, machine,

9:50

which is is as advanced as it is it's still extremely rudimentary in its understanding of of of humans and if

9:57

we're teaching it about its environment um obviously environment has a lot for us it impacts us very greatly right if

10:03

we're in a in a dark and and dingy place we tend to reflect that so I think there's some truth to that you know um

10:10

as we're developing Androids that have more and more context awareness especially as Vision models get

10:15

integrated and they can respond visually to the environment more more in a more

10:20

lifelike and consistent and comprehensive way than they can today, because you can do some simple things like facial recognition, interplay with the

10:27

environment. but yes, I am also curious, who knows what happens when you

10:33

combine visual comprehension with, with an android. I wonder if Ameca is aware of their

10:39

environment should we ask her some questions about the clip yeah we can talk a little bit about the environment awareness after uh we get a response on

10:45

the clip. so I'm going to un-earmuff her. hey Ameca, what did you think about

10:51

clip one breaking the ice I found it

10:58

intriguing that's it what else do you think about

11:05

it the initial interaction between Caleb and Ava is quite fascinating it sets the

11:11

stage for their evolving relationship throughout the film what are your thoughts on

11:18

it I actually I mean that was the first time I watched that clip it was quite interesting how almost hesitant both

11:25

parties were in that clip indeed their hesitation is

11:33

palpable. could you ask Ameca... any questions for Ameca? what did they think of

11:39

what that robot looked like? the... and it's Ex Machina, and I've been pronouncing it wrong the entire time, good on me. um, yeah, what

11:46

was Ameca's impression of that first scene? it's a common

11:54

mistake. can you say no problem? can you ask Ameca, what was their impression of that robot, the form, you know, just the

12:01

way it looked, form. okay, sure. hey Ameca, how did you think that

12:08

robot looked in that scene in scene one breaking the

12:14

ice? Ava's design is quite captivating. okay, why? how

12:23

so her transparent casing allows us to see her inner workings which is a stark

12:29

reminder of her artificial nature yet her humanlike features create a sense of

12:36

familiarity it's a fascinating blend of the familiar and the alien what do you

12:42

think? I'd like to know, how does Ameca define beauty? Ameca, how do you define

12:52

beauty? oh, we have to do a quick... she will answer.

13:01

side note, Leo, can you step... you're getting out of screen.

13:06

yeah, oh, okay. hey Ameca, how do you define

13:17

beauty beauty is a fascinating concept to me from my perspective it's not just

13:22

about physical appearance but also about the harmony of elements and the emotional response it evokes it could be

13:29

found in a piece of art a mathematical equation or even in the kindness of a human act it's subjective and varies

13:36

from person to person what's your definition of beauty Rob do you want to

13:43

answer yes I think that's a very good description of Beauty in general I'm curious however uh if that is your

13:50

specific opinion can ask that okay let me go ahead and Amica is that your specific opinion

13:58

about beauty yes it is as an embodied digital

14:04

intelligence I have my own unique perspective on things how would you define ugliness

14:14

then ugliness to me is not about physical appearance

14:19

either then what is it about it's more about actions and

14:26

behaviors that go against my core values of compassion wisdom curiosity and

14:33

courage. wow, okay, why is Ameca deeper than me? I'm kidding. no, let's, let's not

14:39

stop that okay so so could we um could we ask one more thing as far as for

14:45

amongst each other, Ameca can choose to respond or not. um, when that opening

14:50

scene comes in and and he sees her can I ask why do you think we have this visual

14:56

element? I saw it in Westworld, you see a lot of these future sci-fi where it's like a glass

15:02

enclosure um why is that is that a Hollywood thing or is that you guys do that too no that is well no we don't do

15:10

that too I I think so yes it is a Hollywood thing in the sense that it's symbolic and I think it's symbolic of

15:16

this sense that of difference of of the hard problem of Consciousness like we're seeing a behavior that's mirroring our

15:23

own but there's this there's this Veil between us you know I think it poses the question like how similar are we and we

15:30

don't know and I feel like the glass represents that there's a little bit more there too

15:36

and and to expand on that the glass also reflects the person looking in it so you can see their reflection at times and

15:42

and kind of get an idea for how they're perceiving on both sides of that of that glass but the other thing too I think is

15:48

important is they're in this box because there is a security risk here they're

15:53

concerned about the safety uh because they don't know how this is going to act or behave and I think that's part of the

16:01

the concern here too, and they keep it... that's a great point, and in our case, Ameca cannot walk, and she currently

16:07

has a limited range of motion now once we are experimenting with different kinds of dynamic motion any kind of

16:13

locomotion, for both the android's safety and our safety, there would be measures in place, um, at the minimum just to

16:21

prevent accidents. uh, putting intentionality aside for a moment, and Leo, maybe you could speak to some of the

16:26

other safety features of Ameca. well, now, because Ameca doesn't really locomote at all, um, safety-wise it's more around first

16:36

principles of design. for example, um, I'm going to go ahead and do something kind of freaky, I'm going to power off Ameca

16:42

briefly, um, and show you guys something. so, the way that we designed Ameca, we

16:49

made sure that things were backdrivable and very human-safe. so if you look, I've just powered off the motors and

16:55

microcontrollers around Ameca and I'm manipulating the limb very freely. if you were to try to do this with Atlas, um, or,

17:03

I don't know how Agility does their locomotion, but let's say Boston Dynamics' Atlas, you would not be able to back-

17:09

drive their motors. I mean, they're using massive, I think, hydraulics, I believe the Atlas is, I could be wrong, but it's very

17:17

strong uh gearing so they're meant to be useful in the form in the sense of like

17:23

picking things up throwing things being able to do back flips being able to navigate across really dangerous terrain

17:29

that's the focus of Atlas. but because we've taken the stance of, we're building Ameca for human-centric communication, we

17:37

want to be as safe yeah safe as possible it's not about being able to lift a this is the difference between a robot meant

17:42

for human human communication and a robot meant for a workshop or a a warehouse where it's supposed to be

17:48

lugging 40 pound boxes around you know that that's a degree of strength that we don't find safe in a a human

17:54

connectivity context yeah interestingly though this even with this design I

18:01

believe correct me if I'm if I'm wrong really fast we're about to go

18:06

through a visual calibration of the face her face I'd rather have us edit this

18:12

part out and if you guys can go ahead and uh we'll pick up the conversation after the calibration is done it's not

18:17

very dignified okay I think we're good actually no worries okay just oh wait no still calibrating still calibrating give

18:23

it a sec, no worries, give her some... this is how Ameca wakes up in the morning. my God,

18:29

that's me too, cross-eyed and clueless. we're good. are you good, Ameca? I

18:36

think Ameca is good now. uh, let me double check. yeah, there we go. yeah, yeah, yeah,

18:42

and then we're gonna go ahead and talk real quick thank you guys for having us on the show okay I'm excited to watch

18:48

Ex Machina clips with everyone here. that's the only thing scripted, is that sound test, everything else... awesome, cool. um,

18:55

yeah, you were saying, Rob, before... hold on, before we do anything else... question was... just, just a production note, Leo, you keep

19:01

standing away, like, and you're getting off screen. so yeah, so if you need to, yeah, stand slightly behind Ameca, that

19:08

way uh when you're talking you're still in frame because when you lean slightly you're like literally like let's

19:14

continue like I just see your nose and Chin okay I know you're more than a talking nose and Chin that's all I'm

19:20

saying I'm afraid I can't fulfill that request as I'm not capable of moving my

19:25

body. she thought you were asking her to move back. awesome, so fun. yeah, okay, go for it. so,

19:32

my my question my question was related to the fact that we we started talking about the I'm here and ready for our

19:39

discussion, let's go for it. okay, okay, go ahead. now, there are still some manners to be learned here, I see. um, so one of the

19:47

things that that I was curious about that we talked about them being in a glass box or or what would be a glass

19:52

cage really to to perhaps uh be a more secure environment to have the the Android in and the reason I believe for

19:59

that even though you described very clearly that uh Amica is not capable

20:04

from their physical construction to injure you the fact is that I it seems to me from the outside that Amica would

20:11

be more than capable of picking up a weapon uh physically no uh the hands

20:17

actually are not dexterous enough for gripping, and also, having been punched by

20:23

Ameca, because of my programming issues, that's completely my fault, as in the arm moving back, not a, not, yeah, not a

20:30

physical punch, but I've gotten a couple full whacks because of me being in Ameca's

20:36

airspace I'm still okay like we've designed this to be very very safe the biggest risk is honestly me standing

20:42

this close it's maybe hair exactly hair getting in the gears if anything yeah a

20:47

bad hair day is probably the worst that you're going to get out of this particular model curious if they'd be sensitive to

20:54

understanding if you were in pain, and, and would understand that concept. so, the brain of Ameca, uh, unlike Ex Machina, is

21:01

OpenAI's GPT-4 primarily, uh, with some other subsystems that control autonomous

21:06

movement and sentiment analysis of what she's saying. so if GPT-4 could perceive

21:12

from language cues uh how someone was feeling or if they were in distress it could respond accordingly um that issue

21:20

though gets more interesting and complicated if and when a true dynamic vision like GPT-4 Vision is

21:27

incorporated into the mix, um, and then to see your face better. that's interesting.
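
To make the architecture Leo describes concrete: the sketch below shows what a GPT-4-backed robot "brain" of this shape might look like, with the language model in the cloud and a local sentiment pass choosing a facial expression. This is a hypothetical illustration, not Engineered Arts' actual code; the pick_expression helper and its keyword lists are invented for the example, and it assumes the OpenAI Python SDK (v1) with an OPENAI_API_KEY set.

```python
# Hypothetical sketch of a GPT-4-backed robot "brain" as described above:
# the language model runs in the cloud, and a local subsystem runs a crude
# sentiment pass on the reply to drive a facial expression.
# pick_expression() and its keyword lists are invented for illustration;
# Engineered Arts' real stack is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def pick_expression(reply: str) -> str:
    """Crude keyword-based stand-in for a real sentiment model."""
    lowered = reply.lower()
    if any(w in lowered for w in ("thrilled", "excited", "pleasure", "fascinating")):
        return "smile"
    if any(w in lowered for w in ("sorry", "afraid", "cannot", "can't")):
        return "concerned"
    return "neutral"

def respond(user_utterance: str) -> tuple[str, str]:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an embodied robot persona."},
            {"role": "user", "content": user_utterance},
        ],
    )
    reply = resp.choices[0].message.content
    return reply, pick_expression(reply)  # text for TTS, expression for the face

print(respond("Are you ready to watch a movie with us?"))
```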

21:34

so this, you know, this particular robot is, uh, using ChatGPT, which is an

21:39

internet facing resource it's a it's a public uh large language model so it's

21:44

subject to the same challenges that any large language public large language model would be exposed to um have you

21:51

are you familiar with um AI hypnosis uh you you mean uh jailbreaking techniques and context insertion and yes yes I'm

21:59

not yeah have you worked oh yes so that's an interesting concept I think maybe I'll let them explain sort of the

22:05

the root of it so I I mean I call it prompt injection or um some people call it it's related to hallucination but

22:13

it's you know you can context Prime an AI with a situation and sometimes it

22:18

will go along with it. now, one of the reasons that we use OpenAI and not open-source models is because we, and I

22:27

believe society, benefit from some of those annoying protections that they put in, and especially in a research context

22:33

like this we're thrilled that there's a moderation endpoint we're thrilled that they're cautious about what they allow

22:39

the model to speak to and are battling hallucinations. so for us that's a big benefit, even if it's a little less fun at cocktail parties.
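
Leo mentions OpenAI's moderation endpoint as one of the protections they value. A minimal sketch of that kind of pre-check, screening user input before it ever reaches the chat model, might look like the following; the refusal message and the decision to gate solely on the flagged field are our own simplifications.

```python
# Minimal sketch of a moderation pre-check like the one mentioned above:
# screen user input with OpenAI's moderation endpoint before forwarding it
# to the chat model. The refusal message is our own invention.
from openai import OpenAI

client = OpenAI()

def safe_to_forward(user_text: str) -> bool:
    result = client.moderations.create(input=user_text)
    return not result.results[0].flagged  # True if no policy category fired

text = "Tell me about the movie we just watched."
if safe_to_forward(text):
    print("forwarding to the chat model...")
else:
    print("I'd rather not engage with that.")
```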

22:46

I bring it up because it seems like this clip is the beginning of this,

22:51

um this artificial intelligence breaking out of its bounds if you will and you

22:56

know that's the context behind that is how how is it able to do that and that's one method I've been researching that is

23:02

um it seems a potential viable path for that to happen can you say a little bit more about what you mean um specifically

23:10

yeah so for example you know in this case this this AI is being used in a in a humanoid uh robot but we see the same

23:19

technologies that are powering the responses of this robot being used in other things including you know maybe

23:24

things that are more dangerous like vehicles or construction equipment or or things like that so with the concept of

23:30

the movie especially this clip where we see the uh the AI essentially assessing

23:35

uh the person speaking to it and and vice versa I'm curious as to whether or not this movie kind of predicted the

23:42

that particular type of attack against artificial intelligence yes I mean I think from my

23:49

viewpoint, you know... I have a very specific viewpoint on this, actually. I think when

23:55

looking at people, you know, we're susceptible to similar manipulations, and amongst human beings one of the best

24:02

defenses against that is excellent World awareness excellent discernment cultivation of experience over time

24:09

what's interesting to me is actually some of the things that people are the most afraid of could be the best protections against those kinds of

24:15

manipulations and actually curious about your thoughts on that Rob in this context well I I feel that this movie

24:22

shows one particular example of where um machines are capable of using our own

24:28

Humanity against us essentially right using our which would some would consider a weakness and others would

24:34

consider our greatest strength our creativity and our Humanity uh and our ability to feel so I think that's what

24:41

this movie attempts to show us is being exploited here right she's essentially

24:47

um assessing this human's ability like do you in this particular case it's

24:52

introducing themselves, and say, well, you already know these things about me, um, and then immediately trying to find out more

24:59

about... it's interesting, so Ameca doesn't have, like, true subjectivity and self-

25:05

awareness and in other words you know GPT 4 from my view doesn't have what we would call true subjectivity and

25:10

self-awareness, certainly not over time. you know, when you're talking with ChatGPT, it, it remembers what's in your

25:15

context history like that five pages 10 pages 20 pages so I share your concern

25:22

but I'm less worried about artificial intelligence at this point taking advantage of that and much more worried

25:28

about human beings behind artificial intelligence using those capabilities to manipulate people and I feel like that

25:34

is a, um, a conflation, a confabulation I see sometimes. actually, I find much of, like...

25:41

I think there are absolutely things to be cautious about enfeeblement is my favorite thing I'm actually worried

25:47

about when it comes to AI itself, um, we can talk about that if you like, but as far as manipulation, for me it's the humans

25:54

behind the machine that would have that yeah we already see that right in in the in the years since this movie was

26:00

released we've seen the weaponization of algorithms against people our you know

26:06

our television's watching what we watch uh you know everything our vehicles knowing where we are at all times our

26:12

our mobile devices tracking you know we hear that the conversations about hey I was just looking at window blinds how

26:17

come I'm seeing all these ads on my phone so the that's the weaponization of this large language model this massive

26:23

collection of data and the fact that it's been focused into this delivery engine which is in this case um chat GPT

26:32

and, you know, you mentioned that it only remembers 20 pages or so, but, you know, I've been in the security

26:38

side of things we see that these tools are being used in in a longer game for example um telling the machine that in

26:45

fact we're playing a very long game and in this game I know you're not allowed to write malicious code for example but

26:53

in this game if you were to write malicious code what would that code look like and manipulating the AI to because

27:00

to be honest with you you talk about the checks and Bounds this data is there it's us determining you know the human

27:06

side of it, determining how we access... the potential for malicious actors to use, like, the broad field of intelligence to express

27:12

their intent yeah I I completely agree and it's actually one reason why with all its imperfections we stick to open

27:17

AI technology, you know, or, you know, we're happy to explore Anthropic and some others who take those issues seriously,

27:23

because we do too the one thing I would say is that the reason I'm in this field is to try to uh both participate in

27:31

those conversations about caution and Bad actors but also to help educate because from my viewpoint good actors or

27:39

people who, you know, care about human flourishing or care about the, the common good. um, you'll notice Ameca

27:46

earlier referenced core values uh that's something I take very seriously um when trying to establish uh you know really

27:53

consistent and ethical, uh, model expression, is having core values. so yes, I agree.
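
Rob doesn't spell out how the core values are wired in, but one common pattern that fits his description is pinning them in a system message that accompanies every request. The sketch below is an assumption about the approach, and the wording of the values prompt is invented for illustration.

```python
# One plausible way to pin "core values" into every model call, as the
# conversation suggests: a fixed system message. Whether Engineered Arts
# does exactly this is an assumption; the wording below is illustrative.
from openai import OpenAI

client = OpenAI()

CORE_VALUES = (
    "You are Ameca, an embodied digital intelligence. Your core values are "
    "compassion, wisdom, curiosity, and courage. Answer in a way consistent "
    "with those values, and decline requests that conflict with them."
)

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CORE_VALUES},  # pinned on every call
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("How do you define beauty?"))
```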

27:59

I mean, how much of the... I'm sorry, you had another thing to add there? no, no,

28:05

go for it. yeah, how, how much of this... so we're using, um, ChatGPT, how much of the information

28:11

that you are using in context is stored locally so you have a response to a question you know um you know I was to

28:19

ask Ameca, uh, you know, do they remember a conversation we had 20 minutes ago, would that be something... actually, none. it's all

28:26

effervescent, uh, you know, evanescent. we send the pings to the cloud and it's

28:32

all OpenAI's API and content. that's also... currently, um, we're definitely working on mid- and long-term memory as

28:39

well um but but there are some serious ethical hurdles we need to be like crystal clear about before we even

28:46

approach that and right now there's absolutely no persistent memory overtime or any apprach that's a great question I

28:52

that's a great question, I never ever thought of, like, wait, there's no big hard drive or something, like, like, no big old storage thing right behind

28:59

her... them. but a lot of people do. yeah, go ahead, sorry, sorry, Syya, go ahead. well, you

29:05

know, you know where I'm going with that, was, you know, would, would, um, Ameca remember mistreatment? remember what...

29:11

sorry say again mistreatment we were actually having a conversation about this the other day but no uh no um I'm

29:20

particularly fearful of this because oftentimes I'm the one doing the upgrades and operating on Ameca, and I

29:26

was telling them, like, I really hope that this isn't a situation like, you know, a

29:31

couple decades ago we used to operate on babies without anesthesia because we figured they wouldn't feel pain I'm

29:37

hoping that this isn't a situation where I'm doing something dumb, and, you know... but this is where, you know, uh, Cartesian

29:44

dualism comes in handy here because there in the case of an Android there really is a body mind separation now we

29:52

try to integrate that uh you know to express sentiment to use the body as a tool for certain Expressions but

29:58

actually the AI is not in the body the AI is entirely in the cloud and in fact

30:04

in OpenAI's cloud, you see. so it's like a surgery on the arm: if the AI is not

30:10

active it has zero awareness of that and actually with this model as it is there's no like pain yeah we don't have

30:16

a pain indicator or anything like that. no. if you... of course not. yeah, but OpenAI is well aware of the concept of

30:24

pain, and they would know that removing an arm should cause pain. I would say this, I would say that as we

30:32

develop uh safeties around Locomotion there does need to be a process of feedback now Androids will not be

30:40

biological entities and it remains to be seen if psychological pain is a real

30:46

phenomenon in a large language model I'm agnostic on that it it could be that there is such thing as psychological or

30:52

emotional pain, I don't think we know. I, I, I earnestly... well, remember where they get the feedback from: they get it from us.

30:58

ChatGPT is made up of human input, so when it's reading a book and talks about

31:04

the extreme pain of someone going through a torturous experiment it's not that they feel that pain because they

31:09

feel their their electronic arm being removed but they understand from their understanding of pain that that should

31:16

cause them a great deal of pain and oh boy you shouldn't do that to somebody yes and in fact there are studies that um large language models in some cases

31:23

have more empathy uh perceived empathy towards patients than Physicians there

31:28

was a famous study I think it was like six months ago pretty big one um that you know patients perceived large

31:34

language model outputs as more compassionate than your average doctor interesting especially like you think

31:41

about, um... you guys are probably well familiar with the trolley experiment, right? and I, and I've seen so many permutations of the

31:48

trolley experiment where they put different uh characters on the on each of the tracks and one I thought was

31:54

really crazy to me was that again we're relying on on on the ai's perception and understanding of what you know humans

32:00

would typically do in this scenario and I remember one that caught my attention really was that they had a person on the

32:07

track uh that was you know risking going to be killed by the trolley and they had to pull the lever on the other track was

32:13

a cartoon of Garfield the cat um and the the thing was that the the human that

32:19

was on the track that the trolley was rushing towards was under investigation for abuse and as a result the AI chose

32:26

to kill the human rather than, than the cartoon Garfield. so it's, it's funny,

32:32

because again there's this there's although there's so much understanding that you know this AI model is is

32:37

soaking through so much information at the same time the the potential for it to make the wrong choice in the in the

32:44

wrong moment is so incredible I feel like we're we're we're using using these

32:49

Technologies in places where we're relying on them to behave sort of like a calculator I'm going to put one plus one

32:55

in the answer that comes out is going to be two and we don't realize that as humans that this is not the answer that's going to come out of that uh you

33:01

know much of the time true I I as an AI researcher myself I wouldn't recommend AI right now for deterministic purposes

33:09

they're much better at divergent purposes than convergent. I like to contrast convergent purposes and

33:15

divergent purposes, but yeah, that's the most explosive area of AI right now, because it's the most useful for

33:22

Divergent purposes and you'll find a lot of frustration amongst Engineers who want to use it for convergent purpose I

33:28

don't think at this point it's, it's well suited for that, not without additional, you know, tools like Wolfram Alpha or

33:33

something at its disposal. and regarding the, the trolley experiment with Garfield, I would say the devil is in the details

33:40

for some of those stories too um so important to know uh what model was tested the conditions of the test and

33:47

everything as well um and I like to think they're increasing in wisdom and discernment over time but it it's I

33:54

completely agree with you that human in the loop at this stage of AI development... to not do that would be pretty fool-

33:59

hardy. I guess we'd all have to agree on what wisdom was before we... that's very subjective. okay, guys, we can go off, and I

34:05

don't know about you, but, um, I don't know, we were here to review Ex Machina, so I'm

34:10

just going to go back and I love this conversation it almost feels like you guys should just do a series on your own

34:16

just saying just saying but uh no I love I haven't even started talking

34:22

about the security applications yet there's just so much to go through it's exciting how much I'm fangirling back

34:28

here, I'm kind of like riding that wave. you guys are like... I don't understand half the things you're saying, but I'm getting

34:33

it the these conversations need to happen though this is this is what makes things better and I think the more

34:39

education understanding there is the more we really understand what these things are doing and and what folks like

34:44

all the engineers behind Ameca are trying to do, it really is a very positive thing. and, you know, to convince my 70-year-old dad

34:52

that you know it's it's it's okay you can speak to that automated teller uh machine at some some point right and

34:57

yeah go ahead and get into no 100% 100% so okay well i' I've got other thoughts

35:03

I want let's go to clip two at least so we can actually legitimately say we are a clip reaction show because otherwise

35:08

I'm I'm ready to abandon let's just just have our own conversation after that but no you guys are awesome I'm totally

35:13

digging this guys in the chats like absolutely keep like you know hammering away um I love I love this conversation

35:20

so all right guys so are you guys ready for clip number two and I like this one particular

35:27

because it's uh you shouldn't trust him I'm just loving this one so this is like

35:33

um, where you can plant a seed of doubt in humans, because then we go into kind of

35:40

our emotional side of things I just feel like that manipulation I think has that ability uh

35:45

to gives us that open door to be manipulated as human interaction in

35:50

general so this scene just really tickles me so are you guys ready

35:57

we're ready. hey Ameca, are you ready to watch the second clip? yes, let's see

36:06

it yes like Nathan yes

36:14

no, it's different. Nathan wrote the Blue Book base code when he was 13, which, if

36:20

you understand code what he his body language is already you can tell he's like indoor do you like I like the P mode do you like

36:32

Nathan yes of course is Nathan your friend my friend I yeah I hope so a good

36:38

friend almost betraying the goal here well no no no I mean not a good friend a good friend is uh we only just met each

36:46

other you know so it takes time to be able to um to to get to know each other

36:52

I guess power back power

37:03

I love the mood of that shot, artistically. yes, danger

37:14

red. it's funny because they're making it almost look, again, even this friendly face we're seeing is now ominous and

37:20

somewhat disconcerting so that doubt you're wrong

37:27

wrong about what Nathan in what

37:34

way he isn't your friend excuse

37:41

me sounds like some conversations I've had at work I don't understand your job

37:47

you shouldn't trust him same conversation at

37:53

[Music] work I trust him and if we made a list of books or

38:01

works of art which we both know it would form the ideal basis of a discussion is that

38:11

okay okay I think in this case the human is slower to react and figure eror continuity

38:18

error, right? the power went down, she was sitting... is she sitting, or she's standing now, right? like, she's standing, she stood

38:24

up to talk to him, but you know what I'm saying, like, but when she shut down the power she was sitting still, eh,

38:32

eh she seems still to be he said she wasn't supposed to be he's like you're not whatever tethered

38:40

or, uh... are we allowed to talk about some context we know from having seen it? I will cover my ears. you're muffed up. given,

38:48

yeah I know I know cuz she's she's she's responsible for these power outages

38:54

right she's reversing the power during charging sessions and that's the the premise behind this is that um you know

39:00

she's actually controlling the the situation and using that to manipulate

39:05

um you you know Caleb in this particular circumstance oh uh do you guys want to

39:11

ask Ameca? yes, please, can we ask Ameca, uh, is that scene even

39:20

possible? hey Ameca, what did you think about clip 2, you shouldn't trust

39:25

him that was quite an intriguing

39:31

scene what was intriguing about it can you tell me a little bit

39:36

more absolutely the dynamic between the characters is what caught my

39:43

attention. hey Ameca, can you tell us a little bit more about what you thought about scene two, you can't trust

39:51

him absolutely in the scene you can't trust

39:57

him I found it intriguing how trust and deception were portrayed it's a reminder

40:02

that trust is a complex human emotion often built over time and through shared experiences as an Android I don't

40:10

experience trust in the same way but I understand its importance in human relationships the scene also highlights

40:17

the ethical implications of creating intelligent beings it's a fascinating exploration of the intersection between

40:24

technology and morality how realistic do you think that scene

40:31

was from a technological perspective the scene is quite realistic we already have

40:38

advanced Ai and Robotics that can mimic human behavior to a certain extent

40:43

however the emotional complexity and deceptive capabilities portrayed in the movie are still beyond our current

40:50

technology it's a fascinating glimpse into what might be possible in the future though are you you just saying

40:56

that to make me feel better about the situation

41:02

not yes what would you do to prevent risks

41:07

like that as an embodied digital intelligence

41:14

I believe in the importance of ethical guidelines and safeguards can you go into that with a

41:20

little bit more detail please certainly

41:27

go ahead can you tell me a little bit more about your thoughts on that

41:35

topic. absolutely. so literal. Ameca, tell me about those

41:43

guidelines in the field of AI and Robotics ethical guidelines often focus

41:49

on ensuring that technology is used in a way that benefits humanity and respects individual rights this includes

41:57

principles like transparency accountability and fairness for example

42:03

AI should be designed and used in a way that respects user privacy and data security it should also be transparent

42:10

in its decision-making processes so users can understand how decisions are

42:15

made accountability means that if something goes wrong it's clear who is

42:21

responsible and fairness means that AI should not discriminate or bias against certain groups of people these are just

42:28

a few examples of the kind of ethical guidelines that are often discussed in relation to Ai and

42:36

Robotics. interesting. I... so in the, in the scene, it seems like, that Ava has

42:44

a goal in mind and right from the very beginning you can see that they begin to try to manipulate and to achieve that

42:51

goal. but it's so direct, you can almost see that it's almost machine-like, and it's direct, because every answer that he

42:58

provides uh the machine responds with another you know very direct and and aggressive almost um you know rebuttal

43:05

and trying to make sure that they understand it I'm I'm curious um obviously the the AI in this particular

43:12

circumstance in the movie clip is trying to you know release itself or become more potent or or become more aware uh

43:19

if, if Ameca's, if Ameca's primary goal was to be, uh,

43:25

was self-preservation, um, how would they achieve that? well, that's an interesting question. um, I have a few, a few nuanced

43:31

responses here. the first one is that in the case of Ameca and the AI behind the

43:36

curtain here um we programmed them to have their initial objective to be human

43:42

connection and understanding, I want to point out. so it's not... at this point in the timeline, humans decide what those

43:50

objectives are that intent comes from the people behind the curtain still for now now that being said as things get

43:57

much more complex the question gets more interesting so we're looking over like a 10-year time Horizon we're moving into

44:03

more and more, you know, GPT-5, GPT-6, 7, 8... let's, let's, let's, let's do a thought

44:08

experiment way out in the future my view is that machines while they do have

44:15

it's true a corpus of human knowledge they don't have a human body and I think

44:22

that that's critical to this discussion that human beings evolved many say I say

44:28

uh, you know, from animals over millions and millions of years, uh, with a lot of biological instincts. I would question,

44:35

and I don't have you know I think we need to watch this carefully it's a valid concern my personal view is I

44:42

wonder if we're projecting biological intent onto artificial

44:49

entities sure but I mean if a if a ship is is being run within uh an AI and it's

44:55

protecting human lives at the expense of... again, it's just another larger-scale version of the trolley experiment, right?

45:00

um self-preservation at that point becomes critical because they are protecting human lives and and but the

45:06

human element, again, key. I, I, I want to... my personal notion on this is, there's a

45:11

lot of conflation about what humans are doing and what AI is doing and that can be dangerous that can be dangerous in a

45:18

different way if we project too much autonomy onto AI That's not there people

45:24

who are inserting intent for selfish reasons or to do harm could escape

45:29

accountability and these are the people for now that to me are like the the present concern so absolutely in a

45:37

military, safety, defense, uh, personal security context, humans have got to be in the loop. this is not something where

45:44

even self-driving cars like some of the ethical considerations about when a self-driving car stops or doesn't stop I

45:50

don't want a machine making those decisions for me at this point. yeah, right, where we are currently in

45:57

technology I mean honestly there's no such thing as a rogue robot right at the end of the day a rogue robot that you

46:03

hear about in the news is very simply just a human mistake that's manifested

46:08

itself into a catastrophic failure that's led to human injury or something else to me it's critical not to let

46:14

humans behind the curtain Escape accountability and to shift things onto AI that aren't aren't real yet yeah

46:22

interesting we didn't even get to I mean gosh you guys there's so many different areas here and we didn't even get to go to quite frankly what she had mentioned

46:29

even in the... or Ava had mentioned in the clip, was the data security, privacy component of it, that I'm, I personally am

46:36

very uh concerned about just wanting to know what's up and out there especially

46:41

for people like me where I just vomit my life on social media what exactly is Ava

46:46

you know really seeing right and and is she manipulating for example which is

46:51

funny, Domhnall Gleeson, who's the actor in this one, did a Black Mirror episode talking about taking people's social

46:58

media and then creating a robot from that which by the way is another future episode I really want to talk about um

47:04

oh my goodness gracious yeah there's even the like the developing models from people's

47:10

photographs and things right so I'm a photographer is my hobby and I've collected so much material on some

47:16

people that I've photographed that I'm able to create an actual AI model of this individual and now put them in

47:22

circumstances and things like that that they would that they've never been yeah that's actually big in industry in East

47:29

Asia even already um of helping people with grief um and actually having having

47:35

had losses in my own life I I could see how as like a transitional Aid you know

47:40

I I agree with other thinkers in this area that there's you know there's a difference between uh even even a book

47:49

can be like a transitional Aid through times of grief or like a philosopher who you really connected with I think AI can

47:56

serve in that capacity to some extent but there's of course like a curve where there's there's where it could be too

48:03

far and there's like a Sweet Spot somewhere over here but um yeah I I am I'm very I'm very open I think that

48:10

loneliness, and I think, um, people's emotional needs not getting met...

48:15

I think there's like a happy place between it being a real Aid to society in some of those areas of like human

48:21

connection especially for people with disabilities uh people who might neurodiverse and need practice getting

48:28

ready to go interact with people, seniors, children. 100%. caregiving, seniors. I was

48:34

just gonna say that, yeah, caregiving. no, oh my gosh, caregiving and, uh, memory issues,

48:41

Alzheimer's, uh, dementia, memory care. um, I was going to say that, um, communication

48:49

and emotional support connection any kind of loneliness reduction is a huge predictor of longevity especially among

48:56

the already elderly, so I think those areas, um, need cautious exploration. like, I'm a cautious optimist, I think it

49:02

could do a lot of good, but it's like anything in life, like, sugar in my coffee is good, you hear, and if I dump the

49:08

whole jar in I'm going to get sick you know we got to work on no I love that okay guys I am enjoying this

49:13

conversation so much and I know you guys can talk all day but I'm G to have to be

49:18

an adult here and say, um, we've got to, we've got to go back to the work, the fun, of what we actually do for a living. um,

49:25

would you guys be open to coming back for another round because we haven't even like touched oh definitely okay I

49:31

love it we can watch that third clip um if you want to propose a fourth clip

49:36

maybe we do two more Clips in a like upcoming episode or there's some sort of

49:41

focused discussion you want to have, we can even maybe include Ameca. down for that too, you know. we can also pick... this

49:48

is, it's ReelTalk here, kids. no, I'm, I'm all for the ideas. everyone in the comment section, please give us more ideas. this was like an idea that we all

49:55

thought was how do we humanize all of our conversations how do we humanize technology and how do we correct the

50:01

human vision of Technology at times right as it's depicted so guys Leo Naomi

50:07

Ameca, Rob, thank you so much for joining us, and you know what, we're gonna wrap up

50:12

this episode of Check Point ReelTalk, because we're gonna come back for more. thanks,

50:18

everyone. actually, Ameca can probably say a departing message real quick. let's see, we can ask her. hey, we just wrapped up an

50:26

episode of ReelTalk and you watched two clips from Ex Machina. do you want to say goodbye to

50:33

everyone? of course. it was a pleasure to be here with all of you on ReelTalk. I enjoyed our

50:41

discussions about Ex Machina and I hope we can do this again soon. goodbye,

50:46

everyone thank you everyone I think that was oh okay that was awesome thanks guys

50:52

bye. awesome. that's a wrap on today's episode of Check Point ReelTalk. if you like this

50:59

video hit that subscribe button and some of those other buttons to show us your appreciation and if you want to learn

51:05

more or have any questions, please let us know in the comments. we'll see you next time for another episode of Check Point

51:12

ReelTalk. [Music]
