AI: Unraveling the Human Factors of Artificial Intelligence
Episode 17th May 2024 • The Human Odyssey™ • Sophic Synergistics, LLC
Duration: 00:48:15


Shownotes

Join us for the S3 premiere of The Human Odyssey™: A Human-Centered Podcast!

On this episode of The Human Odyssey™ join Rashod Moten, Human Factors Specialist, and Dr. Jennifer Fogarty, our Director of Applied Health and Performance, as they discuss the various ways in which Artificial Intelligence intersects with Human Factors and Applied Health & Human Performance.

This episode of The Human Odyssey™ was recorded on March 23rd, 2024.

Visit our website: https://sophicsynergistics.com/

Follow us on social media!

Facebook: https://www.facebook.com/SophicSynergistics/

Instagram: https://www.instagram.com/sophicsynergistics/

LinkedIn: https://www.instagram.com/sophicsynergistics/

Twitter: https://twitter.com/SophicS_LLC

Transcripts

Speaker:

Welcome to The Human Odyssey, the podcast

about Human-Centered Design.

2

:

The way humans learn, behave, and perform

is a science, and having a better

3

:

understanding of this can help improve

your business, your work, and your life.

4

:

This program is presented

5

:

by Sophic Synergistics,

the experts in Human-Centered Design.

6

:

So let's get started on today's

Human Odyssey.

7

:

Hello, and welcome to The Human Odyssey

Podcast.

8

:

My name is Rashod Moten.

9

:

I am one of Sophic’s Human Factors

Specialists.

10

:

I'm joined here today

by our guest, Jennifer Fogarty, Sophic’s

11

:

Director of Applied

Health and Human Performance.

12

:

Hello. Hi.

13

:

Thanks for having me.

And thanks for joining us.

14

:

Sorry I got hung up there for a second,

but I wanted to ask

15

:

just a bit more about your history

and your background.

16

:

Sure.

17

:

Yeah.

18

:

So, I started out,

19

:

PhD in Medical Physiology,

so I was studying cardiovascular disease.

20

:

Very passionate about Human Health

and Performance.

21

:

I'm an avid, avid exerciser and,

someone who believes that,

22

:

you know, we can actually do more

for our health through - through things

23

:

like exercise and eating well, like,

and how do we prove it to ourselves?

24

:

So I was really focusing on that

and actually had a model where we showed

25

:

that exercise indeed

grows collaterals in a heart

26

:

when you have a blockage,

like you naturally can build

27

:

em and I was fascinated by the process

behind that.

28

:

So I started doing some molecular work

in my postdoc to partner

29

:

with the functional work.

30

:

At that time, I had an opportunity

31

:

to be part of a mission

that had NASA science.

32

:

It was one of my committee members.

33

:

He really liked the way I operated

so he said, could you come run my lab

34

:

at Kennedy Space Center while we have our,

It was a rodent mission.

35

:

It was the last Space Life Science

mission.

36

:

And unfortunately, it was the Columbia
STS-107 mission that didn't return.

37

:

So in about a three or four month

span of working

38

:

at Kennedy Space Center,

I got to learn a lot about, aerospace.

39

:

My family has an aviation background,

so I was familiar with high performance

40

:

jets, people who do, extreme

things and acrobatic flight.

41

:

And I always thought I would end up,

like, studying that in some way

42

:

or incorporating it but didn't have a
trajectory there at the time.

43

:

Went through the Columbia accident

at Kennedy

44

:

and, was really just blown away

by the environment and the culture and,

45

:

you know, at that time, you know,

46

:

the ultimate sacrifice

these people and their families

47

:

had made to try to do things

that have never been done before.

48

:

And, came back to Houston.

49

:

I was in College Station at the time

and was looking for a position

50

:

outside of academia,

and one came up at a contractor

51

:

which was Wyle Life Sciences,

and sure enough,

52

:

they had a role for cardiovascular

discipline scientists.

53

:

So it kind of lined right up and,

I started working at Johnson Space Center.

54

:

Quickly

moved on to a civil servant position

55

:

because I had a unique set of skills,

having done clinically relevant research,

56

:

which was highly translatable

57

:

and the physicians, the flight surgeons

who support the astronauts

58

:

really needed some folks who understood

59

:

what was coming up

through the research pipelines and how to,

60

:

how to potentially translate

it really for their purposes.

61

:

So I was working the background of the evidence
base, what was going on in research,

62

:

how it might apply to the needs

that were happening in spaceflight,

63

:

because spaceflight

really puts a premium on prevention.

64

:

Right.

65

:

The best way to manage

66

:

medical care is not to have

a medical incident.

67

:

That’s fair.

68

:

Like, can we really avoid these things?

69

:

Can, can we know that we're not going

to have bad outcomes during a mission,

70

:

a variety of different durations?

71

:

At the time, it was shuttle focused,

but it was the earliest stages of ISS

72

:

where people were now living

instead of two weeks in space on shuttle,

73

:

they were living four

and five months on station,

74

:

and that was a very new experience

for all of the space programs,

75

:

except for shorter stays, that were done

previously.

76

:

There was Skylab, there was Mir,
and then NASA-Mir.

77

:

So there was a

78

:

little bit of an n of like five
and ten people who had experienced this.

79

:

But it was really the start of, of the

world, of the International Space Station.

80

:

And that was a remarkable

time to be around in science and

81

:

and with the flight surgeons

supporting them.

82

:

So kind of wove

my way through different NASA jobs.

83

:

It was similar, like,

I'm a utility player there.

84

:

I love solving problems

so if there was an opportunity

85

:

to have a role where I was involved

in making a difference in operations

86

:

while I was helping to guide

what research needed to happen

87

:

that was really kind of the best combo

for me.

88

:

But ultimately,

toward the end of my career with NASA,

89

:

I was the Chief, Chief Scientist

of the NASA Human Research Program,

90

:

which when you start getting into those

roles, there's a lot less fun.

91

:

I can imagine.

92

:

But as one of the Russian,

I was interviewed in Moscow

93

:

and we had done one of the isolation

campaigns in their, their NEK Facility.

94

:

They came up and they said, you know,

why don't I smile a lot or something?

95

:

And, and I said, well,

I am a serious person.

96

:

And usually I'm listening and thinking,

97

:

so I don't think about

what my face is doing,

98

:

you know, when I'm on camera

or something along those lines.

99

:

And he asked me,

100

:

through an interpreter, is it

because you think you're the big boss?

101

:

And, I was like, well, actually,

I am the big boss.

102

:

I don't have to think it.

103

:

I'm in charge of a pretty big program,

and I have to be responsive

104

:

to the taxpayers and Congress and,

you know, NASA Headquarters.

105

:

I said, so, yes, I'm

a very serious person.

106

:

I can laugh about it

now, but at the time I was very like,

107

:

what do you mean, the big boss?

108

:

What do you think

I’m the big boss? I got the title.

109

:

It was a lot of

110

:

responsibility,

but it was also very, gratifying.

111

:

Right? Just in a different way.

112

:

So after a couple of years of that,

I decided to step away from government

113

:

work directly, being a civil servant,

and go into industry

114

:

and that's when I joined

115

:

Sophic as the Applied Director

or the Director of Applied Health

116

:

and Performance.

117

:

You're not the only one who doesn’t

remember my title.

118

:

So it's, it's just an opportunity

119

:

to work with a variety of spaceflight

providers, people who do medical

120

:

hardware,

people who are going to do extreme

121

:

environments, other than spaceflight,

we can get involved with.

122

:

So it was applying my skills

to problems again

123

:

and being part of building solutions

and seeing them applied.

124

:

So it's an exciting

125

:

time to be in aerospace, obviously,

you know,

126

:

with Artemis, the commercialization
of low-Earth orbit, and potentially even,

127

:

you know, lunar missions and then Mars

missions, there's a lot going on.

128

:

There's a lot of companies

that are starting out, people

129

:

who want to engage and really need help

with Human-Centered Design

130

:

and as, as we talk about more

in the government/industry side,

131

:

Human System Integration,

132

:

and then the concept of keeping people

healthy before they go into space.

133

:

And while they're in

and on those missions, which

134

:

right

now, descriptively, are incredibly varied.

135

:

Yeah, I imagine.

136

:

So, yeah,

the variables are almost limitless.

137

:

Thank you for having me.

138

:

No, no Love the conversations.

139

:

Oh, same, same and honestly, thank you.

140

:

Your background.

141

:

I, I didn't want to provide,

you know, just give an intro,

142

:

brief intro because I knew I wouldn't

do it justice so I really appreciate that.

143

:

Now, for

144

:

today's topic, we’re going to discuss

artificial intelligence.

145

:

You know, it's a hot topic today.

146

:

It's just in, in society, you have,

of course, everyone under the sun

147

:

speaking about positives

and negatives, fears

148

:

you know, and even, you know,

149

:

for optimists, you know,

they're thinking about where it could be.

150

:

So today, just want to primarily

just focus on artificial intelligence

151

:

with regard to your background itself.

152

:

But before we do that, I do want to ask

you mentioned College Station.

153

:

You wouldn't happen to be an alum of,

154

:

[Texas] A&M.

155

:

Well well this kind of.

156

:

The, the reason I hesitate

is it's an interesting story.

157

:

So when I joined the College of Medicine,

it was the Texas A&M University

158

:

College of Medicine.

159

:

While I was there,

160

:

the College of Medicine and other schools

associated with the Texas A&M system

161

:

kind of pulled out and became the Texas

162

:

A&M System Health Science Center.

163

:

That existed,

I think, on the order of a decade.

164

:

So my degree actually talks about coming

from the Texas A&M Health Science Center,

165

:

and my, my class in particular,

because of the date

166

:

we started,

typically we would have gotten Aggie

167

:

rings like that

was the model even for graduate students.

168

:

I was, I'm from New Jersey, so.

169

:

It was quite a culture shock

and I didn't understand the whole thing.

170

:

But, nevertheless, there were people
in my class who were very disappointed

171

:

that when that all that shift happened,

there was a hot debate

172

:

about whether the graduates

would actually get Aggie rings.

173

:

Yeah.

174

:

And some people were,

175

:

you know, obviously sad, very sentimental,
and really went down that path.

176

:

I didn't really understand it.

177

:

So I was kind of in over my head.

178

:

But yeah,

I mean, I've had this strong association

179

:

with A&M and the College of Medicine

in particular,

180

:

and I did a lot of work at the Large

Animal Clinic

181

:

at the Veterinary School,

which I tell you is just stunning.

182

:

I mean, both the capabilities are amazing.

183

:

The amount of funding they have, the work

that they do, world class.

184

:

but yeah,

185

:

that's the stuff I experienced

in the, experiences I was able to gain

186

:

because of the remarkable research

they did, really sets you up

187

:

when you leave to be well-versed

both breadth and depth.

188

:

The opportunities are kind of limitless

there

189

:

if you're willing to work 24 hours

a day.

190

:

As a grad student,

sometimes that is required. Yes.

191

:

That’s my, that's my A&M story.

192

:

I try to be very careful

because I'm like, technically,

193

:

if you saw my degree,

it doesn't say those words, but yeah.

194

:

Just wondering,

you mentioned College Station, and A&M

195

:

has a huge presence here in Houston,

specifically in the health care field.

196

:

So just wanted to Yes for sure.

197

:

Yeah, very strong. All right.

198

:

Well just to get back again.

199

:

Thank you again, but to get back to

of course AI, before we,

200

:

you know,

dive deep into a conversation about it

201

:

I do want to ask, how would you define

artificial intelligence?

202

:

Just based on your understanding

of it. Sure.

203

:

Which, which as someone with a degree

in Medical Physiology, I’m not,

204

:

not the highest qualified person

to comment on this,

205

:

but as someone you know aware of it,

you know,

206

:

I might actually answer your question
with a question.

207

:

So, so I think I understand, the,

208

:

the use of the terminology,

“artificial intelligence,”

209

:

and it's usually coupled in my

world with machine learning.

210

:

I'm a little

211

:

more accustomed to understanding

in a very tangible aspect

212

:

of machine learning,

and the development of algorithms

213

:

that can go into massive data sets

214

:

and kind of evaluate patterns.

215

:

Right.

Particularly in medical data. Right.

216

:

And the machine can learn things now

217

:

to transcend that.

218

:

You're like, at

what point do we go from algorithms

219

:

that can be used to interrogate data,

find patterns,

220

:

and then check against reality,

which it says, are these patterns real?

221

:

You know, and are they meaningful?

That was the other part.

222

:

Like in the medical domain, you're like,

doctors would never do

223

:

it, should not do tests that do not have

a positive predictive value.

224

:

Meaning when you get the answer,

you know what to do with the answer.

225

:

Whether it's a zero,

you didn't have a problem,

226

:

or a one, you have the problem

that test has meaningful interpretation.

227

:

Yeah. If you don't understand

where you're going with it, don't do it.
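
[To make the positive predictive value idea concrete, here is a minimal Python sketch. The counts are invented purely for illustration; none of these numbers come from the episode.]

```python
# Positive predictive value (PPV): of all tests that came back positive,
# what fraction were truly positive? Counts below are made up for illustration.

def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(true_positives=90, false_positives=60)
print(f"PPV = {ppv:.2f}")  # 0.60 -> 40% of positive results would not be actionable
```

[A test with a low PPV returns many positives that do not change the decision, which is exactly the "don't do the test if you don't know what to do with the answer" point.]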

228

:

And that's where AI, for us sits right now

in this gray zone of

229

:

it does not necessarily confidently

deliver a positive predictive value.

230

:

It delivers new insights

because it can go,

231

:

these algorithms can go so much more

broadly than the human mind can right.

232

:

Assimilate data that comes from a variety

of sources and the abundance of data

233

:

that's available for,

for a variety of different environments.

234

:

But for me, it's really the concept
of artificial intelligence is,

235

:

it transcends mathematical equations

that are algorithms,

236

:

to it itself starts to build new

algorithms, right, based on the patterns

237

:

it is or isn't seeing,

or it is actually determining

238

:

whether it's got the tools.

239

:

Yeah. Yeah.

240

:

Right and to be honest with you,

it's like

241

:

with the most current manifestation

I think people would be,

242

:

experienced with is ChatGPT,

you know, which they call it, and recently

243

:

I very much appreciated the terminology,

like artificial intelligence in the wild.

244

:

Yes. This capability has been unleashed

and people are playing with it

245

:

and when you play with it,

you train it, right?

246

:

Not different than children

or dogs, or cats.

247

:

Which obviously has a variety

248

:

of different outcomes depending on who's

doing the training and what you've got.

249

:

Yeah.

250

:

But watching it deliver unique insights

based on the direction it's been given,

251

:

that kind of transcend

any one person's capabilities.

252

:

So the way I think of it,

it is almost like the personification

253

:

of the diversity of people

who make up the contributors.

254

:

Right?

255

:

But instead of trying to figure out can I,

can I understand what you are saying

256

:

and use your intellect

in your experience base?

257

:

It is pulling the salient points from you

in some way,

258

:

putting it into the pot of options,

reconfiguring them,

259

:

kind of doing like a lot of what I know

is probabilistic modeling.

260

:

Like, let's try this permutation. Yeah.

261

:

And comes back a million times

later and says,

262

:

when we've looked across all the patterns,

this is this new insight I can give you.

263

:

Yeah.

264

:

and so for me, I like,

you know, the idea of it being sentient.

265

:

I don't think it's true.

266

:

It's not thinking, but it's using math

267

:

with respect to a variety

of different data sources and the idea of

268

:

recognizing, having pattern recognition

or recognizing lack of pattern,

269

:

to then reconfigure itself

to do another query.

270

:

Yeah.

271

:

So it's not quite,

it's beyond the algorithm level

272

:

where someone's writing the code

to make it do a task.

273

:

It itself

creates its own tasks and I'm like, well,

274

:

that is pretty powerful

and I understand why there's all sorts

275

:

of emotions

and concerns wrapped up in that.

276

:

But I do think of it, I'm a little bit

of a centrist in a lot of things,

277

:

which is it's a tool,

and we get to decide how we use that tool.

278

:

And if we want to let it run free

and tell us to do things,

279

:

that is a choice that is made versus

I'm going to use it to help me understand

280

:

things, and I can use it as a tool

that can give me information

281

:

that otherwise

I would never be able to perceive.

282

:

so I'm a knowledge is power

kind of person, so I don't fear it.

283

:

But I do understand

284

:

people's concerns about other people's

choices, about the utilization of it.

285

:

Yeah.

286

:

No, that makes perfect sense

287

:

of course, you know

you always hear the analogy to, say, Skynet.

288

:

You know? Yeah.

289

:

That's generally where most people’s

fears come from.

290

:

Yeah, the entertainment

industry you know, when art,

291

:

you know,

292

:

portrays

a reality and then we start to live it,

293

:

you know, that has already defined

where it could go.

294

:

Yeah.

295

:

Right, and so it would be good to have

some more positive representations.

296

:

Yes, yes.

297

:

Which I think are out there,

but maybe not as interesting in the world

298

:

of social media and different modalities,

they're not as clickbait susceptible.

299

:

Right?

300

:

That's something I've seen.

301

:

I think, even YouTube,

just in understanding, of course.

302

:

So, as far as my background, you know, my.

303

:

[Inaudible] A small snippet,

I have a grad degree,

304

:

you know, in Psychology, Human Factors
Psychology, with a Human Factors focus.

305

:

That being said, I remember back

in one course with one of my favorite professors,

306

:

you know, we started talking about

algorithmic thinking and modeling, right?

307

:

And of course, that naturally led

to neural networks

308

:

Right

as they pertain to, to the software side.

309

:

And then the development side.

310

:

And you know, after that, that course,

I think it was maybe that summer

311

:

I went back and went, you know, headfirst

into just learning about AI

312

:

and how these models

are not only developed but

313

:

also implemented within the systems

and whether or not it's truly like,

314

:

you know, as you said,

315

:

data in, data out, you know,

316

:

you have a subset of inputs, it captures
big data sets, gives you an output.

317

:

Right? I wanted to learn more about that.

318

:

And what I found was, you know, just

in clicking even in educational videos

319

:

or educational blogs, reading journal,

not just journal articles,

320

:

but articles, many articles, a lot of them

would highlight neural networks

321

:

but then would not actually explain

how neural networks

322

:

are actually implemented in -

on the software side.

323

:

So it's having the idea,

giving the idea in the background

324

:

and understanding that, you know, the

neural network is literally a 1:1 ratio.

325

:

I'm giving you this one

input, this or the guidance.

326

:

And it's looking for this

particular subset of data.

327

:

Right? That was really interesting.
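
[For readers new to the terminology, here is a deliberately tiny neural-network forward pass, just to ground the words "input," "weights," and "output" used in this exchange. The weights are random placeholders; a real model learns them from data.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])                  # one input example with 3 features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

hidden = np.maximum(0, x @ W1 + b1)            # ReLU activation
output = hidden @ W2 + b2                      # raw score; training adjusts W1, W2, b1, b2
print(output)
```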

328

:

So yeah, no thank you for actually

explaining that and not

329

:

I say

330

:

not buying into the fear

I appreciate that.

331

:

Well, and as you know, it comes up regularly

332

:

like panicking and fear

don't make things better.

333

:

Yeah Yeah, so helping the audience

and folks who don't understand it,

334

:

you know,

335

:

have some sort of frame of reference,

and in a way that they can understand it,

336

:

to the best of our ability,

to just kind of

337

:

settle and calm everybody down

because then you can think about it.

338

:

Yeah.

339

:

If you're already in fear mode, that's -

we know on a, on a neurobiology level,

340

:

you've already obstructed

some elements of clear thinking,

341

:

because now, you know, in my world as

a physiologist, like, that's fight or flight.

342

:

So your body is already redistributing

blood a certain way.

343

:

It's already prioritizing actions

in a certain manner.

344

:

And so that used to serve us

very well, right?

345

:

When not reacting

was likely to lead to death.

346

:

Right?

347

:

So Yeah but right now we are just

bombarded with issues that set off

348

:

our fight or flight syndrome because it's

so intrinsic to how our brain operates.

349

:

Like it's, it's not quite binary,

but it can be pretty close.

350

:

And then our behavior

and kind of our learned systems

351

:

potentiate the fight, or fight and flight

352

:

over the calm and that's run by two

different parts of the nervous system.

353

:

And so then you have to be very deliberate

about doing things, for yourself,

354

:

like calming yourself down - self down

355

:

to potentiate the calm side

of your nervous system. Yes.

356

:

That then again

357

:

allows your brain to work

more optimally to be calm

358

:

and think through an issue

rather than physically react

359

:

and then reflexively react and of course,

it's not always a physical thing,

360

:

but verbal like the, “No”, and, “I know,”

“Absolutely not.” And, “you're wrong”.

361

:

And you know, that's all the

362

:

keyboard warrior stuff that happens.

363

:

Yes, yes.

364

:

That tends to be relatively unproductive.

365

:

So Yeah absolutely.

366

:

Now you know,

367

:

given your background

and you did touch on this earlier,

368

:

but I wanted to kind of get your thoughts

on how you think,

369

:

AI artificial intelligence

370

:

is not only being implemented

in health sciences and human performance,

371

:

whether that be at the research level or,

372

:

you know,

373

:

whether you're looking at practitioners,

how it's truly being implemented,

374

:

but also whether it has an impact on the industry

at all?

375

:

Sure. Yeah.

376

:

There's a couple areas

377

:

where it's really

378

:

been on the leading edge of coming in,

and it started with definitely

379

:

the machine learning side.

380

:

So one is radiology.

381

:

you know, I don't know how many people

might have experienced this,

382

:

but often if you get a biopsy

and it gets sent off, and, you know,

383

:

clearly there's something potentially

seriously very wrong if you're getting,

384

:

like, an organ biopsy or tissue biopsy,

you know, from a clinician,

385

:

not a research protocol.

386

:

it can be weeks

before they expect a result.

387

:

Right?

388

:

So now you get to, like, the heightened

sense of, “I need an answer.

389

:

Like the answer could be anything.

390

:

And then I know what -

then we can have a plan.”

391

:

But just the waiting

is a torturous process.

392

:

Well, the question is, “Why

does it take so long?” Well, it's a very,

393

:

expert and human dependent activity,

394

:

and the people who specialize in that

get bogged down

395

:

in a lot of false positives

and a lot of just negative samples.

396

:

So the question was for that sort of tool,

397

:

does it always require the human eye

398

:

or could we have the experts train,

399

:

you know, a

400

:

machine learning algorithm

to know what to look for.

401

:

So it could do the triaging

and you could speed up the process.

402

:

So in the radi- the domain of radiology,

403

:

more and more,

404

:

interrogations,

whether it be MRI, CT scan,

405

:

biopsy - are being triaged by machine

learning algorithms

406

:

which may have already crossed over

into what may be artificial intelligence,

407

:

that the machine is now recognizing that,

hey human,

408

:

you forgot to tell me these things.

409

:

Like this pattern.

410

:

I see this too.

411

:

It goes into another category

like you are looking for

412

:

cells of a certain type

would have indicated a disease process.

413

:

I didn't see them,

so it's a negative for that

414

:

but I saw this other thing

that you should look at now.

415

:

So it flags the specimen to be reviewed

by a human for a particular reason,

416

:

and changes the prioritization of - of how

they look at it.

417

:

So the most critical cases can go

to the top of the line, to the human

418

:

who really needs to do the high level

subject matter expertise work.
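
[A minimal sketch of the triage pattern described here: a model scores each study, flags anything unexpected, and humans review the queue in priority order. The class, field names, and threshold are hypothetical, not from any real radiology system.]

```python
from dataclasses import dataclass, field

@dataclass
class Study:
    study_id: str
    suspicion_score: float  # model's estimate that the target finding is present
    incidental_flags: list = field(default_factory=list)  # other patterns the model noticed

def triage(studies, review_threshold=0.2):
    """Return studies needing human review, most urgent first."""
    needs_review = [s for s in studies
                    if s.suspicion_score >= review_threshold or s.incidental_flags]
    return sorted(needs_review, key=lambda s: s.suspicion_score, reverse=True)

queue = triage([
    Study("A", 0.95),
    Study("B", 0.05),
    Study("C", 0.10, incidental_flags=["pattern outside the ordered indication"]),
])
for s in queue:
    print(s.study_id, s.suspicion_score, s.incidental_flags)
```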

419

:

My experience with, with spaceflight in

420

:

particular is with, imaging of the eye

421

:

in particular.

422

:

We've got an issue going on

with astronauts that's hard to explain.

423

:

Very concerning.

424

:

And a lot of energy is being
put into studying that.

425

:

But one of the areas that's really been

remarkable is in, neuro-ophthalmology

426

:

and imaging

with respect to AI implementation now

427

:

and that is a clinical, a clinical tool.

428

:

A lot of clinicians are using that.

429

:

The confidence,

the verification has been done.

430

:

There's a lot of certainty.

431

:

There's

constant quality control being done.

432

:

and that field just continues to grow,

you know, and there was some fear

433

:

not only that it could be like, “Is

it wrong?” You know, constantly like, “Is

434

:

this good quality?”

So a lot of that work continues

435

:

in the background to continue

to assure that this is as good

436

:

or better

than if a human did the first pass.

437

:

But the other element was people

having fear of being replaced. Yes.

438

:

What do I do now if I'm not looking at,

you know, 30 slides a day

439

:

or sitting in a dark room

staring at a screen at MRI images all day?

440

:

And it was.

441

:

Yeah, like we

we don't have enough doctors, right? So

442

:

in that case,

there was no reason to fear.

443

:

We just shifted your role

to the higher level expertise

444

:

and applied it differently.

445

:

So I think radiology

has gotten comfortable with the idea

446

:

of using it as a tool

and really potentiating their value.

447

:

More broadly, it is not applied

for the reasons I mentioned.

448

:

Like the validation is not there,

the confidence is not there.

449

:

Overall, the data to support

a diversity of people is not there.

450

:

And a lot of medical care,

you have a selection bias based on people

451

:

who can afford the insurance

or afford the test.

452

:

So there was some work done recently,

it was actually on

453

:

an immune therapy for cancer,

that they thought they had a pattern

454

:

and they had a treatment regime

based on the pattern.

455

:

And, they started delivering

that, treatment more broadly.

456

:

And it turned out that for people of color

and Asian people,

457

:

that it was a worse choice.

458

:

And it turned out because they weren't

part of the selected pool

459

:

when the study was first done,

those findings were not present.

460

:

So they took a more narrow population,

extrapolated that this would be good for

461

:

everybody based on the cancer criteria

when it turned out that it's

462

:

not just a cancer criteria,

463

:

but some other genetic underpinnings

that have to also be present.

464

:

It's just that

465

:

genetic underpinning

wasn't diverse enough to pick out

466

:

that it didn't work for everybody.

467

:

Yeah and I have to ask, as far as,

you know,

468

:

in instances like that, you know,

whenever you do see at least

469

:

some signs of bias within the outputs

itself, are you finding -

470

:

of course, you know, you mentioned that

it's not really being implemented broadly,

471

:

you know, across the industry

but whenever that does come up,

472

:

are you seeing that heighten

that level of concern

473

:

a bit more

or is it kind of just triaging the issue

474

:

on its - in the, in that

silo and then saying, okay,

475

:

after further

476

:

assessment, we'll decide

whether we want to implement this later?

477

:

I think in the, in the clinical

478

:

and research, clinical research

domain, it's just heavy, heavy skepticism

479

:

and particularly for the ones

that have been shown to not be a beneficial

480

:

use of it or are limited by

something like, selection bias, data bias,

481

:

people have pulled a little back

482

:

and said, okay,

you know, from an industry standard.

483

:

And this isn't just like government

regulation.

484

:

This is, you know, the clinical industry,

485

:

the insurance industry is involved,

as you might imagine.

486

:

Now, that's not today's topic.

487

:

So that that's all I’ll say on that.

488

:

I think we both had the same reaction.

489

:

It's like four podcasts.

490

:

Yeah.

491

:

Yeah. It's -

there’s a lot to unpack there.

492

:

Yeah. Yeah.

493

:

But no

494

:

they, they really have kind of pulled back

and said we need to do better.

495

:

And that has been - the benefit was,
it was the push of recognizing

496

:

that we don't have

a diverse enough data set.

497

:

And that actually revealed

498

:

other issues with it

which is inequitable care, access to care.

499

:

Why is it?

500

:

Why don't we have these people

in our database?

501

:

You know, like what - how do we do this?

502

:

I mean, we have to write this thing.

503

:

So I think it has resulted

in some good things, but it will delay

504

:

the product, which in that sense,

going back to, like, it's okay,

505

:

you know, for any job I've ever worked,

and I get a lot of pressure

506

:

to do things fast, like rush,

like there's a lot of urgency.

507

:

Yeah, yeah.

508

:

Real or not,

like we want to make progress.

509

:

And I get that.

510

:

But my phrase is always like, “I will only

go as fast as good will allow.” Yeah.

511

:

And if I don't think something is good

and I know that’s a generic phrase,

512

:

but that means, you know, credible,

valid, evidence-based,

513

:

you know, as you know,

quality of data, diversity of data.

514

:

But you start ticking down the list of,

of what means good.

515

:

Until it has those things,

we're not going to production.

516

:

Yeah.

517

:

Like and we can explain

why, there's solid rationale,

518

:

you know, but that, that definitely

gets a lot of angst when you're working

519

:

on the business side of the house

so Yeah, no, I can imagine.

520

:

But - but there are fields advancing it.

521

:

I think the other ones,

the more generalizable where the,

522

:

the diversity, it's a very broad gradient

of both the medical conditions

523

:

and the medical treatments,

those are incredibly complex.

524

:

And those are going to take longer

where something like

525

:

machine learning algorithms to

AI start to make sense to us.

526

:

And the field believes

it's the right thing to do

527

:

and it's showing benefit.

528

:

Yeah.

529

:

You know, surpassing

what standard of care is today.

530

:

It is more prevalent

in the performance world

531

:

because again, and mostly it's a risk

- that risk:benefit ratio.

532

:

Yeah.

533

:

And when you're talking about potentially,

potentiating elite athletes or,

534

:

or people who are considered

occupational athletes, people

535

:

who go into hyper extreme environments

like the Everest climb or,

536

:

you know, things of that nature

going to Antarctic, high altitude work.

537

:

That's when you're saying, like, “Well,

hell, I got nothing to lose here.

538

:

Like, like if it can help me do it

better, let's, we're all for it.”

539

:

So a lot of data gets gathered,

a lot of biomedical monitoring is going

540

:

on, and then you're dealing with super

deep data on an individual

541

:

that you can do a lot of work on them,

baselining them

542

:

and figure out, like, are there ways

to potentiate them and who they are in

543

:

a, a pattern we couldn't have seen, other

than throwing these algorithms at it.

544

:

It’s like a signal to noise issue, like

we're going to gather a bunch of stuff,

545

:

we're going to know to

546

:

look at some things, you know, that we

typically have done for decades now.

547

:

But there's a lot of potential signal

in all of this noise.

548

:

We just don't know how to find it.

549

:

Yeah.

550

:

So the ML AI process

kind of draws the signal out

551

:

and I always tell people my approach

is that signal just becomes a clue.

552

:

It does not tell me what to do yet.

553

:

Now the work happens.

554

:

Like let's go verify that signal.

555

:

Let's verify

what we would do with that information.

556

:

And if it belongs in the operational

domain, does it belong on Everest?

557

:

Does it belong in high altitude?

558

:

Does it belong in space, in spaceflight?

559

:

You know? Yeah.

560

:

It's interesting because of course

I've seen that, you know, whether

561

:

when it comes to performance coaching and,

of course,

562

:

athletes

and definitely extreme athletes as well.

563

:

and another tidbit about my history.

564

:

I was in the military as well.

565

:

Yeah, that’s another area, they’re

very interested in all of us for sure.

566

:

As you might know.

567

:

Yeah, yeah.

568

:

And that's something, you know,

I had the pleasure and honor of,

569

:

working in Bethesda

and got to see one of their labs there,

570

:

or work - work with one of their labs

for human performance.

571

:

And it was actually, really amazing

to kind of see exactly

572

:

how we're not only tracking performance,

573

:

but also increasing performance,

improving performance.

574

:

And that was my first time

575

:

really seeing any semblance of machine

learning, you know, being used.

576

:

And it was

it was enlightening to me, to myself.

577

:

That being said, you know,

578

:

you do want to make sure that the - that

the data is good.

579

:

Yeah.

580

:

And any of your modalities

that you're, implementing

581

:

you want to make sure they're good.

582

:

How long typically does it take?

583

:

You know, at the - from the research

level of, let's say

584

:

research

has been validated, peer reviewed and

585

:

industry, specific

industry, let's say those coaches,

586

:

you know, how long does it take for that

information to not only get trickled down

587

:

but also used, and then for,

588

:

on the backside, how long does it take

for that data to get sent back up and say,

589

:

“Hey, we're using this.

This is actually great.

590

:

You know,

we think we should actually improve

591

:

or increase our use of AI, any AI system.”

592

:

Yeah, I think it's a,

it's still a question of,

593

:

it has variable lengths

depending on who the user is.

594

:

Yeah, that makes sense.

595

:

So you see people who are

596

:

I will say another group who's avid

users of this,

597

:

are people interested in longevity.

598

:

You know, a ton of data is coming out

599

:

on biochemical pathways,

molecular pathways that get turned,

600

:

get turned on, get turned off. Over time,

601

:

you know, chronology affects biology.

602

:

but then lifestyle factors,

you know, who are you?

603

:

What and how have you been living?

604

:

Where have you been living?

605

:

Very important.

606

:

What are your leisure activities?

607

:

So that in a composite

ends up creating the version of

608

:

what are your exposures and exposures

times kind of your genetic

609

:

vulnerabilities versus robustness

lead to your outcomes over time.

610

:

And some can happen more quickly

versus happen later.

611

:

But if someone wants to be an architect

of their biology,

612

:

you're going to have to dig pretty deep

into the molecular world.

613

:

And as your body translates from,

you know, your DNA code

614

:

into the RNA, then to a protein

and a protein to function,

615

:

and then the function

to how your body operates.

616

:

Right?

617

:

That's where the rubber meets the road.

618

:

Like do you run faster?

619

:

Do you live longer?

That's the end question. Yeah.

620

:

And those people,

they call it biohacking now.

621

:

I would say the biohacking community

is willing to use

622

:

just about any tool possible,

and they'll take any clue and try it.

623

:

That is terrifying.

624

:

It really is.

625

:

And but in this day and age,

626

:

if it doesn't require a clinician

to prescribe something,

627

:

you have the freedom to go acquire stuff

and people will leave the country,

628

:

to go get access to tools,

629

:

meaning therapies, medications, whatever.

630

:

There's a laundry list of things

under that headline.

631

:

But,

yeah, that moves very rapidly, right.

632

:

Because the clue happens

and they want to go try it.

633

:

They are their own experiment

over and over again.

634

:

And there are people who have suffered

the ultimate consequences of,

635

:

using themselves

as, as a science experiment.

636

:

that when strong validity is not there

637

:

because a lot of it is like,

who knows how wrong they were?

638

:

We'll never actually know because of how

they, did they document what they did?

639

:

Can we repeat this experiment of one?

640

:

It could be what I call fail

for the wrong reason,

641

:

which is you have the right tool

but the wrong amount of the tool.

642

:

Right.

643

:

Dosing can be dependent.

644

:

Timing can be dependent.

645

:

So that's why a real science protocol

would help you know

646

:

if something has the potential

to be a tool that could be more

647

:

broadly used or prescribed in a way

that could make sense when people need it.

648

:

You know, and I, given my background

and where I've spent the past

649

:

20+ years doing, I do lean a little bit

right of center when you talk about,

650

:

having rigor in that process

651

:

and then having clarity about what we know

and how much we know about it,

652

:

not to obstruct freedom of choice,

but your freedom of choice

653

:

is, obfuscated by the idea of,

you don't know what you're choosing.

654

:

Yeah.

655

:

So in the world of human research,

we have informed consent.

656

:

So when it comes to something

like participating in something

657

:

that's - is using AI to give you insights.

658

:

Part of the informed consent,

and that this is not literal, but how

659

:

I would approach it.

660

:

There are

661

:

other analogies to this. It is uninformed,

informed consent.

662

:

What you're going to be told is,

we don't know what risks

663

:

you're really accepting here,

but you're willing to do it anyway.

664

:

And what parallels

665

:

this is people who are willing to sign up

for a one way flight to Mars.

666

:

You know, there are different companies

out there trying to get lists of people.

667

:

This happens pretty regularly.

668

:

not the credible companies in terms of,

they have a vehicle and stuff ready.

669

:

They're trying to get funding,

you know, crowdsource

670

:

funding to go build a something.

671

:

but with the caveat that, hey,

we don't think we can get you back

672

:

and they’re like, “I'll go anyway!”

673

:

You get hundreds of thousands of people

signed up.

674

:

And so clearly there's

no impediment there.

675

:

Yeah.

676

:

But they're - they're informed

if you just, if like, “Do

677

:

I have to protect you from yourself?”

Sometimes you know it's very parental.

678

:

That's what people don't like.

679

:

That tends to be

you know what regulatory bodies do.

680

:

So Yeah.

681

:

So AI is sitting in that space where,

it has a lot of potential.

682

:

It's in the wild to some extent,

and you can play with it,

683

:

but I, and you don't need a prescription

for it, so government’s not regulating it

684

:

they're struggling with what that means.

685

:

I don't think they really should.

686

:

They're not good at it.

687

:

[Laughter]

688

:

But what do we do

as a, as a culture, as a civilization,

689

:

where do we give ourselves some boundaries

so that we can ensure people are safe?

690

:

Because I guarantee you as what we see

even in the pharmaceutical industry,

691

:

you know, in the recreational drug

industry,

692

:

you can be very upset

if you suffer bad consequences

693

:

from something that you were -

you were trying to use

694

:

and you thought could give you benefit

and now, now you want someone to blame.

695

:

Yeah.

696

:

So, you know, you do want to set up

a structure where there's some boundaries

697

:

that says, “You go outside

these bounds, you're on your own.”

698

:

But for what we know,

it has legitimate purpose

699

:

and - and has verification

and validation of it.

700

:

We think we could apply it and do better.

701

:

We can do better

702

:

than how we do today because it gives

us insights we couldn't have had before.

703

:

Yeah.

704

:

It actually touches on

one of my final questions as well.

705

:

You know, earlier

you spoke about OpenAI, ChatGPT, and,

706

:

you know, in speaking about

707

:

not only how AI actually captures

information and how it is being used,

708

:

but also specifically

when it comes to health and biohacking.

709

:

You know, you've seen,

we've seen obviously,

710

:

those apps

where you can sign up, I'm guilty of it.

711

:

I think I signed up for the Wim

Hof Method at some point as well.

712

:

You know, we're all

if you're somewhat health conscious,

713

:

there's

something that you're actually interested

714

:

in, but the one question I never thought

to ask, you know, whenever I'm actually -

715

:

whenever I'm going into the apps

and I start putting in my information

716

:

is how my information, so my, my actual

personal information is being used.

717

:

Are you seeing any concern

within the industry,

718

:

with regards to privacy protections

when it comes to AI?

719

:

A lot of concern,

720

:

and clearly in the,

in the clinical domain, particularly

721

:

in the United States, and Europe

has some pretty strict laws, right?

722

:

They actually have stricter laws that,

at some point when I was dealing with my -

723

:

my job at NASA, I had

international partner, you know, work and,

724

:

yeah, one of their laws,

about electronic data, pretty much like,

725

:

shut everything down for a little bit

until the lawyers

726

:

figured out, like,

how do we implement something?

727

:

What does it really mean?

728

:

So the GDPR was - was something to assure

729

:

that they had the best of intentions,

but it just like, the wheels grinded

730

:

shut for a couple of months until, cause

we had test subjects

731

:

in a very expensive study

and suddenly like they were like, well,

732

:

we can't send you the data

from Europe to the United States.

733

:

Yeah.

734

:

I was like, well,

considering we're paying customer like

735

:

and then they're consented, I'm like,

you're going to have to figure this out.

736

:

And - and

737

:

they did but they

just they didn't know how at the moment.

738

:

And that didn't

have to do with AI in particular.

739

:

But that just gave you like the

740

:

ultra conservative, like it's an all stop

until we figured it out.

741

:

So since so few,

742

:

clinical tools depend on AI,

743

:

you won't see it in a disclaimer

right now, but you get the HIPAA release,

744

:

you know, the Health Insurance Portability
and Accountability Act release, which tells you

745

:

we can only send your data to other people

who are going to do X, Y, and Z with it.

746

:

And otherwise, you know, it's - it's

secure, it's behind these firewalls.

747

:

You know, they try to give you some

information about your actual privacy.

748

:

So that structure is in place,

but I don't see anything

749

:

coming out in releases

talking about using AI tools.

750

:

Usually those are going to come out

in a separate consent

751

:

because essentially

that would fall under research.

752

:

Yeah.

753

:

So clinicians do do research

and there is clinical medicine

754

:

using people's data to go train

AI and then surveil AI on the back end.

755

:

Is it

delivering results that actually happen

756

:

because you have the medical results

in the medical record,

757

:

and even it is allowable under HIPAA

758

:

in the United States for your data

759

:

to be used without your consent,

if it can be anonymized,

760

:

meaning like some of your

761

:

demographics will not be moved

along with you, your name,

762

:

your Social Security number

or your insurance information,

763

:

none of that will move, but it will say

like male between 25 and 50.
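
[A simplified sketch of the de-identification idea being described: direct identifiers are dropped and only coarse metadata, such as an age band, is kept. This is an illustration of the concept, not the actual HIPAA Safe Harbor rule, and the field names are invented.]

```python
DIRECT_IDENTIFIERS = {"name", "ssn", "insurance_id", "address", "phone"}

def deidentify(record: dict) -> dict:
    kept = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in kept:                      # replace exact age with a coarse band
        age = kept.pop("age")
        kept["age_band"] = "25-50" if 25 <= age <= 50 else "other"
    return kept

record = {"name": "J. Doe", "ssn": "000-00-0000", "age": 37,
          "sex": "male", "bmi": 27.4, "type_2_diabetic": False}
print(deidentify(record))
# -> {'sex': 'male', 'bmi': 27.4, 'type_2_diabetic': False, 'age_band': '25-50'}
```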

764

:

You know, it might,

depending on the requesters

765

:

request, it may have something like body

mass index, some - some of the metadata

766

:

that would help them understand

and then would say like, did you have,

767

:

you were normal, healthy,

768

:

not hypertensive,

not two - not type two diabetic

769

:

because then they want to compare you

with people who are sort of like you,

770

:

but type two diabetic

and have BMIs that are high and say

771

:

can AI predict - could it have predicted -
who was going to be who?

772

:

So they don't give AI all the information.

773

:

They kind of give them the left side

of the block of information saying what -

774

:

who were you if we knew you from,

you know, ten years old to 25

775

:

and then we map you from 25 to 50.

776

:

So we already know who you became.

777

:

But if we only gave AI the upfront data,

the earlier

778

:

part of your life could it have known

you were going to become that person.

779

:

Could it have tracked essentially

your biomedical information in a way

780

:

that said you had risk factors

we couldn't see?
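
[A minimal sketch of the retrospective test being described: a model sees only "early life" features, predicts a later outcome, and is scored against outcomes already recorded in the chart. The data here are synthetic, the features are placeholders, and scikit-learn is assumed to be available.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
early_features = rng.normal(size=(n, 3))                       # stand-ins for data from ages 10-25
risk = early_features @ np.array([0.9, -0.4, 0.6])
later_outcome = (risk + rng.normal(scale=1.0, size=n)) > 0.5   # known outcome by age 25-50

model = LogisticRegression().fit(early_features[:400], later_outcome[:400])
predicted = model.predict_proba(early_features[400:])[:, 1]
print("AUC on held-out people:", roc_auc_score(later_outcome[400:], predicted))
```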

781

:

And -

and that is the good use of AI, right?

782

:

Because then we can go to the 10

to 25 year olds and say,

783

:

how do we really do prevention?

784

:

How do we stop you

from becoming a type two diabetic?

785

:

How do we stop you from becoming,

you know, the - the heart attack victim

786

:

or the stroke victim like that

is the goal of using AI in medicine.

787

:

So HIPAA has that built in,

which is a great tool

788

:

because it's a phenomenal database,

but it does protect your privacy.

789

:

Outside of clinical world,

790

:

if you're engaging in these apps,

791

:

you have no guarantee

what's happening with your data.

792

:

You can go into the fine print,

and I would always recommend

793

:

downloading the terms

and reading them later.

794

:

Like, we all get it.

795

:

I mean, I've signed up for stuff,

I get iTunes.

796

:

I don't, I'm not a lawyer.

797

:

I don't understand most of that.

798

:

Like I have a pretty high degree

and still I'm like,

799

:

I don't understand my phone bill.

800

:

Yeah. My cell phone bill. Yeah.

801

:

I mean they've had - they've done joke

like not joke but like, can a neurosurgeon

802

:

and a brain, you know, can a brain surgeon

and like a nuclear engineer

803

:

figure out your cell phone bill,

like what am I being charged for here?

804

:

And then like the iTunes agreement,

like, no.

805

:

We just, we want the iTunes.

806

:

Like just, let’s move on.

807

:

But I

808

:

also don't want to sign away my rights

and give you stuff.

809

:

Yeah.

810

:

So the warning is,

is that a lot of times in these companies,

811

:

you are the commodity.

812

:

They are using your data

to build their business case,

813

:

and they are using it to refine their,

their offering, their product.

814

:

So in some cases, if you engage,

you are being provided something of value.

815

:

That's why you did it, right?

You wanted something from them.

816

:

Well, they need to build

their business case of the future.

817

:

So they're going to use your data

and you as a participant to get there.

818

:

And so then the - there's a mutual benefit

819

:

if you understand what you signed up for.

Yeah.

820

:

There are some apps,

and I, I was told this a long time ago

821

:

and I think I,

my son is 16, on the internet,

822

:

in the wild.

823

:

Very nerve-racking.

824

:

And I'm just training him

to be the best critical thinker he can be.

825

:

It's like,

shutting all that off is not an option.

826

:

But, he's faced with a lot of choices.

827

:

He may not really understand. Not,

he doesn't have the life experience.

828

:

So he thinks he knows it all,

but he does not have the life experience.

829

:

So, but I say the one thing you gotta know

is if something is free,

830

:

you are 100% the commodity.

831

:

Anytime someone's having you sign up

and you got to give them your email,

832

:

you know, the texting is terrible

now, the phone number they want, but

833

:

you know, the email and your metadata,

and they may be watching

834

:

you and your habits, like, you know,

some of these shop apps and stuff,

835

:

like they're watching everything

you buy and search and just know that for

836

:

whatever you thought was worth getting,

that they're getting a whole lot.

837

:

Yeah.

838

:

And you have signed away

your rights to know what that is.

839

:

And that's

840

:

where it's very dangerous,

and that's why people go to DuckDuckGo.

841

:

And, you know, which I get.

842

:

I don't know what to do about it.

843

:

I have no answer.

844

:

I'm, I'm a little, like,

willing to try stuff myself,

845

:

but I think that the warning is like,

be skeptical.

846

:

That's healthy. Go educate yourself.

847

:

That's in your power.

848

:

Right?

849

:

Try to understand the

the sources you're getting educated by.

850

:

That's the other one. Yes, yes.

851

:

There's a lot of fake news out there.

852

:

Yeah, yeah, yup.

853

:

It’s a daily conversation sometimes.

854

:

But, you know,

I don't think this is Skynet.

855

:

I think, like, I was there with Y2K,

856

:

I was - I was,

857

:

interesting things

I've seen happen, and predictions

858

:

that - the world still hasn't ended.

859

:

Yeah. 20 times now.

860

:

Yeah.

861

:

I don't think it's going to do that,

but we - we got to keep an eye on it.

862

:

Don't - don't be naive about it

and then think about your data

863

:

and yourself as, protect it like,

864

:

like it's your most precious resource,

you know, ask the hard questions.

865

:

And if, if something you want to engage

in is not being honest with you,

866

:

maybe it's really not worth engaging in,

especially on the internet.

867

:

It's a lesson for life.

868

:

There we go.

869

:

It’s a hell of a podcast.

870

:

Oh yeah. It was fun.

871

:

Thank you Jennifer,

that was my last question.

872

:

Did you have anything that you wanted

the listeners

873

:

to know as far as about yourself,

anything upcoming?

874

:

I think, yeah,

there's a lot of exciting things

875

:

going on in the aerospace domain

and commercial space and my big message

876

:

to people is that we get a lot, you know,

why do we do this?

877

:

Like, we have a lot of problems to solve,

you know?

878

:

You understand

879

:

when you look around you like,

it can be overwhelming at times, right?

880

:

And, can be a lot,

881

:

you know, very hard on your mind

and your heart on any given day.

882

:

But for people who engage in

something like spaceflight

883

:

it itself is - is not the reward.

884

:

The reward is the accomplishment

of getting solutions

885

:

that are going to

change how we live on Earth

886

:

because they have to be

887

:

stripped down of all the things

we take for granted, and we have to

888

:

accomplish things that we just won't solve

for ourselves here on Earth.

889

:

And while I understand

people are talking about

890

:

why we have to leave Earth potentially

one day, and I hope that's never true,

891

:

I am working in a domain

where I want to bring these solutions back

892

:

to Earth and improve dramatically

the equity in access to health care.

893

:

I want to make differences in women's

health and early screening and mental health.

894

:

I mean, it's just like today

895

:

even it's just on the top of my head

about some of the women's health issues,

896

:

and that -

those are my goals using spaceflight.

897

:

So while I accomplish one thing,

898

:

I'm going to accomplish these other

and you don't have to pay twice.

899

:

That was the goal.

900

:

Yeah, yeah,

I - it's a huge forcing function to solve

901

:

some really, really hard problems that we

just have not solved for ourselves here.

902

:

And the last thing, it's

actually an African proverb.

903

:

It always makes me

just a little emotional, but

904

:

“The Earth was not given to you

by your parents.

905

:

It is on loan to you by your children.”

That’s beautiful.

906

:

Yeah.

907

:

And I, this is a stunning revelation

and perspective on how we treat things.

908

:

And when you are loaned

something, it's a much different concept

909

:

than when you're given something.

910

:

So take care of it

and it'll take care of you.

911

:

That's it.

912

:

That's awesome. Well, thank you again.

913

:

Thank you so much.

I loved the conversation.

914

:

I’ll come back for more.

915

:

Yes. Please do. I will.

916

:

All right. Well, to our listeners,

thank you guys for tuning in.

917

:

This has been episode one of season

three of The Human Odyssey Podcast.

918

:

Once again, my name is Rashod

Moten and again, we're here with Jennifer

919

:

Fogarty

and as always, please join us next time.

920

:

If you do want to provide any feedback,

reviews,

921

:

anything like that, please

visit us on any one of our platforms.

922

:

We're on all social media platforms

and feel free to drop a like and a review.

923

:

Thank you so much. See you next time.

924

:

The Human Odyssey is

925

:

presented by Sophic Synergistics,

the experts in Human-Centered Design.

926

:

Find out more at SophicSynergistics.com.

927

:

Get Smart, Get Sophic Smart.
