The Plus-Shaped Leader: Leading at the Intersection of Law, Business, and HR
Episode 54 • 27th February 2026 • Future Proof HR • Thomas Kunjappu
Duration: 00:46:29


Shownotes

In this episode of the Future Proof HR podcast, Thomas Kunjappu, CEO of Cleary, sits down with Fernando Garcia, Vice President of Corporate Services at Cargojet, to unpack what HR leaders are getting wrong about AI, and what they should be doing instead.

Fernando brings a rare blend of experience across HR, legal, compliance, and business strategy, and he makes a clear case for why AI will not replace HR, but it will reshape the job. The work that remains, he argues, is the work that matters most: human judgment, context, experience, and the ability to make value-based decisions when the answer is not obvious.

Together, they explore the practical reality of “shadow AI” already happening inside organizations, why guardrails matter more than hype, and how HR can work with legal as an early strategic partner rather than an emergency hotline once something has already escalated. The conversation also goes deep on the ethical tension of AI-driven hiring and fairness, including how bias can show up on both sides, whether decisions are made by people or machines, and why the “trolley problem” isn’t just a thought experiment anymore.

This episode is for HR leaders who want to adopt AI responsibly, without parking their judgment at the door, and who want to future-proof their work by leaning harder into the human side of leadership.

Topics Discussed:

  1. AI will change HR jobs, not eliminate them
  2. Why HR’s value is judgment, context, and the human element
  3. The “three-legged stool” of business, legal, and people leadership
  4. How HR and legal can partner early to prevent risk, not just react to it
  5. The reality of “shadow AI” and why assuming no one uses AI is the biggest risk
  6. Practical guardrails: data privacy, PII, sensitive employee info, and due diligence
  7. Where AI helps today: drafting, surveys, training design, and early recruiting support
  8. The ethics of AI in recruiting, fairness, and bias on both sides of the argument
  9. Why culture, training, and peer learning matter more than expensive enablement programs
  10. Skills that will matter most for future-proof HR: curiosity, EQ, relationship-building, and broad capability

If you’re an HR leader trying to balance innovation with compliance, curious about AI’s real use cases beyond automation, or navigating how to adopt these tools without losing the human element, this episode offers a grounded and thoughtful framework.

Additional Resources:

  1. Cleary’s AI-powered HR Chatbot
  2. Future Proof HR Community
  3. Connect with Fernando Garcia on LinkedIn

Transcripts

Fernando Garcia: Are you nervous about AI taking away work? I think no, I think it's going to change our jobs and it's going to change what we do. But the value that we truly add is that value judgment. That knowledge base. That experience that you're bringing into your role. And I think that human touch, that human element of it, is really what adds our value. And I think that's no matter what, that's never gonna be taken away.

Thomas Kunjappu: They keep telling us that it's all over. For HR, the age of AI is upon us, and that means HR should be prepared to be decimated. We reject that message. The future of HR won't be handed to us. Instead, it'll be defined by those ready to experiment, adopt, and adapt. Future Proof HR invites these builders to share what they're trying, how it's going, what they've learned, and what's next. We are committed to arming HR with the AI insights to not just survive, but to thrive.

Hello and welcome to the Future Proof HR podcast, where we explore how forward-thinking HR leaders are preparing for disruption and redefining what it means to lead people in a changing world. I'm your host, Thomas Kunjappu, CEO of Cleary. Today's guest is Fernando Garcia, Vice President of Corporate Services at Cargojet, with 15-plus years of experience across general counsel, HR and labor relations, compliance, and corporate secretary roles, plus a BA in labor studies, a master's in industrial relations and HR, dual civil and common law degrees, and an MBA in strategic management. I think it's fair to say Fernando brings a rare T-shaped blend of legal depth and business and people breadth. He leads at the intersection of law, business, and innovation, and advocates for pragmatic adoption of legal and HR tech to unlock efficiency while at the same time managing risk. Fernando, welcome to the podcast.

Fernando Garcia: Thank you very much. It's a pleasure to be here, and I'm looking forward to our conversation.

Thomas Kunjappu: Absolutely. So, I'd love to set up our conversation with this concept of the three-legged stool I think we were talking about a bit earlier. We've got business, legal and risk management, and people. Now, you've pursued degrees and have worked in a combination of all three. How do you see that all coming together?

Fernando Garcia: When I look at what makes a business successful, and looking back at it when I first started my career, I thought there were three elements. One is obviously the people. The HR part is critical. It is what distinguishes one company from another. That is what makes a place attractive for someone to go work at and stay there. The other part is legal. We're all involved in an environment of laws and regulations and compliance, and that affects how we do our job. That affects how companies do business and which companies are successful. And then obviously the business component of it: the business strategy, the planning, and making sure your products are meeting the needs of society and adapting as necessary to be successful in your environment or your market.

So when I was looking and thinking, how do I wanna shape my career moving forward, I thought those are the three elements that I really wanted to touch on, become experienced in, and really gather experience. One was the educational background, which you've read already, quite exciting. My wife always complains that if I could, I would be a lifetime student. And I think in many ways we are all lifetime students, because we never stop learning. We never stop developing. We never stop growing. And if we do, that's when things get boring. So we're always looking at that. But those are three areas that are of interest to me in terms of education, but also in terms of what I do on a day-to-day basis. I don't want to be just a lawyer, and I don't want to be just an HR person. I want to be a business person with an HR and legal background.

Thomas Kunjappu: I love that concept of combining these two, so it's like one seat, two functions. But tell me where you see the strongest synergies between specifically HR and legal. I mean, my mind, and probably everyone's mind, goes to employee relations and just making sure that you're firing and doing things in a compliant way, right? And on the people side. But yeah, tell me about how you see the two coming together in your day-to-day.

Fernando Garcia: I think a lot of it is also risk management. Some of those elements that you mentioned are very important on a day-to-day basis: making sure you avoid litigation, making sure you're following proper processes in hiring, firing, and all those day-to-day things of HR. But there's an element of HR that looks longer-term, more strategic, more value-based, and the legal environment and the legal background also help you mitigate risk in the long run. So helping strengthen your brand, both locally and internationally, and understanding what are the things that are gonna impact it so that you can be successful in that environment. And a huge part of that is having the right people in the right jobs, feeling like they wanna stay and be developed, and having their skill set acknowledged and really fully developed within the corporation. So it's in the short term and in the long run that I think there are tons of synergies between HR and legal.

Thomas Kunjappu: When you say 'short term and long run,' what do you mean? Like in day-to-day tactical things, but also...

Fernando Garcia: If something happens and I need support, or I need your advice regarding this particular issue that we're dealing with today. But also thinking longer-term in terms of where businesses are going and what are some of the unique needs of risk mitigation, development, and all the fun stuff. And AI walks right into that.

Thomas Kunjappu: So okay, let me ask for some advice, because a lot of HR leaders don't have your background and training in law. But of course they, or their teams, are very much in the weeds on compliance, right?

Fernando Garcia: Yes.

Thomas Kunjappu: So what have you seen, or what would be your advice for HR leaders, in terms of how they can best work with the legal department?

Fernando Garcia: Yeah, I think it's also a function of the mindset of your legal department. There are legal departments that are, as I always say, the people who are in charge of form 347C, and all they do is 347C. And if you have a legal question about that form, you go to them. But then there are also the legal departments I've been advocating for a long time. You mentioned the T-shaped lawyer, or I called it the plus-shaped lawyer in one of my articles that I wrote many years ago now: not just a legal department that handles legal questions, but one that's part of the business, integrated into the business. And when that's the case, I think HR and legal working together really help to fill the void and mitigate risk, which is very important for us, and for anyone really operating in an environment, right? Because you don't want to create any kind of risk situation for the environment or the company.

Thomas Kunjappu: If you're talking to your friends and colleagues in HR who are obviously not lawyers as well, did any situations come to mind where you felt, I wish in this case the HR department had checked in or worked more closely with legal? Are there any kinds of situations that keep coming up where the HR function could work more tightly with the legal department?

Fernando Garcia: I think if you look at your legal team as a strategic business advisor and really part of the business itself, then you get them involved early on. The biggest issue, or what I look at as one of the greatest limitations, is when you get legal involved too late. Once something has already blown up, once something's already a conflict or there's a potential litigation issue, at that point it's much harder, because your tools or the available outcomes are already much more limited. But if you get them involved early on, if the relationship is one of constant communication, interaction, and working together on things, you can be proactive and address the issues before they even become issues. And that's when you add the most value, because you're working hand in hand.

And again, I've seen many situations of people who are lawyers who go into HR. And I've seen the other way around too: HR people who are very specialized, especially in areas like labor relations, and then they go...

Thomas Kunjappu: Oh.

Fernando Garcia: ...they take on a quasi-legal function in...

Thomas Kunjappu: Sure.

Fernando Garcia: ...their organization. So it goes both ways. But I...

Thomas Kunjappu: Yeah.

Fernando Garcia: ...find that there's a lot of synergies there in terms of the functions and the roles that we play.

Thomas Kunjappu: So, you talked a little bit about that mindset then. The T-shaped mindset, but also the ability to focus on the business outcome, regardless of function. So it sounds like that might apply equally to HR and legal on both sides, right? But you also mentioned how that mindset may actually even be a perfect fit for the age of AI. So tell me about that, from both perspectives, and especially from a risk mitigation and compliance perspective, which is what we've been talking about so far, right? How do you think about AI, especially with that mindset of being flexible, agile, T-shaped?

Fernando Garcia: Yeah, I think AI brings another element to our toolkit, and it's a critical tool in terms of helping take away maybe some of the more administrative functions or responsibilities and really focusing more on the value-added. It gives you another person, a virtual person, that you can run ideas by and come up with alternatives. Now again, I wouldn't say it's a professional partner, because I think there's still some development to go, and you've got to be somewhat critical, and you've got to be careful about what information is given to you and how you are using that information. So you've got to check in, and you still have to go do your due diligence.

Thomas Kunjappu: Sure.

Fernando Garcia: I think it's a critical tool in terms of helping take away the more intensive administrative functions and really helping you focus on where you can add the greatest value in terms of your organization and even your function itself. With the caveat that you've got to be careful with it as well.

Thomas Kunjappu: Yeah, let's get very specific then. What's your safe list? Or, the more exciting thing to talk about is: don't do this, these are the areas you've got to be really careful about with AI. But maybe you can answer both. How do you think of it? What is safe? What's okay to experiment with, versus where do you really need to be careful?

Fernando Garcia: Yeah, I think like anything, I don't want to give you the lawyer answer, but it depends. Really, the fundamental here is to make sure that you're understanding the terms and conditions of what you're using. If you're putting in private information, make sure that you understand that it stays contained, and work with your IT department to make sure it's a closed system, versus the information you're putting in all of a sudden becoming part of the public domain. You're putting sensitive employee information in there, possibly health information, which obviously raises HIPAA and some of the other concerns that you have in that particular case. But if you have all that covered, that issue is not a concern, and it does open up the possibilities in terms of how you can use it. Just be mindful that whatever information you put in there, you have to double check, you have to review that it's safe, that you're not gonna have loss-of-privilege issues or put personal information out into the public domain.

But in terms of that element of it, I think there's a lot of opportunities too: in terms of recruiting, in terms of drafting letters, in terms of sometimes even just getting you started. But the key is you still have to review it, 'cause it does tend to hallucinate at times. And there's also the concept of garbage in, garbage out, right? How you're prompting it and what information you're feeding it is gonna affect what the outcome is, and you've got to be mindful of that part of it as well.

Thomas Kunjappu: So I guess there's a lot to consider. But it sounds like you're saying there's a clear set, in your mind, of areas you have to make sure you're bulletproof on in terms of risk mitigation, right? And data privacy. And once that's in place, would you say that's pretty universal? Is there, in your view, besides cultural expectations, where some companies are more AI-forward or innovative than others...

Fernando Garcia: Yeah.

Thomas Kunjappu: ...kind of a generic set of guardrails across all industries, across all companies? Like, hey, no training on this data for the core LLM models; there's some kind of onboarding and training for every employee who's gonna have access to these things; PII or customer data should not go in these tools; these are the specific guardrails, and there's training on that. There's a set of things you could come up with. Do you feel like that's pretty universal, and really the only gap is cultural? Or, from what you can tell, is every industry different, with its own nuances about...

Fernando Garcia: Yeah.

Thomas Kunjappu: ...what the guardrails are?

Fernando Garcia: I think it's beyond just every industry. I think every company, every location, is at a different stage in their journey. Some people are still very much experimental. Some people are all in, and they want all their employees to have access and all their employees to be properly trained. The reality, though, is that most people, whether it's on a personal basis or at work, are using something. You've got to make sure that you train them up in terms of saying: look, if you don't have the proper guardrails in place, still use it, but be mindful. Don't put employee information in there. Don't put names, don't put personal information, don't put stuff about products you haven't released to the public yet. Don't put in market share data or sensitive pricing information, unless you are at that stage where you've now checked the box, everything's okay, and there isn't that risk element.

So every company, every location will be at a different stage of that journey, but I think the biggest risk of them all is assuming that nobody in your company, no employee, is using AI. Because more and more, whether it's just to write a greeting card, or to start an email, or to check a report or review something, or even draft initial stages of reports, people are using it. So you have to be mindful of that, and you've got to be mindful of how knowledgeable they are about it and how they can protect the information as much as possible.

Thomas Kunjappu: Yeah.

Fernando Garcia: So some minor training, I think, is critical, regardless of where you are in that journey.

Thomas Kunjappu: Yeah. So regardless of whether you've adopted some kind of wall-to-wall enterprise LLM solution or not, in all likelihood someone out there is using it. In previous years we had this concept called shadow IT, back when SaaS was first coming to the fore, and now there's shadow AI: there's usage happening, and the risks are similar, like your data could be leaking without the right kind of guardrails.

I'm very curious, given your two halves here, where do you think there's been more adoption of AI tools: the legal community, for all types of contract law, employment law, all types of paralegal work, or the HR world, which our audience obviously knows a lot about? Do you see any differences in adoption, or in the general stance towards these tools, across the different communities that you're traveling in?

Fernando Garcia: Yeah. For example, we were talking about the legal industry: I think you can break it up into two parts, because you have the in-house counsel, and then you have the external lawyers who work at law firms. Big law firms tend to have more resources, tend to have more training, and tend to be further along in giving people the tools and having the parameters properly set. In-house counsel, depending on the size of the companies they work at, can have that available, with training and everything else, or you could be in a smaller organization where there's less of that, less of that training.

But in HR, I think it's the same concept. I hear from people who are very much just drafting emails with it or doing some preliminary work, to people who are getting resumes for a particular role and running a preliminary search through the hundred resumes: say, pick me the top 10 based on the following job description. But again, you've got to be very mindful of that as well. You've got to be careful that you're still doing your proper checks and your proper due diligence, because I'd hate to be in a situation where people are using AI to create their resumes, then they're using AI to check amongst the resumes to see which is the best one, to then check it against the job description that was developed by AI. At the end of the day, AI's just doing your recruiting, and it takes away that human element, which I think, at the end of the day, is critical. What a resume shows or what a resume provides doesn't necessarily mean that person's gonna be a good fit within the environment, within the industry that you work in, and that they're going to be successful.

Thomas Kunjappu: Well, the part that you're missing in that equation is that the candidate pool is also not standing still. The resumes that these AI tools are reviewing are half AI-generated, customized in response to the AI-generated job description, to give them a greater likelihood of advancing to the next stage.

Fernando Garcia: Yeah.

Thomas Kunjappu: Just AI tools fighting AI tools. And the volume of applications is jumping up dramatically in all types of roles.

Fernando Garcia: That's a trend that you're seeing: for any role where you might have had 20 to 25 applicants, now you're getting 200. Which makes it even more important to have the tool to go through, at least in the initial process, to try to narrow it down and focus. But again, you've got to be very careful in terms of what it is doing, and that you still have an element of human judgment and discretion in it. Because I think that's where you add the value.

Thomas Kunjappu: I think you're getting to a key, almost ethical concept, arguably, right? Which is sitting at the intersection here: fairness, and really bias avoidance. It's very obvious as a concept in the recruiting process, but also in performance management, all the way to performance improvement processes and firing situations as well. It's important across the board as AI tools get into the workflows here. So how do you think about the ethical side of it? Besides the efficiency and besides the legal ramifications, how do you even think through, and we can take any one of those use cases and think through it a little bit together, the ethical consequences of leveraging AI? Whether there's bias or judgment that you're now outsourcing to artificial intelligence.

Fernando Garcia: Yeah, and it's interesting, because you get people who are on both sides of the equation. There are some people that say that the more you use AI, the more you take away the potential bias of the person who's doing the review.

Thomas Kunjappu: Right.

Fernando Garcia: Other people say, hold on a second, somebody has programmed it, or it's learned from something. It just doesn't come up with these concepts on its own; it's the training. So somebody at some point trained it, and then the biases or the experiences of that training element, wherever it's coming from, will creep into the decision making. So it's almost one of those dilemmas: is it taking away the bias by using more AI, or is it just perpetuating the bias by using a tool that was created by somebody who obviously had bias? At what point do you minimize that and get the best result?

Thomas Kunjappu: Right. Well, that's a great question. I mean, in the world of self-driving cars, you have a specific outcome where you're trying to reduce accidents and make sure that streets are safer, and you have an outcome metric that can be relatively objective. Maybe the issue in hiring is who judges the outcome.

Fernando Garcia: Yeah.

Thomas Kunjappu: Like, fatalities is an objective measure, and there's data you can get that you can trust. In this case, it's like, well, you can make hires, but is it a good hire? Was it the right hire? Did we miss other people? There are so many other aspects to it, and on at least this one facet, arguably, hiring humans is much harder to really get to clarity on in terms of what the best outcome is. But you mentioned both sides of the argument: are you convinced or swayed by either side, or... specifically with recruiting?

Fernando Garcia: Both sides. I wanna remain cautious, and knowledgeable that there are risks and there are limitations on both ends. And you mentioned autonomous vehicles, for example. Even that has a bias. There's a great website, I'm trying to remember the name of it. It's by MIT, and I think it's called Moral Machine. And it basically says this: you're driving your autonomous vehicle. And who's driving it? The autonomous vehicle is. And it comes to a situation where it must decide: is it going to get into an accident and kill the occupant, or is it going to kill one person who's crossing the street? What if it's two people crossing the street versus one occupant? What if it's other people crossing the street and two young children in the car? It starts creating all these moral dilemmas, in terms of saying that at some point something's got to decide. And depending on what your values are, if you're a car company that comes out and says, as of 2030 I'm going to have zero occupant deaths in my cars, then that means that when it has to make that moral decision, it's going to obviously prioritize the safety and wellbeing of the people in the car versus the ones who are crossing the street, and you're getting into an awkward situation. So I think even when you're looking at something like autonomous vehicles, there's still a value judgment that will come into play. And I can totally see it, that at some point it'll say, whoa, I don't know, I can't make that moral decision for you. Here's the steering wheel back. You decide: are you going to put yourself in a ditch, or are you going to run over the two people who are crossing the street, maybe jaywalking or something else. Even at that level, even autonomous vehicles, that technology will still have a number of biases to it.

Thomas Kunjappu: Fair point. I mean, you're talking about the trolley problem, but instead of a human deciding between two different external parties, it's an AI deciding between, in this case, a customer who bought their product and someone else. That's a tricky dilemma to get into. And the outcome, or let's call it the escape valve, is: well, okay, we don't know, let's let the human decide. So in the recruiting world, if we come back to what we were talking about, it's like we need to let the recruiter or the manager decide, with all the human flaws and all. We're gonna attribute it to this person so that we can move it out of the hands of a machine.

Fernando Garcia: You can do all the behavioral testing you want. And you can do all the testing and the AI reviews. But at the end of the day, having that person meet with the team, whether virtually or in person, and seeing that interaction, seeing how they're going to fit in, seeing how well they're going to thrive within that environment: nothing will replicate that. That's why, when people ask, are you nervous about AI taking away work, I think no. I think it's going to change our jobs and it's going to change what we do. But the value that we truly add is that value judgment. That knowledge base. That experience that you're bringing into your role. And I think that human touch, that human element of it, is really what adds our value. And I think that, no matter what, is never gonna be taken away. There might be fewer of the labor-intensive elements of it, and there are gonna be questions about how you get people trained and make sure that when you're using AI you're not just parking your brain at the door, that you're still being critical, you're using it properly, and you're learning how to prompt, but also learning how to apply the knowledge. But at the end of the day, I think that is truly where we add value: the human piece. And I think that's never going to go away.

465

:

Thomas Kunjappu: This has been a fantastic conversation so far. If you haven't already done so, make sure to join our community. We are building a network of the most forward-thinking HR and people operations professionals who are defining the future. I will personally be sharing news and ideas around how we can all thrive in the age of AI. You can find it at gocleary.com/cleary community. Now back to the show.

Thomas Kunjappu: So how do you imagine that transition happening then? Let's focus on the HR department as a whole. You said maybe we're not gonna be spending as much time on the administrative elements of it and the repetitive tasks. So how do you imagine going from today to that future state? What does it look like?

Fernando Garcia: Yeah. I think, if you get 200 resumes for that single job and you can cut it down to 15 using AI, then you understand that within those 15 you still have to go in there and do the review, meet them, do the screening, to see how they're gonna apply in your workplace, how they're going to thrive within it, and what they're gonna add to it. I think that's part of it. So it'll help you get partially there, but you still have to drive it home at the end. And you still have to use your judgment on that part.

Thomas Kunjappu: So are we okay with the moral hazard of AI eliminating 90 out of a hundred resumes, and is that gonna be unbiased? 'Cause what we're saying is that, at least for that one phase of the application process, we are okay with the trolley problem, with whatever value judgment we're maybe helping program into the AI. But we're gonna use that to make some judgment, right? And then continue onwards. Ultimately it's a practicality, I feel, for a lot of recruiting teams. Because if you have a thousand resumes, what do you do, right? So do you kind of escape the ethical problem because of the necessity?

Fernando Garcia: I think you have to be aware of the ethical problem, and you have to be aware of the potential risks associated with it. And then at that point, I think it depends on the role, right? If it's a critical role, then maybe you say, look, this is such an important role that I will go through the 200 resumes. Some other one that maybe isn't as critical, or where fit is not as important, or who knows, whatever the parameters are, maybe that one requires a little bit more, or allows you a little bit more space.

But again, I think there are elements you always have to be aware of. And the biases are there. But again, I think there are risks on both sides. 'Cause the bias of the person reviewing the resumes could be as important as a bias in the technology, or in how we're separating it or doing the first cut through AI. You're never gonna completely eliminate it, I think, regardless of the two. You gotta be aware of it and you gotta be mindful of it.

Thomas Kunjappu: Yeah. One idea I had as you're saying that: take the roles the organization has hired hundreds of in the past, over many years, where it's the same kind of background that's successful and you've established that qualitatively and quantitatively, right? That's all input into whatever AI tool you're using for screening. So the more validated, specific input data you have on a specific decision, the more you've maybe controlled bias, and the more comfortable you maybe are, ethically, you know, moving efficiency around. Versus hiring for a role for the first time ever, in a new context. You just literally don't know, right? What is the background, or what is the set of characteristics a resume can show, that actually represents success, if it's never been done before, especially at this organization?

Fernando Garcia: But again, there's no single answer, and no single right answer. Because you always hear that the most dangerous words are "we've always done it that way." Just because you've always hired this way and obtained this level of performance, you don't know whether taking a different approach, or hiring people who are maybe not normally the people you would hire for that particular role, isn't gonna elevate the level of performance. So every once in a while you also have to break away from that trend, be able to think outside the box, and maybe go outside the areas you normally hire from. Maybe go outside the colleges you normally hire from, or whatever, because that is how you identify potential value that might have otherwise been lost.

And I think that's where AI can maybe fail, because it does look at trending, it does look at what you have done, and it establishes a baseline. But sometimes you wanna go above and beyond that baseline. And it doesn't matter whether you're a lawyer or anything else: the creativity of a novel argument, or some additional way of looking at something which has not been done before, is when you're truly adding value and elevating that performance beyond what you're normally accustomed to. And the question is, will AI ever get to the point where it can do that additional critical thinking, or make the jump to say: this is what you've always done, this is what you've always achieved, but if you do something different, you might be able to get here? Because we tend to learn from the past. We tend to replicate the results. But when it starts getting to the point where it's cognitive, where it starts to learn and go beyond just the confines of the past or previous decisions, that's when it starts getting interesting.

Thomas Kunjappu: Right. Like incorporating new data about what the business needs now, for example, or what's shifted now. Yeah, absolutely.

Fernando Garcia: And how much risk is your company willing to take in this role? To say, look, maybe we go outside of the normal beaten path and look for that purple unicorn, or purple squirrel, as they call it. That additional person. And sometimes you gotta go outside of that to find it.

Thomas Kunjappu: Let's get a little bit practical then. Can you tell me about the key workflows and use cases where you've personally seen some success, and maybe others that you're excited about, where AI can start changing the way work is done?

Fernando Garcia: Yeah, some of the interesting functions or applications that I've seen are things like employee surveys: finding trends and identifying things that maybe you need to focus on. Developing training programs for individuals: you can go in there and say, look, this is where I'm having a gap, what training do you recommend? It can help you develop training programs. We talked about recruiting, obviously that's a critical one, and job descriptions, helping you with that initial draft. And the things that you do, the letters: you have a transformation coming up and all of a sudden you're rushing it at the last minute. It can help you get the first draft going, and then you can perfect it.

So I think people are starting to get curious, people are starting to use it, companies are developing pilot projects and getting individuals who are trained to start applying it. And then, once they see an element of success and comfort, they start adopting it in greater numbers. Everybody's handling it differently, but the beauty is people are looking at it, right? And again, challenging the status quo in many ways, trying to add more value and doing more with less. Which is, I think, the reality of all of our jobs and of our functions.

Thomas Kunjappu: So tell me more about that. What are you seeing within the world of HR specifically around this concept of doing more with less? Does that seem universal with your peers? What's the solution here? How do we get to the point of being productive and getting resourced enough to be able to do that?

Fernando Garcia: Yeah, I think there are those who are the trendsetters, who are using it more. And then you go to a conference or an event and you start talking to other people: this is what I've done with it, this is how I used it. Somebody else goes, ooh, that's actually a pretty good idea, yeah, try that. Right? The critical element of this is the people who are gonna be the trendsetters, and the people who are gonna be able to adopt the best practices, or learn from what others are doing, and start implementing it in their own way. But the key is just being curious, using experimentation, bringing things in and doing things differently with AI and where it's going, and with technology. But again, AI is one element, not the only element, right? There's the information that you're taking from HRIS systems, from contract management systems. Data is power, and the information in those technologies is critical for us and our functions.

Thomas Kunjappu: So if that's where things are headed, what do you think are the kinds of people or skill sets that are coming to the fore? Who's gonna be more effective going into the future? A lot of what we try to talk about is future-proofing, right?

Fernando Garcia: Yeah.

Thomas Kunjappu: In whatever function. And actually, maybe let me ask a specific question. Is it getting to the point where job descriptions are starting to more heavily feature, you know, either the ability to learn, to rethink how you're doing workflows, or all the way up to demonstrated experience in leveraging AI to bring out efficiencies? Like...

Fernando Garcia: Yeah, I haven't seen a lot of that yet. I think I've seen it in jobs that are heavy with AI, or that are involved in FinTech or some of those things, where that's a critical component right now. But at some point I wouldn't be surprised if we start seeing things like "experience in AI." Or, even then, making the assumption that maybe people are not coming in with that, but really focusing on training that skill set. Because I think companies can do a really good job of helping people develop, get comfortable, and see how it can apply. Whether it's through mentorship with people who are in the industry, or in your department when you come in and they can help you: look, this is how I'm using it, how are you using it? Things like lunchtime training sessions and sharing your best practices and tips, prompting, things like that. I think they're critical. So it's a question of: do you buy it or do you make it? For some, I think it's gonna become more and more common to put that as a preference. But at the same time, people are saying, look, let's train up, let's get this as a skill set that we're developing. No different than leadership training or any other training that we do day-to-day, we can do AI efficiency training or adaptive technology training. Again, it's gonna be a critical advantage one day.

Thomas Kunjappu: So if that's true, how do you square that? And I would imagine we'd agree that the enablement of that for the organization is primarily gonna be done through the HR department. There are some individuals on every functional level, people comparing notes, but you're trying to enable the entire org, strategically up-level it for some ROI. How do you square that with fewer resources to make it so?

Fernando Garcia: It's one of those where you invest early to get the benefit after. But again, it doesn't have to be expensive. You can get somebody who's very knowledgeable in AI, or very comfortable with it, to just do a lunch-hour session. Hey, here's some pizza on the side, come and sit down, let's go through some examples of how I'm using this. It doesn't...

Thomas Kunjappu: Have you seen the food inflation?

Fernando Garcia: Granted. There's formal training, but then there's training that can happen peer-to-peer. There are the train-the-trainer concepts. And just associations, industries, conferences, sharing best practices, sharing tips. I tend to find that's really important. When I first became, how can I say this, comfortable with AI, that's how it happened. I went to a conference and we had a session on it: this is what I'm using it for, these are my tips for you. And now all of a sudden it's, wow, I can do so much with that. So it's really having those people who are early champions, early adopters, and then having them share their knowledge and experiences with others within their industry, within the company, and just amongst their peers.

Thomas Kunjappu: As people are sharing and learning and upskilling, do you imagine new roles and titles might even appear? Especially within HR and legal. I don't know, within the compliance or employee relations world, or...

Fernando Garcia: Yeah.

Thomas Kunjappu: ...within recruiting or, you know, just HR support...

Fernando Garcia: HRIS-type individuals who are in charge of putting in and analyzing data, I can totally see...

Thomas Kunjappu: Like system configuration.

Fernando Garcia: Configuration and development. And I took a stat here, it was from the Conference Board of Canada. It said that 4 in 10 HR teams are using some sort of AI for talent management. So just looking at that number again, 4 out of 10, it's not a lot, but it's a number. Four is getting to that critical mass point where, you know, as long as they can start sharing their past practice and their learnings with the other six, you'll slowly start seeing that shift the other way.

Thomas Kunjappu: Right.

Fernando Garcia: I wouldn't be surprised if 10 years from now we're looking at a number where eight or nine in 10 are using it. Because I think it's a tool. It's no different than when computers came up. I remember working at a law firm, and for the most part, most lawyers were using computers. But there were still a few who decided to have a Dictaphone, and they were still dictating their notes...

Thomas Kunjappu: Oh, interesting.

Fernando Garcia: ...and having someone type it up for them. So it took a while to get there. I don't think you'll see that in firms anymore, anywhere. But there's that transition period that has to happen. There are people who are gonna be extremely comfortable saying, look, hey, I'm curious, I wanna learn, I wanna develop myself, and I wanna do it. And there are others who are saying, hold on a second, I'm fairly confident in what I'm doing, I don't need that tool. So they're gonna be more resistant to change. But like anything, like any change effort, right? There are gonna be those who take the change head on, and those who are gonna be more resistant to the change. You just have to start persuading them of why it's important, how they can improve their performance by using it. And if maybe they're more worried about the repercussions or the risks, show how you're mitigating those risks and train them to do that.

Thomas Kunjappu: It's funny, maybe the Dictaphone people were actually ahead of their time, because if you look at what's happening now...

Fernando Garcia: Right.

Thomas Kunjappu: No one's typing up notes. If you're future-focused, you're actually having an AI transcription happening off the call to then summarize and go further, which you then might review. Which is closer, in some ways, to the voice dictation that then gets manually written up, right?

So if we imagine that 4 in 10 eventually gets to be more like 10 out of 10 over time, and to your point, yes, AI is a tool, right? It's a tool set that humans can and will leverage, and it's kind of the next phase of technology. But if you just focus on the humans for a moment: what is the advice that you have for someone who's young and just coming outta school, maybe looking to go into HR, maybe even considering law school, and is just trying to think about what skill sets they should really be focusing on to make sure they're employable in the future? What advice would you have for them?

Fernando Garcia: Yeah. I always say try to focus on getting as broad of a skill set as possible. And curiosity and the use of technology is one of 'em. Don't lose track of the ability to network, the ability to build human connections and interactions, to be able to work with people. Emotional intelligence is critical in everything that we do in HR, legal, or anything else. And it goes back to that T-shaped concept, right? Your legal or your HR knowledge could be the vertical, but then there's the whole horizontal piece, where there are all these other skill sets that are so critical for you to have. Whether it's project management skill sets, relationship building, or working with diverse and international workforces, there are all these things that add value to you as a person that are not gonna be just technology.

So you've got to be careful not to be the person who focuses only on what the technology is helping you do. What else are you doing? How else are you building your skill set, and how else are you contributing and adding value? The technology's gonna help us get to a certain point, but the human piece is always what's gonna take you above and beyond that, make you successful and adaptable, and add value to whatever organization you're in, or whatever function or task or role you're taking on.

Thomas Kunjappu: That's great advice, and it seems timeless in some ways. But while I have you, I have to ask, Fernando, before I let you go: as you're looking at the horizon of the next couple of years, is there anything in general that you're working on, or a concept that you feel is going to come to fruition, that you're particularly passionate about and would be willing to share?

Fernando Garcia: That's interesting. Personally, I've been working a lot in terms of the legal industry, on making sure that we think of ourselves, especially as in-house counsel, more as business executives with a legal background or an HR background. And really thinking about how we're adding value to our organization. How we're helping people grow within the industry or within our companies. How we're identifying talent, and how we're helping that talent stay. I think there isn't one particular area; it's more about how, holistically, we're all becoming a little bit more knowledgeable, a little bit more skilled, and how we're incorporating all those skill sets to do better work. And making sure we can truly say that we value people, that people matter, and that that's one of our strengths, even as we're adopting all these tools. Because the one risk of technology is that if you stick to it too much, it might dehumanize you. I think at the end of the day, that human element will never go outta style, will never be something that doesn't add value. And it's not something we should ever stop developing.

Thomas Kunjappu: I love that thought, and it's something I can personally relate to. Having wonderful conversations like this on the podcast, I feel it humanizes me more personally. Because, you know, working in the software and AI world constantly, you're just kind of honed in there. But this helps you keep that human connection no matter what it is that you're doing.

So thank you for this conversation, Fernando, because we covered a lot of ground, some of it I didn't expect, like going into the trolley problem, where the remix is that AI is making the decision and you are one of the lives at stake, not these others, and relating that to talent acquisition funnel problems. But it's really interesting how you're looking at things from a legal, customer contract, and compliance perspective while also very much keeping the HR hat on. And I would think that maybe you could put finance in there as well. These are the functions that are typically the most risk-averse in an organization, I think it's fair to say. And maybe by design: sales and marketing and the CEO are supposed to push new boundaries, and you're trying to say, hey, you don't want to do that, because we'll get sued and we've got to hold back.

But it's really refreshing to hear your nuanced position. Because even if that's true, you're very much an early adopter of AI tools, which have ethical and efficiency issues and trade-offs, but simply cannot be ignored. And, you know, I like the nuance in our conversation: every company and every role might have a different nuance for how you might use AI versus people, and that probably goes more broadly. It's really important to weigh those things, even though you have some fixed guardrails to get started with. And thanks for going through some of those, because...

Fernando Garcia: Yeah.

Thomas Kunjappu: A lot of companies are just in the early stages of that journey, figuring out how to go wall to wall or enable people. So I think that would be helpful for folks out there listening who are looking to future-proof their own functions in HR, but also their organizations overall. So thank you once again, Fernando, for the conversation.

Fernando Garcia: It'll be interesting. Five years from now, we'll look back on this and say, wow, we either got it right or we completely missed the boat. Or technology and AI started developing in completely different ways, with applications we never thought about. But it's an interesting time, and if I can say anything, it's just stay curious and do it safely. Do it within the guardrails, but do it, because it's an exciting time. We have an incredible tool that we're seeing developed in front of our eyes, growing exponentially, and you don't wanna miss out on that.

Thomas Kunjappu: Absolutely. So let's leave it there. Thank you, and everyone out there, we'll see you on the next one. Bye now.

Thanks for joining us on this episode of Future Proof HR. If you liked the discussion, make sure you leave us a five-star review on the platform you're listening to or watching us on, or share this with a friend or colleague who may find value in the message. See you next time, as we keep our pulse on how we can all thrive in the age of AI.
