AI Governance, Guardrails, and Risk - Live at Transform 2026
3rd April 2026 • Future Proof HR • Thomas Kunjappu
Duration: 16:37


Shownotes

In this special live episode of the Future Proof HR podcast, recorded on the floor at Transform 2026 in Las Vegas, Thomas Kunjappu sits down with Shawn McIntire, General Counsel at Pebl, for a quick but packed conversation about what AI governance actually looks like in practice at a global EOR company.

Pebl provides employment outsourcing solutions that allow companies to hire talent internationally without a legal entity in-country, giving businesses a faster path to global expansion. As General Counsel, Shawn has had to think carefully about how AI fits into the company's existing risk framework and how to build a culture where employees feel comfortable experimenting without flying blind.

They get into why GDPR is the right lens for thinking about AI governance, why keeping your policy short and jargon-free matters more than covering every edge case, and where HR teams are most exposed to AI-related liability today. Shawn also makes the case that the answer to "should AI be used for this?" is almost never no, and explains why every employee should be thinking about how to work themselves out of their current job.

A candid, grounded conversation straight from one of HR's biggest stages.

Topics Discussed:

  • Why GDPR is a useful framework for thinking through AI governance
  • Fitting AI risk into your company's existing risk profile, not the other way around
  • Keeping AI policy short, simple, and actually usable
  • Where HR teams face the most AI liability risk today (hint: hiring)
  • How bias gets amplified when AI is making decisions at scale
  • The case for always iterating and never putting a lid on AI exploration


Transcript

Thomas: We are live here at Transform in Las Vegas, and I am here with Shawn McIntire, General Counsel at Pebl. We are rolling with our micro episodes talking about AI and its impact on HR teams, although this one's a little bit different. As General Counsel, I'd love for you to tell me a little bit about your role, as well as a little bit about what Pebl does first.

Shawn: Thanks for having me. I've been with the company for about eight years, so I've seen it grow from a time when the EOR industry itself was very small, with little awareness of what it does and few customer engagements, to what we see now, where there are four or five EOR providers here. What an EOR does, at its most basic level, is provide employment outsourcing to customers who do not have a legal presence in a country. So think of the situation where, hey, I want to hire an engineer in Ireland, but I don't have an entity there, I don't have registrations. Normally that would take you six to eight months to get everything set up. With an EOR like Pebl, you're able to do that instantly. We have an infrastructure all over the world. You can hire through our entity, have that worker start working immediately, and really expand your global footprint much faster than you normally would.

Thomas: And obviously very important in the world of remote companies, and companies that grow into 10 countries with 10 employees.

Shawn: Right.

Thomas: I'm really curious, being a General Counsel, tell me about AI governance and the way you look at it. How do you think about AI governance within the organization? I assume, of course, that from your perspective you're looking at risk mitigation, right?

Shawn: Yep.

Thomas: So tell me a little bit about how you think about that.

Shawn: Sure. So I kind of look at this as similar to GDPR, right? You may remember the GDPR freak-out that happened within companies: we have to keep PII under lock and key, safe and away from everything else. That was a forcing function for companies to think, across the organization, about a single regulation that really touches a lot of areas of the company. And when I look at AI, I think there are a lot of lessons companies can take from the GDPR experience.

Every company has a risk profile, right? There are decisions that they've made, whether specifically put down on paper or discussed in executive meetings or at the board level. And when we talk about AI governance, it's got to fit within your risk profile. If you're a company that is risk averse and you suddenly drop in AI and tell your engineers to go vibe code and create all this stuff, you're going to be misaligned with your culture. As a company, you want to define how you want to use AI and what you want your risk to look like when it comes to AI. Once you do that, it delivers the message to the team: okay, here are the guardrails we should have within the AI infrastructure. We're really fitting it in with the structure we already have as a company. And that makes it more natural for people to make decisions and act on their own.

Thomas: I like how you're bringing it back to GDPR, and not just the freak-out, because over time now, for anyone who's leveraging any kind of HR systems, I'm sure you guys are being vetted for SOC 2 or ISO.

Shawn: Absolutely. Yeah.

Thomas: And for your processes for protecting PII. So every company is different. But if a company, let's say, already has a mature stance on all of that, how much are you actually changing practically when you come into the AI world and think about an AI governance framework, and even vendor evaluation, for example? Does it really shift dramatically?

Shawn: I don't think so. I mean, I think the speed at which AI is growing is really what's causing the shift, right? We have an enterprise risk management function. You have an InfoSec team and a legal team who are all doing their types of evaluations. And we slotted in AI review as a separate review, as we would with a PII review. So it really doesn't change the posture of a company. But by the same token, you look at the power of AI and what it can do, and that is something you really need to think about and understand as a company, and that drives your framework for what the AI posture should be.

Thomas: So I was having a previous conversation I want to ask you about. At this point, everyone's awake to the idea that you need some kind of AI governance policy, right? Maybe not a framework, maybe not necessarily strategic, but something in place at least to protect the company, because for many organizations, among average employees, personal AI usage is so rampant. But then there's a gap between just having a piece of paper that you make everyone sign to make sure you're protected, and something that is a best fit and will enable the company. So tell me a little bit about what you think your role is, and that of your colleagues across the C-Suite (especially, of course, we are very curious about HR): how you should be collaborating to ensure that you're not just creating a piece of paper to protect the company at all costs.

Shawn: Check-the-box is always just a terrible way to look at a problem, especially when it comes to something that's important. From my end, legal is always deemed to be the "no", the area that's going to be the blocker. For me, I'd rather look at the company's objectives and determine where legal can fit in to help maximize our goals. And AI is a huge part of that. As you mentioned, we have a policy, right? We could have created a very long, multi-page policy. We didn't; we kept it very simple. We wanted people to have guidelines, but most importantly, an avenue to have their questions answered quickly. And I think that's the most important thing when we talk about AI governance and AI controls: honestly, I could sit here for days, I could even use AI tools, listing all the risks or scenarios that can happen. But things are changing on the fly. So we really wanted to create a structure where there's openness with the people who are actually developing the tools, whether it's vibe coding or some other type of AI execution, so they're able to ask those questions and have an open-door policy to say, okay, hey, I want to build this. How does this impact our program? And the team that we put together, it's not a committee, right? It's not a formal, overarching "hey, this is what you do, don't do anything else." It's: here's our framework. If you have questions, come talk to us. We're learning the same way you are. But if you have that ability, where people aren't afraid to have open conversations and explore the fringes of what can and can't be done, then I think you come out with a better product in the end.

Thomas: I love that. So there is a framework, but you're thinking about, first of all, keeping it short and free of legalese. Because really, the point of impact is when an employee in whatever function has an idea: maybe I can use this tool that I came across, or use a tool we already have approved, but in a different way, with a different type of data. Can I be comfortable answering the question: yes, this very much is something I can do, versus, is it on the edges? Has that been an iterative process, would you say?

Shawn: Oh, absolutely. Yeah. I mean, daily. And I think, again, we're learning a lot about what can be done with it. So there are areas where, two months ago, we did not think AI could influence what we do today. And now we see it, we see the opportunity, and we say, okay, now we need to make a risk-based decision: do the benefits offset whatever risk exists? And then figure out ways to mitigate that risk if it does. So for us, it is always iterative, and I think that just goes to show how quickly this area is growing. It's about not having the ego to think you can put a lid on it, right? You're not going to create a lid. You've got to create an open-air platform that allows people to breathe, while still understanding what people are doing on the ground.

Thomas: And it's very much a learning and development opportunity. There's also a drive in many motivated employees to try something new, so you want to have that space. I'm bringing up some of these topics because there's maybe overlap with what an HR team and the HR function are trying to enable, right? You're trying to map it to the culture that you want to enable at the organization. How do you see those conversations happening with your peers to enable this kind of environment?

Shawn: Sure. So HR is unique, and AI regulation is at our doorstep. A lot of AI laws are still in flux; a lot of states and countries are trying to figure out how to balance this. But the two areas where you really see AI liability today are the employment bias arena and marketing. From the HR team's side, using tools that can review hundreds of resumes and narrow them down to the candidates you want is where a lot of the regulators are looking and saying, hey, biases can be inherent in the AI tools themselves and in how you utilize them. And again, humans are fallible, right? We make these mistakes, and therefore the tools that we build make the same mistakes. So I think it is expediting a risk that exists today, on a much more scalable platform. So we talk to HR because we want to understand: how are you using AI? And if you are using it to scan these resumes or scan different candidates, we want to understand, okay, what inputs are you putting in there? What are you trying to get out of it? If your inputs themselves are biased, then the outputs are going to be biased. So for HR particularly, it's one of those areas where, if you can understand the use case for your particular team, and then understand where the risk can exist within that use case, I think you can create a product that creates those efficiencies, right? To your point earlier, it's iterative. You've got to run tests, you've got to run audits. You've got to see how these different tools produce results, and then determine whether you're within a framework that you feel comfortable with.

Thomas: So I want to give you some scenarios that are extremely out of bounds, and kind of see how you think about them. You brought up talent acquisition; there's been litigation in that world. But I guess I'll ask it this way: if you have an agent or technology making a decision about whether a candidate is a fit or not a fit, is that something that is completely out of bounds, where HR teams should not be allowing AI to make that kind of judgment?

Shawn: Not at all. The question of "should this tool be used for this purpose?" is, I think, always a yes, right? How the tool actually defines that role and executes on it, that is where the risk lies. I never want to be in a situation, and I don't think anybody should be in a situation, where you're saying this is not a good fit for AI. Everything is a good fit for AI, okay? It's how you utilize it; that can make a sound decision unsound. AI with the right prompts and the right guardrails can take what could take somebody hours, reviewing different resumes, and get to a decision in a matter of seconds. That saves teams so much time; it's so much more efficient. So I think those opportunities still exist, but you really need to understand, again, where our own biases are as humans, and where those biases can be injected into the tool. You could create a product that was five x and is now 10x, because I'm using a tool to make decisions at 10 times the speed I otherwise would, and therefore creating 10 times the bias that may exist. So yeah, the answer is never no, but it's understanding why, and making sure you build that framework around it.

Thomas: I love that thought. So another question, kind of similar, and I'm trying to come up with the doomsday scenarios people think about, right? "I got fired by AI, because it was analyzing my work and the HR team is using some kind of agentic tool, and it decided and told my manager, or AI is my manager and I got fired." Is that common? Is that something we should be preparing for? I made it kind of apocalyptic, an end state, but really what we're talking about is performance management, compensation decisions, career laddering, all these micro judgments going to AI. And to your previous point, it obviously saves time. But it leads me to the philosophical question, and I wonder if you answer it differently depending on the use case: even if you could, should you?

Shawn: So this is my take on it. I think we should all be trying to work our way out of a job. I know that's a fairly controversial statement to make, but the reality is there is somebody within every organization who is trying to streamline the operations of whatever department, looking at the costs, the efficiency, the revenues, whatever it is. So they are making these decisions about a particular job. And if you are not on board, if you're not thinking the same way they are, then you're not even in the same room, right? Now, I'm not saying my job will ever be fully automated. But I should be thinking of ways to make my job more effective and more scalable. And if my job, or some task in it, becomes redundant because of AI, then I've thought about ways to effectively enhance the organization, enhance what the company can do. The next conversation that happens, I will be part of it. Even if I'm not in the same role, I was part of the conversation that got us to the level where we need to be. So I think you should always be looking at scenarios of how do I make myself more efficient in everyday life? AI is just a tool to expedite that much faster. I'm somewhat of an optimist when it comes to it, but I feel like there are jobs and roles and ways of operating that we don't even think of yet today, because our minds are so focused on what we do today, and that's going to go...

Thomas: Away.

Shawn: ...saving, you know, what I do today.

Thomas: Right.

Shawn: But we're hindering progress, I think, in that case. And if we can evolve to the point of being able to utilize AI in a way that lets us make different decisions than the ones we're making today, then I'm all for it.

Thomas: That's a great message. It's also what we talk about on Future Proof HR: how we can future-proof the organization, as well as the HR function itself, and the idea that you are responsible for your own career. You need to be looking at, experimenting with, and thinking about ways you can make yourself more efficient. With that kind of MO there's no victim mentality, right? You are taking full ownership of it. It's not even asking the company, what does a career ladder look like? What can I get to next? Because to your previous points, the company doesn't even know. And whoever figures that out is going to have a seat at the table to figure out what's next. Getting into that mindset, with AI governance and frameworks where people can get into that groove of experimenting and learning what they can do next and better, gets us there. So I love those points. I'm curious, this is day two of Transform. Any takeaways? Are you starting to sense any patterns at this particular moment in time?

Shawn: Yeah, I've sat through a couple of the panels, and one of the more interesting things is that the human element of this is very top of mind for people, right? In our organization, we obviously have conversations about people, about the future, about roles, but there's a shared understanding that we really do need to look at how we can support people through this transition. Not necessarily how do we change what we're doing to make it fit for the people, but how do we help build the people up in today's world? How do we help them understand the tools better? How do we help them utilize the tools better? That was a consistent theme across a lot of the stuff we talked about, which I think further validates some of what we're doing at Pebl.

Thomas: I love that. Shawn, if people wanted to connect with you or follow your work, what's the best way to be in touch?

Shawn: Yeah, you can find me on LinkedIn, Shawn McIntire. Also, this is a shameless plug, but we have an AI tool, created a little while ago: an AI chatbot that you can interact with on our website at HelloPebl.com. The tool is named Alfie, after my dog, a miniature dog named Alfie. So if you look for him, you can see him; there are some pictures of him on our website. But yeah, more to come.

Thomas: I always knew that dogs would be our masters in the future, not AI.

Shawn: That's right.

Thomas: So that's very, very fitting. Everyone, check that out and connect with Shawn. Thanks for following along on another micro episode, live here at Transform, for Future Proof HR.

Thomas Kunjappu: Thanks for joining us on this episode of Future Proof HR. If you liked the discussion, make sure you leave us a five-star review on the platform you're listening to or watching us on, or share this with a friend or colleague who may find value in the message. See you next time as we keep our pulse on how we can all thrive in the age of AI.
