2 Minute Drill: Who's Managing Your AI Agents? The Case for Non-Human HR with Drex DeFord
Episode 33 • 1st March 2026 • UnHack with Drex DeFord • This Week Health
00:00:00 – 00:05:52

Transcripts

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Hey everyone, I'm Drex. This is the Two Minute Drill. It's great to see you today. Here's some stuff you might want to know about. Let me tell you about a job opening that doesn't exist yet, but maybe it should. It's not for a nurse or a CIO or even a CISO. It's for the Vice President of Non-Human Resources.

The Vice President of Non-Human Resources would be the person leading the function that corrals all those AI agents in your environment today. And it's okay if you think I'm a little crazy at this point, that's fine. Stay with me, because this isn't coming out of nowhere. I've spent a good part of the last year thinking about and working with and researching and even building AI agents.

And so I've spent a lot of time thinking about the upside and the downside, the promise and the risk. And like most of you, I've been trying to separate all the signal from the noise. I mean, basically asking myself, how in the hell are we gonna manage all these AI agents, all these things that are coming into the environment?

And I think I may have found something interesting. It's not a solution exactly, but at least it's an interesting way to think about the problem. And here's where I started, and it's where a lot of you are telling me that you're struggling too. There are more agents in the environment than I could possibly know, it seems like, and there are new ones every day, sometimes multiple new agents every day. I can't keep track of the agents, and I can't keep track of what they're doing. It's like the wild west out there.

So how should I think about this? Well, here's where my brain has kind of made a shift in the last week. We're treating agents like software.

They behave a lot more like employees. They have access. They make judgment calls. They operate inside of frameworks that impact real people. In healthcare that means patients and families, but they also impact our business operations, and if you do research, they impact your research operations.

But it turns out there are no job descriptions for most of these AI agents. There's no onboarding, there's no regular performance review, there's no baseline training they'd need to work at our health system or to understand our culture. There's no clear escalation path when things go wrong, and it seems like right now anyone can hire these agents.

Even our vendor partners can hire new agents and just put them into our environment. Now, you wouldn't run your hospital that way if we were talking about people, but we're starting to run our AI agents that way. The signal on this, at least to me, is getting louder. Harvard Business Review recently made the case that if you wanna scale AI agents, you need to think of them like team members with defined roles and boundaries and supervision.

Deloitte is calling this a digital workforce, and Gartner is building governance models to manage their behavior. And Microsoft is designing control systems that treat agents more like identities than tools.

So a lot of different language, but I think the same signal. We are quietly, maybe unintentionally building a second workforce inside our organizations. And we have almost none of the typical management infrastructure to support it. And I'm not saying HR for agents is the right answer, but it's a construct to guide a conversation.

So what would HR for agents actually do? Well, they'd define the job. What's the agent allowed to do? What's the agent absolutely, positively not allowed to do? They'd handle onboarding. What data can it use to learn from? What systems can it touch, and what systems should it never touch? They'd manage performance.

Is it accurate? Is it drifting? Is the agent escalating when it should? They'd manage policy enforcement. Can we audit its decisions? Can we explain its decisions, and can we actually describe exactly what it's doing? Can we stop it instantly if something goes wrong? And they'd manage the life cycle, because agents shouldn't just be deployed and forgotten.

They should be promoted and restricted and retrained, or retired, or maybe even sometimes fired. Here's where this matters for healthcare. We've spent years building governance for people: credentialing and privileges, access control and clinical oversight. And now we're introducing non-human actors into those same systems.

And we're assuming that our existing structures will hold, and that's a risky thing to do without a construct to manage them, because these AI agent employees don't get tired. They don't slow down. And when something goes wrong, it happens at machine speed. And again, I don't know that this Vice President of Non-Human HR is exactly the answer, but it might be better framing for the conversation.

And right now, better framing might be exactly what we need. Thanks for listening. That's it for today's Two Minute Drill. I'd love to hear what you're thinking; return fire is always welcome. By the way, toward the end of the week, Thursday, Friday, Saturday, sometime in there, I publish a newsletter. The newsletter is the Two Minute Drill Extra.

It has a transcript of this podcast, plus it has eight or ten other news stories that you should probably be paying attention to. I'll put a spot in the comments where you can click and go sign up. And again, thanks for being here. And as always, stay a little paranoid. I'll see you around campus.
