Newsday: MFA Isn't Enough and Why Healthcare Can’t Just Hack Back with Preston Duren
Episode 107 • 20th October 2025 • UnHack with Drex DeFord • This Week Health
00:00:00 00:31:17

Transcripts

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.


Learn more at fortifiedhealthsecurity.com

I'm Bill Russell, creator of This Week Health, where our mission is to transform healthcare one connection at a time. Welcome to Newsday, breaking down the health IT headlines that matter most. Let's jump into the news.

Bill Russell: Preston Duren of the Threat Defense team at Fortified Health Security is joining us today.

And wow. So I guess today we're gonna talk about threats. Are we gonna talk about threats today?

Preston Duren: I mean, hopefully some AI threats. I mean, that's what most people are interested in. It's fun to talk about. And there's a lot of information out there, it's just not a whole lot of actionable information, I guess, or backed-up information.

So hopefully we have some good conversations around AI and some threats, and make it a pretty good thing to listen to.

Sarah Richardson: Can we talk about insurance too, like how AI is changing the cybersecurity insurance landscape, since everyone's under financial pressure?

Bill Russell: Sure.

Alright, here's what we're gonna do. We're gonna go over to the This Week Health website. We're gonna go to the cybersecurity section, because we now organize everything by topic. We will hit news headlines quickly, and we will get to some of those topics we just talked about.

Bill Russell: The first story is this payroll pirate attack, where attackers are essentially using phishing to take over employee payroll accounts.

Talk to me about MFA. Is MFA changing? It sounds to me like it needs to change. Like, it's not something we can just put in place and say, oh, we've got MFA, we're good. We're golden.

Preston Duren: Years ago, you know, MFA was just starting to get popular, and one of the first ways that a lot of us in healthcare were able to get buy-in from the business was the e-prescribing of narcotics, right?

Of controlled substances. Certain states kind of required it, and that was our way of dipping a toe into it. Say, hey, we've gotta do it, right? Let me get a hundred licenses, and then the next year I can renew and buy 5,000 licenses. So that was the first step in.

Now, you know, everybody has MFA, or at least everybody is expected to have it.

And I remember working with certain healthcare, you know, EHRs and things like that, and their support models at the time couldn't really handle it. They maybe had 200 support guys who couldn't really handle that volume, and we didn't know how to deal with that, right? So it was kind of excluded.

And so you've gotta have workarounds and things like that. I think now people are starting to get more used to it, and it's like, hey, look, for cyber insurance, for different things, we've got to have this, we require this in our network. And so people are getting a little bit more on board with it now.

Bill Russell: Well, the friction. So Drex, I wanna talk to you about the friction a little bit. The friction of MFA has always been sort of the downside of it. You know, you pull it up, you hit your password, and then you have to do something, some other crazy thing or some other aspect. But I'm noticing more and more, like with my Apple devices and whatnot, now, you know, I just do the Face ID and it has filled in the password before I

Drex DeFord: can type it in,

Bill Russell: before I can type it in.

So I mean that friction has always been a challenge at health systems. What's the fine line? What's the balance?

Drex DeFord: Yeah, actually, in today's 2-Minute Drill, which will probably be published a couple of weeks before this episode comes out, I'm gonna talk about this whole issue too.

The reality with this payroll pirate issue is, I think MFA as we have fielded it has, for the most part, become kind of old school, and the bad guys have kind of figured out how to get around it. So they're doing it by, like, MFA-bombing your phone until you just relent and say okay. Somehow they've gotten your username and password from somewhere else.

Or they phish you to a fake login page, and they get your username and password and relay the MFA approval in real time.

They are in the system. So these FIDO-compliant, phishing-resistant technologies are really kind of like the next version of MFA, and it's passkeys and things like that, which ultimately really makes it easier for the end user too, because now it's something that's sort of in the device or built into the device.

They may not even need passwords. They can just go right into the system. And these are things that, you know, you were talking about: if you've used your fingerprint or you've used Face ID, you've kind of used this kind of technology already. And that's the place I think we have to move to as we go down the road here.
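The MFA-bombing pattern Drex describes is also detectable on the defender side: a burst of push prompts against one account in a short window is itself a signal. A minimal sketch, with illustrative names and thresholds (nothing here comes from a real product):

```python
# A minimal sketch of one mitigation for MFA push bombing: flag (or
# auto-lock) an account once it exceeds a threshold of push prompts in
# a short look-back window. Names and thresholds are illustrative.
from collections import deque

WINDOW_SECONDS = 300   # look-back window
MAX_PROMPTS = 5        # prompts tolerated per window

class PushFatigueDetector:
    def __init__(self):
        self.prompts = {}  # account -> deque of prompt timestamps

    def record_prompt(self, account, ts):
        """Record an MFA push at time ts; return True if the account
        should be flagged for possible MFA bombing."""
        q = self.prompts.setdefault(account, deque())
        q.append(ts)
        # Drop prompts that have aged out of the window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_PROMPTS

detector = PushFatigueDetector()
# Six prompts in one minute trips the detector on the sixth.
flags = [detector.record_prompt("j.smith", t) for t in range(0, 60, 10)]
print(flags)  # five False, then True
```

In practice the response to a flag (lock the account, require number matching, alert the SOC) is a policy decision; the point is that the bombing itself leaves an obvious trail.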

Bill Russell: Oh crap. Another acronym, like FIDO.

I am personally getting overwhelmed with the AI stuff, and I'm taking my eye off the ball on the cybersecurity stuff. Like, I don't know how I stay current on all this stuff.

Sarah Richardson: So when you talk about it, especially when it's new, we say FIDO, or we say Fast Identity Online with a two after it, and we just say to people it's an open authentication standard.

It enables your passwordless login. It can get a little confusing because you tell people it's using cryptographic keys instead of a password. But if you go back and just say, hey, FIDO is this, and what it actually does is what you're already using, and that's, for example, facial recognition, because it's stored on your device.

When you can break it down and say this plus this equals that, people get it.

Then they're talking to people about it at home. I always think about, how do I tell the 10-year-old next door, how do I tell my 84-year-old aunt? When both of them understand, then I know that I understand it as well, and I can adapt it to my audience and make it part of the vernacular, not just something that people are afraid of.
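Sarah's "cryptographic keys instead of a password" point can be sketched in a few lines. This is an illustrative challenge/response toy using the Python `cryptography` package, not the actual WebAuthn/CTAP protocol: the server stores only a public key, and login proves possession of the device-held private key by signing a fresh challenge, so there is no reusable secret to phish.

```python
# Toy sketch of the idea behind FIDO2 passkeys: the server keeps a
# public key; the device proves it holds the matching private key by
# signing a random challenge. Hypothetical illustration only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# "Registration": the device creates a key pair; the server stores
# only the public half. The private key never leaves the device.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# "Login": the server sends a fresh random challenge; the device
# signs it with the private key.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies the signature (raises InvalidSignature on
# failure). No shared password crosses the wire, and a replayed
# signature is useless because the next challenge will differ.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("login verified")
```

The real protocols add origin binding, attestation, and user verification on top of this, which is what makes passkeys phishing-resistant rather than merely passwordless.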

Drex DeFord: I think the 10-year-old next door might be the one who's explaining this stuff to me. In a lot of cases, they're already using it on their Xboxes and other things; they're already down the road with it. So I think mostly plain English, mostly non-technical, how do I come up with something that they already know about or use today as a way to explain it, is probably the easiest route.

Bill Russell: Well, let me kick off the next article with a little bit of pushback with it to say, platforms suck. Okay, so let's just start there, because one of the things that platforms do is they consolidate a lot of stuff and they give a single point of attack for these threat actors.

So the next article is about platform vulnerabilities.

I'm curious, like as we keep going to platforms, are we increasing our exposure? Are we decreasing our exposure? , Who wants to field that one to start?

Drex DeFord: I say we ask Preston.

Preston Duren: Again, I think that all of the attacks, most of the attacks, can be solved by kind of going back to the basics in general, right?

When you've consolidated down to fewer platforms, it's much more manageable to configure and secure them well.

Right. So really, the answer depends on the attack, honestly.

Bill Russell: You would rather have fewer platforms? Yes. Easier to defend then. Okay. Yes. Because I can have a guy and I

Preston Duren: can say, you're the Palo expert, or you're the insert-thing-here expert. It's your job to make sure it's configured right.

It's matured. It's secured. Right. So, yes. I think boiling them down is safer.

Drex DeFord: You and I have talked about this. Sometimes what you see, Preston, from your SOCs are things like a lot of scanning of things like Palo Alto in the days before something bad happens.

It's almost like the bad guys are trying to figure out, are these platforms vulnerable? Have they put the patches in at all of these organizations? Which kind of means they're raring to go. Coming right behind that, they have a zero day all teed up and ready to go, and they're trying to figure out where's the right place to weaponize it.

Absolutely. You see that too, right?

Preston Duren: Yeah, we do. And I mean, I have so many examples, but you know, you don't wanna waste a zero day on, let's be honest, a small rural hospital, right? You want to use your zero day against Oracle, right? There's a news article about, you know, some of the Oracle stuff. And even going back to the MFA conversation, we do IR where the help desk will be outsourced, and an attacker will call, reset the MFA device, and then wait till the next shift.

Or later on, when it's a different person on shift, and then call and reset the password.

Drex DeFord: Uh-huh.

Preston Duren: I mean, it's a multi-stage attack, and all these attacks are against the human at this point, right? So, to go back to your question, we absolutely see the scanning of these devices and things like that, and they're ready to weaponize as soon as an exploit comes out, right?

They'll use it against a firewall in a healthcare organization just as fast as anywhere else.
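The pre-exploit signal Drex and Preston describe, a surge of scans against an edge device in the days before a zero day drops, is simple to surface from log counts. A hedged sketch with made-up data and an illustrative threshold:

```python
# Sketch: flag a day whose scan count against an edge device (VPN,
# firewall) is far above the trailing average, the kind of spike seen
# before a zero day is weaponized. Data and thresholds are illustrative.
daily_scan_counts = [12, 9, 14, 11, 10, 13, 96]  # external probes per day

def flag_scan_spikes(counts, factor=4, warmup=3):
    """Return indices of days whose count exceeds `factor` times the
    average of all prior days (after a short warm-up period)."""
    spikes = []
    for i in range(warmup, len(counts)):
        baseline = sum(counts[:i]) / i
        if counts[i] > factor * baseline:
            spikes.append(i)
    return spikes

print(flag_scan_spikes(daily_scan_counts))  # [6] -> day 6 looks like pre-attack recon
```

A real SOC would baseline per source and per service, but the takeaway is the same: the recon phase is visible before the exploit is, which is exactly when you want patches in.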

Bill Russell: Here's the question I wanna ask: is cybersecurity controls and response going to be the first thing we give over to AI in IT? Like, just completely give it over, because the attackers are going to do it. The attackers are gonna give it all over to AI, and, you know, the phone calls will come from a bot.

Drex DeFord: They're doing a lot of that today.

Bill Russell: Yeah, absolutely. And so their attack speed and their agility is gonna get faster and faster. And human-in-the-loop, to me, sounds like a flaw more than a feature in cybersecurity prevention.

Preston Duren: Drex and Sarah probably have insights on this too.

Even if you give it a specific scope, it can still take actions you didn't intend.

And so I think that there should always be, at least for right now, a layer of human on top of that, right? We've gotta make sure, I think, that we are able to control that.

Bill Russell: Your answer's gonna be different a year from now.

Preston Duren: Probably. And in a year from now, we can talk and it may be different.

As of right now, though, it's just so dangerous. I mean, the exfiltration, everything. I use the AI tools as an accompaniment, like, to help me be faster, things like that, and I'm very careful about what I put in them. It used to be that data exfiltration was files that attackers were uploading.

Now it can happen through the tools themselves, so you want human intervention before it's able to take that kind of action.

Bill Russell: Sarah, what are you worried about with AI, from a security and compliance perspective?

Sarah Richardson: Here's what I think about. I've been using a term recently because I've been talking to some BCDR friends, and, knocking down acronyms again, that's business continuity and disaster recovery. One of the things that's important is disaster resilience, the whole engineering of resilience in your organization.

And that all came about because I literally asked the question, how should you use AI in your cyber defense mechanisms? So to me, remember when we had to learn the network stack? I remember "Please Do Not Throw Sausage Pizza Away" for the whole network stack.

Preston Duren: The OSI model.

Sarah Richardson: Yes, the OSI model. Okay, now you add the human or the AI piece.

You want a human to verify before you act when AI flags something in your environment.

Because if you just go black box, it is gonna get pretty dangerous pretty fast, especially if AI can infiltrate and recommend bad decisions being made, and you have to go back and audit that. And then just assume it's going to be targeted, so you set up your sandboxes in a way that says, I know this is gonna be a primary target.

So you're working with all your partners on this. How are you handling an adversarial attack? What if your platform goes down? How did it make the decision? How do you prevent attackers from poisoning your models? And, in healthcare specifically, since a lot of our partners serve more than just the healthcare vertical, what additional controls do you have in place, because the healthcare environment is that much more acute?

Drex DeFord: There are kind of two models here, allow all and deny all, and in both you go back and see how the agent did.

In the allow all model, this is where you're basically gonna say, the AI is super well trained in this, and when it makes a recommendation, we're just gonna let it take the action. In the deny all model, it's gonna be something that's super, you know, touchy, and we're gonna say:

You can make the recommendation, but you can't take the action until the human has actually seen it and told you that it's okay. Maybe even interrogated the agent to make sure they're really clear on what's gonna happen, and then they'll let the agent go off. But in both cases, you're gonna go back and look at the after-effect of what happened when the agent took the action.

And this is like having an employee: you go back and look at their work and say, well, you know, it would've been better if you'd been a little bit more to the right or a little bit more to the left. Or they get really good at a particular piece of work they're doing, and you shift them from deny all to allow all because they're really proficient and really good at that.

I think we're going to have to treat these agents a lot like employees.
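The allow-all versus deny-all split Drex walks through can be sketched as a tiny policy gate: in allow-all the agent's action executes immediately and is audited afterward; in deny-all it is queued until a human approves. All names here are illustrative, not from any real platform:

```python
# Hedged sketch of an "allow all" vs "deny all" gate for AI agent
# actions: allow-all acts first and audits after; deny-all queues the
# action behind a human approval. Illustrative names throughout.
AUDIT_LOG = []        # every executed action lands here for review
APPROVAL_QUEUE = []   # deny-all actions wait here for a human

def handle_recommendation(agent, action, mode, execute):
    """Route an agent's recommended action based on its trust mode."""
    if mode == "allow_all":
        result = execute(action)                   # act first...
        AUDIT_LOG.append((agent, action, result))  # ...review after
        return result
    elif mode == "deny_all":
        APPROVAL_QUEUE.append((agent, action))     # human gate first
        return "pending human approval"
    raise ValueError(f"unknown mode: {mode}")

def approve_next(execute):
    """Human reviewer releases the oldest queued action."""
    agent, action = APPROVAL_QUEUE.pop(0)
    result = execute(action)
    AUDIT_LOG.append((agent, action, result))
    return result

# A trusted agent isolates a host immediately; a newer agent must
# wait for a human before blocking an IP.
run = lambda action: f"executed: {action}"
print(handle_recommendation("agent-a", "isolate host", "allow_all", run))
print(handle_recommendation("agent-b", "block ip", "deny_all", run))
print(approve_next(run))
```

The employee analogy maps directly: promoting an agent from deny-all to allow-all is a one-line mode change, justified by what the audit log shows about its past decisions.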

Bill Russell: Yeah, I agree with that statement, and then I will say, I think you're all wrong, mostly just for fun. And here's my experience. So my experience is, they came out with Claude 4.5, and Claude 4.5 thinking has a 3% increase in its coding over the previous version of Claude.

And you think, how much of a difference does that make? And I'm here to tell you it makes a massive amount of difference. It is a lot smarter. It can read my entire code base; it can make recommendations I wasn't even looking at. That's how far it's come in a couple of months, to the point where I am now almost entirely coding with prompts, building out applications with prompts. Almost entirely. And I'm sitting there going, well, gosh, this is one of the best programmers I've ever worked with. It understands the entire code base. And I can give it even more context.

I keep giving it more context; I've connected it to more of what I'm working on.

I agree, Sarah, we're gonna need transparency. We're gonna need to be able to see what these models are doing. We're gonna need to be able to intercede in these models. But I think it's gonna be more of Drex's allow all, with transparency, and then come back in. I also think it's gonna be like hiring these agents, and we have to keep coming in and looking at the agents to make sure that they're performing the way we expect them to perform.

And then, quite frankly, I think we have to respond to the attacks. And I think the attacks are gonna get more sophisticated, faster. And I don't think humans are gonna be able to keep up.

Drex DeFord: When you look at the big companies right now who have big security platforms, they are deploying AI agents into those platforms, and the idea is that those agents become more and more capable over time.


They keep coming out of agent school, and they've got more and more capabilities, and so you trust them more and more over time. I think that's true. I think that's gonna happen. And, you know, look at where we are. I mean, what has it been, three years since GPT was launched or something? It doesn't seem like it's been that long.

And look at where we started and where we are today and how fast things have changed just in the last six months. In the last two months, like it's going to, it's gonna keep spiraling like that.

Sarah Richardson: But I'm gonna go back to, let's just say you've got 20 somethings coming out of whatever schools, certifications, colleges, whatever.


And then that whole microsegmentation aspect. Like, all the fundamentals will always be true, and if you don't know how to still do them manually or otherwise, you're gonna forget how AI is building upon all of them. So that plumbing and engineering, to me, is still really key and something that I will always keep at the forefront of the conversation.

Because if you don't know those things, how the heck are you gonna add technology on there that's supposed to keep those things safe as well?

Preston Duren: Just to kind of piggyback off that, you know, there's one thing I don't hear people talk about.


But what I don't hear people talking about is the risk of what AI is able to infer, correctly, based on the data it legitimately has access to. That's a dangerous thing, right? When you're taking that leap from what it has access to versus what it's able to infer, end goals or, you know, data, things like that.

I mean, I think that's a big, you know, step forward, and that's a place where I don't think anybody really understands how to do AI securely right now. It's happening too fast. And for healthcare companies, I mean, if you don't adopt AI, you're gonna get left behind. So you have to adopt AI, right?

But how do we secure against that?

Bill Russell: Preston, the number one cause of cyber events at healthcare organizations is probably third party, human error, phishing. Human error and phishing, probably tied to third parties, though.

Sarah Richardson: Yeah, but a lot of it, a lot of it is third party risk right there, Bill.

Bill Russell: No, absolutely.

Bill Russell: It's human error and phishing.

Drex likes people. My job is to push people and to say, absolutely change your skills. Your skill is gonna be driving the AI machine that is gonna be doing cybersecurity in two years. All right, so what skills do you need today to really understand what that's gonna take? And Preston, what are you gonna say to somebody the first time they come to you and say, hey,

we're thinking of turning over the keys to AI. Like, we want to turn on the allow all, as Drex said, and let it start responding to these events. I mean, you're probably gonna say, okay, caution.

Preston Duren: Again, I think it depends on the situation. I think it's about understanding the rules of engagement, like, you know, when you say turn it over, turn over the keys.

What are the guardrails, and how do you know when it's starting to go the wrong way?

I mean, ultimately, I think that AI is a really good thing, but I think you have to adopt it thoughtfully, right? You have to understand what the actual risks are, and then what we can do to minimize those risks. Because I don't think not using AI is an option, and I don't think anybody on this call thinks that's an option. Mm-hmm.

Bill Russell: Yeah, let's turn to money. Sarah brought it up; she usually brings it up in our conversations. Apple boosts bug bounty to $5 million for critical exploit discoveries. I found that really interesting: $5 million for a critical exploit. Are we gonna start to see, I don't know, health systems and others do this? Like, hey, try to hack into our system and we'll give you a hundred grand, 200 grand, 500 grand. I'm curious, are we seeing this already, Drex?

Drex DeFord: I mean, I think you see pen testing in health systems. Health systems are paying companies like Fortified to come and actually try to break into their systems.

The interesting thing to me is that there's a market on the criminal side for these exploits too.

It makes me wonder, for companies like Apple, who are now offering $5 million for the very worst possible bug you could find, the top category, like you could wipe out Apple. That's the way I think a lot of those companies get people to come in and hammer them with white hats on, as opposed to, you know, the bad guys coming in and trying to take advantage of it.

That's how you make sure the white hats win, that they stay ahead of the black hats.

Sarah Richardson: I want the hack-back attack. So, what I mean by that: when I see health systems get attacked, ransomware as an example, to me that's human warfare. It's actually illegal, and we as a country don't go and attack back from a government perspective yet.

What if you're a healthcare system? So we talk about the 229 health systems and our HCSP program. What if we get attacked, we get ransomware, and we say, well, not only are we not paying it, we're going after you. And you have a partner who can hack back and go after them. Drex is like, absolutely not. Yeah. This is my other side of it.

Go after the bad guys.

Bill Russell: I like it, Sarah. Let's do it. I think it sounds like a good idea. It only takes one health

Sarah Richardson: system to go after it. And these are like ninja guys. These are the people you don't talk about. These are the real hoodie guys. What if you did hack back and there was some kind of a secret, you know.

My secret handshakes usually get me into bars, not into, like, hacking the

But think about that.

Preston Duren: These two were like, no. No, I love the idea. On paper it sounds fun. I think we may underestimate the level of spite and ego in some of these attack groups, and they have an unlimited amount of time to just make you pay for that sin. Drex, I'm interested to hear what you think about that.

Drex DeFord: You know, first of all, I think it's technically illegal for you to actually go hack back on somebody else. Now, you can hack back, and you see companies like Microsoft

Bill Russell: and I know people. I

Drex DeFord: know and that's what I'm saying.

There are companies like Microsoft and, you know, CrowdStrike, and there are others too that do hack back. They do it with the approval of the US government. It's usually tied to some kind of theft of intellectual property, and the government allows those companies to take particular steps to try to protect themselves and their intellectual property.

And what a lot of people don't know is that there is an ongoing effort behind the scenes.

Arrests are made. A lot of those things are happening. It's just not publicly talked about until the result flashes up on the screen.

Bill Russell: Let's close with cyber insurance. I think it was a good call-out from Sarah early on, and we'll close with it. You know, we've seen this sort of morphing over time, because the prices went up so high and the requirements were so specific.

They're implementing certain controls just to meet the requirements.

Preston, I'll start with you, Drex, then come to you. I would love to hear what you guys are thinking.

Preston Duren: I mean, after talking to all of our clients, you hit the nail on the head: the requirements are getting more and more strict, and while the requirements are more strict, the cost isn't going down. The cost just continues to escalate.

And we've started to see people kind of self-insure and say, look, I can run the math and put X amount aside, and it's cheaper. Because, even if you look at it, I asked a lot of our clients, what determines your SLAs for patching? You know, criticals in 15 days or whatever.

And almost every single one of them said, our cyber insurance company. And I'm thinking in my head, that's interesting, because I'm looking at your vulnerability data and you are not even coming close to those SLAs. So what does that mean? If something happens, the cyber insurance company's gonna look at that and be like, I'm not paying you out.

But you've been paying all these high premiums. I think it's a dangerous game, honestly.
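The gap Preston describes, paying premiums against an SLA you are not actually meeting, is easy to quantify from your own vulnerability data. A small sketch with made-up dates and an illustrative insurer requirement:

```python
# Sketch: compare actual time-to-patch for critical vulns against the
# SLA a cyber insurer requires (e.g. criticals within 15 days). All
# data and the SLA value here are hypothetical.
from datetime import date

SLA_DAYS_CRITICAL = 15  # illustrative insurer requirement

# (vuln id, detected, patched) -- made-up vulnerability records
vulns = [
    ("CVE-A", date(2025, 9, 1), date(2025, 9, 10)),  # 9 days: within SLA
    ("CVE-B", date(2025, 9, 1), date(2025, 10, 6)),  # 35 days: miss
    ("CVE-C", date(2025, 9, 5), date(2025, 9, 30)),  # 25 days: miss
]

met = sum((patched - found).days <= SLA_DAYS_CRITICAL
          for _, found, patched in vulns)
compliance = met / len(vulns)
print(f"SLA compliance: {compliance:.0%}")  # SLA compliance: 33%
```

Running this kind of check before renewal, rather than after an incident, is the difference between a policy you can actually collect on and one the carrier can walk away from.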

Drex DeFord: I agree with you. I mean, I think it's a dangerous game too.

And so there's a lot of escape routes and escape clauses in these insurance policies. And I would say you really need an expert to kind of look at these things and make sure that you're covered like you think you are.

Bill Russell: If you're not gonna comply, there's no reason to pay the money, because you're not gonna get paid out at the end of the day anyway. And we saw actually that self-insured came up a bunch. They're going for different types of policies now, which is like a business continuity policy.

Yeah.

Rather than a cyber insurance policy. It's been interesting to see that change, although I would imagine the same kind of requirements still apply. Go ahead. They still

Sarah Richardson: require proof of the controls you have in place. We talked about MFA, disaster recovery, incident response, I mean, all of it.

But then you start having board-level policy conversations, because you can say, hey, these are required for insurability, but if you don't have the cash flow, it doesn't really matter.

Drex DeFord: There's also this sort of odd situation that sometimes happens when you're insured and you have an incident: the insurance carrier comes in with their own team and tries to take over the event.

Now, sometimes the cheapest route out for them is to pay off the bad guy and move on with it. It's cheaper than being down for 45 days or 60 days and then having to pay all this other stuff to the health system. It's like,

Bill Russell: yeah, we just, now we're just

Drex DeFord: gonna pay him off. Right. What Drex

Bill Russell: is saying right now does not necessarily reflect the views of This Week Health.

They have their IR people. I'm not saying they're bad, but they

Drex DeFord: follow the money; they're driven by the money. And I'm not saying, I mean, there are some great insurance companies out there, some great IR companies that are affiliated with those insurance companies. I just don't want somebody coming in and taking over this whole event.

Bill Russell: Yeah. Well, Preston, we'll have to have you back on. This was a fun conversation, and next time we'll start with pay the ransom, don't pay the ransom,

Preston Duren: Hack back. I love the spirit of hack back. Look, I'm bought in on the spirit of it, I'll say that.

Sarah Richardson: Human warfare, Preston. We got it. That's right.

Preston Duren: That's right.

Bill Russell: Oh, Preston, thanks for being here. Drex and Sarah, always great. And I look forward to the next time.

That's Newsday. Stay informed between episodes with our Daily Insights email. And remember, every healthcare leader needs a community they can lean on and learn from. Subscribe at thisweekhealth.com/subscribe. Thanks for listening. That's all for now.

