Newsday: AI Post Executive Order: Changes and Innovations with Drex DeFord and Robbie Hughes
Episode 226 • 20th November 2023 • This Week Health: Newsroom • This Week Health
00:00:00 – 00:13:19

Transcripts

 This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on This Week Health.

(Intro)   with the staffing challenges, it matters what we do. It matters the sequence. It matters that this stuff works well first time. And if you're going to get the return you expect on a lot of these interventions, they need to be specifically applied to the right patients in the right way.

Welcome to Newsday, a This Week Health Newsroom show. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years we've been making podcasts that amplify great thinking to propel healthcare forward.

Special thanks to our Newsday show partners, and we have a lot of 'em this year, which I am really excited about: Cedars-Sinai Accelerator, Clearsense, CrowdStrike, Digital Scientists, Optimum Healthcare IT, Pure Storage, SureTest, Tausight, Lumeon, and VMware. We appreciate them investing in our mission to develop the next generation of health leaders.

Now onto the show.

(Main)   📍 Hey, welcome to Newsday.

I'm Drex DeFord. Nobody seems to know where Bill is. We're at the CHIME Forum in Phoenix, Arizona. We're happy to be here. There's a lot of cool stuff going on, a lot of interesting stuff in the news. I think we're probably going to stick mostly to one particular topic. I'm Drex DeFord, Executive Healthcare Strategist at CrowdStrike.

You see me from time to time on with Bill doing this. I'm going to play the part of Bill today, and with me is Robbie. Introduce yourself, Robbie.

Drex, nice to see you. I'm Robbie Hughes. I'm the founder and CEO of Lumeon, and we're a clinical workflow automation company, really helping providers get more done with less.

Yeah, awesome. So we had a little chat about this ahead of time. The one Newsday item that I think is really interesting from the past couple of weeks is the Executive Order from the Biden Administration on Artificial Intelligence. There's a lot of stuff in the order, and there are three things that are really tied to this executive order. One of them has to do with testing for safety. So when organizations, and this is primarily through the Department of Commerce, when organizations are going to use an AI model, the executive order says there should be some testing for safety.

Now, the devil's in the details, and this is always the case with an executive order: what does it mean? How do we do it? So that's the first one. The second one is a general comment to all of the departments in the U.S. government saying, AI is coming, you should get smarter about it, and you should be thinking about how you're going to use AI.

And the third one's around high-skill immigration, making sure that we've got the human resources, no matter where they come from, here in the U.S. to help us with this. I know you've been having a lot of conversations about AI. There have been a lot of conversations out here about generative AI and other aspects of AI. What have you heard today?

I think the human aspects, the safety aspects in healthcare: this is all critical stuff, because there just aren't enough bodies to do the work that we need to do. And this kind of stuff has a lot of promise, but it's critical that it's done safely. I've heard a lot of providers talking about the fact that they're using a lot of the new generative AI capabilities.

Particularly around things like in-basket processing and things like that. But I do think today we're still in a situation of processing messages coming in, still having people review them, and then still having to edit. Maybe there's an efficiency there, but there's a lot of work to do to get it working safely at the scale we need to deliver on a lot of its promises.


Do you find the processes that are already pretty well defined, and those are the things that you go after for automation, for AI, in the beginning? Or are there other places where you think we should be going with this?

So I think a lot of this stuff, you've got to think about it a little bit like a pyramid of need. AI really sits on the top. It's the icing on the cake to optimize your...

I love the Maslow's hierarchy.

Yeah, this is exactly it. The AI helps you prioritize and manage really at the pinnacle of your optimization. So you've got a thousand things to do. What should I do first? What should I do next? That's great, within the context of workflow, obviously. But if you don't have a stable base, if you don't know what you're doing, if you can't do it reliably, then a lot of these things are not going to help that. Right. You can cover up a lot of sins, but fundamentally, if your training set's noisy, if what you're building on is really not that stable, I think the value to be had is still pretty limited.

So you've got to pick the use case really carefully.

Yeah. I think for me, generative AI was a really big wow moment, because I started playing with it and using it. I've talked to Bill about this on the show. ChatGPT is trained on everything on the internet, so it's a cool parlor trick, but you have to be really careful and thoughtful about the questions you ask and how you ask them, and then be really skeptical of the answers you get back, because you don't really know the sources. So the trick to generative AI, in your Maslow's hierarchy of needs idea, to me, is that you have a really good, pure, clean set of data that you want to ask questions against and have the AI use, and then the other part is really knowing how to ask the questions, because that turns out to be really important.

Yeah, I love it for data mining. I love it for data discovery. I love it for a lot of these things where there's a good understanding of a linked data set (it doesn't even have to be linked, but a data set that's clean) and you can rely on the stuff that we would use it for.

Yeah.

I was going to ask that: what are you using AI for, or what are you thinking about in your company's business model? Where are you going?

So what we're doing is we're using graph technology to work out, in real time, what the next best step for any given patient is. And then we're using automation to make sure that it happens reliably, in a way that's personalized to the patient.

That's the core of what we do. Within that, there are a couple of very interesting use cases for AI, and one of them I mentioned earlier, which is around this prioritization piece. So if I've got a population of, I don't know, 100,000 patients, and I want to run a bunch of programs against them, and I want that fully executed and scaled and run to the highest level of efficiency, we do that today.
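The "next best step" idea described here can be pictured as a small decision graph. This is only an illustrative sketch: the step names, patient fields, and rules below are invented assumptions for the example, not Lumeon's actual model.

```python
# A care pathway as a tiny directed graph. Each edge carries a predicate
# over the patient's record; the first matching edge is the next step.
# All step names, fields, and rules are hypothetical, for illustration only.

def next_best_step(graph, current, patient):
    """Return the first outgoing step whose predicate matches the patient."""
    for step, applies in graph.get(current, []):
        if applies(patient):
            return step
    return None  # pathway complete, or no applicable step

# Edges: current step -> [(next step, predicate)], checked in order.
pathway = {
    "referral": [
        ("urgent-review", lambda p: p["risk_score"] >= 0.8),
        ("schedule-visit", lambda p: True),
    ],
    "schedule-visit": [
        ("pre-op-labs", lambda p: p["needs_surgery"]),
        ("follow-up-call", lambda p: True),
    ],
}

patient = {"risk_score": 0.9, "needs_surgery": False}
print(next_best_step(pathway, "referral", patient))  # high risk routes to urgent review
```

The ordering of edges encodes priority, so the same graph yields a different, personalized step for each patient record it is evaluated against.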

What we're not doing, which I'm excited about, though, is: how do I determine which are the top 20 patients that I need to do something about now? And so how do I rank, or how do I stack, those patients based on some level of population analytic, rather than a patient-level analytic? So we're a little bit odd in that we've come at this problem in reverse.

We've looked at the patient, the personalization: what do we do for the patient that's the right thing? But the job that we have now, and I think these tools are super cool to help us solve it, is how to look at that on a cohort or an aggregate basis. Right. Which I think is a great opportunity.
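That cohort-level ranking can be sketched as scoring a whole population and surfacing the top N patients to act on now. The weights and fields below are made-up assumptions for illustration, not an actual Lumeon analytic.

```python
# Hedged sketch of cohort-level prioritization: score every patient in a
# population and surface the top N. Scoring weights and record fields are
# illustrative assumptions only.
import heapq

def top_patients(patients, n=20):
    """Rank patients by a simple population-level priority score."""
    def priority(p):
        # Hypothetical score: weight clinical risk plus days overdue
        # for follow-up, capped at 30 days.
        return 0.7 * p["risk"] + 0.3 * min(p["days_overdue"] / 30, 1.0)
    return heapq.nlargest(n, patients, key=priority)

cohort = [
    {"id": "a", "risk": 0.2, "days_overdue": 0},
    {"id": "b", "risk": 0.9, "days_overdue": 10},
    {"id": "c", "risk": 0.5, "days_overdue": 45},
]
print([p["id"] for p in top_patients(cohort, n=2)])  # highest priority first
```

A real system would replace the hand-tuned weights with a learned population model, but the shape of the problem (score the cohort, act on the top of the stack) is the same.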

I mean, it's interesting, right? CrowdStrike uses a graph database, what we call Threat Graph. We've been doing this for about a dozen years now. So we have 30,000 customers, and we have sensors on all those endpoints at all those customers. And those sensors feed to the cloud, to Threat Graph, what is happening on all those individual endpoints.

And then we built ML into that. I mean, just thinking about the sort of correlation of what we're doing here, how these things are a lot alike. We've built ML to be able to look at the information in that graph as it's built and look for patterns, for something that isn't quite right.

That's exactly it. And then be able to take action on that. Yeah. And then every time we find something bad that happens, it updates the machine learning model which then gives community immunity to everybody in the platform. But you're talking about kinda doing the same thing for patients. Yeah, and then,

in a clinical setting, can you imagine if every patient was always on a plan, you knew what the plan was, and it was always constantly updated?

Yeah. And then you had a predictive model that said, okay, based on where Drex is right now, and what we know about Drex, we have a high level of concern that these three things are going to happen, and this is the evasive action you need to take today. It's amazingly cool stuff, but again, to your point, it's the same use of predictive technology plus graphs.

Yeah. To work out exactly how you're going to traverse the graph and work out what's going to happen next.

The stuff for a specific health system is one tier of this, right? But how do we get to the higher tier of pulling data from all the health systems and putting that into the graph database, so that you really have a universal best practice that you're working from?

So this is part of the challenge, right? I think people spend a lot of time focused on clinical best practices: how do I make the right clinical decision? But a big neglected area of opportunity is the operational best practice.

So how do I make the right decision in the context of what we can actually execute on? Yeah, and I think of it, again, as this sort of two-by-two, and we're going up into clinical excellence while not focusing on operational excellence. So how do I communicate the right decision to the right person to execute, at the right time, in the right context, personalized to that patient? If you're not doing that, then the clinical decision doesn't count for much, because you're just not going to see it through and get the outcome you intend.

Yeah, timing is everything, right? I mean,

ultimately. But again, this then causes a broken loop, because then you've got a challenge where, okay, I've made this decision, I've documented it, but it didn't have the intended outcome.

Not because the decision was wrong, but because it wasn't executed well. So how do you bring that operational knowledge and know-how into the clinical decision, so that you're doing the right thing and you're actually getting reliable execution? So that's what we're excited about.

I love that. So I'm a Toyota production guy.

There you go. I lived in Japan for three and a half years, and I've been back in the years since to spend time with Toyota, Yamaha Piano, and others. The standard work you're talking about really is the hard part, and it's not cookbook medicine. It really is trying to figure out what is the best way, in what order, and when to do things to have the best outcome for patients and families.

Specifically personalized to the patient. And I think this is a challenge I'm hearing a lot about here: people are talking, and there's a lot of excitement about lots of different types of solutions, but they're almost being applied in a one-size-fits-all manner. I had a conversation about hospital-at-home, and some people were really excited about it.

Other people were saying it's not for us. The reality is none of these solutions are going to be the best fit for everybody all the time. The job is: how do you personalize it and orchestrate the right intervention for the right patient at the right time? And this is just interesting.

Like, this is not a problem that people have really invested in yet, because in a fee-for-service model, whatever you do, you're paid on a cost-plus basis, so it's fine. But now, with the staffing challenges, it matters what we do. It matters the sequence. It matters that this stuff works well the first time. And if you're going to get the return you expect on a lot of these interventions, they need to be specifically applied to the right patients in the right way.

And that's a huge opportunity to do well, but it's so underexplored. And again, it's been my passion for the decade I've been doing this.

It makes a difference to the patients and families, but it also makes a difference to the providers, too, who are also struggling with burnout. They don't want to do things over and over again or try to figure things out.

Let's go, let's take care of more patients.

And particularly to the nursing teams. Yeah, these guys are basically sweeping up the mess of all of this reactive, corrective care, when actually they know what they need to do. They know they want to do it the first time; they just can't get it done.

That gap between the clinical intent and the operational execution has never been greater than it is today. Yeah. And it's a massive opportunity for the right teams to focus on the right things. Okay.

We'll talk about one more thing: the expectations of what people think they're going to get out of AI, or the work that they're doing with their partners around AI, versus the reality of what actually comes out of it.

What do you see in there? What are your feelings about that?

Again, I can only relay what I've seen from some of the panels. There's a lot of excitement, obviously, in the areas where AI has been applied for a long time: RCM, that kind of stuff. That's done. That's fine.

That works. I think there's a lot of optimism that some of this stuff will deliver a lot of benefit. But again, there's still a huge concern, particularly in the clinical space, when you're looking at the generative stuff and it's stating with 100 percent confidence, to this patient, you've got to do this one thing.

That's not the right thing. Yeah. That's a safety issue. And so how do you balance the efficiency that you're trying to get with the need for review and safety? There are models that will work, but from my perspective, I think it's about narrowing the training set down to something that you can trust and rely on.

Right. That's always going to be a good start. The data, start with the data, but training it on everything and then hoping for the best is maybe not the right approach. The other part, too, I think, is that when you get those suggestions, ultimately it is a human who has the final decision.

Now, there may be parts of that over time that you're able to automate and say, when it recommends this, just go ahead and do that, because there's really no harm that can come from whatever that thing is.

Yeah. In those situations, you don't need AI, though. You're following protocol at that point.

Yeah, and we do all that today. So for me it's an interesting balance. Some people would describe the graph technologies we're talking about as AI. I think you and I would probably agree that it's deterministic machine learning. It's very specific technology, but all of these things together have a huge impact if well executed and well delivered. The challenge is getting it out of the lab into something operational at scale, which, as we know, in this industry in particular, requires some effort.

It can take some time. Hey, thanks for doing this. Thank you, Drex. It was my first time subbing for Bill. You did a great job. Good, thanks. I appreciate it. Okay. That's all for now. Thank you.  📍

And that is the news. If I were a CIO today, I think what I would do is I'd have every team member listening to a show just like this one, and try to have conversations with them after the show about what they've learned

and what we can apply to our health system. If you want to support This Week Health, one of the ways you can do that is to recommend our channels to a peer or to one of your staff members. We have two channels: This Week Health Newsroom and This Week Health Conference. You can check them out anywhere you listen to podcasts, which is a lot of places: Apple, Google, Overcast, Spotify, you name it, you can find it there. You could also find us on. And of course, you can go to our website, thisweekhealth.com, and we want to thank our Newsday partners again, a lot of 'em, and we appreciate their participation in this show.

Cedars-Sinai Accelerator, Clearsense, CrowdStrike, Digital Scientists, Optimum, Pure Storage, SureTest, Tausight, Lumeon, and VMware, who have 📍 invested in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.
