Keynote: Generating Meaningful and Long Lasting Outcomes With Christopher Longhurst
Episode 74 • 10th May 2024 • This Week Health: Conference • This Week Health
00:00:00 – 00:25:12


Transcripts

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on Keynote

(Intro) It's very easy to focus on the models and the model prediction, but in our estimation, that's 20 or 25 percent of the outcome.

It's the hard work of process redesign, workflow redesign, and education of your community and users that really gets you to the outcomes that you need.


My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, where we are dedicated to transforming healthcare one connection at a time. Our keynote show is designed to share conference-level value with you every week.

Today's episode is sponsored by Quantum Health, Gordian, Doctor First, Gozio Health, Artisight, Zscaler, Nuance, CDW, and Airwaves.

Now, let's jump right into the episode.

(Main) All right, it is keynote, and today we're joined by Dr. Christopher Longhurst, who holds many roles, actually: Chief Medical Officer and Chief Digital Officer at the University of California, San Diego, as well as, gosh, Associate Dean. Chris, welcome to the show.

Thank you very much, Bill. It's great to be here.

Yeah, it seems like a lot of things going on at UC San Diego these days. And you hold intriguing dual roles, Chief Medical Officer and Chief Digital Officer, while also serving as Associate Dean at the School of Medicine. Describe how these roles intersect and what your day-to-day looks like.

Yeah, sure, Bill, I'm happy to, and maybe I'll give a little background on how I ended up here, because it wasn't a big master plan. So, long story short, like many physicians, I had an interest in healthcare technology, really from an improvement standpoint. How can we make care better?

My master's thesis was in AI.

I took on additional roles in quality and patient safety, and ultimately the chief medical officer role. So my job is really ensuring the high quality care that we continue to deliver here at UC San Diego Health, by any means necessary. And sometimes that's education and process redesign, and sometimes that's the introduction of new technologies or optimization of existing technologies.

And so my roles kind of reflect that. Unlike a typical CMO, I have the IT department as part of my portfolio, with the CIO as a direct report. And for me that's really critical, because it gives us a lever to pull that allows new and innovative solutions that would not always be accessible to your typical CMO as a leader.

So you were studying AI back in the day. Did you anticipate the kind of movement with AI that we're currently experiencing?

I don't think anyone anticipated generative AI arriving at this sort of new scale in November of 2022.

That being said, we've certainly seen an evolution in machine learning and unsupervised classification and AI tools over the last decade or so. And so even prior to the large language model introductions, we had an AI governance committee stood up five years ago, and we were looking at AI principles and ethics, and thinking about how static you make your algorithms versus how much they learn from your own data, and what risk that introduces.

So we've been thinking about it for a while, but the advent of large language models has really turbocharged it across the country and the world.

You guys have been able to do a lot of studies, and I've seen you partnering with a lot of different universities. You've been publishing findings and those kinds of things.

How do you find the resources to do that? And what have been some of the findings so far?

Terrific question. Let me back up first of all and say, as an academic medical center, we believe that part of our role should be not just delivering highly reliable care, meaning the best available evidence applied to every patient, every time, but that we should also be a learning health system, meaning that we're helping create the new best practices.

And so if you think about that concept of the learning health system, it's really the intersection of clinical research and quality improvement. And so in my role as chief medical officer, we very intentionally try to support all of our medical directors and leaders, as they roll out new practices and new policies, to take a rigorous approach to evaluating outcomes. In fact, we even introduced a new committee. It's my favorite acronym. It's called the ACQUIRE committee, because it's about Aligning and Coordinating QI Research and Evaluation. And what the ACQUIRE committee does is encourage leaders to submit quality improvement efforts so they can receive IRB exemption and write up the results for peer-reviewed journals, where they're rewarded on the academic side. And this creates, as a byproduct, a database of all of the QI efforts going on outside of the central quality department, right?

So any hospital quality department is able to support a few dozen large enterprise initiatives. But ideally, all of your employees are thinking about improvement all the time. And so our ACQUIRE committee has only been stood up for a few years, and we already have over 400 active projects. And so that becomes a culture, because we point our trainees to that and we say, hey, if you're interested in scholarship and improvement, look for existing projects that you can join.

And there's a way to actually do that now. In fact, just yesterday I was looking at our ACQUIRE database, and there are almost two dozen AI projects being rolled out for purposes of improving quality of care. So that's really the background that leads us to this culture of constantly evaluating and contributing new knowledge in the space of AI-based quality improvement.

It's interesting to hear about the culture and building that culture of innovation. Today we're hearing, and this will give people an idea of when this is being recorded, about a strike in Northern California at a Kaiser facility led by nurses.

And the strike is specifically related to the use of AI in nursing. It's about safety and the selection of applications for the use of AI. In light of those concerns, share how UC San Diego approaches the integration of AI in clinical settings, and I guess particularly how you ensure both safety and the appropriate use of these technologies in the care of patients.

Yeah, terrific question, Bill. Ensuring that we're rolling out any AI-based decision support solutions in a safe, respectful, and trustworthy manner was the reason that I stood up an AI Governance Committee five years ago. And it's a multidisciplinary committee that includes one of our CMIOs chairing it, but also representatives from bioethics, from our health equity research team, from nursing, and from other disciplines.

And so that AI governance helps to ensure that anytime we move something into production, it's been adequately reviewed to ensure that it's not creating unintended consequences, and that it's respectful not only to the patients we're serving, but to the providers as well. And I really give a lot of credit to Dr. Amy Sitapati, who has historically chaired that committee. We recently hired our first Chief Health AI Officer, Dr. Karandeep Singh, and Karandeep is constantly helping us revise and rethink and improve those processes. So it's only going to get better moving forward. I'm deeply aware of the strike that's occurring in Northern California, and one of my colleagues at UC Davis actually commented on it publicly today.

He said it's great to have open discussions because the technology is moving at such a fast pace and everyone's at a different level of understanding. And many health systems actually do have these guardrails in place, but perhaps they haven't internally communicated it and there's a knowledge gap.

But at the end of the day, we have this mission that no patient, no clinician, no researcher, and no employee will get left behind as we look to take advantage of these latest technologies.

That paints the picture. Can you give us a couple of examples of how you've taken these AI projects from concept to implementation?

Yeah, absolutely, Bill. And I'm happy to give you two examples. And again, it really starts with these AI principles. The first example, to your point, is one we've been working on for years. It's about our approach to supporting the care of patients who may be septic.

And as your viewers probably know, almost a third of a million Americans die every year of sepsis. What many may not know is that 85 percent of that sepsis doesn't develop in the hospital setting, but actually in the home setting. So patients who are at high risk, because they have cancer or perhaps some other immunocompromised state, can become septic at home, and they show up in the emergency department.

And our challenge is to make that diagnosis as early and as quickly as possible, because we know that early intervention changes outcomes and saves lives. And we're constantly looking to improve that process. And about three or four years ago, as the chief medical officer, I named our new director of sepsis, Dr. Gabriel Wardi. Gabe is a practicing emergency medicine physician who's also board certified in critical care. He was the perfect choice to lead this. And let me tell you, Bill, Gabe has no background in data informatics, but he's an outstanding, respected clinician. And when he took the role, I said, Gabe, I want you to ensure that we're always implementing the best practices and the sepsis compliance bundle to the best of our ability, and I want you to partner with one of our faculty members from Health Informatics to look for new ways to improve the care.

And Gabe was fantastic. He said, I'm all in. Around the same time, we recruited a PhD data scientist from another institution, Dr. Shamim Nemati. And Shamim had several years of funding and background and research in sepsis AI. And when he interviewed at UC San Diego, I said, Shamim, why are you interested in coming here?

It seems like things are going well. He said, Chris, I've never been able to implement any of my research. And I said, that's perfect, because we're looking to implement and make a difference in outcomes. And so Shamim and Gabe became a dyad pair. And about six months after he arrived, I said to Shamim, how are things going?

I know we're building the pipes to be able to implement your tools. And he said, Chris, it's fantastic, but I've learned more in the last six months than in the prior six years. And I said, what do you mean? He said, now that I'm sitting on the hospital sepsis committee, I finally understand how challenging the workflow is, and it's not just about the algorithm. It's about the early notification to the right person, at the right point in the workflow, at the right time, to make a difference in decision making that affects the patient in a positive way.

And so credit to those guys and the entire sepsis care team. They really took a lean-based approach to redesigning the workflow, notifying our central code team, making sure the nurse was involved. And they did it all in such a thoughtful, informed way that it was not a surprise to me when we saw that our mortality in the emergency departments at UC San Diego had dropped by 20 percent with the introduction of this AI alert.

Now, this wasn't just about identifying the right model, right? Because you could really start at the EHR vendor in most cases, and they'll give you a sepsis model to work with. You guys took it far beyond that, I would imagine.

You're absolutely right, Bill. And I give credit to the vendors, who are trying to make available standard models or the ability to create models off of your own data.

One of the challenges, and Dr. Karandeep Singh pointed this out in a recent NEJM AI paper, is that sometimes people will take the data prior to the diagnosis of sepsis, create the model, implement it, and then wonder why outcomes haven't changed. But many times the data that the model is ingesting includes things that are indicative of a clinician's suspicion of sepsis, even if it's prior to diagnosis.

For example, if your model is using, you know, a lactate order to help predict sepsis likelihood, the clinician's already thought of it, and it's not going to make a difference in outcomes, right? And so we really tuned our model to be much earlier in the process, and we'll take lower predictive value in exchange for something that's actually going to help a clinician before they've thought about it and developed their own suspicion.
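The idea Dr. Longhurst describes, screening candidate features so the model doesn't just echo a clinician's existing suspicion, can be sketched in a few lines. The feature names and suspicion markers below are purely illustrative, not UC San Diego's actual feature list.

```python
# Orders that usually mean a clinician already suspects sepsis; features like
# these make a model "predictive" on paper without changing outcomes.
SUSPICION_MARKERS = {"lactate_order", "blood_culture_order", "broad_spectrum_abx_order"}

def drop_leaky_features(candidate_features):
    """Keep only features that do not encode existing clinical suspicion,
    so the alert can fire before a clinician has formed that suspicion."""
    return [f for f in candidate_features if f not in SUSPICION_MARKERS]

features = ["heart_rate", "temperature", "lactate_order",
            "wbc_count", "blood_culture_order"]
print(drop_leaky_features(features))  # ['heart_rate', 'temperature', 'wbc_count']
```

The trade-off he mentions follows directly: removing suspicion-laden features usually lowers measured predictive value, but the predictions arrive early enough to change decisions.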

And then again, the workflow is critical. And so this 20 percent drop in mortality, Bill, actually translates to 50 lives saved on an annual basis, just at UC San Diego. It's a 20 percent relative decrease in mortality, which is really only about a 1.9 to 2 percent absolute decrease.
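The relative-versus-absolute distinction he draws can be checked with the numbers given in the interview (20 percent relative, roughly 1.9 percentage points absolute, 50 lives per year); the implied baseline mortality and encounter volume below are derived from those figures, not stated in the conversation.

```python
# Figures from the interview.
relative_reduction = 0.20    # 20% relative decrease in ED sepsis mortality
absolute_reduction = 0.019   # ~1.9 percentage points absolute
lives_saved_per_year = 50

# A 1.9-point absolute drop that is 20% in relative terms implies a
# baseline mortality of about 9.5%.
implied_baseline = absolute_reduction / relative_reduction
print(f"Implied baseline mortality: {implied_baseline:.1%}")  # 9.5%

# Lives saved scale with volume: 50 lives at 1.9 points absolute implies
# on the order of 2,600 at-risk encounters per year.
implied_encounters = lives_saved_per_year / absolute_reduction
print(f"Implied annual at-risk encounters: {implied_encounters:.0f}")
```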

But realistically, if we could drop mortality by 2 percent across the entire country, that would be huge. A lot of credit to the team for doing that. And one of the neat things is this was published in npj Digital Medicine, and it was accompanied by an editorial written by Dr. Joseph Kvedar, the editor-in-chief of npj Digital Medicine. The editorial is called, Healthcare AI: more than just algorithms.

And he goes through in painstaking detail and articulates all the different things that we described in our paper that were not just about the algorithm. And I think that's a really important point for your viewers: it's very easy to focus on the models and the model prediction, but in our estimation, that's 20 or 25 percent of the outcome.

It's the hard work of process redesign, workflow redesign, and education of your community and users that really gets you to the outcomes that you need.

That's the thing I think that's most fascinating about your role and the groups that report in through you. And this is where the disconnect's been in healthcare for all these years: the disconnect in the dialogue between the people implementing the technology and the people using the technology. Really fascinating.

In the ever evolving world of health IT, staying updated isn't just an option. It's essential. Welcome to This Week Health, your daily dose of news, podcasts, and expert commentary.

Designed specifically for healthcare professionals like yourself. Discover the future of health IT news with This Week Health. Our new news aggregation process brings you the most relevant, hand picked stories from the world of health IT. Curated by experts, summarized for clarity, and delivered directly to you.

No more sifting through irrelevant news, just pure, focused content to keep you informed and ahead. Don't be left behind. Start your day with insight at the intersection of technology and healthcare. This Week Health. Where information inspires innovation.

 Now, you promised me two use cases. I think I cut you off on the first one.

Yeah, let me tell you, the sepsis use case, one of the things it really demonstrates is the importance of ongoing monitoring, right?

It was the fact that our AI Governance Committee was looking at the outcomes on an ongoing basis, iterating and improving it, that actually led to the mortality drop. A couple of other principles helped drive the second example. Number one, we endorsed the FAVES principles. I actually got to be part of the White House task force that put out the statement December 13th from the administration about ensuring that healthcare AI is deployed in a way that's fair, appropriate, valid, effective, and safe.

And so these are things that are sort of like motherhood and apple pie until you get to hard questions. So one of the things that is embedded in the ONC statement about FAVES is transparency, and that actually was something we were really committed to early on. And it's easy to talk about transparency of models and the data sets used to generate algorithms, et cetera.

We published a paper in April of 2023 comparing physician and chatbot responses to patient questions.

In fact, the clickbait headlines were that the AI was higher quality and more empathetic than doctors. Now, this was on a set of Reddit questions, publicly posted, doctors who didn't know they were being rated, et cetera. So that wasn't really my conclusion. My conclusion was that, in a limited, small amount of time, the AI could draft responses that seemed higher quality, partly because they were longer and more detailed than what the physicians would write in a short period of time, right?

But I'll put up the doctors against the chatbot any day of the week, at least currently with GPT-4, if you give them enough time to research it and write a well-researched answer. And this paper led partly to our partnership with our electronic health record vendor, Epic, and also a three-way partnership with Microsoft to trial GPT-4 in the electronic health record.

We rolled this out in April of 2023. We learned a ton about prompt engineering. And for every single message, the way we designed this functionality, there are two buttons: either start with a blank reply, or start with a draft reply. So there's no button that says, just send now. And that illustrates another important AI principle, which is keeping a human in the loop.

And then if you start with the draft reply, every message has an automatic addendum that says something to the effect of: this message was automatically generated, and reviewed and edited by your doctor's name. We just published those results actually a couple of weeks ago, so less than a year from the launch of the pilot and the prior paper, we have another outcome study.
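The human-in-the-loop pattern he describes, no auto-send path, a draft that is only a starting point, and an automatic attribution addendum on accepted drafts, can be sketched roughly as follows. The function and wording are illustrative, not Epic's or UC San Diego's actual implementation.

```python
def finalize_reply(edited_text, clinician_name, started_from_draft):
    """Return the message that actually goes to the patient.

    There is no "just send now" path: the clinician always wrote the text
    (blank reply) or reviewed and edited it (draft reply) before this point.
    Messages that began as an AI draft carry an automatic addendum."""
    if started_from_draft:
        edited_text += ("\n\nThis message was automatically generated and "
                        f"reviewed and edited by {clinician_name}.")
    return edited_text

# Draft path: addendum appended after the clinician's edits.
msg = finalize_reply("Your lab results look normal.", "Dr. Example", True)
print("reviewed and edited by Dr. Example" in msg)  # True

# Blank-reply path: the message is sent exactly as the clinician wrote it.
print(finalize_reply("Your lab results look normal.", "Dr. Example", False))
```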

Again, credit to Dr. Ming Tai-Seale, who led the outcomes analysis, and Dr. Marlene Millen, who led the operational project. So what did we find? It's a little bit surprising. I thought we were going to save time. That was my hypothesis, that it was going to help our busy clinicians, and it turns out it did not.

Stanford also published their results, and they found a similar finding. There was a recent Doximity user survey that suggested physicians spend somewhere around 12 to 15 minutes on a daily basis answering asynchronous patient questions in the portal. And our doctors spent about 30 seconds reading a message and then drafting a reply, or using a dot phrase or macro or template.

After we introduced GPT, it turns out it still took about 30 seconds to read the patient message and the draft reply, and then provide some light edits and click send. So it was not a time saver. However, it did come through very clearly in the data that our physicians really liked this. They told us anecdotally that it helped reduce cognitive burden.

It's easier to edit than it is to start with a blank screen. They told us that they like the empathetic tone that the message starts out with. They told us that in some cases, like our primary care physicians who often don't have RNs helping with their pools, it felt like they had a virtual assistant drafting these messages for them.

So it was a very positive finding, but it wasn't what I expected. And so this is, yet another example of why it's so important to monitor these outcomes.

Can you measure the reduction in cognitive load and its impact on physician stress and burnout?

We're definitely working on that. So more to come.

Stay tuned.

All right, I want to talk about it, because it's interesting to me that that use case sits right in between the patient and, in most cases, the physician. I guess my question is, what's your vision for how UC San Diego communicates with your patients moving forward?

Will there be some aspect of, you're saying bots, but I think it's much more sophisticated than bots these days, where a person in your community who's looking for information on health, who would normally go to Google or normally go to something else, would actually come to your health system and interact with some aspect of a large language model?

And get information that has been trained by UC San Diego, or maybe the UC system, with responses to help either navigate them or just help them to make some basic decisions about their health.

That's a great question, Bill. And I think that the answer is undoubtedly yes, that's going to happen.

And the devil's in the details, right? It's how we do it, how we roll this out, how we train these AI agents or tools. And particularly if they're going to be functioning in an unsupervised way, we have to have a level of confidence that they are not going to introduce unintended consequences or harm our patients.

And so I think for the short-term future, it's going to be sort of a human in the loop. But I'm fond of paraphrasing someone else's quote: I'm quite sure we're overestimating what AI is going to do in the next two or three years in healthcare, but I think we're vastly underestimating what this is going to do over the next seven to ten years.

And I really do believe that this is as important a moment in healthcare as the introduction of penicillin. I think that 10 years from now, the delivery of healthcare will be universally AI-enabled, and we're going to see a rapid change in medical-legal practice, where operating as a provider without an AI agent 10 years from now may be considered to be below the standard of care, because we're going to be able to show that physicians with augmented intelligence, with AI support, can deliver better outcomes than humans alone.

Chris, we're going to leave 'em wanting more with this episode, because I promised I would finish this up at the bottom half of the hour. We've got about three minutes for a closing question. And by the way, thank you for your time, I really appreciate it. What guidance would you give to healthcare leaders across the country who are doing this?

Not all of them are at a UC San Diego. Not all of them are at an academic medical center with grants and other things to really look at some of this stuff. When you think about the range of different health systems, what guidance would you give them as they're trying to figure out this AI journey?

Yeah, Bill, that's a really simple question with a complicated answer. First of all, I want to acknowledge the support of very generous donors, Irwin and Joan Jacobs, because as a result of their philanthropy, I help to lead our Jacobs Center for Health Innovation, which is where we get to fund our portfolio of AI work.

UC San Diego Health has some wonderful, smart faculty members, but we're a hospital serving the needs of the county and the underserved. Our margin is not huge. And this gift has allowed us to do things we might not otherwise have been able to do. And I recognize what you're saying, which is that a lot of health systems are operating on these grocery store margins, and aren't well positioned to lead in this. And I guess I would advise two or three things.

The first is lean in, right? Learn, educate yourself. That doesn't mean just the vendor's education. It means going to conferences that are academic in nature, perhaps even considering chief AI officer type roles for folks that can help to guide a safe journey on this AI timeline.

The second is we really have to demand meaningful outcomes. I gave the sepsis example earlier, and yet just a couple weeks ago, the FDA approved the first ever sepsis AI algorithm, and it was based on some data that the vendor provided that did not include any clinical outcomes. It was algorithm outcomes, right?

And I think we just need to be very cautious, because if you ask any patient, what they're looking for is meaningful clinical outcomes. Outcomes that matter to them. And so we have to be careful of that shiny object syndrome, AI for the sake of AI, and rather really focus on what it's going to help us to deliver, right?

Can we improve efficiencies? Can we redeploy people in new ways? Can we restore some of the humanism in medicine with AI scribes that save time for our doctors? But rather than giving them more patients, we can send them home earlier to have dinner with their families, right? And becoming educated, I think, is critical. And then the last point is partner. Partner with vendors, partner with other health systems that are leading.

I think that we're going to see a lot of learnings, and not all of them will be positive. I think that we have to be eyes wide open, that there is potential for unintended consequences, and if we're not really carefully evaluating for those consequences, they're going to slip through and they're going to cause serious harm.

And partner with neighbors, even competing health systems. There are some great organizations out there, like the Coalition for Health AI, which UC San Diego is a part of, as well as VALID AI, which is a more applied group that we're also a member of. And those types of organizations can help with both the learning, but also the outcomes evaluation partnerships.

Fantastic. Chris, thanks for your time and I really appreciate it. We'll have to catch up again. This is moving fast and I think we're going to continue to see advances across the board. I appreciate your work.

Thank you for having me, Bill. I appreciate everything you do with this podcast to help inform your audience.

Thanks for listening to this week's keynote. If you found value, share it with a peer. It's a great chance to discuss and, in some cases, start a mentoring relationship. One way you can support the show is to subscribe and leave us a rating. We'd appreciate it if you could do that.

Big thanks to our keynote partners: Quantum Health, Gordian, Doctor First, Gozio Health, Artisight, Zscaler, Nuance, CDW, and Airwaves.

You can learn more about them by visiting thisweekhealth.com/partners. Thanks for listening. That's all for now.
