This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
The 229 Podcast: Inside Stanford Medicine’s AI Sandbox With Michael Pfeffer, MD
Michael Pfeffer: That's ultimately, when you boil this whole thing down, the only thing that matters: that the best decisions are being made for our patients in real time, with the best evidence.
Bill Russell: My name is Bill Russell. I'm a former health system CIO and creator of This Week Health, where our mission is to transform healthcare one connection at a time. Welcome to the 229 Podcast, where we continue the conversations happening at our events with the leaders who are shaping healthcare.
Let's jump into today's conversation.
All right. It's the 229 Podcast, where we continue the conversations that started in our 229 rooms. And today I'm joined by Dr. Michael Pfeffer, Chief Information and Digital Officer of Stanford Medicine. Mike, how's it going?
Michael Pfeffer: It's great, Bill. It's always good to be here.
Bill Russell: Good. You know, I always joke with people that the best 10 minutes are the 10 minutes I don't record: the 10 minutes before we hit record and the 10 minutes after we stop the recording at the end. We're gonna talk a lot about AI.
AI is one of those things that's been really interesting to me, because if you thought the internet never forgot anything, AI proved it. It scours into the deep corners and pulls things out. And I was just telling you that in my research, AI keeps telling me, oh, you gotta ask Mike about being a soccer referee.
Right.
Michael Pfeffer: We had a whole great discussion about that.
Bill Russell: Yeah, soccer referee. Where is it even getting that from? And then as we started talking about it, you're like, oh yeah, you did use that. So just outta curiosity, how long were you a soccer referee?
Michael Pfeffer: Oh, it was a long time ago. A long time ago, much younger.
Bill Russell: Well, my question for this was gonna be: you can't possibly still be doing this. Like, you can't be leaving work and going over to referee soccer games.
I can't imagine, with all the stuff you have going on.
Michael Pfeffer: I don't anymore, but I do like watching soccer, or football, whatever you want to call it. I think it's such an amazing game, and I really enjoyed it growing up, making a little cash on the side. It's funny, because a little bit of geeky me back in the day used some of that money to buy Microsoft Word, when it came in a box with floppy disks, and I was all excited because I could use this new word processor on my ancient computer.
Bill Russell: It used to be we would apologize for talking about AI so much, but I think it has pretty much started to prove that it has serious, practical application. In fact, that's been one of your topics for this year. Going back through Becker's and some other things you've done, you talked about AI made practical and whatnot.
You highlighted ambient listening, generative AI, and computer vision as levers. I'm just curious, from what you're seeing at Stanford, what's crossed the line from promising to practical at this point?
Michael Pfeffer: Yeah, I mean, of course we have to talk about AI, Bill. That's what everybody's talking about.
AI and medicine haven't been good friends for a while, but now we've got a super advanced version of AI capabilities that is really exciting and changing the way we think about how we do things. Which is fun, because we've always been trying to think about how you do things differently in the informatics world.
And this is just such a great tool to rethink a lot of the ways we have done things in medicine. But the big picture is, it's just accessible. People get it, people get to play with it, use it in different ways, learn about it, write fun prompts, dig through the internet in ways they've never done before.
Bill Russell: I was reading some other stuff about the green button. You did a green button interview and talked about that a little bit. For those who don't know what the green button is, give us a little rundown. It's not the red button.
It's not the blue button. It's the green button.
Michael Pfeffer: Yeah, so that's a great question. That was an idea that came about at Stanford a while ago, before I was here, to really think about how you use data in the electronic health record to better determine what the next step of care is where there aren't necessarily evidence-based guidelines.
So that definitely is a uniquely Stanford thing, which is out there and very exciting. It really was one of the first of its kind to leverage data in the electronic health record to make decisions on patients you're caring for at the time. That was really cool, and you can imagine how that's growing to be more and more automated, more and more integrated.
We're actually doing pilots around that in our primary care clinics to bring evidence in real time to our clinicians.
Bill Russell: But 10 years ago, the technology around that had to be fairly, I don't wanna use the word primitive, but early on in the process of being able to pull all that information together. It was almost like search and find, using NLP. And I would assume that has progressed pretty rapidly over the last three years.
Michael Pfeffer: Yeah, for sure. Large language models are really good at summarization, categorization, generating new kinds of text, pictures, whatever. So when you're trying to look through medical records, which are huge amounts of unstructured data plus the structured data, these are just great tools to do that.
And there's a lot to learn still on how these perform. But fundamentally, this is just such an amazing set of tools to look into unstructured data in ways we haven't been able to before.
Bill Russell: This looks an awful lot like what Epic was showing 10 years ago, right? They were highlighting what Cosmos would look like and how it would find the evidence and bring these things together, and even look at different population sets and all that in real time. Now, they were talking about that in the future.
And I think people sort of think this already exists within Epic. I mean, does it exist or are there still some barriers that are keeping us from just making it a green button that's there for everybody?
Michael Pfeffer: Yeah, some of it does. It still has, I think, some ways to go. But instead of having to do it with older technologies, now, with large language model capabilities, you can think about how you could expand that much quicker across many disease states. So I think that's what's really exciting about it. And then bringing the tools right to the clinicians in real time is also really exciting. A lot of exciting stuff happening.
And I think we're gonna continue to see these kinds of things iterate.
Bill Russell: I want to slice the population a little bit here with ambient. Everybody always asks about ambient, and I find the questions to be a little too generic. The question I wanna ask you: Stanford is pretty far along in its ambient journey, we've talked about it for a while now.
Which clinicians have benefited the most or the least from ambient listening?
Michael Pfeffer: It really depends on the clinician and how they use the tools. It works great when you're seeing the patient for the first time. Where I think it has a little more room to grow is around follow-up visits, where you already have a note, you already have information about the patient, and you don't necessarily wanna start from scratch.
And for highly specialized clinicians, some of them have gotten really good with the templates they've built over the years, so do you really need ambient scribes? It's interesting to see. I think it varies across the board.
Bill Russell: It really is individual: how they practice and how they have used the technology before. And that's interesting, because as we travel around the country, I'll ask people, and everyone touts, we're doing AI, we're doing ambient scribes.
Michael Pfeffer: Absolutely. And I think that's one of the nice things about it. It's an option, right? It's another tool that clinicians can use to take care of patients. If it works really well, great. If it doesn't work well in that particular practice, great, you don't have to use it.
In fact, it may work well for certain kinds of patients and not for others. And I think that level of flexibility is really important, that real personalized experience, and you can begin to customize how these things work. We'll have to see where this all goes. But remember, this is like version one. Many tools have been version one for like 20 years, right?
Michael Pfeffer: Right. So this is version one of ambient scribes. Big version two is gonna be amazing. Big version three, it's just gonna get better and better. And you can imagine things being integrated into that ambient voice workflow, and then add video to it. It's gonna come together in a really amazing way.
Bill Russell: I got in trouble with this before, because people tout AI models in the patient room, and when I push them on their use cases, I'm like, you could do that with Zoom. Why did you buy an AI platform for that? It was like, well, we haven't implemented any AI around that, so let's start with this. And as you can imagine, that got me in trouble.
Because they're kind of touting, we have this AI platform, but I'm like, all you really have is a computer and a microphone in a room, connected to a TV. Let's call that version 0.5 of where we're going with this. How does the experience with ambient listening inform what the future of the patient room looks like, with cameras, voice, ambient?
Michael Pfeffer: Yeah, they definitely go together, right? I think the room of the future is gonna have computer vision and ambient voice kind of interacting together. And it's gonna be for the clinicians and staff that work in the room, and the patient, and the family, and the whole experience. It's all gonna ultimately come together in some way. And I would say that as we think about all these different technologies, we really wanna get back to: what problem are we trying to solve, with what outcome, right?
Measure that. Understand from the beginning: this is what we're trying to solve, here's the outcome we're trying to move, so let's make sure we actually do that. And sometimes it doesn't even involve AI, right? So as I think about all of these things, I wanna make sure we're using the right tools and the right solutions to fix the problems and move the outcomes that we want.
And with that really being the north star, I think you'll drive value out of all of these technologies.
Bill Russell: That phrase, uniquely Stanford, has come up a bunch in the research. It comes out of your mouth a fair amount. What does it mean to be uniquely Stanford?
Michael Pfeffer: It really allows us to think big and do things that hopefully, again, solve problems, move outcomes in the right direction, and then can be taken and disseminated across the world. That's kind of the uniquely Stanford.
Bill Russell: Think big. Well, actually, that's a gift, because I know a lot of CIOs that are struggling to get their organizations to think big, and you don't have to worry about that. But it's a double-edged sword too, right? It's like, hey, we're doing research, we need access to this, this, and this. I don't wanna go too far into this, but we've talked about the sandbox that you had to develop to meet the needs of researchers.
Michael Pfeffer: Yeah. Well, there's a lot of talk about learning health systems, and I think this is a really great example of one of the ways you can be a learning health system. Early on in the launch of large language models, we decided we were going to create a place for everyone in Stanford Medicine to go that's secure and has a couple different models to choose from, to start to play and learn and see how these things work.
And we called it SecureGPT. Very creative name, right?
Bill Russell: That's so creative.
And I'm like, how about six weeks? And he's like, okay. Six weeks later we had it, and since then we've grown to over 18 models in it. What's really cool is we get to learn what people are using it for. So, again using models, we can look at the prompts anonymously and say, okay, let's categorize them and see what people are doing.
And we learned a lot about what kind of automations we want to build for the organization based on what we learned in the portal. We also have APIs into the secure portal so researchers can use it to do things. One of my favorite examples: we had a lab that was recording these very complex patient encounters and asked, can you help us? We added an audio capability, and it transcribed them in a beautiful way. It was summarizing, and there was just that kind of excitement: oh my God, this is amazing, it's saving me so much time. So we learned from what was going on, and that, in combination with the research enterprise in the School of Medicine and our chief data scientist, Nigam Shah, really came together: let's bring the clinical record together with the large language models to do some of the things people were doing in SecureGPT in an easier way. You don't have to move things around, it's just right there. And that's how ChatEHR was born.
Bill Russell: And so those are...
Michael Pfeffer: Just examples, but we keep learning from this.
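The prompt-mining idea Dr. Pfeffer describes, looking at anonymized portal prompts and categorizing them to decide what automations to build, can be sketched roughly like this. This is an illustrative Python sketch, not Stanford's implementation: the keyword classifier stands in for the LLM-based categorization he mentions, and all names are made up.

```python
# Illustrative sketch: classify anonymized prompts into coarse use-case
# categories, then count them to surface automation candidates.
from collections import Counter

CATEGORIES = {
    "summarization": ("summarize", "tl;dr"),
    "drafting": ("draft", "write"),
    "translation": ("translate",),
}

def categorize(prompt: str) -> str:
    """Return the first category whose keywords appear in the prompt."""
    text = prompt.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

def top_use_cases(prompts: list[str]) -> Counter:
    """Tally how often each use-case category shows up."""
    return Counter(categorize(p) for p in prompts)

prompts = [
    "Summarize this meeting transcript",
    "Draft an email to the committee",
    "Summarize the key findings",
    "Translate this paragraph to Spanish",
]
print(top_use_cases(prompts).most_common(1))  # → [('summarization', 2)]
```

In a real deployment, `categorize` would itself be a model call over de-identified prompt text; the counting and reporting logic stays the same.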
Bill Russell: Is ChatEHR still in sort of a pilot phase, or is that rolled out?
Michael Pfeffer: It is live, yeah, fully. We did pilot it for some time, because it's important that we learned it works and can scale. But it's been live since September to all of our physicians, residents, and APPs.
Bill Russell: And the number one question people want to ask is: how do you make sure it's accurate? It's going through massive amounts of unstructured data and coming back with insights based, I assume, on the medical record.
That's obviously why it was in pilot for so long, right? How did you get past that? How did you figure that out?
Michael Pfeffer: Yeah, well, a lot of work in iteration and monitoring of the system. And obviously, we limit it to just that one patient's record, in a very secure way. That significantly reduces the hallucinations. But we continue to monitor it, using models to monitor it. We have a whole framework called MedHELM, which you can Google, and there's some really interesting things about it, to help really understand how the large language models are performing.
And we get feedback. With every response you can give a thumbs up or thumbs down, and we iterate on that.
Bill Russell: And it's Care Everywhere? It's pulling from the various HIEs, anything that's out there with regard to this patient?
Michael Pfeffer: Yep, yep. And so it's been really amazing to see this in action and have people just love it.
Bill Russell: Yeah. It could find out that you're a soccer referee, and maybe it would explain some injury that you have. I have no idea, right?
Michael Pfeffer: If it's in the record, for sure. And so it's been really exciting. Of course, we're iterating on what it can do. But the other piece, which I think is equally as exciting as the user interface in the EHR, is the ability to then run very sophisticated analytics.
We have another wing nearby in Redwood City that's staffed by us. And academic medical centers are very full, so we wanna make sure patients can get beds in a timely manner. We can actually send patients from our emergency room here in Palo Alto over to that other wing in Redwood City. So how do you know which patients are eligible for that transfer?
Well, you have to meet a whole set of criteria, because we don't do all of the things there that we do on the main campus, which is how lots of health systems work. So we took that large set of criteria, and we can run it against every patient in the emergency department and flag the ones that meet it, instead of having people do manual chart review to find these patients.
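The transfer-eligibility screening described above, running a fixed set of criteria against every ED patient's chart and flagging the ones that meet all of them, can be sketched like this. A minimal Python sketch under stated assumptions: the criteria, patient IDs, and the `ask_llm` judgment function are all illustrative stand-ins, not Stanford's actual criteria or system, and the model call is stubbed with a keyword check.

```python
# Illustrative sketch: flag ED patients who meet every transfer criterion.
# ask_llm is a stub; a real system would prompt a secured LLM with the chart
# text and one criterion, then parse a structured yes/no response.

TRANSFER_CRITERIA = [
    "hemodynamically stable",
    "no anticipated need for subspecialty procedures",
    "no icu-level monitoring required",
]

def ask_llm(chart_text: str, criterion: str) -> bool:
    """Stub for an LLM yes/no judgment on one criterion."""
    return criterion in chart_text.lower()

def eligible_for_transfer(chart_text: str) -> bool:
    # Flag only if every criterion is met.
    return all(ask_llm(chart_text, c) for c in TRANSFER_CRITERIA)

def flag_patients(charts: dict[str, str]) -> list[str]:
    """Return the IDs of patients whose charts satisfy all criteria."""
    return [pid for pid, chart in charts.items() if eligible_for_transfer(chart)]

charts = {
    "pt-001": ("Hemodynamically stable. No anticipated need for subspecialty "
               "procedures. No ICU-level monitoring required."),
    "pt-002": "Unstable vitals, likely needs ICU-level monitoring.",
}
print(flag_patients(charts))  # → ['pt-001']
```

The point of the design is the inversion he describes: instead of a person reviewing every chart against the checklist, the checklist is run against every chart, and humans only review the flagged candidates.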
Bill Russell: I don't know how relevant a question this is for our audience, but it is for me.
So I'm curious: have you learned anything about the models? Like which ones get used the most, and for what tasks, those kinds of things?
Michael Pfeffer: We sure do. And MedHELM actually shows you how models perform for different healthcare tasks. It's really interesting to see how different models perform in different ways.
But yes, people are using different models for different things, and they do perform in different ways. That's a very important thing for people to understand: not every model's the same, right? They all perform different tasks in different ways, some a lot better than others.
Bill Russell: And there's no one model that you would say, yeah, just use this one? I mean, just like people, there are things they specialize in, that they're really good at.
Michael Pfeffer: Yeah, it's really interesting. You have different models to choose from, then you can set the temperature on those models, and then you can prompt them in different ways. So there's a lot of creativity that can actually occur in this space, which I think is really exciting.
It does require training. Part of what you have to do to get access to ChatEHR is take a training course, which covers how it works, what its limitations are, how to prompt, et cetera. So part of what's exciting about the tool is that people actually also learn how to use these models.
Yeah, so that's also, I think, a lot of fun.
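The knobs mentioned in this exchange, which model, what temperature, and the prompt itself, typically surface together in a chat-style request. A generic Python sketch; the model names, field layout, and temperature range are illustrative conventions common to chat APIs, not SecureGPT's or ChatEHR's actual interface, and no real endpoint is called.

```python
# Generic sketch of assembling a chat request: model choice, temperature
# (randomness), and prompt. A client library would send this to an endpoint.

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Assemble a request dict for a multi-model portal."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is typically constrained to [0, 2]")
    return {
        "model": model,
        "temperature": temperature,  # lower = more deterministic output
        "messages": [{"role": "user", "content": prompt}],
    }

# Different tasks get different models and temperatures:
summarize = build_chat_request("model-a", "Summarize this note...", temperature=0.0)
brainstorm = build_chat_request("model-b", "Draft patient education ideas", temperature=0.9)
print(summarize["temperature"], brainstorm["model"])  # → 0.0 model-b
```

This mirrors the point about creativity: a near-zero temperature suits summarization, where you want repeatable output, while a higher temperature suits open-ended drafting.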
Bill Russell: You're still practicing, or has anything changed there?
Michael Pfeffer: Still practicing.
Bill Russell: So what technology or tool do you use the most?
Michael Pfeffer: ChatEHR.
Bill Russell: Really? So you're using it in your practice?
Michael Pfeffer: Oh, yeah. Yeah. Definitely. If you're picking up a set of 20 patients on service, you can ask questions of each chart and really get to know the patients, as opposed to trying to read hundreds of notes or whatever.
If you're admitting a patient for the first time, it's really helpful to dive into the record and make sure you pick up all the details you need. So yeah, ChatEHR has been really game-changing. Discharge summary writing: it will summarize the hospital course better than I've ever seen anybody do it.
So that's kind of exciting, and it's been really fun. And the evidence-based tools now that are powered by large language models, where you can ask questions and it gives you back the evidence on how to take care of a patient, I think are really game-changing as well.
But there are so many nuances, and new evidence being generated, and all of these things. Using technologies that provide evidence quickly is something I'm a huge advocate for, to make sure we're doing the right thing every time for every patient. Because ultimately, when you boil this whole thing down, that's the only thing that matters: that the best decisions are being made for our patients in real time, with the best evidence.
Bill Russell: We're having conversations with a lot of other health systems, and they're a little skittish on AI in the clinical setting. Now, we've talked a lot about how you're mitigating those risks and whatnot, but you have the resources of Stanford, and a lot don't.
Michael Pfeffer: Yeah, absolutely. Automation of our support services is a huge opportunity. So we're doing a lot of work on revenue cycle. We have a great partnership with our chief patient experience officer around the call centers, and we're looking at new omnichannel capabilities, obviously with AI agents that can do some of the work.
So we're absolutely looking at all of those areas, and I think that's where you'll see a lot of value from AI in healthcare initially, because it's so ripe now. The big question in my mind is: are we going to automate the same processes we have today, or are we going to be able to do new things?
Right. And sure, we could automate prior authorizations, and the insurance company will automate rejections or approvals of prior authorizations, and this goes back and forth. But if you go back to: what's the purpose of a prior authorization?
All of these things can be enabled by AI-based decision support. If you remember, a few years ago there was this whole big push for radiology decision support mandated by the government: you had an order, you had to explain why, and all of these things.
And that ended up disappearing, which I think in hindsight was pretty obvious, because it's too hard to do with rules-based decision support. It's too hard. But with AI, there's the ability to really move this in the right direction. It's not blocking you; it's helping you synthesize: what is the best imaging to order?
Right? The most important thing is that the right test is ordered for the patient at the right time. Not delayed, not, we gotta get this test first before we can do that one, but the right test based on the evidence, given the first time. Then you don't need prior authorizations.
Bill Russell: There are problems we're gonna be able to solve within the four walls of the health system. Not that those are easy, but they're easier. And then there are things that cross the boundary: payer, provider, those kinds of things. To a certain extent you can influence that, depending on the scale of your health system and the market and those kinds of things.
One of the people I follow is Aaron Levie from Box. And he goes, there are absolutely some jobs that will not exist a couple years from now. However, we will be doing things with this technology that we never did before. Like, hey, we're gonna go through all the images we've done for the last 10 years and do a secondary read on all of them, looking for something specific. Which now is, just fire up the computers and away you go.
Whereas before, it would've been, well, we couldn't possibly hire all the people to do those kinds of things. And he just keeps putting these use cases out there. Now, obviously Box is a storage company. He's like, people are scanning all these files that they never had the manpower to get to.
You know, for a lot of CIOs, we were just learning those skills within the four walls of the health system. Now we're thinking across the community and potentially across the nation.
Michael Pfeffer: Yeah, I'm optimistic. I really am. And I think we kind of have to be, because these things have to change.
Back in the day, when we put in the electronic health record, there was a way to configure it so you could click a whole bunch of buttons to generate the history and physical and all this stuff. And I was just completely against it. My thinking was, it doesn't tell the patient's story at all.
It's just a bunch of pre-generated text, and you lose the whole feeling of the encounter and what the patient's trying to tell you. So we didn't implement that. We stuck with old-school dictation, or you could type or create a template or whatever. And I was always dreaming of the time when the technology would capture the patient's story for you. And we're here.
Right? So now here we are with the technology available to capture the patient's story, the patient's words, in a way more accurate way than probably we've ever done in the past. That's just incredibly exciting. So imagine five years from now, all of those notes generated by ambient with the patient's story at the center, with much more accurate information: what are we gonna learn from those notes?
There's obviously a lot of work going on there, because it's the right thing to do. And now we have the tools to do that.
Bill Russell: I should end here, but I'm not going to. A good podcaster would end right here, because that was such a great close. Instead, I'm gonna go into a really unsexy area: I want to brainstorm a little bit with you on plumbing.
There's an awful lot of people out there struggling with AI from a plumbing standpoint. We talked about data, so you have fairly clean data that you're working with. You talked about a sandbox of models, so you have privacy and security of the data. You talked about monitoring the use of those models.
You probably had governance over the selection of those models as well, I would imagine. But what else am I missing here? I'm trying to piece this together for somebody who's sitting there going, okay, how did they do it?
Michael Pfeffer: We're still learning how to do this in a scalable way. Foundationally, we have something we call RAIL, the responsible AI lifecycle, which is built into our standard processes around project management and how we manage applications, et cetera. And we're learning how to continue to refine that and make it even better, faster, and more streamlined. We use a FURM assessment, which I think we probably talked about in a prior podcast, to help assess AI: what are the things we need to understand about the project, the models, the workflows, the outcomes to measure, et cetera. All of our AI goes through that, but we're learning.
But our philosophy is really that it's not one team, it's not its own thing. It's something that everyone in IT needs to think about, know, own, and be part of. It's a tool that can be leveraged at the right time. Again, going back to solving the problem: not every problem needs AI, and the flip side is also true. Taking AI and going looking for problems isn't the right way to do it either. So it's really bringing all of that together. We continue to learn, we continue to iterate, and I don't think there's any magic bullet. But having a philosophy, sticking to it, and having governance, I think, is really key.
Bill Russell: Last item. So Chris Longhurst has gone the doctor-to-technology-CEO route. Do you think we're gonna see that more? I mean, no better person to lead Seattle Children's.
Michael Pfeffer: Yeah, I'm so excited for Chris. But how could you separate out technology and healthcare now? It's impossible. When you think about the future, everything you're gonna do is gonna involve technology in some way. So yeah, I think that's really exciting. Will we see more following in Chris Longhurst's footsteps?
I hope so. But Chris is, you know, a unique guy. He trained at Stanford, so maybe he's uniquely Chris. I don't know. But I'm super excited.
Bill Russell: He's uniquely Chris. That is truth.
Michael Pfeffer: I'm just super excited for what he's gonna do at Seattle Children's. They're very lucky to have him.
Bill Russell: Mike, I want to thank you for coming on the show, and thank you for being a part of the 229 Project.
Michael Pfeffer: It's an exciting time. How do we make people's lives healthier? I'll go back to being optimistic: I think we're gonna be able to continue to move the needle. And we didn't even talk about what's possible in research, in medical research, and discovery.
Bill Russell: Well that tees up our next conversation.
Michael Pfeffer: Yeah, that's incredibly exciting: clinical trials, all the work going on in cancer, using AI for protein discovery, drug discovery, the list goes on and on. It's just remarkable, and it's gonna completely shape how healthcare is delivered in the next three to five years, easily.
Bill Russell: Yeah, absolutely. Thanks again, Mike. Appreciate it.
Michael Pfeffer: All right. Thanks, Bill.
If you have a conversation that's too good not to share, reach out. Also, check out our events on the 229project.com website. Share this episode with a peer. It's how we grow our network, increase our collective knowledge, and transform healthcare together. Thanks for listening. That's all for now.