Today in Health IT: AI chatbot responses compared with physicians' for quality and empathy. This is from JAMA. Looking forward to talking about it this morning. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged.
We wanna thank our show sponsors who are investing in developing the next generation of health leaders, SureTest and Artisight. Check them out at thisweekhealth.com/today. All right, let's get right to the story. And it's interesting, right? You knew it was coming. You knew somebody was going to compare physician and chatbot
answers. They sort of had to do it, right? Because it's being integrated into the EHR. You have the Microsoft and Epic announcement, and I think others have already headed down this path and are looking to integrate this as well. And we have a problem: physicians are overwhelmed with inbox messages, some of which are kind of mundane. One of the things that ChatGPT-4 and others have shown a propensity to do is to word things very well. Right? I know it's just a predictive model, it's just picking out the next word and it doesn't really know what it's saying, but at the end of the day, it strings words together very well, and it can take the input and generate some really good responses.
And as long as we have human feedback on the other end of this, it's a virtuous cycle, I believe, where we're gonna be training it to be better and it's gonna continue to provide better responses. All right, so a group got together. Let's see if I have a summary. All right, I'll just give it to you straight from JAMA.
Question: Can artificial intelligence chatbot assistants provide responses to patient questions that are of comparable quality and empathy to those written by physicians? All right, here's the findings: in a cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed healthcare professionals compared physician and chatbot responses to patients' questions asked publicly on that forum.
The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy. Meaning: these results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions. All right, so that's it. JAMA articles are so nice:
question, findings, meaning. Here's the importance: the rapid expansion of virtual healthcare has caused a surge in patient messages, with increased work and burnout among healthcare professionals. AI assistants could potentially aid in answering patient questions by drafting responses that could be reviewed by clinicians.
The chatbot assistant in question, ChatGPT, was released in November 2022. Evaluators preferred chatbot responses to physician responses in 78.6% of the 585 evaluations. Let me tell you why I believe that is. One is, physicians are busy, and when you are busy, when you are stressed, you tend to respond quickly and get to the point very quickly. Right? And the thing you can do with ChatGPT-4 specifically is tell it how to respond: respond at a fifth grade level, respond to the parent of a fifth grader and explain this disease, those kinds of things.
Or you can say, respond in a way that is sensitive to the feelings of the person on the other end. Now, it doesn't understand, it doesn't feel empathy. But from its training, it's able to understand what empathetic answers look like, what words are used that cause a certain response in humans, and therefore use those words and string them together effectively to answer the question in an empathetic way.
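As a rough sketch of what that steering looks like in practice, here's one way to build the instruction that gets sent alongside the patient's question. The prompt wording and helper name are my own illustration, not from the study or from any vendor integration:

```python
# Illustrative sketch of steering a chat model's tone and reading level
# via a system prompt. The prompt text and function name are hypothetical.

def build_messages(patient_question: str,
                   reading_level: str = "fifth grade",
                   tone: str = "warm and empathetic") -> list:
    """Build a chat-style message list that steers tone and reading level."""
    system_prompt = (
        f"You are drafting a reply to a patient's message. "
        f"Write at a {reading_level} reading level, in a {tone} tone. "
        f"A licensed clinician will review and edit the draft before it is sent."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": patient_question},
    ]

messages = build_messages("Is a low-grade fever after a flu shot normal?")
print(messages[0]["content"])
```

The key point is that the steering lives entirely in the instruction text; the model itself isn't changed, which is why the same model can answer a clinician tersely and a patient warmly.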
So physician responses were significantly shorter than chatbot responses. Again: busy, quick, to the point. I have a lot of things to do and I wanna see my family. So they were typically shorter than chatbot responses. Let's see: 52 words versus 211 words. Sometimes the chatbot can be a little wordy.
Chatbot responses were rated as significantly higher quality than physician responses. The proportion of responses rated as good or very good quality, a rating of 4 or higher, was greater for the chatbot than for physicians. This amounted to a 3.5 times higher prevalence of good or very good quality responses for the chatbot.
Chatbot responses were also rated significantly more empathetic than physician responses. The proportion of responses rated empathetic or very empathetic was higher for the chatbot than for physicians. This amounted to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
And then we have the conclusions here, and I'll spare you the rest of it. In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could assess further whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes. Look, that's a key sentence: randomized trials. So we need to do further studies and we need to collect this data on whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
We're in a situation right now where the technology is going to get utilized, and it's gonna be utilized very rapidly. The challenge with this is it's already being used. Right? And this is the case I'm making: if you're not getting in front of this, they're already using it. And so we need to do these studies.
We need to put the mechanisms in place where, , it is potentially generating these responses through the api. And then those responses are being reviewed by clinicians before they go out. Okay. So that's, , you know, the, the human response aspect of this is going to be, or the human check on this is gonna be so very important as we move forward.
Let me give you, let's see, the researchers were at UCSD, so San Diego. Chris Longhurst, I believe. It doesn't mention him by name in the summary, but let me go down to the bottom here. Yes, he is part of this study. So UCSD is doing a study here, and they're gonna be one of the trial sites.
I think Stanford was another one doing trials of the Epic implementation of this. And again, these are two good places to start: UCSD and Stanford. They understand how to do this kind of research, how to put together the studies, and how to validate the use of ChatGPT-4 in this clinical setting. And quite frankly, the potential for this is huge. I know there's a lot of hype around this right now. I've been talking about it a lot, and the reason is this: a 200-word educated response probably takes a couple of minutes to write, let's say three to five minutes, for each physician.
How long do you think it takes a physician to read that same response generated by somebody else, so they can just go yes, yes, no, or edit and send? Obviously the risk there is that they get so overwhelmed and so overloaded that they just hit send on a bunch of these without really validating them.
But these are physicians; they're not going to do that. And they need to be trained not to do that. And we need to put a check in there: if the response generated by the AI is the same one that's being sent out, I would pop up a box that essentially says, are you sure? You have not edited this at all.
Are you sure? And let them hit yes. Just another check to make sure. And I know that's gonna annoy some people, but at the end of the day, we need to make sure that people aren't just rifling these things off without properly vetting them. But if they properly vet them, I think we're gonna be able to save, I don't know, a couple of minutes per message that gets sent to a patient.
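The check described above could be as simple as comparing the outgoing message to the AI draft before it leaves the inbox. Here's a minimal sketch with hypothetical names, not how any EHR actually implements it:

```python
# Illustrative "unedited draft" check: warn when a clinician tries to send
# the AI-generated draft verbatim. Function and variable names are hypothetical.

def needs_confirmation(ai_draft: str, outgoing_message: str) -> bool:
    """Return True if the outgoing message is the unedited AI draft,
    meaning the UI should pop up an 'Are you sure?' confirmation."""
    return ai_draft.strip() == outgoing_message.strip()

draft = "Your symptoms sound like a mild reaction. Rest and fluids should help."
print(needs_confirmation(draft, draft))        # unedited draft, prompt the clinician
print(needs_confirmation(draft, draft + " Call us if the fever persists."))
```

A real implementation would live in the messaging workflow and probably log how often drafts go out untouched, which is itself useful data for the kind of study discussed here.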
And how many messages a day per physician are we talking about, messages they're sitting at home trying to respond to, or that they've handed to their staff? This is one way we can drive efficiencies. It's one way we can drive some of the cognitive load off the physicians.
This is why I'm bullish on this. I'm looking for specific cases that are gonna drive minutes out of the equation and drive efficiency into the equation, without putting patients at risk, while potentially giving patients a better response all in all. And if this study is showing, hey, higher quality responses with more empathy, that's a win-win.
So, interesting that we're starting to do studies. I think we will see more of these. If you are thinking of implementing this and can't wait for the studies, I would say participate. You know, find some peers, start crafting a study, and start your own analysis of this.
I think this is gonna come fast and furious, because it represents a couple minutes here, a couple minutes there, on tasks that happen 40 times a day. And that easily adds up. I can't put a number and an equation on the cognitive load that physicians, or clinicians in general, have to handle every day, but AI is going to help alleviate that cognitive load.
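The "easily adds up" math is worth making explicit. Using the speaker's own rough figures, not study data, a back-of-the-envelope estimate looks like this:

```python
# Back-of-the-envelope math on the time savings described above.
# All per-message figures are the speaker's rough estimates, not study results.

write_minutes = 4       # roughly the midpoint of "three to five minutes" to write
review_minutes = 1      # rough guess for reading and approving/editing an AI draft
messages_per_day = 40   # "tasks that happen 40 times a day"

saved_per_day = (write_minutes - review_minutes) * messages_per_day
print(f"Estimated time saved: {saved_per_day} minutes/day")  # 120 minutes/day
```

Even if the review step takes twice as long as assumed here, the estimate stays well over an hour a day per physician, which is why a couple of minutes per message matters.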
And so that becomes, again, more of the promise, and it's why we keep talking about this. It's an exciting topic, an exciting time to be in health IT, and I believe we're going to be a part of shepherding this forward. So, an important time. All right, that's all for today. If you know someone that might benefit from our channel, forward them a note. That really helps us.
Let 'em know that you're listening to this channel. They can subscribe at thisweekhealth.com or wherever you listen to podcasts. We wanna thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest and Artisight. Check them out at thisweekhealth.com/today.
Thanks for listening. That's all for now.