TownHall: AI Adoption, Risks, Operational Automation and Predictions with Vik Patel
Episode 39 • 26th March 2024 • This Week Health: Conference • This Week Health
00:00:00 – 00:22:21



 This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on Town Hall

one of the quotes from Dr. Chang was quite eye-opening: in the next five years, we will see a health system get sued for not using AI before we see a health system get sued because they were using AI.

So I just feel like, talking about the trends and the adoption, I don't think you should be on the sidelines.

My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, where we are dedicated to transforming healthcare, one connection at a time. Our Town Hall show is designed to bring insights from practitioners and leaders on the front lines of healthcare. Today's episode is sponsored by Armis, First Health Advisory, Meditech, Optimum Health IT, and uPerform. Alright, let's jump right into today's episode.

My name is Brett Oliver. I'm a family physician and the CMIO for Baptist Health System in Kentucky and Indiana, and I am excited to have with me today Vik Patel. Vik is the Chief Operating Officer for TIDO. Welcome, Vik. Hey Brett, how are you? Thanks for having me. Nice to see you.

Yeah, nice to see you again. If you would just take a second and let folks know your background, and maybe just a little bit about TIDO, before we get started.

Yeah, thanks, Brett.

We have been in business since:

Definitely reach out to us. And now actually even AI: we'll talk a little bit about our new AI solution, which is an integration detection and response solution. So we can chat about AI as well.

Awesome. Awesome.

Welcome. Now, you just got back from ViVE. How was that experience?

ViVE was great. This was my first time at ViVE, and I met Bill Russell for the first time as well. I have been following him for a long time, but finally met him in person, so that was nice. The conference was great: an amazing time, a lot of networking, and I felt it was a really well organized conference.

Like, the opportunities to connect with people, and even the app itself. It is a technology conference, and a lot of times the app is not the best, let me tell you. I mean, I have been part of many conferences, and it's very hard to use the app, find the agenda, network with people. But this one, from the beginning, even on LinkedIn, the ViVE marketing people are really good.

They definitely help promote anything that you posted about ViVE on LinkedIn or on Twitter, or X I should say. And, I mean, just the whole experience. I will definitely attend again, and maybe the HLTH conference that I think is in the fall, right? In Vegas. Similar company, you're right.

Oh, let's jump in. It seems like AI is basically what everybody's been talking about for the last year plus. From your perspective, you mentioned a lot of different contact points that you have with health systems throughout Canada and the U.S. Where do you see adoption among health systems at this point? Is everyone adopting something? Is the adoption across the board, or specific to a part of healthcare? What have you seen?

I mean, there's definitely no percentage out there, right? If you were to research how many organizations are actually using AI in some form, I don't think we'll find any exact numbers. But just based on what I have seen talking to everyone, I would be surprised if it's less than 50% that are using AI today, in some way or form, in one of their solutions.

And I feel like imaging is definitely one where I have seen a lot more adoption: detecting cancer or heart disease or anything else, just helping the radiologists and technicians find those things with the AI algorithms. I mean, that I feel is definitely used a lot. There's also a lot of predictive analytics, identifying high-risk patients, for example.

But I want to go back to ViVE a little bit, and one of the sessions. I didn't have a lot of time to attend many sessions, but I did have a few bookmarked, and one of them really stood out to me. I think a lot of people in the healthcare space will probably recognize these names: this was a panel with Dr. Anthony Chang, Jessica Beagle, Marty Paslick from HCA, and Stephanie Lahr from Artisight. A really good panel, very experienced, and they talked about the whole AI adoption and where the trend is. And it's funny, I think everyone on that panel felt strongly that you should be using AI at this point. Don't be on the sidelines; it's definitely the way to go.

And one of the quotes from Dr. Chang was quite eye-opening. I'll try to summarize it correctly, but what he was saying is that in the next five years, we will see a health system get sued for not using AI before we see a health system get sued because they were using AI. And they provided so many reasons for why you should be using AI, why you should be helping your clinicians make decisions, right?

So I just feel like, talking about the trends and the adoption, I don't think you should be on the sidelines; definitely use it. And maybe we should go next into the risks, some of those things. But I feel like even from a non-clinical standpoint, why not use it in an operational, non-clinical way, right?

Like there's so many solutions now coming up where you can use AI from an operational standpoint, create efficiencies from an operational standpoint. So I would say the adoption is increasing every day, and yeah, I would be surprised if it's less than 50%.

Well, let's do it. Let's get a little more specific.

So, much of the conversation at a very non-technical level, when people say AI, at least over the last year plus, has been referencing generative AI. How should healthcare systems look at that, in particular the risks of generative AI, and then attempt to mitigate them, whether we're talking in an operational sense or in the clinical space?

For me, I feel like one of the biggest things is around data privacy. Because this is so new, I feel like we are probably in the first inning of a nine-inning game. It's so early on, and the potential of the AI models exposing PHI, nobody really talks about this. It's like, oh yeah, we have trained the AI models on this data and that data, but OK, where is this data, and what's the potential of actually exposing it, right?

So that, for me, is a huge unknown from a data security and privacy standpoint. The other thing is the actual data: the whole black box of the unknown training data that a lot of these models are based on. So, for example, let's say the training data was based on a population in cities, right?

A very metro area, and then you're using the same solution in rural areas. Now, wouldn't that be biased, you know what I mean? Those kinds of things: using the same model that was trained on a very different geographic location, and the bias in the suggestions it would give you on a population in a rural area. That's just an example, but again, we don't know what data was used for training the model, so that's a risk, along with the bias it may have in the support and suggestions it provides.
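A minimal sketch of how such a population shift could be screened for, assuming patient age as the single feature of interest; all numbers here are illustrative, and a real check would use proper statistical tests across many features:

```python
# Minimal dataset-shift check: compare a feature's distribution in the
# data a model was trained on against the population it now serves.
# Ages below are made up for illustration, not real patient data.

def mean_std(values):
    """Return the mean and population standard deviation of a sample."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var ** 0.5

def shift_score(train_sample, live_sample):
    """Difference of means, in units of the training standard deviation.
    A large score suggests the live population differs from training."""
    train_mean, train_std = mean_std(train_sample)
    live_mean, _ = mean_std(live_sample)
    return abs(live_mean - train_mean) / train_std

# Hypothetical patient ages: model trained on a metro population,
# then deployed against an older rural population.
metro_ages = [34, 29, 41, 38, 45, 31, 27, 52, 36, 40]
rural_ages = [58, 63, 49, 71, 55, 66, 60, 52, 68, 59]

score = shift_score(metro_ages, rural_ages)
if score > 2.0:  # more than 2 training std devs apart: flag for review
    print(f"possible population shift (score={score:.1f})")
```

Here the rural sample's mean age sits more than two training standard deviations from the metro mean, which is exactly the kind of mismatch that should trigger a review of whether the model is suitable for that population.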

The other thing I feel is missing is the testing and validation of the AI models. Again, a lot of times we don't even know the models. But let's say we actually knew the models being used, that it was transparent and being shared. I feel like there has to be a lot of conversation about processes: what are you doing as an organization to test these models that your AI solution is using, right?

A lot of times vendors may not share that information, because it's intellectual property or whatever the case may be, but then that's a risk for you. I think most organizations I have talked to so far don't have the methodology or processes in place to test and validate the AI models used by all these different solutions, to make sure of and ensure their accuracy. So I think that is another risk.

I was just saying, really, your only option is to run it kind of in the background, in shadow mode or however you want to term that, and sort of test it yourself. But boy, when you're talking about dozens and dozens of algorithms being available, it really becomes difficult to do all of those all the time.

And to your point, I'm not talking about testing it just to implement it. Once it's implemented, how often do you need to make sure that the algorithm is still doing what you thought it was supposed to do?

You are exactly right, Brett. That was going to be my next point, about post-deployment monitoring, right?

Like, who's doing that? It's like, oh yeah, we have the solution in place, it's working. But okay, what if they change the models? Updates are always being made. So how are you doing your post-deployment monitoring?

And I hear that talked about a lot, but then on the other side of things, disease treatments change, disease frequencies change.

So on the clinical side, yes, the model can change and be upgraded, but the clinical picture, treatments, et cetera, change as well. So what is that cadence? Is it six months for this model? Is it a year? Is it 18 months? And I would assume it's going to vary by model.

No, that's a great point. Yeah.

From a clinical standpoint, I think that's where one of the pushes should be from a health organization when you are looking at a lot of these solutions. Like you just said, there are so many algorithms out there, but there should be a push to use explainable AI models, right?

The XAI models and the transparency protocols. Once you have these things in place, it's like, hey, that's our policy: we need to know what models you're using. And if they are not explainable models, okay, then do we consider this solution? Do we move forward with it? Or what are the next steps?

And I think that's a good thing to do, instead of, oh yeah, it's an AI model, it must be good. No, that's not always the case. We shouldn't just blindly trust whatever solution is being presented; the right questions should be asked. And that's where maybe you need to partner with an organization that can help you with some of that, with the validation and testing of those models.

And first of all, finding out what it is, both before the implementation and then, like we just discussed, even post-implementation, the constant monitoring of it.
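As one hedged sketch of what that constant post-deployment monitoring could look like in code, assuming the simplest possible signal, a rolling window of correct/incorrect predictions compared against a go-live baseline; the baseline, window, and tolerance values are made up, and real monitoring would also track input drift, not just accuracy:

```python
# Sketch of post-deployment model monitoring: track recent prediction
# accuracy in a rolling window and flag when it drops below the level
# measured at validation time. All thresholds here are illustrative.

from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline=0.90, window=100, tolerance=0.05):
        self.baseline = baseline              # accuracy measured at go-live
        self.tolerance = tolerance            # allowed drop before alerting
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def should_alert(self):
        """Alert once the window is full and accuracy has drifted."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.90, window=10, tolerance=0.05)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% correct
    monitor.record(pred, actual)
print(monitor.should_alert())  # prints True: 0.70 < 0.85
```

The design choice worth noting is that the monitor stays silent until its window fills, so a single early miss doesn't page anyone; the trade-off is slower detection, which is why the window size has to be tuned per model.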

In the ever evolving world of health IT, staying updated isn't just an option. It's essential. Welcome to This Week Health, your daily dose of news, podcasts, and expert commentary, designed specifically for healthcare professionals like yourself. Discover the future of health IT news with This Week Health. Our new news aggregation process brings you the most relevant, hand-picked stories from the world of health IT. Curated by experts, summarized for clarity, and delivered directly to you.

No more sifting through irrelevant news, just pure, focused content to keep you informed and ahead. Don't be left behind. Start your day with insight at the intersection of technology and healthcare. This Week Health. Where information inspires innovation.

Yeah, I think that's of particular importance to community health systems like mine, where I don't have a lot of resources. Unlike an academic center, we don't develop AI products ourselves, things like that.

And you get a lot of press from the larger academic centers, and I'm glad that they do the work that they do, but it's very different for a health system like mine that doesn't have true data scientists to do that kind of work. I think there's real opportunity there. One of the areas I hear a lot about is automating the operational work, and I'd really like to get your take on that in the non-clinical applications.

I hear a lot about the clinical pieces, some of the ambient listening things and that side of it. Every list I see has these non-clinical operational use cases, but I don't hear a lot of conversation about them. So I would like to get your take on the non-clinical operational work and automation that some of the GenAI products can do.

Yeah, and I did have some conversations related to this at ViVE this week. One of the things is around data entry, right? The whole scheduling piece of it. I feel like there's definitely a lot of opportunity there, starting from the intake of the patient on your website, whether they are using it in a desktop browser or in an app on the phone. From there, some of those workflows can definitely be automated.

Asking the right questions, and again, making sure the solution is HIPAA compliant, that's another thing. But it's very simple stuff: it's registration-related, it's data collection and data entry, and then making sure that this information gets into the EHR in the right place in a timely manner. All of this should be done by an AI solution, and it's not about filling out a form and all that.

Obviously, if you're talking about AI, it should be smart enough to ask the right questions, collect the right information, and then send it into the EHR. Now, a lot of times there are integration-related questions attached to that. You may have either HL7 feeds or some FHIR APIs that could be used to send that information into the EHR.

Some of them, I have actually learned, even use bots: they pretty much deploy bots that mimic a user and enter the information into the EHR. Probably not the best way. I mean, it's like, oh, AI, and now you're doing screen scraping to enter the information.
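To make the FHIR-API route concrete, here is a minimal sketch that maps a hypothetical intake form to a FHIR R4 Patient resource; the form fields and the endpoint mentioned in the comment are assumptions for illustration, not any particular EHR's API:

```python
# Sketch of the FHIR-API path: package intake-form data as a FHIR R4
# Patient resource that could be POSTed to an EHR's FHIR endpoint.
# The intake-form fields below are hypothetical.

import json

def patient_resource(form):
    """Map a simple intake form to a FHIR R4 Patient resource."""
    return {
        "resourceType": "Patient",
        "name": [{"family": form["last_name"], "given": [form["first_name"]]}],
        "birthDate": form["dob"],  # FHIR expects YYYY-MM-DD
        "telecom": [{"system": "phone", "value": form["phone"]}],
    }

intake = {"first_name": "Jane", "last_name": "Doe",
          "dob": "1980-04-02", "phone": "555-0100"}
body = json.dumps(patient_resource(intake))
# In production this body would be POSTed with an OAuth token, e.g.
#   POST https://ehr.example.com/fhir/Patient   (hypothetical URL)
print(body)
```

The contrast with the bot approach is the point: a structured resource like this is validated by the EHR's API on the way in, whereas screen scraping silently breaks whenever the UI changes.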

The other thing, I think, is insurance claims, for example. There's a lot of processing around insurance claims: could some of that verification and fraud detection be done using an AI solution? Again, non-clinical. Expedite your billing processes, help your operational processes.

So I feel like those are some of the places that I think organizations should be looking into.

Well, let's get personal for a second here. Where has generative AI already changed y'all's business?

Yeah, so we have been focusing on integration for the longest time, and one of the big gaps, even today, is that we depend a lot on clinical staff, the clinical end users, to report clinical application issues and integration issues. There are quite a few examples of this.

Hey, I haven't received my PACS orders, they're missing. I already put them in the main EHR system, and I still don't see them. The integration team doesn't see the problem because the interfaces look fine. But there could be many reasons why the orders are not showing up in the right place, in the right workflow, and you don't find out about these things until after the fact.

So, I just described one workflow. There are hundreds and hundreds of workflows happening constantly in a health system, both on the acute and ambulatory side. And there are hundreds of interfaces, sometimes 300 or 400, sharing information constantly in real time.

And yes, there are certain alerts and notifications built into every engine, even on the EHR side, sure, but they're not very smart. They don't detect the right issues, and a lot of times it's notification fatigue, it's alert fatigue. So a lot of times the integration engineers don't even look at them.

It's just not practical, and I understand why; it's not their fault. You have 40 alerts coming at you in five minutes, and you're not going to decipher which is the real issue. So what we have done is say, okay, why don't we take advantage of AI and accelerate the triage and resolution processes, to minimize any effects on patient care and revenue, right? That's where we now have a cloud-native AI solution that integrates with every major engine and with all major EHRs to help detect these issues in real time and then alert the right department and the right users, so that we know about it instantly instead of finding out after the fact and only working on it an hour or two later.

And in our case, yeah, we are using known models. Our team is using ARIMA; there's also STL, LSTM, and VAR. These are some of the known models I'm talking about. It's easy to find definitions for them; even if you googled one or asked ChatGPT what the model is, it will tell you exactly what it is and explain it to you.

So again, there are lots of algorithms that we sort through, and then we come up with the best way to detect the issues in real time. That's one of the big changes we have made.



So the power of the generative models allowed you to do that, where before it couldn't be real time. Is that kind of the point?

Yeah, exactly. Before, it wouldn't even detect the right issue. So if, for example, on a daily basis you had about 50 cardiology orders, and today the trend is only around 20, a lot of the time no one's going to pick that out, right?

Like, these kinds of trend changes, for example. And because the interface is up and running, there are no connection breaks. This is where the AI would actually see it, right? It will see the difference: hey, this is not the norm, something's going on here. Then it'll dive in a little bit deeper and alert the right people.
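The production solution relies on time-series models like ARIMA and STL, but the underlying idea in the 50-orders-to-20 example can be sketched with a toy check that compares today's count against a recent average; all numbers are illustrative:

```python
# Toy stand-in for interface-volume anomaly detection: forecast the
# expected daily message volume from recent history and flag large
# deviations even when the interface itself is up and error-free.
# Real systems would use proper time-series models (ARIMA, STL, ...).

def volume_alert(history, current, window=7, threshold=0.5):
    """Flag when `current` deviates from the recent average by more
    than `threshold` (expressed as a fraction of that average)."""
    recent = history[-window:]
    expected = sum(recent) / len(recent)
    return abs(current - expected) > threshold * expected

# Hypothetical daily cardiology-order counts, then today's count of 20.
daily_orders = [50, 48, 52, 49, 51, 50, 47]
print(volume_alert(daily_orders, 20))  # prints True: well below trend
```

The engine's own connection monitoring would never fire here, because the interface is healthy; it is the drop in message volume relative to the learned norm that raises the alert.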

Again, we have learned over time; in the beginning we did have quite a few false alerts and all that, but it learns over time and it does get smarter. Once the solution is up and running on top of your engine and your EHR, after about 30 days the number of false alerts goes down tremendously, and it's pretty accurate most of the time.

What are your predictions for the rest of 2024?

Oh, definitely. I mean, increased adoption. If you are not using AI in some way, I think you definitely should be.

And I think most organizations will. You go to any conference and everyone's talking about AI; every person, every solution has AI built into it, so you can't get away from it. So I would say it's a no-brainer, in my opinion: definitely increased adoption. But the other thing I'm going to add is that there's also going to be increased integration into your workflows.

So what I think is going to happen is that a lot of your workflows, the way you are doing them today, may remain the same, but underneath them AI is going to help you, right? It's going to help you with decision support, where before it was only in a couple of places, but more and more, at every step, there will be AI involved.

You may not even know about it, but it will be integrated, and I feel like that's where the beauty of it is. It shouldn't get in your way, but it's in the background helping you out when it needs to.

Fantastic. Well, Vik, I want to thank you for taking the time to be with us today. I appreciate your insights.

Thanks. This was fun. Yeah, this went by really quick. Thanks, Brett. Yeah, I appreciate it.

Thanks for listening to this week's Town Hall. A big thanks to our hosts and content creators. We really couldn't do it without them. We hope that you're going to share this podcast with a peer or a friend. It's a great chance to discuss and even establish a mentoring relationship along the way.

One way you can support the show is to subscribe and leave us a rating. That would be really appreciated. And a big thanks to our partners, Armis, First Health Advisory, Meditech, Optimum Health IT, and uPerform. Check them out at thisweekhealth.com/partners. Thanks for listening. That's all for now.


