This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today on This Week Health.
We're still early in terms of federated learning. So as AI algorithms are kind of penetrating the marketplace, what we're seeing is people starting to understand and starting to get some experience. And the one thing we're seeing across the board is the need to test this stuff on local data. So the opportunity for federated learning is to move the algorithms to the data, not the data to the algorithms, and let the data reside in the local institution. And so that's where we see a lot of potential at the moment: to accelerate some of the tuning, if you will, to local conditions.
All right, today we have a solution showcase for you: a conversation that I moderated from the HIMSS floor at the VMware booth, with NVIDIA, with Rhino Health, and with the American College of Radiology. And this is about federated learning for AI models. It's a very fascinating conversation. Mike Tilkin represents the American College of Radiology. We have Ittai Dayan, who represents Rhino Health, a really cool solution for federated learning. And obviously we're in the VMware booth, and we have Brad Genereaux, who is with NVIDIA. Great discussion. I hope you enjoy.
All right. We learned something the last time we did one of these: we learned that you have to have the mics almost in your mouth so that people can hear you. So I'm excited for this conversation. We're going to be exploring federated learning. We're also going to be exploring building out your AI infrastructure around that. Here's what I'd like to do: I'd like to have each of you introduce yourself, who you're with, and what your role is.
So, hi, I'm Mike Tilkin. I'm the Chief Information Officer and Executive Vice President for Technology at the American College of Radiology. We are a physician organization, about a hundred years old, and we represent over 40,000 radiologists. And we have programs around quality and safety and research and education, all aimed at empowering our members to provide the best possible patient care.
Hi everyone. Ittai Dayan, co-founder and CEO of Rhino Health, physician by background. Rhino Health is a disruptive AI startup with the goal of revolutionizing this field using distributed compute and federated learning.
And my name is Brad Genereaux, medical imaging and smart hospitals alliance manager with NVIDIA. What I do is cover developer relations for anything in the medical imaging and smart hospitals ecosystem, whether it's AI, visualization, virtualization, or analytics: covering the full stack to help deliver solutions to providers.
Alright, former CIO here. We always start with the use case. I want to know the problem you're trying to solve, or what you're doing for the industry. So, Michael, we'll start with you. What is the problem we're trying to solve?
Yeah. Well, as AI hit the radar screen about five or so years ago, out of the computer vision community, the ACR created the Data Science Institute, and our goal was to promote the safe and effective use of AI.
So we looked at the types of use cases that would be clinically useful. We looked at the data needs, since these are data-hungry applications. We looked at the workflow needs, to make sure they're implemented in safe and effective ways. And we spent quite a bit of time looking at what it means for these algorithms to be ready for prime time, what it means to validate the algorithms, and then, because we know they're susceptible to drift, making sure we're monitoring over time. So it was all about ensuring the safe and effective use of AI.
All right. So your member organizations utilize your services for validating these AI models and for making sure there's no drift. I mean, am I capturing that right?
So we have a wider set of quality and safety programs, things like national registries or accreditation or things of that sort. What we're trying to do is make sure that the community is applying these effectively. So we do have programs to help promote safety. We're also advocating: we're working with regulators, we're working with industry, we're trying to educate our members. The problem space right now is as much education as providing tools, and not just our own tools, but really promoting an industry tool space that's going to help folks validate algorithms, train them on their local data, test them on their local data. So we're really trying to create an ecosystem for success.
All right. So when I hear that you have a lot of member organizations that are connected to you. You're pulling that data and this gives me a way to work with other organizations through you to validate these models and then take these models out into the clinical world and those kinds of things. Is that, is that accurate?
Well, and empower our members. Individual institutions are going to work with vendors directly, so they're looking for guidance. They're looking for: what do I need to worry about? What's the infrastructure I need at my facility? How do I need to communicate with the larger community to understand that my performance characteristics are appropriate? It's kind of a national partner that's trying to help.
So, Ittai what do I need to have as a health system to participate with this? What's the, what's the architecture? What does it look like?
So Rhino is a distributed system based on federated clients and cloud orchestration. If you set up a VM, we can install our system fairly quickly. Cloud orchestration brings out the best of both: the ability to use your existing IT stack, without having to make major IT investments in order to participate in the AI development and validation game, while also getting the agility and scale of the cloud.
I want to come to you. I love talking to you about this; we talked a little about it yesterday. This is an opportunity for me as a CIO to put in an architecture that's going to allow me to do many things. We talked yesterday about mammography and whatnot, but it's sitting on this same kind of architecture where we can build out the AI capabilities, and we're actually virtualizing this whole AI infrastructure. Talk a little bit about the infrastructure that sits underneath this.
Yeah, absolutely. So what we've done with NVIDIA-Certified Systems and AI Enterprise, with VMware as our virtualization stack, is create an ecosystem where we can build all of our applications in one environment, investing in the one platform so that I don't have to go and buy one-off boxes for every single solution.
Yesterday we talked about iCAD and mammography; today we're talking about training AI models together using federated learning. If I were to have a box for every single one of these applications that we see, even just walking the show floor, it's impossible. We need to have that stack, that platform, to build everything on top of. And that's what we're creating with NVIDIA and our partners: with VMware, with Rhino Health, and ultimately with the ACR.
If you're out there, you can be thinking about questions; if you have questions, we would love to take them. I want to come back to you. As a CIO, one of the first things I'm going to ask is: who else is doing this? What are the use cases? What other organizations? Help me understand the organizations that are utilizing federated learning. What are some of the outcomes they're seeing, and what are some of the things they're doing?
Well, we're still early in terms of federated learning. So right now, what I would say is, as AI algorithms are penetrating the marketplace, what we're seeing is people starting to understand and starting to get some experience. And the one thing we're seeing across the board is the need to test this stuff on local data. So we do things, for example, with a registry, to get results back and help people benchmark and the like. When it comes to federated, or at least distributed, validation, people are starting to take these algorithms and test on local data. In terms of training algorithms in a community, federated manner, it's happened more in the research realm than in the commercial realm at the moment. But it all speaks to the same problem, which is that the data is sitting locally, and what you see in these really data-hungry applications is that most of the efforts to collect lots of data start to bump up against the struggles of data leaving the facility.
So the opportunity for federated learning is to move the algorithms to the data, not the data to the algorithms, and let the data reside in the local institution. And so that's where we see a lot of potential at the moment: to accelerate some of the tuning, if you will, to local conditions.
So Ittai, generally, as a CIO, people come to me and say, Hey, can you move all your data up into our cloud platform? We'll process it all up here and then we'll bring it back down. That creates so many problems for me as a CIO. First of all, data in transit, data at rest; I've got to worry about all sorts of things. I don't know how they're storing that data. Now I have to verify their data center and all their practices and those kinds of things. Talk about doing these algorithms at the edge, and the benefit that we're looking for.
Yeah. So this is a general industry challenge, where most AI today in healthcare is created on a very narrow training set and validated on fairly narrow data that's not necessarily representative of the target population of patients.
Much of that is driven by the fact that today, as a medical institution, in order to participate in the training and testing of AI, you need to either make major investments in many new platforms locally for many one-off collaborations, such as was mentioned before by Brad, or you need to move all your data to the cloud. That may work for a few collaborations and for early product development, but it doesn't work for product introduction and ongoing product improvement and validation, when you need access to massive amounts of data. The ACR is well aware of that issue, and as part of that has laid in the infrastructure of data management and interfacing tools that help you connect with a PACS and reporting systems in a meaningful and standardized way. That's a very big part of the solution. After you have that, you need to be able to actually act on that data, and for that you need a platform in order to validate the algorithms, identify weaknesses, identify subpopulations that have not benefited in an equitable way from a model, and then you need a way to act on that and actually make product improvements, which is the federated training perspective. NVIDIA has done an amazing job in terms of creating a lot of this seminal technology, such as the Flare system, which is now open source and implemented by Rhino, creating a lot of this global, stack-level tooling to build product.
Now, Rhino is building a more consumer-facing tool, one facing startups, medical innovators, and institutions, in order to scale out that innovation throughout large networks. The ACR use case is a very impressive and prominent one in the sense that the ACR has an interface with most of the radiology data in North America, is a highly reputable organization because it puts the patient in front of everything it does, and has been building a lot of methodological frameworks for this for probably the last 10 years, much of it done by the Data Science Institute. And in that, we think the ACR is an obvious partner to prove the value of using real-world data at scale, in order to test products better and improve products for the benefit of patients.
One thing that I'll add is that you know when I talk to a CIO and say, Hey, I'd like to get started. I'd like to work together. Let's say I'm a children's hospital. I want to work with other children's hospitals. Let's train a model together to identify a particular condition. If I have to start that conversation with, well, let's go buy a server, right?
Well, we're talking three months before we can get started, right? With the amount of time any of these solutions take, we can't wait. Let's get started now. And when we create this one stack, where we can run Rhino Health using NVIDIA Flare's open source SDK to help power what the ACR is doing, we can now go to member hospitals and say: you've got the stack. Let's drop the VM on VMware. Let's go. We can start now. We can start tomorrow and not have to wait.
Tell me about Flare. Help me understand Flare a little bit better. I've heard it said twice now, so what exactly are we talking about here?
So Flare is our SDK to help drive federated learning, collaborative learning. What we would do in the past is create these data swamps, these data lakes, where we put all the data up in one spot, train our model up there, and then create our model, use a validator, etc. With Flare, what we've done is different: organizations that have their data leave their data there, and it's basically a server-client relationship. The server, which could be at one of the member hospitals, could be at the ACR, could be up in the cloud, orchestrates the training. So what happens is you start with a seed model. That seed model goes to all the different member hospitals using the Flare client. They train their models locally, so they take that model individually, do their tuning, and then send a subset of the model, just the model weights, back up to the server, which aggregates and averages, and then pushes that out for more rounds.
So like five, six rounds. And the idea is that it's privacy-preserving, because we're not actually sending any healthcare data whatsoever. We're only sending a subset of the weights, and you can't reverse that to recover the data. But ultimately we have a model that generalizes for all the different participants, rather than being geared towards one particular hospital.
One particular modality, one particular manufacturer: it's generalized for all. So it's a framework that helps do this. We started with Flare in medical imaging, but what we're seeing is that it applies all over the place. It applies to EMR data plus imaging data. We've actually done it: we did a model called EXAM that used chest X-rays and respiratory data to help train a model. And we're also seeing it now being applied to finance and other domains outside of healthcare. But this one started in healthcare and we're propelling it forward. Super excited.
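To make the seed-model-and-weight-averaging loop Brad describes concrete, here's a minimal sketch of federated averaging in plain Python. The three "hospitals," their dataset sizes, and the toy local-training step are all hypothetical stand-ins; a real system like Flare handles the actual model training, communication, and security.

```python
def fedavg(updates, sizes):
    """Average model weights from each site, weighted by local dataset size.
    Only the weights travel; the underlying patient data never leaves a site."""
    total = sum(sizes)
    dims = len(updates[0])
    return [sum(w[d] * (n / total) for w, n in zip(updates, sizes))
            for d in range(dims)]

# Hypothetical sites: each hospital's local data pulls weights toward a target
site_targets = [[1.0, 2.0], [3.0, 0.0], [2.0, 2.0]]
site_sizes = [100, 300, 200]

model = [0.0, 0.0]                # seed model broadcast by the server
for round_num in range(6):        # "five, six rounds," as described above
    # Each site "trains" locally: one step from the global model toward its data
    updates = [[m + 0.5 * (t - m) for m, t in zip(model, target)]
               for target in site_targets]
    # Server aggregates the weight updates and redistributes the global model
    model = fedavg(updates, site_sizes)

print(model)   # converges toward the size-weighted mean of the site targets
```

The global model ends up near the size-weighted consensus of all sites, rather than being tuned to any one hospital, which is exactly the generalization benefit described above.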
All right, I'm going to come back to an interview I did with a CIO from an academic medical center. Any questions out there?
Yes, so the question I have is maybe slightly different, from the AI and analytics side of things to more the operational and transactional side. Where do you see a use case where your solution stack is applied to real-time data delivery, where you're actually trying to drive data delivery to different users in the healthcare space, particularly as the patient journey progresses from one facility to another? Because everybody's now on analytics, which is fine, right? You need that, no questions asked. But there's a big gap on the interoperability side of things, where I would see this as a potential flaw and/or limitation.
So one thing that we've started to see using Flare is that it potentially applies not only to training workflows, but to other workflows as well. And one of the really neat ideas that we've started to explore is: what does federated inference look like? In this case, if we have a model and a number of different participants that could run that model, we could all of a sudden create really high availability. Within a single hospital system with multiple locations, or among different participants, one site could say, Hey, I don't have the capacity to run inference on this model right now.
One of the other peers in this network could potentially run it. Some of the challenges with that are still on the interoperability front: how do you get the data from one point to another? This is where we've got things like DICOM and HL7. I do a lot of work in that space; I've written a lot of the standards that have gone into this. It's coming. It's absolutely coming.
I was just gonna say that one of our key goals is to promote a thriving vendor ecosystem. We're trying to increase options for our members to do what they do, so interoperability is a huge part of this. We spend quite a bit of cycles thinking about: how can we use this technology? How do these components need to talk to each other? How do the results get to the right place, so that we can put the data in the hands of the folks who are the imaging experts, in our case, to make the right decisions? So it's a huge, huge thing.
I would maybe add to that, that as long as you're able to avoid data aggregation, you can do a lot of very interesting things using this system. Part of that is something which I think is now nicknamed vertical FL, where you're not actually using data that's diverse but redundant, but rather data that's complementary from different places. And as you mentioned with the patient journey, maybe the patient's scan in one hospital and the patient's scan in another hospital, paired with lots of panel data, can be brought together in one model at one time stamp.
It's interesting. With AI adoption within healthcare, I'm seeing really three areas. One is imaging, because the images are clean data. I'm seeing telemetry data, because a lot of telemetry data is streaming all the time: clean data. And I'm also seeing it on the administrative side, because you can make mistakes on the administrative side and it doesn't impact outcomes.
So those are the three areas I'm seeing it applied pretty significantly. I want to come back. I had an interview with a CIO from an academic medical center, and one of the things that he was driving home to me was the need to utilize local data. And in his words, he goes everybody's trying to aggregate these massive data sets and that's good for certain kinds of research and whatnot.
He said, but I need the data, I need to start analyzing the data in my community, because my community is the perfect representation of my community. Right, right. And so I can get very detailed population health metrics around this community, as opposed to trying to look at a larger dataset that isn't really going to help him understand who's coming into his ED from 10 o'clock at night to seven o'clock in the morning.
And even that population set is a little different than the population he's seeing from nine o'clock in the morning till 10 o'clock at night. He's talking about all that data. So how important is that local data set to the members that you're working with?
Yeah, it's critically important, for all the reasons that you just described. These algorithms are also very sensitive to things like scanner type and protocol and all sorts of ways the images are acquired. Local variation is huge. So we really see the need, in particular, to test, or if nothing else validate: make sure that it's working as you expect on your population, and work with your vendor.
If it's not, obviously, address it. And make sure you're monitoring, because not only do you want to make sure it's working initially, you want to make sure it continues to work. That's why we've pushed things like our Assess-AI registry, to make sure you're monitoring over time. But I think that local variation, for all the patient and other reasons, is tremendously important.
Talk to me about drift. I mean, that's one of the things I'm hearing over and over again.
Well, so you've got performance characteristics based on your initial tests, which are fantastic. And then maybe you see software updates, you see protocol changes, you see patient mix changes. And at that point, the algorithm may not perform quite as well on the data that's coming in on day two as it did on day one, or day 30. So we just see the critical importance of continuing to monitor that over time, to make sure you keep your performance at expected levels. If not, then you need to address that.
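The monitoring Mike describes can start as simply as comparing a deployed model's recent performance against its validation baseline. Here's a minimal sketch; the threshold, the window of predictions, and the numbers are illustrative assumptions, not ACR methodology, and a real registry would track richer metrics than raw accuracy.

```python
def check_drift(baseline_acc, window_preds, window_labels, tolerance=0.05):
    """Flag drift when accuracy over a recent window falls more than
    `tolerance` below the accuracy measured during initial validation."""
    correct = sum(p == y for p, y in zip(window_preds, window_labels))
    recent_acc = correct / len(window_labels)
    return recent_acc < baseline_acc - tolerance, recent_acc

# Day-one validation said 92% accuracy; a later window after a
# hypothetical protocol change tells a different story
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0]
drifted, acc = check_drift(0.92, preds, labels)
print(drifted, acc)
```

When the flag fires, that's the signal to dig into what changed, scanner software, protocol, or patient mix, and work with the vendor, as described above.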
Where can people get more information, if they want more information on the work you're doing. Where would they go?
So at the American College of Radiology, we have a dedicated website for the Data Science Institute. That's a great resource to look at all the things that we've got, from education to standards to technology, to the latest and greatest list of cleared algorithms out there. So quite a bit of resources at the Data Science Institute site.
Well, there's the Rhino Health website, www.rhinohealth.com, and we also plan to provide more information about the ACR and Rhino collaboration soon.
Where'd the name come from?
Well, many reasons, but I'd say that the rhino symbolizes breaking through obstacles and we saw the rhino as a data silo buster.
Where can people get more information on what you're doing and what VMware's doing?
Absolutely. So looking up AI Enterprise at nvidia.com is a great place to start, along with NVIDIA-Certified Systems. We've got a healthcare brief that shows exactly what the stack looks like, from the hardware, through the virtualization with VMware, all the way up to the top of the stack: tools that help data scientists and developers do that last mile.
The thing I love about this as a CIO is it's a platform, right? I get the platform in place, and I can start working with the American College of Radiology. I can start working with other systems because I have that infrastructure in place. I can start looking at these other players who are out here and say, look, I need you to drop into my infrastructure, because every time I do a one-off, complexity goes up, cost goes up, and agility is just lost.
Unsustainable. It absolutely is.
Yeah. I appreciate it, gentlemen. Thank you for your time and thank you for everybody who's here. Appreciate it.
Thank you so much.
What a great conversation with Mike, Ittai, and Brad Genereaux with NVIDIA. We wanna thank them for being a part of the panel. We also want to thank VMware for making their booth available to us to have this conversation, and all the people that participated in the conversation at the HIMSS conference. I love the federated learning aspect of this. I hope you got as much out of it as I did. If you're looking for more interviews just like this, more conversations like this, this is the conference channel; head on over to the newsroom channel, called This Week Health Newsroom. We have interviews from the ViVE conference and the HIMSS conference that we have aired. There's going to be upwards of 40-plus of those interviews, and they are fantastic. We're getting great feedback from the community on those. So check those out. That's a wrap. Thanks for listening. That's all for now.