Today in Health IT: the accuracy limits of data-driven healthcare. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. We want to thank our show sponsors who are investing in developing the next generation of health leaders: Gordian Dynamics, Quil Health, Tausight, Nuance, Canon Medical, and Current Health.
Also, on moving up to clouds like Azure, we're going to be talking to two health systems that have done that with Epic and what they have experienced. So check that out if you get a chance. You can sign up on our website, thisweekhealth.com; it is right there on the front page. All right, also check out our new podcast, This Week Health Community. We launched it this week.
We have health leaders interviewing practitioners in the field. Amazing new podcast. Find it wherever you listen to podcasts. I found a really cool article. David Talby, PhD, MBA, CTO at John Snow Labs, is making AI and NLP solve real-world problems in healthcare, life sciences, and related fields. He wrote a great article in Forbes on the accuracy limits of data-driven healthcare, so let's just get right into it.
Algorithms are only as good as the quality of data they're being fed. We know this is not a new concept, but as we begin to rely more heavily on data-driven technology, such as AI and other automation tools and applications, it's becoming a more important one. Recent research from MIT found a high number of errors in publicly available datasets that are widely used for training models: an average of 3.3% label errors in the test sets of 10 of the most widely used computer vision, natural language processing (NLP), and audio datasets.
Given that accuracy baselines are often at or above 90%, this means that a lot of research innovation amounts to chance or overfitting to errors. Data science practitioners should exercise caution when choosing which models to deploy based on small accuracy gains on such datasets. These findings are particularly concerning when it comes to AI applications in high-stakes industries like healthcare.
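To make that point concrete, here is a minimal sketch (my illustration, not from the article; the model accuracies are made up, only the 3.3% error rate comes from the MIT study) of how test-set label errors squeeze measured scores together and shrink the gap between two models:

```python
# Hypothetical illustration: how test-set label errors blur small accuracy gaps.
# Simplifying assumption: a prediction is scored correct only when the model is
# right AND the label is right, and label errors are independent of the model.

def observed_accuracy(true_accuracy: float, label_error_rate: float) -> float:
    """Accuracy as measured against a test set whose labels are noisy."""
    return true_accuracy * (1 - label_error_rate)

error_rate = 0.033  # average test-set label error reported by the MIT research

model_a = observed_accuracy(0.92, error_rate)  # truly better model
model_b = observed_accuracy(0.91, error_rate)

print(round(model_a, 4))            # measured score of model A
print(round(model_b, 4))            # measured score of model B
print(round(model_a - model_b, 4))  # the measured gap is smaller than the true 1%
```

Under this toy model, a true one-point advantage shows up as less than a point on the noisy benchmark, which is exactly why small leaderboard gains can be chance or overfitting to the errors.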
All right, and he goes on to talk about why this problem exists. First reason: one of the reasons for this is the data source itself. More than half of the clinically relevant data, for applications like recommending a course of treatment, finding actionable genomic biomarkers, or matching patients to clinical trials, is only found in free text. He goes on. Second challenge: another barrier exists in the limitations of what's in the data itself. Because there are no shared standards for data collection across hospitals and healthcare systems, inconsistencies and inaccuracies are common. Between different organizations collecting different information, and records not being updated on a consistent basis, it's difficult to know how accurate the data is, especially if it's being moved and updated among different providers.
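As a small sketch of what "no shared standards" means in practice (everything here is invented for illustration; these are not real hospital schemas), two organizations might record the same attribute under different field names and coding schemes, so merging their records requires an explicit crosswalk and a way to flag conflicts rather than silently picking a winner:

```python
# Hypothetical sketch: reconciling one patient attribute recorded two different
# ways. Field names and code values are invented for illustration only.

HOSPITAL_A_RACE_CODES = {"W": "White", "B": "Black or African American", "A": "Asian"}

def merge_patient_records(rec_a: dict, rec_b: dict) -> dict:
    """Merge two records for the same patient, flagging disagreements
    instead of silently preferring one source over the other."""
    race_a = HOSPITAL_A_RACE_CODES.get(rec_a.get("race_cd", ""))  # coded values
    race_b = rec_b.get("race")  # hospital B stores free-text labels
    merged = {"patient_id": rec_a["patient_id"]}
    if race_a and race_b and race_a != race_b:
        merged["race"] = None                     # unresolved: needs review
        merged["race_conflict"] = [race_a, race_b]
    else:
        merged["race"] = race_a or race_b
    return merged

print(merge_patient_records(
    {"patient_id": "P001", "race_cd": "W"},
    {"patient_id": "P001", "race": "Asian"},
))
```

The point of the sketch is the conflict flag: when sources disagree, the honest answer is "we don't know yet," which is the kind of reconciliation the article says is still hard to do at scale.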
Third thing: it's not just the providers to blame either; inaccuracies come directly from the patients themselves. A recent study from the Journal of General Internal Medicine shows just how prevalent this can be when exploring the accuracy of race, ethnicity, and language preferences in EHRs. The study found that 30% of white patients self-reported identification with at least one other racial or ethnic group, compared to 37% of Hispanic and 41% of African-American patients. Patients were also less likely to complete the survey in Spanish than the language preference noted in the EHR would have suggested.
All right, he goes on. There's clearly a need for better data collection practices in healthcare and beyond. Accurate information can help the medical community understand more about social determinants of health, patient risk prediction, clinical trial matching, and more. Standardizing how this data is collected and recorded can ensure the cleaned data gets shared and analyzed correctly. This is both a medical and social challenge. For example, what is the correct race to fill in? When exactly is someone considered a smoker? This is also partly a technology challenge, as we're already way beyond the limits of what's reasonable to ask providers and patients to manually input. All right, he goes on. One more: there are also data quality issues outside our direct control, such as fraud and abuse. The National Health Care Anti-Fraud Association estimates that healthcare fraud costs the nation about $68 billion annually, about 3% of the nation's $2.26 trillion in healthcare spending. Other estimates range as high as 10% of annual healthcare expenditures, or $230 billion. While we can account for error rates within that data, it's an imperfect science at the end of the day, and it's important to understand the limitations.
All right. So that said, it's not all doom and gloom when it comes to the quality of data and the algorithms we use, and he goes on to talk about some of that.
First of all, technology that can automatically understand the nuances of unstructured text and images, as well as reconcile conflicting and missing data points, is gradually maturing. NLP, for example, can address many pitfalls of data quality, such as uncovering disparities in an EHR versus a doctor's transcript, or what a patient self-reports.
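As a rough sketch of that kind of disparity check (my illustration, not the article's method; real systems use trained clinical NLP models, and the plain keyword matching below is only a stand-in for the extraction step):

```python
# Hypothetical sketch: compare a structured EHR field against what a clinical
# note actually says. Keyword matching stands in for a real clinical NLP model;
# the patient data below is invented for illustration.

def note_mentions_smoking(note_text: str) -> bool:
    """Crude stand-in for NLP extraction of smoking status from free text."""
    keywords = ("smokes", "smoker", "tobacco use")
    text = note_text.lower()
    return any(k in text for k in keywords)

ehr_field = {"patient_id": "P001", "smoking_status": "never"}
note = "Patient smokes about half a pack per day, advised cessation."

if note_mentions_smoking(note) and ehr_field["smoking_status"] == "never":
    print("DISPARITY: note suggests tobacco use but EHR field says 'never'")
```

The value is in surfacing the contradiction between the structured field and the free text so a human (or a downstream model) can resolve it, rather than trusting whichever source happened to be queried.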
In recent years, newer algorithms and models can apply the context, medium, and intent of each data source to infer useful semantic answers. He goes on. Current state-of-the-art, peer-reviewed, publicly reproducible accuracy on both competitive academic benchmarks and real-world production deployments has been steadily improving.
Over the last five years, libraries like Spark NLP have surpassed 90% accuracy on a variety of clinical and biomedical text understanding tasks. Reproducibility of results, consistency of applying clinical guidelines at scale, and the ability to easily tune models to a specific clinical use case or setting are three keys to successful implementations and to building broader trust in AI technology. And finally, the healthcare industry is varied and complex, and so too is the information collected. When using data to make any decision in this field, technology that helps will keep improving, but it's critical to remember the fundamental limitations of the data quality and accuracy that power these algorithms. Simply put, it's not safe to assume that a piece of data is correct because someone typed it into a computer.
This is a great article. I love this article. You can find it in Forbes; again, it's "The Accuracy Limits of Data-Driven Healthcare." My so-what on this is, to be honest with you, I'm not sure I can expand on it. It's an extremely well-written article that clearly articulates the challenge and promise of data-driven healthcare. All right, that's all for today. If you know someone that might benefit from our channel, please forward them a note. They can subscribe on our website, thisweekhealth.com, or wherever you listen to podcasts: Apple, Google, Overcast, Spotify, Stitcher. You get the picture. We are everywhere.
We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: Gordian Dynamics, Quil Health, Tausight, Nuance, Canon Medical, and Current Health. Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.