Ethics of artificial intelligence in the life sciences industry
Episode 68th July 2024 • CRA Sessions Podcast • Charles River Associates

The opinions expressed are those of the author and do not necessarily reflect the views of Charles River Associates, its clients, or any of its or their respective affiliates. This podcast is for general information purposes and is not intended to be and should not be taken as legal advice.

Lev Gerlovin, Vice President, Charles River Associates

Hello everybody. Thanks for joining. My name is Lev Gerlovin. I’m a Vice President within the Life Sciences Practice at Charles River Associates. It’s my pleasure to welcome you to today’s fascinating discussion on the issues of ethics in AI.

I’m pleased to welcome a dynamic panel of my Charles River Associates colleagues to talk about these issues and now I will turn it over to my colleague, Kristen Backor to introduce herself and moderate the discussion. Thank you.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Thanks, Lev. My name is Kristen. I’m a Vice President and the director of our Market Research Center. In terms of who we have on our panel: we have Michelle, an Associate Principal with us and an epidemiologist who advises clients on improving the traditional drug development and commercialization pipeline using machine learning, statistical modeling, and cloud-based computing. Michelle has led R&D transformations for various life sciences companies using data-driven decision making and intelligent clinical trial design. She is also extremely passionate about AI and machine learning. Prior to coming into consulting, Michelle held roles in the biotech and pharmaceutical industries, worked as an R&D management consultant, and helped build advanced analytics platforms for companies.

We also have Travis today. Travis is a Principal in our Life Sciences Practice and has experience across pharma commercial strategy, with a focus on analytics and forecasting. He’s worked across a range of therapeutic areas, including rare disease, immunology, pulmonary disease, kidney disease, CNS disorders, and metabolic conditions. Before joining CRA, Travis worked as a research scientist investigating cell biology, infectious disease, and cystic fibrosis.

And our last panelist today is Artes. She leverages her expertise in economics and policy consulting to develop solutions that support the business and social goals of pharmaceutical leaders in global markets, including Europe, Asia, and a range of emerging markets. She’s supported her clients in navigating healthcare policy reform, and in identifying international and national policy tools to improve timely access, strengthen healthcare system solutions, and incentivize the development of new treatments across disease areas including oncology, rare disease, and infectious disease.

So, if you were listening closely, you probably noticed that we have a range here, moving from policy to epidemiology to Travis on the more analytic side, and you’ll see that come through in the discussion topics we’ve prepared. Those topics are laid out here: pharma R&D and medical affairs; regulatory and oversight; and then, broadly, the main topic of our discussion today, ethical opportunities and challenges. We’re going to start with the pharma R&D and medical affairs piece, moving to Michelle. So, Michelle, as I said in your intro, your background is in drug development and research and development, and I’d love to talk about some of the different ways AI is being used in that space in particular. What are some recent examples or developments there that you’re particularly excited about or thinking about?

Michelle Guo, Associate Principal, Charles River Associates

Yeah, so I think R&D specifically is actually ahead of the curve when it comes to adopting advanced AI use cases, and I think that’s because of two factors. First, one of the good things that came out of COVID was a push for innovation: decentralizing clinical trials and giving more people access to taking part in them. And second, data connectivity: pulling all these different types of data sources that were traditionally handled manually, or via paper and pen, into cloud storage or some sort of integrated data store. The particular use cases I’ve seen coming out, and a lot of the projects we’ve worked on for our large pharmaceutical clients, are around optimizing clinical trials. So essentially, building advanced analytics models to optimize things like country and site selection: depending on the indication you’re interested in, or your portfolio of indications, which countries would be best from a recruitment perspective and a regulatory environment perspective, given the evidence requirements for the endpoints you’re deciding on. There are also certain parts of the protocol itself. For those who don’t know, a protocol is essentially the blueprint for how you run a clinical trial, and you can optimize portions of it. For example, how can you optimize your inclusion/exclusion criteria so that you can enroll as many people as possible while also balancing the risk of AEs and SAEs (adverse events and serious adverse events) that could come up?

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Can you give a concrete example, like with inclusion and exclusion criteria? What’s an example of a way AI could help?

Michelle Guo, Associate Principal, Charles River Associates

Yeah, so a very concrete example. I live this, so I forget that most people will need a primer. Essentially, inclusion/exclusion criteria can be anything from an age range, or the ability to speak English as a primary language, to, for certain drugs, needing to be within a certain range for a biomarker. HbA1c for type 2 diabetes trials usually has to fall between certain percentages; another one from most of the diabetes trials I’ve helped run is waist circumference, where you have to be within a certain range, usually with a maximum; and the same with BMI, where you can’t exceed a certain amount, but there’s also a threshold above which you can be part of the trial. So, what we’ve seen before, and this ties back to what I was saying about connecting different data sources, is that we were able to pull in the screen failure rates from past trials that one large pharma company had run, and using those rates we went through each of the inclusion/exclusion criteria individually. With a lot of caveats that I don’t have time to get into right now, we essentially tried to estimate the screen failure rate for each inclusion and exclusion criterion, and if there were any with exceedingly high rates of screen failure, we would take a look at those from a clinical perspective and ask: why in particular is this one inclusion criterion causing 4% of all screen failures? Very specific example, and this is something we did a couple of years ago.
We found out that this one company had set their HbA1c range to be, I think, probably two thirds of what all of their competitors were using. You can see other trials’ protocols using public data sources like clinicaltrials.gov, and we saw that all of the other big pharma companies running competing type 2 diabetes trials had a much broader inclusion range for HbA1c. So we presented this and said: if you just expanded this a little bit, you’d be able to cut your screen failures on the inclusion/exclusion criteria roughly in half. And that saves them cost, because you have to screen fewer patients, and time, because you have to recruit fewer patients. That was a very tangible impact from looking at something that was, honestly, pretty minor at the end of the day.
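The per-criterion screen-failure estimate Michelle describes can be sketched in a few lines of Python. This is a minimal illustration: the criterion labels, counts, and screening total below are invented, not data from the trial she discusses.

```python
from collections import Counter

# Invented screening records: each screen failure is tagged with the
# inclusion/exclusion criterion that caused it (illustrative only).
screen_failures = [
    "HbA1c out of range", "HbA1c out of range", "HbA1c out of range",
    "BMI above maximum", "waist circumference out of range",
    "age out of range",
]
total_screened = 150  # total patients screened across the past trials

# Estimated share of screened patients failing on each criterion
rates = {c: n / total_screened for c, n in Counter(screen_failures).items()}
for criterion, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{criterion}: {rate:.1%}")
```

In practice the interesting step is exactly what the panel describes: sorting criteria by estimated failure share and flagging outliers for clinical review, rather than anything exotic on the modeling side.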

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Travis, I know you had thoughts on, or an interest in, the protocol design piece as well. Is there anything you’d like to pull in here based on your experience?

Travis Ruch, Principal, Charles River Associates

Yeah, this wouldn’t be based on my experience, but on what you see being talked about a lot today, which is this idea of creating digital twins within clinical trial design. It’s exactly what Michelle just described, but at an even larger level. The idea is that you can take a patient, upload all of their health information into an AI so it’s digitized, then introduce your drug into that digital environment, i.e., "treat" the digital twin, and understand: is this patient a responder or not? Based on those outcomes, you could effectively run single-arm trials or decrease the sample size needed in your clinical trials. It’s effectively a way pharma companies could identify potential responders out of a broader patient pool. And I think this raises a really interesting question, something we wanted to talk a lot about today: what do you lose when you increase that efficiency? If you’re a pharma company, this sounds great, right? You create a digital AI, it identifies patients who are going to respond to your drug, you run your trial, you get a great signal, and all said, you’re going to get approved by the FDA. But on the back end, how does that impact your commercial prospects? What payers normally do, at least in the States, is look at your trial criteria and say, OK, we’re going to set our PA (prior authorization) to match your trial criteria. If your trial criterion is "patient was predicted by our AI algorithm to be a responder," who’s to say payers aren’t going to say, well, hand that over, we want to use that too, we think that’s great? So you’ve had a really efficient way to run your trial, but you may have dramatically limited your commercial opportunity by having this AI-selected patient group.
In addition, if you’re a patient who would be a responder to this drug, you might not be able to access it, or you need to fork over all of your personal information to your insurance company so they can simulate you, run you through their AI algorithm, to see if you’re a responder. So I think there are upsides and downsides here in terms of the trade-off for the increased efficiency. From a commercial standpoint, you could be extremely limiting who would be eligible for enrollment on your drug. And on the other side, from the patient perspective, all of these algorithms need data, so it could require you to fork over your data or not get access to treatment.
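The pre-screening loop Travis describes can be reduced to a toy sketch. Everything here is hypothetical: the candidate pool, the single "biomarker" feature, and the threshold all stand in for a real trained digital-twin model and full patient records.

```python
# Invented candidate pool; "biomarker" is a stand-in for the features
# a real digital-twin model would consume.
candidates = [
    {"id": 1, "biomarker": 8.1},
    {"id": 2, "biomarker": 6.2},
    {"id": 3, "biomarker": 9.4},
    {"id": 4, "biomarker": 5.8},
]

def predicted_responder(patient, threshold=7.0):
    """Stand-in for a trained model's responder prediction."""
    return patient["biomarker"] >= threshold

# Pre-screening shrinks the trial, but note that this same rule could
# later be adopted as a payer's prior-authorization criterion.
enrolled = [p for p in candidates if predicted_responder(p)]
print(f"{len(enrolled)} of {len(candidates)} candidates enrolled")
```

The design tension the panel raises lives entirely in that filter line: whatever predicate defines trial eligibility can be copied verbatim by anyone downstream who controls access to the drug.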

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

I’m going to go rogue and ask something that’s not on our outline, which any of y’all can jump in on. I think one of the things that’s been coming up recently is not just representation in clinical trials, which, Michelle, I think you talked a bit about, but also being able to test products in populations that are typically excluded from clinical trials, like pregnant women, for example, or children. Do you all see potential for AI to extrapolate to other groups, or to somehow help incorporate populations that might otherwise not be able to be included in clinical trials?

Michelle Guo, Associate Principal, Charles River Associates

Honestly, I think a better way to do it is using real-world evidence, because AI is definitely not at the point where it could create a synthetic cohort that would predict the kind of outcomes you’re going to get. So I think it’s much better to use, I guess, boring traditional methods: building a cohort, using RWE, of pregnant women, or children, or people with four or five different comorbidities who would be excluded from a clinical trial to begin with, and following them prospectively. That would probably get you much better data and a much better sense of what the actual effect of the drug would look like in that population, as opposed to AI. I would definitely caution against using AI for that.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Yeah, I mean, fair; that’s part of why we’re having the discussion. Michelle, what do you think about Travis’s commentary on payers and how they might use AI? For example, clinical trials informed by AI, and how payers might then use that to think about who’s eligible for a drug?

Michelle Guo, Associate Principal, Charles River Associates

Well, I have to admit I don’t do a lot of work with payers generally. But since this is an ethics talk, the one thing I’m concerned about is that payers will go too far and try to use outcomes based on HEOR research as gospel, either in place of real clinical research or to supplement it. That’s the one thing I would be wary of going forward.

Travis Ruch, Principal, Charles River Associates

Yeah, Michelle, there’s a big difference between using AI to read through all of your PA forms and using it to evaluate the patient and deny them. That’s not something that’s going on today, but from a future perspective, thinking about what’s going to happen moving forward, it’s something we should be aware of.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Yeah. So, Artes, I know your work has been more in policy and regulatory, and those are obviously audiences that will be watching these developments closely. Can you talk a bit about how some of the different agencies we work with in those spaces might be involved in thinking about AI applications?

Artes Haderi, Principal, Charles River Associates

Sure. Definitely, those agencies you just mentioned, Kristen, don’t move at the same speed as technological advances; typically the pace is really cautious and change is really slow. However, AI is a reality, and its applications have implications for these bodies, so they are starting to respond, some even preempting the impact this might have in the future across the value chain, from discovery to access to monitoring. I suppose there are two sets of key policymakers. The first is the regulators, like the US FDA or the EMA in Europe, whose main role would be to continue to assure an appropriate governance structure around the use of AI, or AI-enabled technologies, in development but also in the manufacturing process, so that the treatments reaching patients are safe and effective.

So these bodies are already recognizing that the current frameworks need an upgrade. Guidelines are being reviewed, with more explicit consideration of AI applications, and updates are being implemented to address issues such as data risks, bias, and transparency of models. So there is already quite a lot of debate, especially from the leading agencies. For the other set of stakeholders, the assessment bodies, the HTAs or payers, the impact will probably be, as Michelle and Travis were just discussing, equally important, potentially even more so; however, it is less well understood and less well debated at the moment. We’ve seen some discussion on, for example, how to manage AI’s implications for supporting dossier preparation for HTAs, optimizing HEOR studies, patient monitoring, and registries. For example, NICE in England, which typically aims to innovate on the most recent tools and methods, has an HTA lab that is working with pharma companies to pilot AI-enabled assessments.

However, that’s not the end of the story. There is much more room for further debate and proactive policy action on how to deal with a future system where we might have more personalized and dynamic value assessments, as we were just discussing; where decisions about access and price, and the incorporation of real-world evidence, might happen at a faster pace; and where we will need the right guardrails in place in terms of data protection too. So a lot of that discussion is not very robust at the moment. However, we’ve seen some initial signs, and especially at the regulatory level there is more parallel and similar action being taken: the discussion from the FDA and the reflection paper issued by the EMA covered very similar ground and went through very similar processes. For payers, however, there is a lot more divergence, which is a known trend.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

You mentioned a few different countries and organizations there. Are you seeing differences in the speed at which countries respond to this, or in how they’re approaching the issue?

Artes Haderi, Principal, Charles River Associates

Developed countries are taking the lead, and that is a dangerous place to be, especially if we want to realize the full potential of this for health as a system in general. In Europe, I suppose the common theme is that policymakers are looking at this proactively across different levels: horizontally, we have the EU AI Act, which cuts across sectors, but also vertically in pharma, with the EMA’s action at the EU level. In member states we have different speeds and different directions; however, we are seeing leading countries issuing agendas, with payers involved in considering what the impact will be on the healthcare system as a whole. So there is a lot more cross-sectional action, with general legislation but also efforts to complement the existing building blocks of regulation and frameworks.

In the US, I suppose the focus has been in a similar direction, as we said, on the regulatory side, but the system is very different in terms of payers. CMS has issued guidance with a very clear stance against the use of AI algorithms in determining or denying coverage; however, in the private market we have seen a lot of legal action, lawsuits against private companies for relying on advanced predictive tools to deny coverage to senior individuals.

However, there are calls to engage at a higher level, with policymakers like Congress focusing in particular on how to regulate AI in healthcare, with an emphasis on Medicare and Medicaid.

I want to mention developing countries, just because, as I said before, it is dangerous not to think of countries moving somewhat in tandem, because that is how we can realize the full potential of AI, at least in creating the right policy frameworks. There are some notable examples. Rwanda, which has emerged recently in some debates, is one of the countries that actually issued a dedicated, comprehensive AI policy, with an aspiration to be a hub in the region, and it coupled these efforts with some pioneering, innovative data protection and privacy legislation. That has led to some really cool applications in health, especially to address healthcare shortages, with programs to support, for example, AI-enabled triage services. So we are seeing some action. Obviously it is slower, but certain countries in specific regions tend to be more on the innovative side, and it is important that they follow the leading frameworks being established at the European and US level.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Sorry if I missed it. Did you say which developing country that was?

Artes Haderi, Principal, Charles River Associates

Rwanda.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Oh, OK. Interesting. Yeah, Michelle, I know you have some direct experience doing this. Do you have cross-country experience, or any thoughts on differences you’ve seen across countries in response to this?

Michelle Guo, Associate Principal, Charles River Associates

Well, most of my experience has been US-centric, which is kind of odd because the US actually doesn’t have a central regulating body for AI. And from the perspective of government governance around AI in general, as Artes has said, the FDA is exceedingly slow when it comes to actually innovating and changing.

We’ve seen a lot of this in gene therapy, where it feels like they’re constantly on the back foot when it comes to approval of new technology, innovative clinical trial design, or just a lot of R&D in general, right?

So, I think one of the things the US has to consider, and something the FDA is now thinking about, is how to strike a balance between letting a lot of these private companies lead (your Pfizers, your Eli Lillys, and some big tech companies too, who are all trying to get into healthcare and clinical trials) while keeping data privacy top of mind, because we all know how great the US is about that. And other things as well: keeping the integrity of trials and making sure they’re still upholding the standards for clinical trials and outcomes generally. So, in the US in particular, this is something that is still developing. We were actually at a medical affairs conference in Europe a few weeks ago, and because of GDPR they have more stringent regulations around data and are quite a bit ahead of us in the conversation. But unfortunately the governance hasn’t caught up to the actual innovation itself. So it’ll be interesting to watch; I don’t want to call it an arms race, but it is a little bit.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

A main topic of our discussion is obviously ethics, which has come up several times already, because you can’t talk about AI without ethical issues coming up. But what we’d like to do with the second half of our discussion is really focus in on some of those ethical opportunities, challenges, and tradeoffs. So I’m going to kick it to Travis here. One of the things we’ve talked about is the potential impact on health disparities. And Artes, I really appreciate you bringing in the perspective of developing countries and highlighting some front runners there, because I think that’s really important.

But Travis, to start in on the real ethical meat of things: what are your thoughts on the potential impacts of AI on health disparities in particular?

Travis Ruch, Principal, Charles River Associates

To understand it, we should probably start by painting the picture of what’s being sold to us, what AI is supposed to do. Within the healthcare environment, you can imagine AI helping physicians by lowering their burden of work: it helps them code patients, it helps with diagnosis, it can read up-to-date material and summarize things, et cetera. So it effectively lowers the amount of work doctors have to do, enabling them to do a better job with patient care. Hand in hand with that, some of the AI chat bots or large language models could potentially be leveraged to help patients get access to care. They could take care of really simple things, like "oh, I have a cold, I need to get antibiotics"; let the bot deal with that. There’s even talk about using them in CNS disorders as therapists: a chat bot therapist that’s always there, so you can just pull out your phone and have a trained therapist help you deal with whatever you’re feeling at that moment. And in addition, it can help with billing on your insurance company’s side, letting you deal with any kind of billing questions you have. It’s a glorious world where efficiency is improved, everybody’s making more money, and you’re getting better care. It’s win, win, win, three wins. Everybody wins; AI wins too.

There is a darker timeline, though, and maybe we can paint a picture of the darkest timeline. Look, if all this technology were available at your fingertips, how could it go wrong? For me, when I start hearing all of these things about efficiency and things being cheaper, what you end up with is a greater health disparity, wherein for lower-income people, people with less access to capital, all of their healthcare is dealt with via an AI chat bot or some version of a chatbot. They don’t get to see real doctors, while the wealthy people with really good insurance do get to see real doctors. There’s great evidence that seeing a doctor in person, the actual touch of a doctor, the doctor listening to you, is really therapeutic and helps outcomes quite a bit. So you could effectively rob an entire portion of the population of that because you’re making an economic decision. I’ll put a provocative situation out there. OK, let’s say I’m going to pick on a state. I’ll pick on New Jersey, because I live in New Jersey. Let’s say New Jersey’s Medicaid team decides that, all right, for Medicaid we need to save money, so anyone on Medicaid needs to talk to a chat bot prior to seeing any doctor. That’s just a rule. Now imagine you have schizophrenia and you’re on Medicaid, and you’re having a moment of paranoia. Do you have to go talk to a chat bot in your paranoid state to get access to a psychiatrist? Again, that’s a provocative scenario, one that hopefully we won’t end up in. But when you think of leveraging all of these AI innovations to increase efficiency and pull humans out of the equation, you can end up in spots where you’re denying care and human interaction to people who really need it, and that’s a pretty important part of healthcare. So I think, from an ethical standpoint, making sure these innovations are additive and not subtractive for patients is really, really important.
And the gains from increased efficiency should mean getting more people into the healthcare system, into the hands of trained professionals, versus shunting them off to where they can’t complain because they have their chat bot and they’re stuck in a loop. We’ve all probably tried to talk to Amazon’s web assistant to do a return, and it’s impossible; you need to actually talk to someone, most of the time. But yeah, that is my provocative statement, Kristen. I think there are huge potential upsides here, but when you think of the real world, how these things are generally rolled out, and the economics behind them, there’s a scenario where it ends up pretty bad, pretty grim.

Michelle Guo, Associate Principal, Charles River Associates

I’ve seen this happen, especially with these diagnosis or predictive diagnosis tools, and especially with claims data. The thing I always like to say whenever you’re training a model is: garbage in, garbage out. And unfortunately, I see Travis laughing, I think we touched on this briefly, but especially with a lot of claims data there’s a lot of noise and, unfortunately, inherent bias. A recent example: a lot of liver diseases are historically underdiagnosed in women. And unfortunately, when you train a model to predict whether someone will develop liver disease, a lot of the time these models are much more accurate for male patients than for female patients. It’s because of the inherent underdiagnosis rate: either the albumin thresholds are set lower, or there are simply far fewer women who actually get the medication they need. So if you set medications as one of your indicators for whether someone is going to develop liver disease in the future, that signal is obviously not going to get picked up by the model. There are a lot of very real and tangible drawbacks to trying to use AI, especially around diagnosis. So I’m glad Travis brought that up.
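The subgroup accuracy gap Michelle describes is easy to make concrete with a quick check. The predictions below are invented purely to illustrate the pattern; they are not drawn from any real model or dataset.

```python
# Invented (sex, predicted, actual) labels for a liver-disease risk
# model; under-diagnosis of women in the training data can show up as
# a subgroup accuracy gap like this at evaluation time.
predictions = [
    ("M", 1, 1), ("M", 0, 0), ("M", 1, 1), ("M", 0, 0),
    ("F", 0, 1), ("F", 0, 0), ("F", 1, 1), ("F", 0, 1),
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the actual label."""
    return sum(pred == actual for _, pred, actual in rows) / len(rows)

for sex in ("M", "F"):
    subgroup = [row for row in predictions if row[0] == sex]
    print(f"{sex}: accuracy {accuracy(subgroup):.0%}")
```

The point of slicing the evaluation this way is that a single aggregate accuracy number would hide the gap entirely; the bias only becomes visible once performance is reported per subgroup.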

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Yeah. And Artes, maybe this is a question for you: who regulates something like this? Who puts in place these structures or controls? Who would be the right body to do something like that?

Artes Haderi, Principal, Charles River Associates

I suppose it’s a series of policymakers, and it would also depend on at what level this is legislated. But the regulator would play a key role, because they determine the clinical trial models you can use and the data you can use, as Michelle was explaining before; research is then conducted on that basis in order to be compliant, so that safety and efficacy can be proven. So the regulatory bodies would be really key. However, there is also a more general data and governance structure in each market that regulates this.

To some extent there are also incentives around how pharmaceutical innovation happens, and incentives around how HCPs interact with the medical information they input, how consistently, how thoroughly, for different types of patients. So all the building blocks of the healthcare system, and the regulation that goes into them, whether by the national healthcare authority or by the regulatory agency when it comes to the medicines themselves, will have an input that ultimately results in an outcome supporting a broader set of patients.

One note I wanted to make on equity and supporting broader access is that there will always be pros and cons on this issue. However, there are also data suggesting that with these powerful systems we will in theory, or are already in practice, be able to draw more from different types of data sets, going back to the potential of more easily recruiting different types of patients that maybe we were unable to reach before. This might actually help us get to discoveries for rarer diseases, or for diseases in more neglected populations or countries that we just didn’t have the information base for. There will always be, as mentioned, advantages and disadvantages, but there is also the prospect of a new horizon being opened up and innovation reaching patients it didn’t reach before.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

We’ve talked a lot so far about pushing things out and a bit less about receptivity from the general population. I’m sure that things are going to happen whether we (if we consider ourselves part of the general population) want them to or not. But do any of y’all have thoughts about the receptivity of people and potential patients to some of these developments we’re talking about, and how they might be received by the people that they’re intended… well, I was going to say by the people that they’re intended to help, but to Travis’s earlier point, I think that’s often the doctors. By the people that would be doing the interaction with these bots or getting the consequences - any thoughts on the real-world reaction from users?

Travis Ruch, Principal, Charles River Associates

Kristen, if I were to guess, if you polled people, they would say absolutely not, I’m not sharing my personal data. But given the size of Facebook, Instagram, TikTok, Twitter - no one really cares about their data. So if you let them post a picture and something somehow pulled the data in, I think most people would be willing to give up their data if there was a small reward.

Even if they say they won’t, they would. I think most people are completely willing to give up all of their personal information.

Artes Haderi, Principal, Charles River Associates

Yeah. Or protecting your information is made inconvenient enough that people don’t bother.

Travis Ruch, Principal, Charles River Associates

Yeah, exactly, exactly.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

We wanted to end on a more uplifting note - I think Travis already did this, laying out his best-case scenario, his optimistic utopia. Maybe we can get a little more specific, and I’ll ask each of you to share a potential positive impact, hopefully one that we haven’t talked about already, to close us out. And while we’re doing that, if you have questions for our panel, please feel free to go ahead and submit them. Hopefully you can multitask and still listen to their brilliant responses to this question.

Travis, we’ll start with you since you were the most optimistic in your earlier presentation. What’s something that you’re particularly excited or optimistic about, or a positive impact that you would love to see in the future?

Travis Ruch, Principal, Charles River Associates

Taking a step back from consulting and the services we deliver, just in everyday life, I think about having an AI platform to help schedule appointments with doctors. If any of you have parents of a certain age, they talk a lot about their doctor’s appointments - about seeing the right doctor, getting to the right doctor. And even myself, if I need to bring my kids in, it’s always a nightmare. But imagine something that had an overarching view of the healthcare system within, say, an IDN, and could schedule you to see your PCP, then set up an immediate follow-up with your cardiologist, get you in to see your pulmonologist, et cetera. It would really smooth out a lot of the hassles we have on the US side within the healthcare system, which is just getting an appointment with your doctor - not having to wait eight months to get the mole on your back looked at to see if it’s skin cancer or not; you actually can get in. And I think short term we’ll start seeing those changes. I mean, they’re already starting, but I think that is something where AI could definitely be utilized. And like I said, within a health system where you have all of these providers, shared calendars, shared data - I think that would be a sea change for a lot of patients.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Yeah, sounds nice. Artes. What about you?

Artes Haderi, Principal, Charles River Associates

I suppose maybe I could have two, to cover both the higher-income country setting and lower-income markets. On the first - and this is really optimistic and would need a lot of guidance and guardrails around it - one hypothesis is that these systems could really improve decision-making power on access to treatments, getting them to patients quicker and in a more personalized way. It starts with enabling tools that already exist to support quicker or earlier conversations between pharma companies and authorities - such as through horizon scanning. Doing this a lot more efficiently, making predictions in a more powerful way, and bringing those dialogues earlier so companies can respond in their research and development process, and also later on in what is required, can be really powerful. And it means more opportunities to implement more innovative access models, and those not just being reserved for the countries that already have a track record and have typically been associated with them. So, making this more common practice, so that patients can receive their treatments in a more timely way.

And for the lower-income markets - I’ll keep this brief - related to what Travis said and what we’ve mentioned before, it’s really about helping address shortages in the healthcare workforce, which is a massive issue for these markets. So even having simple tools that maybe we do not consider ideal in terms of the care provided - getting an answer, and getting it quickly, can be a really important first step, and then maybe the healthcare workforce can be reserved for later stages of the care journey.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Michelle, what about you?

Michelle Guo, Associate Principal, Charles River Associates

I’ll plug the R&D use case again. Patient burden is something that is definitely top of mind for a lot of people right now - essentially, how onerous it is to participate in a clinical trial. I’ve seen clinical trials in pediatric populations that require a blood draw almost every other visit. And I don’t think a lot of people know this, but there are limits set on how much blood you can draw, especially based on body weight. Some of the models I’ve built in the past take a multifaceted approach, taking into account anxiety and how stress-inducing specific visits can be - if you have to undergo full anesthesia, what kind of impact that has on your entire day - and the disparities between different genders and races that participating in a clinical trial can involve. Historically, we know that clinical trials have been skewed a lot towards white men, and there have been a lot of recent efforts, in particular on the regulatory side, pushing for more diversity in clinical trials. And one thing that really helps is these patient burden models. Given the schedule of activities within a protocol, a lot of these models can now predict the overall burden to a patient participating in the trial. And if that is above a certain threshold, you should probably consider how you can redesign the schedule of activities to make your clinical trial more inclusive and equitable for certain patient populations.
Whether that’s finding a site in an area where there’s a huge lack of public transportation and not a lot of people have cars, or finding areas where, instead of central labs, you can use local labs to process things like a urine test or a serum test, so patients don’t have to travel to a central location. A lot of these things have started to bring more of these subpopulations - which generally have very different AE and SAE profiles - into clinical trials. And I think that’s a very good thing.

Kristen Backor, Vice President and Director of the Market Research Center of Excellence, Charles River Associates

Well, thank you all so much for participating in this and thank you for attending. Lev is going to talk us out as well. He has a final spiel.

Lev Gerlovin, Vice President, Charles River Associates

Thank you so much, Kristen, for moderating and of course Travis, Artes, and Michelle for participating. These topics are only going to get more interesting and more important as we go forward. The idea of artificial intelligence is to ultimately give us more space to actually be intelligent. Again, thanks for your time. And till next time.
