#328 FDA Guidance on Artificial Intelligence (AI) in Medical Devices
Episode 328 • 20th July 2023 • Global Medical Device Podcast powered by Greenlight Guru • Greenlight Guru + Medical Device Entrepreneurs
Duration: 00:50:44



Description

In April of 2023, FDA released a draft guidance entitled "Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions."

In today's episode, we speak with Mike Drues, PhD, President of Vascular Sciences, about artificial intelligence in medical devices, the history of this type of technology in the medical device field, and what this guidance means and doesn't mean. We hope you enjoy this episode of the Global Medical Device Podcast!

Questions Asked

  1. What does artificial intelligence mean in software as a medical device?
  2. Why is this new draft guidance needed?
  3. What recommendation for medical device companies does the draft guidance provide?
  4. What are the challenges with validating the modifications for an ML-DSF? (Or are there other, greater challenges?)
  5. What are some of the specific items a PCCP should include?
  6. Does a PCCP negate a future need for a Letter to File or a new 510(k)? What would necessitate an additional marketing submission?

Quotes

"I really try to stress what I call 'regulatory logic,' because if you understand the regulatory logic, really, all of this should be common sense." - Mike Drues


Transcripts

Mike Drues: I really try to stress what I call regulatory logic, because if you understand the regulatory logic, really, all of this should be common sense.

Announcer: Welcome to the Global Medical Device Podcast, where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge direct from some of the world's leading medical device experts and companies.


Etienne Nichols: Hey, everybody. Welcome back to the Global Medical Device Podcast. With me today is Mike Drues, a familiar name on the podcast. In fact, Mike, we were just talking about the last time you recorded an episode on this topic, which maybe I should mention: AI, artificial intelligence, and software as a medical device. But before we get into that, how are you doing today? Good to have you on the show.

Mike Drues: I'm well, thank you, Etienne, and always a pleasure to speak with you and your audience.

Etienne Nichols: It's worth mentioning, with that episode back in 2019, how well it did, right?

Mike Drues: Yeah. According to the Greenlight Guru statistics, the podcast that you're referring to, and we can provide a link to it as part of this podcast, when your predecessor Jon Speer and I talked about artificial intelligence and machine learning, it was the number one podcast listened to in the medical device world that year. And since then, obviously, you and I have talked about AI off and on in other circles. But I'm looking forward to revisiting this topic and providing an update as to what's changed, if anything, in the last four years.

Etienne Nichols: Yeah, and I suppose what kicked this conversation off again is the draft guidance that was released in April of this year. But before we get into the actual draft guidance, maybe it's worth talking about what artificial intelligence, that phrase AI that's being tossed about everywhere, even means in software as a medical device. I'm curious what your thoughts are, for better or for worse.

Mike Drues: Well, I'm happy to share my thoughts as always, Etienne, but I'm curious, if you don't mind, let me turn the tables on you for a moment and ask you, because you're exactly right. We see not just in the medical press but in the popular press, every single day, AI this and AI that. So when you think of artificial intelligence and/or machine learning, and there is a subtle but important difference between those two, what does that mean to you, Etienne?

Etienne Nichols: Yeah, it's a good question. The way I've looked at it, and I'm interested to get your corrections or approvals here, is almost as a spectrum. If we start with machine learning, something that's relatively easy to grasp, I think of an app on my phone that asks me, did you eat lunch today or did you not, and determines my blood glucose levels on a continuous glucose monitoring system or something like that. It learns based on my direct input and this very closed data set that I'm providing it, and it gives me information. So I look at that as machine learning, whereas AI is still almost machine learning, but drawing from such a large data set that it's hard for me to quantify where the data is coming from. What are your thoughts? Any corrections there?

Mike Drues: Well, I think that's a great start, Etienne. And let me ask a leading question, a clarifying question here. Do you think that one of the characteristics of software that claims to have artificial intelligence is that it should be static? That is, it should not change over time, the code is locked, so to speak, and it cannot change? Or do you think that it should be able to learn and evolve and change, almost in a Darwinian evolution sense? What are your thoughts on that?

Etienne Nichols: Yeah, well, if it's true artificial intelligence, I would expect the latter: the ability to change and to improve itself.

Mike Drues: And obviously I'm a little biased here, Etienne, because I asked you my leading question, but that's exactly how I feel. I strongly believe, never mind as a regulatory consultant, but as a professional biomedical engineer, that this phrase artificial intelligence is a grossly overused phrase, to say the least, not just in the medical device universe, but in the entire universe. I see lots of products, including medical devices, that claim to have artificial intelligence in them. And when I look at them, I don't see any intelligence in them whatsoever, artificial or otherwise. So one of the characteristics that I think is important of, as you said, true artificial intelligence is its ability to learn and change and evolve, very much in a Darwinian evolutionary sense of the word. And by the way, as some in our audience will already know, there's a reason why I'm starting with this, because it goes to the premise of the guidance that we're talking about today. If you don't allow the software to learn and change and evolve, then how is it different from any other piece of static software that's been out there for decades? And why do we even call it artificial intelligence? So I think that's one of the most important, if not the most important, characteristics of true artificial intelligence: its ability to change and to evolve. And we'll talk more about how to do that and how we put some limits on that, some barriers, so that we don't have pieces of software that get out there and kind of get out of control. I don't know, I'm getting a little old, Etienne, and I don't even know if kids anymore read Isaac Asimov and his famous Three Laws of Robotics. But Asimov predicted all of this stuff over half a century ago. It's amazing. But anyway, go ahead, your turn.

Etienne Nichols: No, well, it's been a while since I read that, and I couldn't quote the three laws at the moment, but maybe I should look those up and we can provide those in the show notes. But it's a good point, and I think it's a relevant point to talk about these things, especially because AI is becoming such a buzzword in the industry. And you mentioned the ability to change and adapt, and that's what this draft guidance is really trying to get at, it would seem. I'll put a link in the show notes to the draft guidance, but the title of it is Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions. And that gives us a new acronym, ML-DSF, machine learning-enabled device software function, that maybe we need to start familiarizing ourselves with, I suppose. But yeah, what are your thoughts on this draft guidance with respect to that ability to change?

Mike Drues: Well, as I said in the podcast that I referred to a moment ago, back in 2019, one of my frustrations has been this archaic concept of the locked algorithm, and I offered up some possible solutions to get around it.

Etienne Nichols: Yeah, let's talk about that.

Mike Drues: Yeah, okay. So one of the solutions I suggested to get around this archaic concept, and again, I'm using the word archaic purposely, of a locked algorithm is to give the software the ability to grow and evolve, but put some limits, some boundaries, on it. For those of you in our audience that have a quality background, this should not be a new idea. This is a predetermined validation. Basically, in a nutshell, the idea is we're going to say to the software: okay, Mr. or Mrs. Software. And by the way, there's a reason why I'm addressing the software this way. If the software is truly artificially intelligent, we should think of it kind of like a person, kind of like a human being. One of the metaphors I sometimes use is a corporation. In the eyes of the law and in the eyes of the IRS, a corporation is essentially treated as an individual, as a person. So AI should be thought of or treated as a person. We're going to say: okay, software, you can vary a particular parameter, temperature or power level or whatever it is, as long as it's between X and Y. In other words, as long as the change you want to make is between X and Y, you can make any change that you want without having to tell the manufacturer, without having to tell the FDA. You, the software, just do it unilaterally. And the reason why I made that recommendation, and the reason why FDA has now introduced it into guidance as a predetermined change control plan, is because we have pre-validated that range. In other words, we test the limits. We do the classic validation where we show that for a value of X and for a value of Y and for a few values in between, the device will be safe and effective, will perform as intended, and so on. So that was one of the many solutions that I offered up almost five years ago. And to be fair, some of the products that I've been involved in helping to bring to market with artificial intelligence have utilized this concept of what FDA now calls the PCCP, the predetermined change control plan, long before this guidance came out. I'm trying not to get too far into the weeds here, Etienne, but at least at a certain level, does that make sense?
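To make the pre-validated range idea concrete, here is a minimal sketch in Python of how a PCCP-style boundary might be enforced in device software. It is purely illustrative, not from the episode or the guidance; every name, type, and number is hypothetical.

```python
# A minimal, hypothetical sketch of a PCCP-style pre-validated parameter
# boundary. The range [lo, hi] is fixed at validation time; the adaptive
# algorithm may move the operating value anywhere inside it on its own,
# but any proposal outside the envelope is refused and logged for review.

from dataclasses import dataclass

@dataclass(frozen=True)
class ValidatedRange:
    lo: float  # lower limit shown safe and effective during validation ("X")
    hi: float  # upper limit shown safe and effective during validation ("Y")

    def contains(self, value: float) -> bool:
        return self.lo <= value <= self.hi

class AdaptiveParameter:
    def __init__(self, name: str, initial: float, bounds: ValidatedRange):
        assert bounds.contains(initial), "starting value must be pre-validated"
        self.name = name
        self.value = initial
        self.bounds = bounds
        self.refused = []  # audit trail of out-of-bounds requests for the QMS

    def propose(self, new_value: float) -> float:
        """Apply an algorithm-proposed change only inside the pre-validated
        envelope; anything else is recorded for human review, not applied."""
        if self.bounds.contains(new_value):
            self.value = new_value
        else:
            self.refused.append(new_value)
        return self.value

# Usage: a temperature setting pre-validated between 40.0 and 70.0 units.
temp = AdaptiveParameter("cautery_temperature", 55.0, ValidatedRange(40.0, 70.0))
temp.propose(61.0)  # allowed: inside the validated range
temp.propose(95.0)  # refused: outside the envelope, logged instead
print(temp.value, temp.refused)  # 61.0 [95.0]
```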

Etienne Nichols: It does. I guess, in practice, do you have a good practical example of how this could work? You gave the example of the temperature, but how do you arrive at some of these boundaries, to make sure that the boundary you're providing is large enough? I wonder if you have any thoughts on a practical example.

Mike Drues: Sure. So let's try to use a real but somewhat hypothetical example, using temperature as the parameter. Imagine a cauterizer that you use in an OR, in either traditional open surgery or perhaps laparoscopic surgery. A cauterizer is a device that introduces some form of energy, it could be heat, it could be electrical current, in order to cut through tissue and at the same time seal the tissue to prevent bleeding. Okay. Now, typically the way these devices work is there will be a setting, usually like a dial, a potentiometer, on the device, where the surgeon can change the temperature, the current, the radio frequency, whatever it is that you're using. Imagine that that dial is removed and now it's under software control. The surgeon is cutting through tissue and it starts out at some nominal temperature. But let's say the device at the same time is sensing how the cutting is going, and maybe it's taking too long, or maybe you're still getting some bleeding or something like that. So rather than the surgeon having to reach over to the device and crank up the power, so to speak, the device does it itself. The device realizes that it's not getting the response that it wants, and so it gooses up the temperature by 10% or 20% or something. And we will allow the software to do that itself as long as we have pre-validated it. As long as the software is making either increases or decreases in temperature or power within that predetermined range, then we know it's okay, we know it's safe and effective. That's during the actual procedure.

But where does the real artificial intelligence come in? This is what a lot of people don't realize, Etienne, because they're looking at these devices being used individually, in sort of a one-off kind of fashion. Imagine that the surgery is completed, but the machine, the software, keeps that information: that in this particular patient, it needed a little more power. Now, maybe when the next patient comes into the room, the device will crank up the power, if necessary, a little bit sooner because of its past experience with the previous patient. And imagine next week, after you've done ten patients, and then 20 patients. And if you think that's cool, Etienne, let me take it a step further. Why does this cauterizer need to work in isolation? Why can't this cauterizer then tell its other cauterizer friends, in that hospital or in other hospitals around the country or around the world: based on my experience, this is what I've learned? So this is where the AI gets really interesting, and this is clearly the future of this kind of technology. But with the limitation, the caveat, that at least temporarily, we're putting some boundaries on it. And let me just make one other reminder, Etienne, and then I'm happy to let you chime in, because I'm sure you've got lots of questions and comments. A lot of people think that a lot of the challenges when it comes to AI are new. In fact, I see very few challenges here that are new. If you understand the regulatory logic, what we're talking about here are basic principles of change management: a special 510(k) versus a letter to file, for example. How do you decide when to notify the FDA via a special 510(k), and when not to, with a letter to file? A topic, as you know, that gets a lot of companies into trouble. Well, the same logic applies to artificial intelligence and the predetermined validation that I mentioned a moment ago.
The other piece of regulatory logic that comes to mind is what FDA regulates and what FDA does not regulate. FDA does not regulate the practice of medicine. So when a surgeon cranks up the dial on the machine, FDA has nothing to do with that, because that's the practice of medicine; FDA doesn't control that. But when the practice of medicine is done by the device, in this case the software, now FDA is all over it. So the challenges that we face in AI are really not unique. Many of these challenges we've been facing, maybe in different ways, for years or sometimes even decades. Does that make sense?
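The cauterizer scenario above could, purely hypothetically, look something like the control loop below: the device nudges power within a pre-validated window during a case and carries a learned starting point forward to the next patient. The feedback signal, step size, and limits are all invented for the sketch.

```python
# Hypothetical cauterizer control loop: adjust power within a pre-validated
# window during a case, then remember a better starting point for the next
# patient. The tissue-response signal and step size are invented.

POWER_MIN, POWER_MAX = 20.0, 60.0  # watts; the pre-validated envelope
STEP = 2.0                         # per-adjustment increment, also validated

def run_case(start_power: float, tissue_response) -> float:
    """Run one procedure and return the power level that finally worked.
    tissue_response(power) is a stand-in sensor: True means the cut is
    progressing and sealing adequately at this power."""
    power = max(POWER_MIN, min(POWER_MAX, start_power))
    while not tissue_response(power):
        if power + STEP > POWER_MAX:
            break  # envelope reached; hand control back to the surgeon
        power += STEP  # the device "gooses up" the power by itself
    return power

# The learning part: start the next case near the level recent cases needed,
# so the device reaches an effective setting sooner.
history: list[float] = []

def next_start() -> float:
    return sum(history) / len(history) if history else 30.0

for needed in (38.0, 42.0, 40.0):  # simulated per-patient requirements
    final = run_case(next_start(), lambda p, n=needed: p >= n)
    history.append(final)

print(f"learned starting power: {next_start():.1f} W")
```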

Etienne Nichols: It does. And that's an interesting callout, to change where the burden lies, whether it's with the physician or with the device. Maybe let's think about that for a little bit; that's really interesting. But I'm curious. That being said, let's just use your example of the cauterizer: that potentiometer likely would already have been validated from the low point to the high point, with maybe an initial starting point. So there's a precedent, a path forward, that seems preexisting.

Mike Drues: Yeah, correct. And funny you mention that, because you're exactly right. If that range of values on that temperature setting was pre-validated, then really it's even simpler, because all we're doing is taking it out as a knob and implementing it in software. At the end of the day, Etienne, from the perspective of a patient lying on the table undergoing surgery, do you think they're going to know, or even care, whether it's a knob or a piece of software, as long as the boundaries are the same? Something to think about. And one other thing to mention, Etienne, as we dig into this further, and this is very much a continuation of that previous podcast, not just a rehash; we're building on it and going much further. How do I want to say this? One of the frustrations that I have when I look at the labeling of a lot of medical devices today that have artificial intelligence in them, diagnostic devices, for example, and I've been involved in lots of diagnostic devices that now have AI in them: in the labeling, they'll always put the little disclaimer, the little caveat, that the software thinks the patient has this particular disease or should take this particular drug or something like that. However, and I forget the exact verbiage that we typically use, it's something along the lines of: the physician can always overrule it. The physician can always say, no, I disagree, or make a change. And I completely understand from a regulatory perspective why companies are doing that today. But to me, as a biomedical engineer, that drives me nuts. The reason why companies are doing it today is that it's a classic form of risk mitigation. By putting that disclaimer on there, Etienne, it's essentially making it sort of an adjunct diagnostic or an adjunct therapy. And adjunct is a word we often use in regulatory labeling, quite frankly, Etienne, and the reason why we do it, mostly, is to reduce the regulatory burden. In other words, when we say this is an adjunct diagnostic, basically what we mean is that it is to be used in combination with other things, including the physician's experience and judgment and so on. And the reason why that's advantageous to a company, Etienne, is that it tremendously lowers the regulatory burden. We don't need nearly as much data to support a claim for an adjunct product as we would for a primary diagnostic or a primary treatment. And once again, just like the concept of the locked algorithm I talked about earlier, I don't have a problem, for now, treating these products as that kind of an adjunct. We don't usually use the word adjunct in AI, but the regulatory logic is, I won't even say substantially equivalent, it's exactly the same. It's a temporary solution, a step going in that direction. But eventually, Etienne, the future of this technology, in my opinion, is to get rid of that completely. The decision that the software makes, whether it's a diagnostic or even a treatment, should be just as valid as a decision that a physician or a surgeon makes: equal footing. Most companies don't want to go quite there yet, although I do have a couple of products where we're doing that. Most companies don't want to go there yet because they would have to collect a heck of a lot more data to support that kind of a claim, and obviously that's going to mean more time and more money.
But that's, I hope, where we're going. Does that make sense?

Etienne Nichols: Oh, totally. And I agree. I've spoken with physicians on this before, and they typically want to push back and say, well, the physician is the one who finally makes the diagnosis. However, that equal footing you're talking about is really interesting, because, to use your example one more time, let's say that doctor is cauterizing a wound. They may have a visual; they may have some biofeedback that it's cutting slowly, and so forth. Whereas you could have a sensor that actually detects what's happening at a much more molecular level, on a deeper level, and has an actually better understanding than the physician, especially after a certain amount of intelligent data gathering. I would think so, anyway. I totally agree, and I hope that's where we are moving to.

Mike Drues: And here's another quick example of what I said a moment ago, where a lot of these ideas, or these quote-unquote challenges, that people think we have with artificial intelligence are new. As I said a moment ago, they're really not new. You're probably familiar with robotic surgery and some of the robotic surgical devices that are out there. In fact, in the interest of full disclosure, I've been working as an expert witness in one of the largest lawsuits in that area, a product liability lawsuit. I'm sure many in the audience can probably guess which company; it doesn't matter. But I think it's interesting that people use the phrase robotic surgery. In fact, it is absolutely not robotic surgery; it is robotically assisted surgery. What I mean by that is, in most cases, not quite all, but in virtually all cases, the robot is not performing the surgery directly, that is, not unilaterally. What's happening is, and you've probably seen this on TV if not firsthand, the surgeon is manipulating the instruments outside of the patient, and the robot is doing nothing more than taking the motions of the surgeon's hands, maybe scaling them up or down a little bit, maybe filtering out some tremors, but essentially just translating them into the instruments, into the patient. So it is not robotic surgery in the Isaac Asimov sense of the word. It's robotically assisted surgery. And the same logic applies to artificial intelligence when we use that disclaimer in the labeling that I just mentioned: this is what the software thinks, but you, doctor, can trump what I think if you want to. That's kind of like robotically assisted surgery, right? But again, I want the technology eventually, and we're making progress, but it's taking a long time, to be truly autonomous, so that it is working just like a surgeon. I'm not saying that we're not going to need doctors or surgeons anymore; that's not my meaning here. But I think you know what I mean.

Etienne Nichols: And ultimately it's really just a matter of identifying the other additional inputs that could impact the decision. Is that not accurate? I mean, the doctor may have a certain piece of data that could alter the decision, so that's really what they're referring to. And maybe it's a regulatory game, I know, to decrease or increase the regulatory burden. But identifying that additional input might be helpful in developing the AI, and in determining what other pieces of AI could help assist the software as well in the future.

Mike Drues: Yeah. At least theoretically, the software, whether it learns pre-market during the development process or, assuming we get past this archaic concept of the locked algorithm, post-market, should have access to all of the information that a physician or a surgeon would normally have: all of the different variables that might be important for that particular patient, maybe blood tests, maybe imaging data. But it should also be able to draw on its previous experience, just like a good doctor will draw on his or her previous experience. So when I said at the beginning of our discussion, Etienne, that we should think of this software in almost humanistic terms, as an individual, I literally mean that in every sense of the word. Maybe that's a little scary to some.

Etienne Nichols: So, the boundaries that we're talking about, because the goalposts are moving a little bit in my mind in different ways. You talked about one way: drawing from its own experience, and drawing from its companions' experience, another device that's doing the same thing in Australia, for example, that it's learning from through cloud connectivity. But what about other kinds of devices? Could it potentially be learning from those as well?

Mike Drues: So if you're talking about, hypothetically, a cauterizer learning something from an EKG monitor or a blood pressure monitor or something like that, I suppose that's theoretically possible. We'd have to think about that a little bit. But the direct learning here is from others of the same device, its brothers and sisters, so to speak. But let me take that learning example a step further, if it's okay with you, Etienne, because as you know, I like to use really simple metaphors to explain very complicated topics. Imagine that you're in a classroom back in the day in college listening to a lecture, and I ask you, what did you learn? And you tell me whatever it is that you think you learned from that particular professor. And then I ask the same question of the person sitting next to you, who listened to the same lecture. Do you think it's likely that the person sitting next to you will tell me that they learned the same thing that you did? Well, perhaps. Obviously there's going to be some overlap, but there's also going to be some differences. Even though you both were in the same room, even though you both listened to the same information, maybe one person was zoning out a little bit for a few seconds. Not like any of us ever did that in a college lecture. And keep in mind this is coming from somebody who teaches graduate students part time. I learned a long time ago not to be naive and think that my graduate students listen to and retain every word that I say. And similarly, I'm not going to be naive and think that everybody that listens to our podcast listens to and retains every word that we talk about. Nonetheless, the learning experience is going to be a little bit different because of our experiences, because of our thinking, and so on. So why can't devices that use artificial intelligence learn from one another in the same way? And here's one of the suggestions that I mentioned almost five years ago that unfortunately FDA has not implemented officially yet. We talked before, for example, about the predetermined change control plan, and I think that's a step in the right direction. It's a baby step, but it's a step in the right direction. Here's another step. If the software, after a particular patient or a particular number of patients, realizes that there's another way, a better way, a faster way, or a more efficient way to accomplish something, and you don't want to give the software the ability to make that change unilaterally, how about this: the software sends a signal, a report, back to the manufacturer, and the software basically says, hey, based on my last ten or 20 patient experiences, I got the following result, and I think, I meaning the software, that I can get a better result if I make the following change. That information is then evaluated by the company just as if the company were going to change the device themselves. But the recommendation for the change is not coming from the company directly; it's coming from the device. I think this is really a cool idea, don't you? And then the company decides, okay, is this a valid change? They go through all of their change management procedures in their QMS to validate the change, and they decide, at the end of the day, is this a change that we can handle internally via a letter to file, or is this a change that would require FDA notification via a special 510(k)?
Or, in the class III universe, a PMA supplement? It goes back to what I said earlier. The regulatory logic, Etienne, is exactly the same whether we're talking about AI or not. You just have to use a little imagination, a little creativity, dare I say it, a little intelligence in how we implement these things. But the logic is the same. And this is why, throughout all of our podcasts, not just today's, I really try to stress what I call regulatory logic, because if you understand the regulatory logic, really, all of this should be common sense.
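Mike's "device proposes, company disposes" idea might be sketched as follows: rather than self-modifying, the software emits a structured change proposal that feeds the manufacturer's normal change management process. FDA has not specified any such format; every field name here is made up for illustration.

```python
# Hypothetical change-proposal record: the device does not change itself;
# it reports evidence and a suggested change into the manufacturer's change
# management process (letter to file vs. special 510(k) stays a human call).

import json
from dataclasses import dataclass, asdict

@dataclass
class ChangeProposal:
    device_id: str
    cases_observed: int
    current_setting: float
    proposed_setting: float
    rationale: str  # evidence summary the software compiled from recent cases

def emit_proposal(p: ChangeProposal) -> str:
    """Serialize the proposal for the manufacturer's QMS intake queue."""
    return json.dumps(asdict(p), indent=2)

proposal = ChangeProposal(
    device_id="cautery-0042",
    cases_observed=20,
    current_setting=40.0,
    proposed_setting=44.0,
    rationale="last 20 cases consistently needed ~10% more power to achieve "
              "adequate sealing within the target time",
)
print(emit_proposal(proposal))  # goes to change management, not to the device
```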

Etienne Nichols: Anytime we insert the words artificial intelligence, I like to think, well, what if we put real intelligence behind it? Just to make sure I understand what you're saying, I'm going to use your potentiometer example again. Maybe the surgeon is saying, if I spike it just barely, and I'm talking about things I haven't experienced myself, but let's just say as a hypothetical, if I spike it just for a moment, it does what I need it to do. And he sends that back to the manufacturer: hey, I think you should allow me to go just a little bit beyond what your parameters are. Just another example of what a manufacturer might hear.

Mike Drues: Potentially, yes, and it's a great example, Etienne. Not only that, that exact example has happened for decades throughout the medical device industry. As I'm sure you know, physicians will use a product and then say to the company, either through the salesperson that comes to visit them or somehow else, and I used to get this all the time as an R&D engineer: hey, I use your device this way, but if you make it a little longer, a little shorter, a little fatter or thinner, I can use it for something else. So this has been happening since the beginning of time, or certainly since the beginning of medical devices. Now we're just introducing another player into the game. In this case, it's the device, the software itself, that can make some of those recommendations. And whether you allow the software to implement those recommendations without any input from the outside world via, for example, the predetermined change control plan, or you allow the software to make the recommendation back to the company, and then the company evaluates it and decides whether it's a recommendation they want to implement or not, at the end of the day, you end up in the same place.

Etienne Nichols: Yeah. If we go back to this draft guidance, I don't know if you wanted to speak to any of its specific particulars. We've mentioned the Predetermined Change Control Plan, PCCP; I suppose that's an acronym we're going to have to get used to. It has three elements that aren't really that surprising: a detailed description of the specific planned device modifications; the associated methodology, meaning how you're going to develop, validate, and implement those modifications in the future; and then the third element is an impact assessment to determine the benefits and the risks. So really it goes back to risk management at the end of the day. Any comments on the bones of the draft guidance?
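For readers who think in structures, the three elements just listed could be organized internally along these lines. This is a hypothetical outline, not an FDA template; the field names and example content are invented.

```python
# Hypothetical internal outline of a PCCP's three elements; the field names
# are our own shorthand, not an FDA template.

from dataclasses import dataclass, field

@dataclass
class PlannedModification:
    description: str  # element 1: what the software may change
    methodology: str  # element 2: how the change is developed, validated,
                      # and implemented (the modification protocol)

@dataclass
class PCCP:
    device: str
    modifications: list[PlannedModification] = field(default_factory=list)
    impact_assessment: str = ""  # element 3: benefits and risks of the plan

plan = PCCP(
    device="hypothetical AI cautery controller",
    modifications=[PlannedModification(
        description="auto-adjust power within a 20 to 60 W envelope",
        methodology="bench re-validation at the limits plus interior points; "
                    "regression testing before each deployed update",
    )],
    impact_assessment="no new questions of safety or effectiveness; overall "
                      "risk unchanged while inside the validated envelope",
)
print(plan.device, "with", len(plan.modifications), "planned modification(s)")
```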

Mike Drues: So here's a comment that I think is going to surprise some in the audience. We could spend a lot of time talking about the details of those three points, and I agree with the three points that you just mentioned. I think that anybody with an IQ of more than five, and I don't mean to be condescending, I'm simply trying to make a point, would agree that they make sense. But what's the precedent for this? Once again, I'm trying to focus on the regulatory logic. To me, there's a tremendous amount of regulatory precedent here, and one of the other examples that comes to mind is from the area of adaptive trial design, for those in the audience that are familiar with clinical trials and specifically adaptive trial design. Now, this is a topic for a completely different discussion, Etienne, and a much more advanced discussion, to be honest. But what you just paraphrased, in the FDA guidance that we're talking about on this whole predetermined change control plan, is right out of the adaptive trial design playbook. For those that are not familiar with it, in a nutshell: in traditional clinical trials, everything is locked down, so to speak, before you begin the trial; in other words, the number of patients, the number of sites, the inclusion and exclusion criteria, all that kind of stuff. In adaptive trial design, everything is open. You can change almost anything that you want during the actual clinical trial. But there are some caveats to that, some boundaries, because you can't just change anything you want willy-nilly; otherwise that would be cheating. These changes need to be identified in advance and they need to be validated in advance, i.e., predetermined, or pre-validated, whatever you want to call it. So again, the regulatory precedent here, in my opinion, is spot on, the regulatory logic. If we took this guidance on AI/ML device evolution that you're referring to, scratched out the AI, and replaced it with adaptive trial design, we would largely have the same thing. I don't know if that's a direct answer to your question. I'm happy to drill into those key points that you mentioned a moment ago in more detail. But I'm simply trying to illustrate that there's really not a lot here that's new if you just think a little bit, and if you really try to focus on the regulatory logic and not get so hung up on the letter of the law, but rather the spirit of the law.

Etienne Nichols: Yeah, that does make sense, especially when you think about that adaptive process. Maybe your product is already changing in the field, or has a certain range it operates under. It makes sense. That being said, I am curious whether you've seen common validation issues. Or maybe the validation isn't the issue; maybe it's the forward-thinking ability, the ability to anticipate and actually set good goalposts or boundaries for your device. I don't know. What are some of the challenges? Maybe that's the question I should ask.

Mike Drues: Well, I would say perhaps the biggest challenge, in my opinion, Etienne, when it comes to validation, not specifically AI, but just in general, and I'll refer our audience to a webinar that I did for Greenlight probably at least a couple of years ago now, is validating your validation. A lot of times I see companies do a validation, and it turns out that what they're validating is totally the wrong thing to validate. What is the point of validating something if you're not validating the right thing? When it comes to software, and specifically AI or this predetermined validation, whatever you want to call it, the underlying assumption is that of course we're validating the right thing. We used the temperature example before; there are a litany of other examples. But the first thing, and it sounds very basic, but I see a lot of companies, quite frankly, screw this up: ask yourself, are you validating the right thing? Or in this particular case, are you allowing your software to validate the right thing? Here's another practical suggestion. When it comes to AI software, most medical devices, certainly not all, but most of them, have at least a few, in some cases a lot of, different parameters that the user can vary or control. Maybe you're going to at least start by allowing your software, your artificial intelligence, to vary some parameters but not others. From a basic statistics perspective, it's going to be a lot easier to allow the software to vary only one or two parameters as opposed to letting it vary ten or 20. If you remember your basic statistics, when you get into, and I'm going to embarrass myself here, but what the heck is it called, multivariable analysis? When you have a lot of different parameters changing at one time, the way you usually start is you lock everything down and vary only one parameter and see what the effects are. Then you lock that parameter and vary the next one. When you start to vary multiple parameters at the same time, the math and the statistics, and as a result the time and the cost, get really gnarly very quickly. So as a practical matter, start out with the low-hanging fruit. Start by allowing your software to vary only the single most important parameter, or the most important one or two. Even in devices that I can think of that have a lot of different parameters that can be changed, most of the time there are one or two that the physician changes most frequently. Maybe ask your users, or ask your marketing friends: hey, this device has been out there for a while; of all the different parameters that the physician can change, what are the one or two they're most likely to change? And then the R&D engineer or the software designer can design the software to work that way. That would be another small suggestion I would offer.
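A minimal, made-up sketch of the one-factor-at-a-time approach Mike describes: hold every parameter at its nominal value, sweep a single parameter across its proposed range, and check an acceptance criterion at the limits and a few interior points. The parameters and the pass criterion are stand-ins for real bench testing.

```python
# Hypothetical one-factor-at-a-time (OFAT) validation sweep: lock all
# parameters at nominal, vary one across its proposed range, and check the
# acceptance criterion at the limits plus a few interior points.

def ofat_sweep(nominal: dict, param: str, lo: float, hi: float,
               passes, points: int = 5) -> bool:
    """Return True if passes(settings) holds across the swept range."""
    for i in range(points):
        value = lo + (hi - lo) * i / (points - 1)
        settings = {**nominal, param: value}
        if not passes(settings):
            print(f"FAIL at {param}={value:.1f}")
            return False
    return True

nominal = {"temperature": 50.0, "power": 30.0, "duration": 2.0}

def passes(s: dict) -> bool:
    # Invented acceptance criterion standing in for real bench-test results.
    return 40.0 <= s["temperature"] <= 70.0 and s["power"] <= 60.0

# Pre-validate only the one parameter the software will be allowed to vary.
ok = ofat_sweep(nominal, "temperature", 40.0, 70.0, passes)
print("temperature range pre-validated" if ok else "proposed range rejected")
```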

Etienne Nichols: That's a good suggestion. I think I already know the answer to this, because you're talking about the regulatory logic, and that's a great way to approach it. But if I think about this software as a medical device, or whatever medical device that includes AI, what about labeling? Should that even change as the parameters change? Because reading through the draft guidance, it's almost as if, okay, I'm anticipating I would have maybe a special 510(k) or a letter to file in six months due to some changing parameters, but since I'm doing this PCCP, I'm not going to do that. Would the labeling have changed? I'm curious what your thoughts are about that.

Mike Drues: Well, let me answer that in two ways, Etienne, and I know we're getting close to our time, so we can wrap this up soon. I get a lot of questions, in fact I got a question from one of my customers just earlier today, on a similar issue. They have a device on the market that is a 510(k). Let's say the device is under manual control. They want to come out with a new version of the device that does exactly the same thing as the old device. The only difference is that they want the new version under software control, under artificial intelligence. So they asked, can I use my previous device as my predicate? I said to them, theoretically, yes. But remember, one of the two basic criteria of substantial equivalence in the 510(k) is on the technology side. Any differences in the technology, and clearly manual versus AI is a difference, cannot do two things: they cannot introduce new questions of safety and efficacy, nor can they change the overall risk. Those are the two basic requirements of the 510(k); I've done lots of podcasts and webinars with Greenlight on that, for those in the audience that need more information. If we can go to the FDA and say to them, we're adding AI, but it does not add new questions of safety and efficacy, nor does it change the overall risk, and assuming the label claims are the same, then yes, it is possible to make a strong substantial equivalence argument. However, in most cases, certainly the cases that I'm familiar with, I think that's difficult to do. And even if you could do it as a 510(k), in many cases you might simply be pushing a bad position. In that case, I might encourage the company to flip to the de novo, because with a de novo, you're not constrained; you don't have to play this game of, well, we're kind of like the previous product in these ways, but we're not like it in those ways. You don't have to waste your time on any of that nonsense. You just say, my device is new, we're doing a de novo, end of discussion, and let's talk about the cool stuff. So that's the first part of the response to your question. The second part, in terms of labeling, and I think this is the gist of your question: is it necessary for us to announce or disclose in our labeling that we're using AI? From a regulatory perspective, I would think not. At the end of the day, what's most important is our label claims. If we say that our device is going to do X, Y, and Z, we need to be able to prove that our device does X, Y, and Z. Whether the device does it via manual inputs from a user, or whether the device does it autonomously, just by itself, from the user's perspective, who the heck cares? Now, there are some gray areas, like we talked about earlier, where if the software is going to say, yes, I, meaning the software, think that the patient has skin cancer, for example, but you as the doctor can change it if you want to, then obviously you're going to have to disclose that. But if you're asking me whether it's necessary to say that the device has AI, from a regulatory or even an engineering perspective, I would say no. However, from a marketing perspective, Etienne, this is exactly why I think most people want to do it. It's the same reason why people want to say they have a laser in their device or something like that: because, and I'm dating myself here, it sounds Star Trek-y, you know? Does that answer your question?
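As a toy restatement of the substantial equivalence logic Mike outlines (and emphatically not regulatory advice), the decision could be caricatured like this; real submissions weigh far more factors than three booleans.

```python
# Toy restatement of the two-part substantial equivalence test Mike cites:
# technology differences must not raise new questions of safety and
# effectiveness, and must not change the overall risk profile.

def substantial_equivalence_outlook(same_intended_use: bool,
                                    new_safety_questions: bool,
                                    risk_profile_changed: bool) -> str:
    if not same_intended_use:
        return "no predicate argument; consider de novo (or PMA)"
    if new_safety_questions or risk_profile_changed:
        return "weak 510(k) position; consider flipping to de novo"
    return "a 510(k) with the prior device as predicate may be arguable"

# Manual-dial device replaced by AI control, same label claims:
print(substantial_equivalence_outlook(
    same_intended_use=True,
    new_safety_questions=False,   # the hard part to demonstrate for AI
    risk_profile_changed=False,
))
```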

Etienne Nichols: Absolutely, yeah, it answered my question and more, and it was really good. I know we're at time, but this maybe merits future discussion. We'll leave that to the audience, to see whether they have feedback, would like to hear more, or have specific questions they'd like us to answer. So thank you so much, Mike. Any last piece of advice or words before we go?

Mike Drues: Just two last reminders, at a very high level. Yes, AI is relatively new, although as I said before, Isaac Asimov predicted all of this, you know, more than half a century ago. But I'm constantly reminded of the old French philosopher, I never remember his name, who said, the more things change, the more they remain the same. So yes, there are a few differences when it comes to AI and machine learning versus non-AI/ML, but really there are a heck of a lot more similarities than there are differences. So try to understand and focus on the regulatory logic, and don't get hung up on the minutiae of what the guidance says or what the CFR says. That's point number one. And then my other long-standing piece of advice, Etienne, to our audience: once you figure out what makes sense to you and get your ducks in a row, so to speak, on how you're going to handle the AI, how you're going to train the algorithm, how you're going to allow for predetermined change control, and so on, take it to the FDA in advance of your submission and sell it to them. Whether you do it in the form of a pre-submission meeting or something else, I don't really care. But sell it to them, because so many of the problems that I see companies run into are not only preventable, they're so unnecessary. You know, don't treat the FDA as an enemy; treat them as a partner. But remember my regulatory mantra, and that is: tell, don't ask; lead, don't follow. Don't walk into the FDA and say, hey, I have this new piece of AI, can you please tell me, you know, how do I test it, and so on. That, in my opinion, is a terrible approach. So those are some of my final thoughts. Anything that you would remind our audience of as we wrap this up?

Etienne Nichols: Oh, this is all really good. I really appreciate it. I don't have anything to add; I think you covered a lot. I have a lot of links to put in the show notes, so those of you listening, definitely check out the show notes and listen to the previous podcast to get a little more background, as well as the links to the guidance. So no, this is good. Thank you so much, Mike.

Etienne Nichols: We'll let you get back to the rest of your day. Everybody, take care.

Etienne Nichols: Thank you so much for listening. If you enjoyed this episode, reach out and let us know, either on LinkedIn, or I'd personally love to hear from you via email. Check us out if you're interested in learning about our software built for MedTech, whether it's our document management system, our CAPA management system, our design controls and risk management system, or our electronic data capture for clinical investigations. This is software built by MedTech professionals for MedTech professionals.

Etienne Nichols: You can check it out at www.greenlight.guru, or check the show notes for a link. Thanks so much for stopping in. Lastly, please consider leaving us a review on iTunes. It helps others find us, and it lets us know how we're doing. We appreciate any comments that you may have.

Etienne Nichols: Thank you so much. Take care.
