In this engaging episode of the Global Medical Device Podcast, Etienne Nichols and Devon Campbell dive into the complexities of verification and validation (V&V) in medical device development.
Whether you’re a medtech startup founder or an industry veteran, this conversation offers essential insights on creating robust V&V processes. Devon shares practical advice on defining user needs, writing strong design inputs, conducting pre-verification testing, and ensuring that verification protocols provide meaningful evidence.
The discussion explores how to avoid common pitfalls, optimize your design reviews, and strategically approach V&V for complex devices, offering listeners a wealth of actionable takeaways for navigating regulatory challenges and accelerating market access.
Verification vs. Validation – Verification checks if a product meets specified requirements, while validation ensures it fulfills the needs and expectations of its intended users. Both are critical steps in medical device development, and their successful execution depends on precise, testable requirements.
Poll Question: "What’s been your biggest challenge in navigating verification and validation for your medical device? Share your experiences below!"
We’d love to hear from you! Share your thoughts on this episode, or suggest topics you’d like covered. Email us at podcast@greenlight.guru and don’t forget to leave a review to help others find us.
Devon Campbell: Welcome to the Global Medical Device Podcast where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge direct from some of the world's leading medical device experts and companies.
Etienne Nichols: Every medtech company has to navigate complex regulatory requirements. But with Greenlight Guru, compliance doesn't have to be a headache. Our purpose-built quality management software simplifies compliance with FDA, ISO, and global regulations, giving you confidence in every step of your product development process. Built by medtech professionals, for medtech professionals, Greenlight Guru is the QMS that works for you. See why medtech leaders trust Greenlight Guru at www.greenlight.guru. Everyone, welcome back to the Global Medical Device Podcast. My name is Etienne Nichols. With me today is Devon Campbell from Prodct. Devon, how are you doing?
Devon Campbell: I'm great.
Etienne Nichols: Several months ago you came to me and asked, hey, what if we did a podcast on verification and validation? And I was in Boston a few months ago as well, so we managed to do that. We got together and tried to record it in person on a park bench. And neither one of us is an IT guy. It turns out we're not AV guys either.
Devon Campbell: We're not AV guys.
Etienne Nichols: That's right, AV.
Devon Campbell: Our video worked fine. It was the audio that we both struggled with. We couldn't catch good audio.
Etienne Nichols: Right. So let's do take two. Just to set the stage for those of you listening: my goal for this, and if you have a different goal, Devon, we'll pivot and try to catch all the goals, but one of my thoughts was that it would be great to speak to an early-stage company that's building a medical device, that knows verification and validation is on the horizon, but maybe doesn't have a solid understanding of what that really entails and what the differences are. So I thought it'd be good to talk about those different things. What are your thoughts?
Devon Campbell: Yeah, I think that's a valuable discussion to have. I think there's a lot of misconception and a lot of misunderstanding, especially for early-stage folks, for first-time entrepreneurs in the medical device space, or for newbies to the space. And maybe you're working at a larger company and you're listening to better understand: what does verification mean? What does validation mean? A really easy tell is whenever someone says, oh yeah, I'm working on V and V, and they kind of lump it all together like it's one thing, when it's really two very separate things. That's a good tell right there. If you're playing poker with someone and they're like, oh yeah, I know V and V, I know you're not holding a full hand right there. So yeah, let's get into it.
Etienne Nichols: And I know I'm talking to an engineer, so I'm going to speak engineer-speak for just a moment. Just dabble, barely. I feel like we should go back to step one, but really, when we talk about verification and validation, we don't go to step one. We go to step zero. Step zero is far upstream, and that's user needs and design inputs. Would you agree? And how do you want to tackle this?
Devon Campbell: Yeah. So verification and validation, those are exercises, and we'll get into what they mean, but those are exercises that a team will get into toward the end of their product development process. Not at the end, but toward the end. But in order to verify your design output. And we're probably going to have to stop and define what a bunch of these things mean.
Etienne Nichols: Yeah, why don't we do that?
Devon Campbell: Yeah, let's do this. Get a few on the table and then we'll define them. So, to verify your design output requires you to have very strong requirements that you will verify against. And then to validate your design output, you have to have strong user needs to validate against. Okay, so design output, what does that mean? A lot of times you might think, well, I've got to verify my device. Imagine whatever the medical device might be. It's a bandage, so it's a Class 1 medical device, or it's an in vitro diagnostic, so it's a Class 2. The device is not what you verify. It is the design output, which is the totality of the device itself, but also how you make the device, how the device is tested, how it's cleared, how you package it, and all of that stuff. That's all part of your design output. People tend to think of it a little more narrowly: I have to do V&V on my device, or I have to do V&V on the software. It's actually broader than that. The software or the device itself is not your only piece of design output; there are larger pieces of design output. An easy way to think of it, for those listening who may not know: think of design output as everything you would hand to someone so they would know exactly how to make your thing, how to test it, how to fabricate it, how to buy the parts, how to inspect all the parts. The totality of all of that coming together for making, say, a lot of a thousand widgets, that whole thing is your design output. Think of it as everything that's in your device master record.
Etienne Nichols: Design verification. You're verifying that the design outputs have met the design inputs. Let's go on to validation.
Devon Campbell: So let's say it in plain English. Let's not use FDA-speak or ISO-speak. Verification is you asking yourself, and then proving to yourself with evidence: did we build the product right? Did we build the product correctly, the way we specified it? Now validation asks a slightly different question. Validation asks: did we build the right product? Only two words swap position, but it makes a very, very big difference. So with your design inputs, your product specifications, your product requirements, the device shall do this, the system shall do that, you're verifying through testing that it does do those things. User needs: there are other podcasts I've been on where we talk about an expansion of the mental model of a user need, and I don't want to get into that here because it's an entire podcast worth of conversation, but it's searchable, people can go find it. User needs are what they want the system to do. You interpret what that means in a design specification and say, this is what I think I'm going to build to meet those user needs. You can test your system and say, did I make this thing the way I intended to? Yes. Does it do what I intended it to? Yes. Now I put it in the hands of users and ask, does it do what the user wants it to do? And you can very easily pass verification but fail validation, because you had weak user needs. You didn't really understand the market or the needs of the users. That illustrates the point that you can't do verification or validation without very strong, well-informed requirements. So I think we should probably talk a little bit about requirements and requirements management.
Etienne Nichols: Now, you talked about strong user needs. And I know we don't need to get into a whole philosophical argument about user needs necessarily, but could you give some examples of strong versus weak user needs, or ones that continually fail, where a validation might fail due to the weakness of the user need?
Devon Campbell: Sure. I'll give you one of my favorites. User needs, requirements management, product development, verification and validation, that's my fun place. It's where I really love to do work. And especially in requirements management, getting those well articulated is very much an art. It takes a bit of time to get it down well. For user needs, I like to write them in the voice of the user. A weak user need might be: the user expects the system to be easy to use.
Etienne Nichols: Yeah.
Devon Campbell: Okay, so what does that mean? Now I have to think about how I would validate that. And I have no problem with the idea of "easy to use" in a user needs statement, because in the voice of a user, a user might very well say, I need it to be easy to use. So I like to honor the voice of the user. The trick here, though, is: who's the user in that sentence? Let's say it's an IVD. Is it the doctor who consumes the data that comes out of the in vitro diagnostic product? Is it the technician that performs the test? Is the user the technician who has to draw the blood or get the tissue sample and then put it into the cartridge or whatever you need to stick into the system? Is the user the patient? That's one of the common problems I see. A lot of times we just say "the user needs blank," but we don't define who that user is. I'd rather call out that user in the user need. So I'd say: the technician running the test expects the system to be easy to use. Or maybe, if it's an at-home type device, think of a COVID test: the parents of a child administering a COVID test expect the test to be blank, or they need the test to be blank, because they're helping. The kid has user needs that need to be honored, and the parents have user needs that need to be honored. You can't just say "the user." It's a cop-out. You need to call them out. And there might be four or five different users who all have similar user needs that overlap, but then you have nice, robust user needs. And when it comes time for validation, I have a better chance of being able to validate. Let's say it's the parent example. I could give the test to, you know, 30, 40, 300 parents, let them try it, and then administer a survey afterwards and ask them to do a rank scaling between 1 and 10, or 1 and 7: how strongly do you agree with the following statement, "the system was easy to use"? Strongly agree on one side, strongly disagree on the other, and something neutral right in the middle.
Etienne Nichols: Yeah.
Devon Campbell: And then you can measure ease of use by getting a big enough population and asking people the question, and saying: I'm going to set the target for myself. What did I say, 1 to 7? As long as I get a 4 or better on average, I'm going to say my system is easy to use, and you can validate. But I had a discrete population that I knew to go test. The requirement was written well enough that I knew how to validate it later.
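To make that acceptance check concrete, here is a minimal Python sketch of the scoring Devon describes. The responses and the 4-or-better threshold are illustrative assumptions, not numbers from a real protocol:

```python
# Hypothetical ease-of-use validation check: survey responses on a
# 1-to-7 agreement scale ("the system was easy to use"), with an
# acceptance criterion of a mean score of 4 or better.
scores = [6, 5, 7, 4, 3, 6, 5]  # illustrative parent responses

mean_score = sum(scores) / len(scores)
passed = mean_score >= 4.0  # acceptance criterion set in the protocol

print(f"mean = {mean_score:.2f}, ease-of-use criterion met: {passed}")
```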
Etienne Nichols: On the user side, when you say "the user" may be a cop-out, I think that's a really good point, and I've often thought that myself. Do you have any recommendations on breaking that out? Because here's the problem I see potentially: you might say the technician needs this, the technician needs that. I feel like you need to come up with a full list of user needs from that user's perspective and then move on to the next one for a full list, versus cherry-picking different ones. What are your thoughts?
Devon Campbell: If I was advising someone who's never done it before and helping them scope the user universe they might consider in their user need requirements, I would first think of who the device touches. That's a very easy way in: who are the people that touch this device? Well, it touches the patient. Most likely it touches the person who puts it in the patient or puts it on the patient, or, in an IVD sense, takes the sample from the patient and puts it on the device. So think of who it touches, and then who consumes the information it generates. And the last one I'd like to suggest is: follow the money. Who pays for it? Because they have needs and expectations of a medical device too. That might sound weird, but I can give an example. Think of a lab manager who makes the decision to buy a piece of equipment for their lab that can perform some kind of assay. Or it's a piece of equipment that a hospital would use. The doctor isn't the one who actually pays for it; the hospital is paying for it. Whoever at the hospital has to buy that one thing might have specific user needs. So I like to follow: who does it touch, who consumes the information it generates, because most devices generate information of some kind, and then who pays for it. Follow that money through. You don't need to follow it to a ridiculous degree, but at least think through one or two degrees of financial separation from the patient. Of course the patient pays for it, and they have insurance; go beyond that, move a little bit further. That gives you a nice scope, a nice boundary.
Etienne Nichols: I like that. The way I've looked at it in the past is to think about your supply chain. I know it's not a perfect analogy, but when you think of supply chain, you ask who provided this, who assembled it, and so on. Same thing here: what's the life cycle of your medical device? That gets you your full list of users or potential interactors.
Devon Campbell: Yeah. Okay. If you want to go like, do you want to go for bonus credit there? You think about who deals with your, you think about who deals with your device when it's done. You think about the end of the life cycle. We don't have as much attention that we give to that here in the US but if you're in Europe, a lot more attention is given to it. So if you're thinking of bringing a medical device to Europe, you probably ought to think about in your user need statements, your end of life situations for the device. How does it get thrown away? Does it get recycled? Does it have electronics in it? So now, you know, we comes into play. Rojas comes into play. Like all these different things come in, but they come end of life and they're geography based.
Etienne Nichols: Yeah. And even in the US, if it needs to go into a sharps container, for example. So there's lots to think about. Okay, so that's user needs, and I know we could go further on user needs. Like you said, it's probably multiple days of conversation. But design inputs: once you've developed those user needs, you've got your list from your supply chain, your total life cycle, all of the users. What about the design inputs coming from those user needs?
Devon Campbell: In my experience, each user need will give birth to several design inputs. We can keep calling them design inputs; that's a very FDA-centric way of saying it. They're product requirements, and maybe that's just an easier way to think about it. So each user need will usually need several requirements. Sometimes it's one-to-one, but usually it's a one-to-many relationship. Think of it as a what-versus-how situation. The user need is the what that is wanted. Now it's our responsibility as developers of medical devices to say: how are we going to deliver that what? Say there's a user need for a kill switch on the front of something so you can cut power to it. Well, I might make design decisions and say, okay, the system has to have a kill switch. I'm going to design one that's red, one that's easily visible. So I would write: the power switch must be high contrast, so that people with color blindness can still see it. It needs to be unobstructed. I come up with all these requirements that I feel I need in order to deliver the ability to cut power, which is the user need: I want to be able to cut power in the case of an emergency. How am I going to do that? I come up with lots of ways. So you can see one user need become several design inputs.
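As a rough illustration of that one-to-many relationship, here is a minimal Python sketch. The IDs, field names, and wording are hypothetical, not from any regulation or requirements tool:

```python
# One user need giving birth to several design inputs (requirements).
user_need = {
    "id": "UN-012",
    "user": "operator",  # name the user, don't just say "the user"
    "statement": "The operator needs to cut power to the system in an emergency.",
}

design_inputs = [
    {"id": "DI-031", "parent": "UN-012",
     "statement": "The system shall have a dedicated kill switch on the front panel."},
    {"id": "DI-032", "parent": "UN-012",
     "statement": "The kill switch shall be high contrast, visible to color-blind users."},
    {"id": "DI-033", "parent": "UN-012",
     "statement": "The kill switch shall be unobstructed in the normal use position."},
]

# Each design input traces back to the user need it helps deliver.
assert all(di["parent"] == user_need["id"] for di in design_inputs)
```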
Etienne Nichols: Well, maybe a different question is: can you have a design input without a user need?
Devon Campbell: You can. Design inputs don't only come from user needs. They also come from things like risk controls, standards, and regulatory requirements, which by themselves are not user needs.
Etienne Nichols: Yeah, I like that you brought that up. Risk management specifically: when I think about ISO 14971, I summarize it into the three main risk controls it wants. Safety inherent by design, which means changing your design, and that changes those requirements. Then protective measures, and then information for use, the labeling. So those are all sources.
Devon Campbell: Which is always the worst way to solve the problem. But sometimes it's your only way.
Etienne Nichols: You have to have a last resort, I suppose. Yeah. So we've come up with a list of design inputs. Then, as a medical device manufacturer, now we're going to. Okay, this is funny to me, so I'm just going to throw this out there: design inputs to outputs. To me it's one of the funniest jumps, because as a mechanical engineer, that input to output is what you do for a living.
Devon Campbell: The translation.
Etienne Nichols: Product development process, into a single jump from inputs to outputs. But do you want to speak to that?
Devon Campbell: Yeah. You just belittled my entire experience designing medical devices since the 90s. That's all I'm really doing: I'm just turning inputs into outputs. That's all I do.
Etienne Nichols: And I say that jokingly, because I was a mechanical engineer as well, and it's always amusing to me.
Etienne Nichols: Yeah, yeah.
Devon Campbell: You really have to spend a lot of time, when you're thinking about the requirements, asking yourself: how would I test that? When I write requirements, I like to write down a few bullets in the notes of the requirement. How would I verify this? Would I do testing? Would I do inspection? Would I do analysis? There are a couple of different ways you can verify a requirement, and I'll put down: I would do inspection, and here's what I would inspect, these three things, and I would look across these areas. So I at least have a basic skeleton for how I might verify this requirement before I've ever even designed it. And I'll tell you, as a design engineer, and as a leader having led lots of teams of design engineers: if we have the opportunity to design to a requirement and we know how it's going to be tested later, that actually gives us additional information as design engineers to design the thing so that we pass the test. It's a little bit of starting with the end in mind. You think, oh, I know they're going to run this reliability study, so I'm going to over-engineer some of the reliability aspects of my system to make sure I not just meet, but exceed, my test protocols.
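A minimal sketch of that habit, assuming a simple requirements list where each item carries a planned verification method of test, inspection, or analysis. The structure and names are hypothetical; a pre-flight check like this flags requirements with no plan yet:

```python
# Each requirement notes how it will be verified (test, inspection, or
# analysis) plus a short sketch of what that verification would cover.
METHODS = {"test", "inspection", "analysis"}

requirements = [
    {"id": "DI-031", "method": "test",
     "note": "Bench test: actuate kill switch, confirm output power drops to zero."},
    {"id": "DI-032", "method": "inspection",
     "note": "Inspect switch contrast against the enclosure color."},
    {"id": "DI-033", "method": None, "note": ""},  # no plan yet
]

unplanned = [r["id"] for r in requirements if r["method"] not in METHODS]
if unplanned:
    print("No verification method planned for:", ", ".join(unplanned))
```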
Etienne Nichols: Right. I'm trying to think of an example as you were talking, of an engineer designing to the test. You might think, well, teaching to the test, that's cheating. But you're not cheating. Say a certain aspect of your device has to be measured to a very tight tolerance, but it's built into a subassembly that goes into another assembly, and you're going to try to test that at the end. Well, you need to be able to take it apart, or at least it's something that shouldn't get welded shut before it ever gets to the inspection. That's something to think about. Maybe you can measure it a different way than with a caliper. But if the designers know what the test is going to be, I think that's really valuable. What else about design inputs to outputs?
Devon Campbell: I think we dove into the early side of it. Unless you think there's something we're missing, it might be good to move into the meat of the podcast, which is verification.
Etienne Nichols: You've already mentioned that as you're writing these requirements, you're going to consider how you'll verify them. There are different ways to verify things, but maybe we could go one more time into some examples of what verification looks like.
Devon Campbell: Okay, so verification. Let's set some ground rules, and we'll bust a few myths too. Verification is, again: did we build the product right? Did we build what we intended to build? Not: does it meet the needs of the users? So verification inherently opens up the opportunity to do bench testing for some of our verification. Let's say you have a medical device with multiple subsystems. Pick an insulin pump. We have an electronic circuit in the insulin pump, so we might be able to do some of the verification of the electronics, the circuitry of our system, without having the entire pump fully assembled. We can prove that it's resilient to dirty power, or something like that, on the bench. The units under test need to have part numbers, they need to be under design control, and we need to know what rev everything was when we tested it. But you don't necessarily always have to do verification on the fully assembled, fully built thing, nor does it necessarily have to be built in the factory.
Etienne Nichols: Yeah.
Devon Campbell: But you do need to know what you did your testing on. So definitely, design controls are in place for sure. They should have been in place a long time ago, way before you got to verification and validation, in my opinion. I turn them on as early as I possibly can, but I make the change process easy early. Then you can add more hoops that you have to jump through toward the end. But the earlier you document and the more frequently you revise, the richer your design history becomes, and the better you look in an audit later, because you have this gorgeous, rich design history behind you.
Etienne Nichols: Yeah. So you know which protocol, at which revision, tested a specific revision of that board, for example.
Devon Campbell: Right, right.
Etienne Nichols: Which test tested what revision. Right. I want to go back up for a second. Or, go ahead.
Devon Campbell: No, no, go back up. I want to hear it.
Etienne Nichols: Well, I wanted to back up just a minute, because we're talking about testing or verifying different components of the device, which I think is really valuable, but I think we skipped over, or I meant to ask about, design review. We typically look at design reviews as stage reviews, and from a project management standpoint I separate technical reviews from stage reviews in my mind. But a lot of people look at it as: okay, we've got all our user needs, let's do a design review; we've got all our design inputs, let's do a design review. How do you look at that? And do you see design reviews being broken up in a similar way, reviewing a component of the device versus the entire set of design inputs? Does that make sense?
Devon Campbell: Yeah. And we can get into philosophical differences here, and I'm sure plenty of listeners might have completely different opinions on how to do this than I'm about to espouse. But as we mentioned earlier, we do have to generate objective evidence that we have processes in place within our quality system to review our requirements and make sure that they are good requirements. That review is really designed to make sure you don't have inconsistencies and you don't have one person coming up with everything. So of course we have to have evidence of that review having happened. I don't care if it's in a design review or a phase gate review, if you use a phase gate process. If you're using a completely agile, lean process, it's going to be different. It doesn't have to be the stereotypical waterfall process that everyone feels like they have to follow because that's the one picture the FDA showed in the design control guidance for 820. What matters is that you've documented it. So for the requirements: absolutely, yes, your requirements need to be under design controls. They should have their own revision. I like to think that the best-in-class way to control requirements is on an individual basis, not as one document with all the requirements in it.
Etienne Nichols: Okay.
Devon Campbell: And I'll give an example of that, just so it's clear for anyone who's new to the field. When I started in medical devices, we started with a PRD, a product requirements document, which is what we're talking about for design inputs. And we had a CMRD, a customer and market requirements document. The sales team wrote that. They said, Devon, this is the product we want you to go make. And my team on the development side said, okay, great, we're going to write the PRD; here's how we're going to make your product for you. And they were linked, so we knew what they wanted and how I was going to give it to them. We had to review those documents, but we reviewed them as a whole. The entire document, maybe 300 or 400 product requirements, got revved every time you made a change to it. As opposed to more modern, in my opinion, requirements management, where you do your design reviews either on groups of requirements or on an individual requirement.
Etienne Nichols: Yeah.
Devon Campbell: You don't need them all in one giant physical document that you then bump the rev on every single time. It can be perfectly fine for one requirement to be at rev 0 or rev 1 and another requirement to be at rev 7. It just shows we've really thought it through and made it more precise. Once we start verification and validation, be careful, though: we don't want to move the goalposts and change our requirements. You can, but you should do it very sparingly.
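Here is a minimal sketch of what per-requirement revision control might look like, assuming a simple dictionary-based record. This is illustrative only, not the data model of any particular tool:

```python
# Each requirement carries its own rev and change history, so editing
# one requirement doesn't force a re-review of hundreds of neighbors.
requirement = {
    "id": "DI-032",
    "rev": 1,
    "text": "The kill switch shall be high contrast.",
    "history": [],
}

def bump_rev(req: dict, new_text: str, change_note: str) -> None:
    """Archive the current state, then bump only this requirement's rev."""
    req["history"].append(
        {"rev": req["rev"], "text": req["text"], "change": change_note}
    )
    req["rev"] += 1
    req["text"] = new_text

bump_rev(requirement,
         "The kill switch shall be high contrast, visible to color-blind users.",
         "Added color-blindness wording after review")
print(requirement["rev"])  # -> 2, while other requirements keep their own revs
```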
Etienne Nichols: And part of the reason I bring up that different design review approach is not purely to talk philosophy. Someone might argue it's just a semantic difference: if you only change one requirement in a document of 300, you've only changed one, versus reviewing that one by itself. That seems like a theoretical exercise. However, it has real downstream effects in real life. Because if I'm going to rev up a document with 300 requirements and circulate it to different people, they're going to look at everything. There's a potential that some of those requirements that are locked down, that we're not touching, prompt someone to say, well, you know what, what if we tweak these? And it draws things out. So I actually do think it makes a big difference.
Devon Campbell: Or they tweak it, let's say, inadvertently, because their cursor happened to be in that spot.
Etienne Nichols: Exactly.
Devon Campbell: And they accidentally hit the backspace key, and now a pressure of 19 Pa turns into 9.
Etienne Nichols: Right.
Devon Campbell: And that becomes the new requirement, and no one knew, because there were 300 of them and no one saw that Devon made that one change. You actually protect yourself from yourself when you're doing it.
Etienne Nichols: Yeah.
Devon Campbell: On a one-by-one basis.
Etienne Nichols: So, a shameless plug for Greenlight Guru: when I used to do this in Excel, that was my exact problem. But in Greenlight Guru, you can lock down and freeze requirements at different points. So I'll just throw that out there for those wondering how to do this the way we're talking about.
Devon Campbell: I swear we didn't set that up. But yes, Greenlight Guru's design controls module does let you do that.
Etienne Nichols: Okay, so we can move on from design review; we figured that out, we can review those different things. Let's go back to verification. You were talking about how you can verify different components, and that's interesting, because I've worked at companies, and with companies, who look at DV as an event. Design verification: we have accomplished that, we're moving on, and life's good.
Devon Campbell: Yeah.
Etienne Nichols: How do you see that?
Devon Campbell: I like this. I like to use it as an event as well.
Etienne Nichols: Yeah.
Devon Campbell: It still doesn't mean that it has to be on the entire system at all times. It really depends on how you wrote the requirement. So let's bring that back to requirements writing. If we wrote a requirement for your insulin pump that was inclusive of the entire pump in the sentence, then yeah, to honor that requirement I need to write a verification protocol where I'm verifying against the entire pump. But if I wrote the requirement to say the electrical circuit must be resilient to dirty power, that's not against the entire thing.
Etienne Nichols: Or even ingress protection, for example. You could verify just that piece.
Devon Campbell: Right, exactly. So there are lots of ways we can do that. But you have to think: oh, crud, I wrote the requirement in such a way that it forces me to test the entire system. An insulin pump is one cost, but imagine it's a large diagnostic device, and I need four devices to do that. At some point in the product development process, the number of devices you have available for verification is usually limited. If it's a larger system, it's not like you have hundreds of them to work with. So you usually find yourself juggling, and working maybe nights and weekends, to get all the tests done within the time you're given, and device availability often becomes a bottleneck. So it's clever, early on, to say: I don't need the whole device, I just need the circuit. I'm going to write the requirement to be specific to the circuit. Okay, so now we're doing design verification. My best suggestion here is that people doing this for the first time strongly consider a pre-verification before they get into official DV, design verification, the term you just used.
Etienne Nichols: Yeah.
Devon Campbell: Before you get into that verification exercise, you want to be able to go through it pretty clean. You will have deviations, you will have things that fail. That is normal. If an executive team or a leadership team on the development side thinks it's going to go through perfectly and everything works great, they're fooling themselves.
Etienne Nichols: Right.
Devon Campbell: In real-life product development, there will be little things that show up. Our goal is to minimize that, so that not too many things show up during design verification, which is bad. To guard against that, you perform a pre-verification. What that does is give you a chance to fully exercise the protocols you've written, because every verification protocol needs to be written down and itself design-controlled, so it all has revisions. A protocol could test one requirement, or it could test 30 requirements. We're talking about test management a little bit now: you can have one test that touches several requirements, and another test that touches several of the same requirements. It's okay for multiple tests to test the same requirements. If you ultimately look at it and ask, how was this one requirement tested, you could say it was tested across four different protocols. You might think that's redundant, but it's actually smart verification planning, because later in life you're going to make a change to that one little aspect of your system, and if you had four protocols that you ran before, you get to consider which is the least burdensome, the easiest, the most important one to run right now, after you've already launched your product, to make sure it's still okay. Instead of it being strictly one-to-one. So I'm saying it's okay to give yourself permission to have overlap; there's benefit you can take from it later.

Okay, but running this pre-verification allows you to fail in two ways. Either you learn your system is not going to pass the protocol, and you think, oh, hold on, guys, we've got to do some serious fixing. You'd rather learn that early. Maybe you didn't do enough unit testing, or enough reliability testing or system integration testing ahead of time. Shame on you, but it gives you the chance to figure it out and fix it before you go into real verification. The other thing it lets you do, and let's presume none of that happened, good design happened and it was well informed by really good testing during development, all of the things between design input and design output: if your protocol was written poorly, you may fail your protocol just because you wrote it poorly. The protocol should be written so that someone who doesn't know your system, or has whatever minimum education you want them to have, can open it up and it tells them exactly what to do, step by step. But maybe you're trying to simulate a situation where the system loses power, and you're trying to demonstrate that it writes whatever was happening at that time to some file, so that when power comes back, it can recover. If your protocol never includes a step that says, get it to this point and then unplug it from the wall, if you never wrote a step that says kill power, the test could fail because you never killed the power. Silly little things like that can slip in when the testing team is writing the protocol, and it could be the same people as the design team, which it often is, but sometimes in a larger company they're separate.
But when it's being written, you can very easily overlook little things like that. And you don't want to just leap into design verification and then have dozens and dozens of deviations for silly little things like that. It's just better to do a nice, clean, really quick pre-verification first. You can choose whether to formally record or document it, but it's good to have it as a step in there.
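To make that "missing step" failure mode concrete, here is a hypothetical protocol fragment written as explicit steps. The content is invented for illustration; the bug Devon describes is simply the absence of the step that kills power:

```python
# Hypothetical power-loss recovery protocol. If the "kill power" step
# is omitted, the recovery check can fail even though the device works,
# purely because the protocol never actually removed power.
protocol_steps = [
    "Power on the unit and start a delivery cycle.",
    "Confirm the state file is being written every second.",
    "Unplug the unit from the wall (kill power).",  # the easy step to forget
    "Restore power and confirm the unit resumes from the saved state.",
]

for number, step in enumerate(protocol_steps, start=1):
    print(f"Step {number}: {step}")
```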
Etienne Nichols: If your design verification is multiple tests, I'm curious what your thoughts are about this. Let's say we have 40 tests that we're going to run for design verification. Do they have to be labeled as design verification, or can they be tests that, you know, serve that purpose? I mean, they're truly verifying the device; it's a test, just maybe not labeled as such. How does that work politically?
Devon Campbell: It's fine. In my opinion, you don't have to call it a design verification protocol. In the end, you'll generate a trace matrix that says which tests you did that prove each requirement was met. And you'll say, well, I ran three tests that prove that one requirement was met, and these are the three reports and the three protocols that I executed. It doesn't matter if they have the words "design verification" in them. That might actually be a bad idea, because it might discourage people from using it later, even though it's a perfectly well-written test. You might want to reuse that test later. So yeah, 100% agree, you don't need to use the name.
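Here is a minimal sketch of the trace matrix idea, with hypothetical requirement and protocol IDs. The useful check is the same one an auditor will make: does every requirement trace to at least one executed protocol?

```python
# Requirement -> protocols that exercised it. Many-to-many is fine;
# overlap gives you cheaper re-verification options after a change.
trace = {
    "DI-031": ["VER-PROT-004", "VER-PROT-009", "VER-PROT-011"],
    "DI-032": ["VER-PROT-004"],
    "DI-033": [],  # no coverage; this is what the check should catch
}

uncovered = [req_id for req_id, protocols in trace.items() if not protocols]
if uncovered:
    print("Requirements with no verifying protocol:", ", ".join(uncovered))
```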
Etienne Nichols: I think it's a psychological thing too. Sometimes we psych ourselves out. Okay, one other question. Do you recommend that the engineers who developed the device run the design verification testing, or not? What are your thoughts on having your best design engineers doing the testing and assembling the devices, to make sure we have the best devices possible to test? I'm leading the witness a little bit. Pardon me, your honor.
Devon Campbell: I don't have too much of a problem with them assembling the device, because they know it best, and in design verification you're not necessarily testing the impact of the manufacturing of everything. As long as it's well documented, you know exactly what went into your UUT, your unit under test, you know exactly what went into each one, and you know what configuration everything was in. Hopefully you had some level of work instruction to put them together, but it doesn't need to be as buttoned up as it would be in a full-scale manufacturing situation. So personally, I'm okay with that.
Etienne Nichols: Yeah.
Devon Campbell: As long as it's well documented. On the execution side, though, sometimes for a really small company you have no choice. It's five people in the company, or twelve, so you have to. But if you have the opportunity, using someone who doesn't know the device as intimately, and doesn't know the protocol so intimately, gives you the ability to find those little flaws in the system that you, as a super experienced user, not assembler, but user, will overlook and just know to work around. And you'll tend not to document that. It might be something like two parts sliding past each other: because you designed the part and you've been fiddling with it for 18 months, you know that as you take object A and stick it into object B, if you wiggle it a little bit, it goes in better.
Etienne Nichols: Yeah.
Devon Campbell: And you just know that, because you've stuck object A into object B 300 times in the last week. But it will not be in the forefront of your mind, and you won't necessarily write it down. So it's good to have somebody who doesn't know it as well as you do running it for you.
Etienne Nichols: It's kind of pressure testing your protocols. Yeah, yeah.
Devon Campbell: Well, you're pressure testing your protocols, and your system, and your design. Nothing is as humbling as taking something that you've invented and built and got working beautifully and putting it in the hands of somebody else. That's where you really learn an incredible amount. You will never learn as much as you will when you start putting it in the hands of people outside the engineering team.
Etienne Nichols: Yeah. I feel bad for whoever invented the flat blade screwdriver. Like, oh, this is great for hammering lids back onto paint cans. What other actual advice do you have for medical device companies going through this? Or tips and common challenges? Those sometimes go hand in hand.
Devon Campbell: One thing that a lot of people overlook, that I see in my practice with earlier-stage companies, with folks that maybe just don't know, but that you also see from folks who come from bigger companies and maybe didn't have exposure to it: you need to justify how many devices you're going to use to prove, with objective evidence, that your design output meets your design input. There are cases where it's a binary situation. A lot of times in software, it either shows the button or it doesn't; it doesn't matter how many devices you load the code on. Maybe the button is supposed to turn red when something happens. Either it does or it doesn't. But if it's more of an electromechanical thing, especially something that's assembled and has tolerances and stack-up analyses, stack-up problems that could come into play, you need to think very carefully about how many tests you're going to do. Because someone eventually is going to ask you: why did you do the testing on just one system? And you could say, well, that's all I had. That's a real answer. But then understand that it wasn't statistically very significant; don't pretend that it was. Then you've got to figure out ways to increase that statistical significance. Do you use just one device? Three? Thirty? Three hundred? If it's on the bench, how many circuits for the insulin pump should we build up and put on the bench? Is there some variability in the design that could influence its ability to work? Generally we know not to do testing with just one sample. We always want a couple of data points, to prove there's some degree of precision and accuracy around the result. Statistical validity and justification of your sample size is something you have to do. And a lot of times people just don't, or they'll write some overarching SOP that says, when we do verification, we will use this approach and this approach, and it gets buried in an SOP that everyone does a read-and-understand training on and then forgets about. When it comes time to write the protocol, they've completely forgotten there's an SOP out there that tries to guide that. So justifying in your protocol how many units to use, and why, is important.

Another area where I see a common mistake or challenge for people in our industry: we use these two words, verification and validation, all the time, and they mean completely different things depending on what we're talking about.
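One common way to put numbers behind the sample-size justification Devon describes, though not the only one and not a method he prescribes here, is the zero-failure "success-run" calculation: demonstrating reliability R at confidence C with no failures requires n = ln(1 − C) / ln(R) samples. A minimal sketch:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure sample size from C = 1 - R**n, solved for n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# e.g. demonstrating 90% reliability with 95% confidence, zero failures:
print(success_run_sample_size(0.95, 0.90))  # -> 29 units
```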
Etienne Nichols: Yeah.
Devon Campbell: Okay, so let's give some examples. Validation of a medical device versus validation of software. Take validation of, say, Greenlight Guru: we might be talking about software validation in both cases, but are we talking about software validation of a tool we use to run our quality system, or are we talking about software validation in an IEC 62304 sense? There are lots of different contexts in which verification and validation show up, and I do not like it when people use either of those two words inappropriately, out of context. Someone says, oh yeah, I did some lab testing, I verified this requirement already. They didn't, really. They just did some bench testing. You qualified it, you did some experimental studies, maybe you executed a nice DOE. But did you write a protocol? Was it under review? Did other people review the protocol before you executed it?
Etienne Nichols: Your hypothesis has not become a theory yet.
Etienne Nichols: Yeah.
Devon Campbell: So you might be thinking of a complex system that has software and mechanics and biology and everything else. Think again of a big in vitro diagnostic tool. You might break up your verification into device verification and software verification, because you're going to use different teams, different people, different methodologies, and then bring them all together in a nice report that says, yes, we tested the whole system in this way.
Etienne Nichols: Any last piece of advice for the audience when it comes to verification and validation? Did we miss anything? Any thoughts?
Devon Campbell: Okay, I got one for you.
Etienne Nichols: All right, let's hear it.
Devon Campbell: We've been around the block; we've developed a few devices. How frequently do you see a schedule that shows product development for X number of months, maybe pre-verification for some number of weeks, and then, in a waterfall schedule approach, DV happening for, let's say, two months, ending on the 31st, and then validation starting on the first of the next month? Almost always, right?
Etienne Nichols: Well, I always tried to put a build in between, but yes, that's where I'm going.
Devon Campbell: Right. It's often left out. And in the spirit of continuous improvement, which is what our regulations are all about, we should be humble enough to expect that something's going to come out of design verification that we might want to fix.
Etienne Nichols: Yeah.
Devon Campbell: And then I have to repeat a little bit of design verification. But that's a healthy thing. That's a good thing for us, to develop safe and effective products. And I see schedules all the time where you go straight from DV right into user need validation or clinicals or whatever it is, and you haven't given yourself time to react to what you might have learned. Don't be so arrogant as to think that you will not learn something, because you will. Otherwise you force yourself into a situation: do I let my schedule slip? Do I repeat verification on some things and start validation at the same time, in parallel, which isn't great practice? Do I turn a blind eye to it and just choose not to pay attention? It's better to be humble and understand that we're going to learn something. We should give ourselves one build of time. It's not going to be a wholesale redesign; if you did your job right as the development team, you should have a pretty decent device by the time it goes into design verification. But there are going to be some subtle tweaks you'll want to make. Give yourself time to make them.
Etienne Nichols: And I want to add one question on that, because my understanding, or at least my practice, has always been that validation happens on a to-be-marketed device. So if you have a small tweak, how small does it have to be before it has to be included in the device you validate?
Devon Campbell: I mean, it's a judgment call. It comes down to a benefit-risk analysis, which everything we do in our space should come down to. It also depends on the risk of the product, whether it's a Class 3 product versus a Class 2.
Etienne Nichols: Yeah.
Devon Campbell: Or if it's Class 1, then you can choose to opt into a bunch of this, but you're not necessarily compelled to by the regulations. So I think that really comes down to a case-by-case basis.
Etienne Nichols: You're speaking to a weakness we sometimes have: the justification side of why we do things. We just want to go above and beyond, which we should. We should always go beyond the standard. "Standard" is, I think, a misused word: if you think of it classically, a standard is just a certain expectation that everybody should meet, and we should be able to go beyond that. However, like you said, if there's a tweak that has zero impact on risk, and you've thoroughly analyzed that and can truly prove it, then you have justification for potentially omitting it.
Devon Campbell: Absolutely.
Etienne Nichols: Yeah.
Devon Campbell: Yeah, absolutely. You want to change the color of a knob. I mean, that's a change, but it's an obvious one that's not going to affect anything. Maybe it's an anodized aluminum knob that used to be green and you want to make it red.
Etienne Nichols: Well, our HF people might have a problem with it.
Devon Campbell: Maybe. Well, maybe you're doing it for HF reasons that you learned during verification. Right? Good point.
Etienne Nichols: Yeah, yeah.
Devon Campbell: So actually, that brings up an interesting learning opportunity that we often miss in verification and validation. We're sometimes so blinded by the singular focus of executing the protocols that we have, and then that's it. But let's use the example we talked about earlier, where we get a few people, not the senior or principal engineers or engineering fellows who designed and built it, to test it. So you use some techs, or you use some folks from one of the other programs if you're a larger company. We did this at one of the really big companies I worked at before; we would just swap teams. You can just execute the protocol blindly, get the result, and ask, did I pass or fail? But then you missed an opportunity to ask someone for feedback. You missed the opportunity to say: what did you think about it? What could I have done better? Yes, you followed the protocol to a T, everything worked perfectly fine, it generated the result that I wanted, great, I win. But you missed that chance to ask that person: how could I have done that better? Do you have any other feedback? Any suggestions? Did anything weird happen? You should have a ticketing system in place, or some way to record not just deviations but observations the user experiences while executing the test, even if it doesn't invalidate the test. The test worked fine, but I heard a weird screeching sound the whole time. Someone should be writing that down somewhere. Going back to the idea that nothing is as humbling as putting it in the hands of people who didn't design it and having them use it: you lose that chance if you don't ask the question. It's a good opportunity.
Etienne Nichols: This is slightly out of context, and maybe misusing the word validation, so you're going to have to tell me if you're going to cringe here, but I like to do little-v validation of my user needs anytime I talk to a physician. So if I'm doing my insulin pump, I'm talking to a physician who works with a lot of diabetic patients, and I'll mention what I'm doing: little-v validation of my user needs. It's like, yeah, do you need this? What exactly are your problems? And they validate or invalidate what I'm thinking on that user need. So, I don't know.
Devon Campbell: I'll let it slide.
Etienne Nichols: All right, all right.
Devon Campbell: I did cringe a little. I did cringe a little.
Etienne Nichols: I can see you cringing. What would the. What's the word you would use?
Devon Campbell: Well, if I'm developing those user needs and I'm getting those user stories and I'm talking to people, I'm basically researching. I'm qualifying my user needs. I'm coming up with sources.
Etienne Nichols: Yeah, qualifying. Maybe that's the word I'm looking for.
Devon Campbell: Yeah, I'm qualified.
Etienne Nichols: But this is even after that, you know. Yeah, okay. Good, I like it. Okay. Devon, thank you so much. Fingers crossed this episode works; I'm excited. For those of you who've been listening, we'd love to hear any feedback. If you have any thoughts, whether to go deeper on this, go deeper on that, or cover a completely different topic altogether, let us know. I know Devon has years of extensive experience. I love getting to talk to him, especially with the specific examples he uses. Thank you so much.
Devon Campbell: And I think we had fun. Yeah. Hopefully listeners do, too.
Etienne Nichols: This has been fun. This is a topic that's really fun to talk about, because it's a controversial topic, too. There are a lot of different ways you could approach things, some better than others, sometimes case by case. But it's been really good advice. Devon, thanks so much for being on the podcast. Those of you who've been listening, we'll see you all next time. Take care. Thank you so much for listening. If you enjoyed this episode, can I ask a special favor from you? Can you leave us a review on iTunes? I know most of us have never done that before, but if you're listening on the phone, look at the iTunes app and scroll down to the bottom where it says leave a review. It's actually really easy. Same thing on the computer: just look for that leave a review button. This helps others find us, and it lets us know how we're doing. Also, I'd personally love to hear from you on LinkedIn. Reach out to me; I read and respond to every message, because hearing your feedback is the only way I'm going to get better. Thanks again for listening, and we'll see you next time.