#382: Design of Experiments - How and When to Use DOE
Episode 382 • 12th September 2024 • Global Medical Device Podcast powered by Greenlight Guru • Greenlight Guru + Medical Device Entrepreneurs
Duration: 00:42:17


Shownotes

In this episode, Etienne Nichols interviews Perry Parendo, an expert in Design of Experiments (DOE), about the practical application of DOE in medical device development.

They discuss how DOE can be used to better understand systems, reduce risk, and solve complex problems, especially in R&D and manufacturing processes.

Perry shares insights from his extensive career, offering actionable strategies to simplify complex variables, avoid common pitfalls, and ensure a more effective and efficient development process.

Key Timestamps:

  • [00:02] – Introduction to Perry Parendo and his background in DOE
  • [05:50] – What is DOE? Perry’s simple, non-technical definition
  • [12:00] – Common problems DOE solves and its application in R&D
  • [22:30] – Risk management and DOE’s role in reducing uncertainty
  • [35:20] – Using DOE in manufacturing processes and real-world examples
  • [48:10] – Common pitfalls and best practices when using DOE

Key Quotes:

  • Perry Parendo: “Design of Experiments is a tool to assist in understanding a system. It’s not just a test plan; it’s a way to create structure and strategy in how you approach testing.”
  • Etienne Nichols: “The life of an engineer really happens in that space between input and output—there’s so much to dial in, and that’s where tools like DOE really help.”

Takeaways:

Key Insights on MedTech Trends:

  1. DOE reduces risk: It plays a crucial role in risk management, especially in R&D, where understanding system behaviors early is key to mitigating issues down the line.
  2. Structured problem-solving: DOE provides a data-driven, structured way to isolate variables and pinpoint causes, streamlining troubleshooting and optimization in product development.
  3. Adaptability of DOE: It can be applied to both small and large-scale problems, from manufacturing issues to high-stakes R&D, making it essential for MedTech innovation.

Practical Tips for MedTech Professionals:

  1. Start small with DOE: Focus on fewer variables when beginning to ensure you don’t get overwhelmed. Three to seven variables are typically manageable for early experiments.
  2. Understand the limits of your tests: Avoid putting all variables into one test; break them down to ensure results are meaningful and actionable.
  3. Validate your DOE: Don’t rely solely on DOE results—validate with real-world testing to confirm your findings.

References:

  • Perry Parendo: Founder of Perry Solutions, specializing in product development and process optimization through DOE. Connect with Perry on LinkedIn.
  • Connect with Etienne Nichols on LinkedIn.

MedTech 101: Explainer on DOE:

Design of Experiments (DOE) is a statistical method used to determine how different variables (inputs) affect a process or product outcome (output). It’s widely used in MedTech for optimizing processes and solving manufacturing or product development issues by systematically testing different variables to identify the most influential factors.
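To make that concrete, here is a minimal sketch (not from the episode; the factor names and numbers are hypothetical) of the smallest useful DOE, a two-factor, two-level full factorial, analyzed in a few lines of Python:

```python
import numpy as np

# Hypothetical 2x2 full factorial: two inputs, each at a low (-1) and high (+1) setting.
# Factor names and response values are made up for illustration.
design = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
])

# Hypothetical measured output (e.g., part shrinkage in %) for each of the four runs.
response = np.array([2.1, 1.4, 2.6, 1.3])

# Main effect of a factor = mean response at its high setting minus mean at its low setting.
for name, col in zip(["temperature", "pressure"], design.T):
    effect = response[col == +1].mean() - response[col == -1].mean()
    print(f"Main effect of {name}: {effect:+.2f}")
```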

Questions for the Audience:

  1. Poll: How often do you use DOE in your medical device development process?

Feedback & Sponsors:

  • Feedback: Loved this episode? Leave a review on iTunes! Your feedback helps us improve and reach more MedTech professionals.
  • Contact us at: podcast@greenlight.guru

Sponsor: This episode is brought to you by Greenlight Guru, the only quality management software designed specifically for the medical device industry. Learn how Greenlight Guru can help streamline your product development process at greenlight.guru.

Sponsor: Today's episode is sponsored by Rook Quality Systems. Rook offers Quality as a Service solutions to help medical device companies navigate the complexities of regulatory compliance. Their team of experts ensures your quality processes meet the highest industry standards, giving you peace of mind while you focus on innovation. Whether you're preparing for an audit or need ongoing quality support, trust Rook to keep your compliance on track. Learn more at RookQS.com.

Transcripts

Perry Parendo: Welcome to the Global Medical Device Podcast, where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge, direct from some of the world's leading medical device experts and companies.

Etienne Nichols: Built for MedTech by MedTech, Greenlight Guru's eQMS software helps you manage compliance, quality, and innovation all in one. Learn more at www.greenlight.guru. Are you struggling with quality compliance? Rook Quality Systems offers Quality as a Service support to help you navigate the complexities of the medical device industry. Their experts provide ongoing guidance, ensuring your processes meet the highest standards. Trust Rook to keep your quality on track. Discover more at RookQS.com.

Etienne Nichols: Hey everyone, welcome back to the Global Medical Device Podcast. My name is Etienne Nichols, and with me today is Perry Parendo. He has been with us on the show in the past, where we talked about project management. We covered a lot of ground with that episode, but he has been corresponding with me a little bit via email, and he came up with the idea to talk about design of experiments. He sent me something on DOE, and I don't think we've ever done a podcast on that. I'm getting a little ahead of myself, but how are you doing today, Perry?

Perry Parendo: Doing great, Etienne.

Etienne Nichols: [This portion of the transcript is garbled in the source; Etienne recaps Perry's background, including his early DOE work at General Motors Research Labs.]

Perry Parendo: Yeah, appreciate it. Looking forward to another conversation.

Etienne Nichols: Well, thank you. Maybe to set the stage for this topic: design of experiments can be a big topic, but, just like a DOE itself, it could be very big or it could be very small. I wonder if we could put some boundaries and definitions in place as to what you see a design of experiments actually being, and maybe what some people think is a DOE but actually isn't.

Perry Parendo: Yeah, I think a good way to start is to just provide a definition of it, kind of a non-statistical, non-goofy title for it. The way I've defined DOE for a long time is: design of experiments is a tool to assist in the process of understanding a system. There are a lot of components there, but it's really a tool set. It's not a test plan; it helps you create a test plan. And it assists in understanding; it doesn't give you understanding. It helps you to strategize how to execute a test, and it provides a data format and some interpretation tools. But you as the user really need to dig into that and truly understand it. And it's for whatever system you want to define, so the boundaries can be as big or as small as you want. That's a good part and also a bad part. If you put those boundaries too wide, you're not going to learn anything; it's too noisy. If you put it too narrow, you're not going to solve the problem, because you didn't include enough of the technology that's actually interacting together and influencing the behavior. So it's kind of a short definition with a lot of content behind it.

Etienne Nichols: I like that. And I wonder if you have an example of a good, I don't know, problem, maybe, to use as an illustration where we could take a design of experiments and apply that methodology. And before you come up with that: just before we hit record, you were talking about how it's not a test plan. I think I agreed with you in principle, and I think that definition has helped me a lot, because in my past I thought of a design of experiments as, okay, all the different pieces of criteria, all the permutations that you could potentially test, so that you have all of the interpolation ability with those different pieces of criteria after you experiment with them. But that's really jumping to the test plan part. Or is that accurate?

Perry Parendo: Yeah, I think I agree with you. That's the whole context of what that test plan is going to look like, and the DOE component, the tool component of it, is almost a smaller portion. When I talk about the process of setting up a test, one of the early things is to list every variable that could impact it. We're only going to include a little bit in our DOE matrix, but all the other stuff is part of your test plan. The stuff you go, I don't want to learn about that? Well, you better fix it in your test plan. Otherwise, if you haven't told them, they can play with it anytime they want. And so you use the extra things as part of your control plan within your test plan.

Etienne Nichols: Okay. What kind of problems are solved by using a design of experiments?

Perry Parendo: You know, when we talked last time about project management, we talked about how risk management was a big deal. And to me, risk management is basically a lack of understanding: you don't understand something, or you don't have the confidence that you need to really go through and field this. And design of experiments is a way to gain understanding, so they really feed each other. Not that every high risk has to have a design of experiments behind it to understand it, but it's at least worthy of a conversation about whether that's the right tool to help reduce that risk. From my experience, the problems can be big or small. When people are starting, I really encourage so-called small problems. Sometimes it's a manufacturing issue or a field failure, but it can also be a research and development issue, where you have some competing requirements and you're not sure how to make them work at the same time. Any of those are fair game. And most of my application has personally been in the upstream R&D phase of DOE, which isn't really as well acknowledged in medical. We do it as part of our OQ/PQ maybe, maybe as part of the design characterization. But a lot of times I've heard people say they did a DOE when what they truly did was write a test plan and randomly change some stuff. They at least had a structure from their test plan of what to follow. Go ahead.

Etienne Nichols: Oh, well, I was just curious: what's the difference, in your mind, between an actual design of experiments and what you just described, where maybe they just changed a few characteristics? What's the difference?

Perry Parendo: [This answer is garbled in the source transcript; Perry contrasts changing one factor at a time with a structured DOE, using an injection-molding example in which 80 to 90 candidate variables get narrowed down to testing three to seven at a time.]

Etienne Nichols: That makes sense, and I like that idea a lot. When I'm thinking about that, the pushback I've had in the past, when I want to change multiple things at the same time, is the concern that, well, will we be able to isolate the variable enough to know we've actually solved the problem when we've changed, like you said, three to seven things? That's a lot, in some people's minds; maybe not in some systems. How do we know we've actually solved or identified the root cause of the problem? Is there a trick there?

Perry Parendo: Well, there's two thoughts there. One piece of the DOE language is we talk about sample size, particularly in validation, and we need maybe 15 or 30 or whatever that number might be. In a design of experiments, let's say we did three input variables at two conditions each; two to the third power is eight combinations. Well, we're not doing, say, 15 at each one of those eight, so it's not eight times 15. We actually have what they call in the DOE world an effective sample size, and an eight-run DOE is actually an effective sample size of four; it's not just one. So if you really need 15, in an eight-run DOE you just need four runs at each of those eight conditions to get your effective sample size of 15. Just doing the math in my head real quick. People don't understand that; again, they multiply eight times 15 and go, we can't afford that, so we've got to reduce it from three variables. And that's not true. So, effective sample size: particularly in characterization work, it's not vital to hit your full sample size, and the efficiency and effectiveness of DOE can be highly leveraged. That's a part of it. And I'm losing my math language here, but basically, in matrix algebra, we have the diagonal identity matrix, and that's all a DOE is doing: setting it up so you can isolate which one of those causes it is. Is it one of those first three input variables, or is it some interaction of those things? It's clean: off the diagonal it's a zero coefficient, and you can really just extract that piece. I haven't done matrix algebra forever, but if you've studied it, at least it'll trigger: hey, if those really are all ones on the diagonal, we can cleanly extract that. So we have a pretty clean extraction using the DOE method. But then, as I tell people, we have effective sample size, we have other things going on. Do you really want to just claim, based on the DOE, we got the answer, we're good? Or do you maybe want to verify it? A quick verification: we're predicting this, and if you actually run something at the prediction, you don't have to run a lot. You run a couple, and if it's matching the prediction, I think you're probably good for validation. So that's usually my, I'll call it, development test or learning test, then a verification test, and then you jump to validation. That's how you can be confident that you didn't do something weird with the math or weird with the DOE.
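Perry's matrix-algebra point can be checked directly. The sketch below (an illustration in Python, not code from the episode) builds the model matrix for a two-level, three-factor full factorial and confirms that its columns are mutually orthogonal, which is why each effect can be extracted cleanly, and why each main effect averages four runs against four runs, the "effective sample size of four" he mentions:

```python
import numpy as np
from itertools import product

# 2^3 full factorial: all eight combinations of three factors at -1/+1.
runs = np.array(list(product([-1, 1], repeat=3)))  # shape (8, 3)
A, B, C = runs.T

# Model matrix: intercept, three main effects, three two-way interactions,
# and the three-way interaction.
X = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])

# Orthogonality: X'X is 8 times the identity, so each coefficient is
# estimated independently of the others (the "clean extraction").
print(np.allclose(X.T @ X, 8 * np.eye(8)))  # True

# Effective sample size: each main effect contrasts the 4 high-level runs
# against the 4 low-level runs, not a single run per condition.
print((A == 1).sum(), (A == -1).sum())  # 4 4
```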

Etienne Nichols: I like that you bring that up, because when I talk about design controls, I laugh at myself a little bit. We talk about user needs to inputs: user needs generate those inputs, those inputs then become the design outputs, and now you have your design. But between inputs and outputs is so much engineering. That's really the life of an engineer, in that little statement, inputs to outputs. You talk about all of these different types of tests before you get to verification. That's important stuff right there, to iterate and to really dial in the process.

Perry Parendo: Yeah. And my validation is validating the DOE per se, not necessarily medical device type validation, which is truly patient-centered, patient-focused. But it's breaking it down. What I see many, many times in the characterization work is the engineer or a technician out in the lab playing around, and then the project manager kind of storms in and says, we've got to start validation next week, or process validation next week, or DV next week. You've got to be done. And so you finish your report and you route it for signatures with whatever data and whatever you did. You say, I'm ready, and you're just ready because the schedule said you are. Versus doing a DOE, gaining the understanding, predicting a good level of performance, and then saying, I'm ready. Where's my parts? And if you're doing design work, is the process stable? If you're doing process work, is the design stable, so I can actually execute, run, and have consistent results? Are my test methods validated? And so on. You're asking other people questions, as opposed to being forced into a corner to say you're ready.

Etienne Nichols: One question I might have, if we go back to when we were talking about the three to seven different characteristics out of 80 to 90: you mentioned using your experience and your judgment, and that engineering judgment is pretty valuable, obviously, along with your knowledge of the product and so on. But how do you isolate it down to that three or seven? Do you have thoughts or a piece of advice on how to do that?

Perry Parendo: Yeah. So part of it is how I laid it out: tool, process, design. There can be some logical ways to break it down. There are some, I'll call them theories out there, approaches out there, of separating electrical stuff and mechanical stuff from hydraulic stuff. They're not going to tend to play with each other as much as, within the mechanical world, variables might interact with each other. I think there are a few things that exist to help break it down. So I gave you the example of the injection molded part; now let me give you the other extreme of the impossibly complex system. I was working on a vehicle for the army, literally a 60-ton vehicle, and we were looking at performance of that vehicle, and our DOE was a war game. So what are the design details? We probably had 20-plus high-level things we could simulate, but that's not how I want to do it. So we basically grouped some things at, I'll call it, a high level. At that high level, I think we still defined nine things, but we grouped a couple of things together; it's just being, I'll call it, strategic. If these high-level things aren't important, we don't need to dig down. If they are important, well, now we have a DOE within each one of those areas. I think an error in the DOE approach that people sometimes have is putting every variable and everything into one phase. A lot of times with clients, I'll come in and say, I think we start with, say, the process-phase DOE, or the high-level type DOE. If we find our solution, great. If not, we'll have a phase two, because this is kind of a second priority based on team experience and team opinion, or team decision, because we can't do both simultaneously, and we'll do that second part later. And then the third phase would obviously always be that verification I mentioned: we think we have the answer; now let's replicate it in a non-DOE setting, just to make sure we're not fooling ourselves, because there's still statistical confidence in this, and maybe we did make an error somewhere. Most of the time that verification does match, but there's still that confidence of, we played a lot of games with the DOE, it's a lot of statistics. I just want to have a bill of materials, basically build it, test it, and have it work, without all the funkiness of modifying things simultaneously. It just gives people more confidence, and you get an actual build of the actual design that you have.

Etienne Nichols: Yeah, that makes sense. When I think about the different variables, like you said, most of the time the mechanical world plays a little bit nicer than maybe some other areas, and knowing how close to put things can help you figure out that interpolation and whether or not you can extrapolate outside those bounds. What are some other pieces of advice that you have for people when they're wanting to put these together, or for that engineer who might be in the lab trying to get to an answer and has to justify it? I'll just tell you where this is coming from: when I've tried to do a design of experiments in the past, a lot of times people would kind of slap my hand and say, hey, we don't have time for science experiments. Do you ever see the experimentation go so far that it feels like we're just spinning our wheels and we're not actually getting answers? What's the real root cause of not getting those answers, do you think?

Perry Parendo: I think so. From the general engineer-in-the-lab description: the engineer, the technician, whoever it is, they're always the next test away from solving the problem in their mind. And if they really are that one five-minute test away, then we don't need this formality and anything extra, and I can agree with that. But at some point, as a project manager, you've got to say, okay, how many guesses do you have? If you have three guesses, well, we're going to open up the gate on the tool, then we're going to play with the pressure, and if those two things don't work, we'll do whatever the third thing is. There are always a couple of things they have in their mind, and then as a project manager, before that starts, you go: so if you do those three things and they don't work, then we're going to do the DOE. You've kind of got to set that stage before they start. And I agree with playing around, because if you can guess quickly, that just saves time. So let's make sure we're not adding formality when we haven't even played a little bit first. But you have to really put that line in the sand: if you go off track, or these aren't getting you on track and they really should, then there's something more going on, and let's not just keep spinning that wheel. That's one way to keep it from getting out of control, just delaying and not doing the formal DOE. And then keeping it from getting too big, that science project, really is just making sure you have a focused DOE. Again, whether it's tool, process, or design, let's make sure we've thought about it with a team. I would argue, and I've only broken this rule once, I think, when I was kind of forced by a client, but I basically never design a DOE in a conference room. I make sure that I'm talking with the people on the floor, the test engineer, the technician who's been playing with this forever. Because smart, educated people in a conference room will get 80% of a test; it's that extra 20%, either the missing variable or them telling us those things don't matter, where they really bring the life to it. So I've always walked on the production floor with, here's what we're thinking of, but let me get your feedback. And that feedback always modifies it, because, again, they're so practical, they won't let something that sounds like a science project happen. Those operators will tell you the unvarnished truth: they'll go, that would be stupid. Well, then I think we've gone too far. That's really been a technique for me.

Etienne Nichols: Yeah. When you talked about the project management side, I thought that was really valuable, too. When you say, okay, go ahead and play around to a certain degree, we're going to know when to stop playing around: if you have three variables or four, we know when we're going to start this design of experiments. As a project manager, do you put time in your schedule from the very beginning, saying we're going to have a design of experiments when we get to this phase? How does that look?

Perry Parendo: Yeah, I had a client one time that had seen the success of DOE, and so on this new project they asked me to help them set up a DOE. It was a medical device company doing some tissue-related work. And so I asked them, how many of these prototypes have you built so far? And they said zero. I said, zero? We need to take this tissue and do what you normally do to it for this design. You need to play with it first. And basically you have two weeks, or whatever is a small time frame, but something to get their hands dirty with first. You don't want to give them two months; it's probably too long. You can't give them two days; it's probably too short. You look at the situation, and you can either say, try this, try this, try that, and whenever you're done, let me know, or give them x amount of time to play with it, so long as there's some constraint, some endpoint. And I think the other important point is asking for expectations, what you're expecting on that test. Because if you say, we're going to change the pressure, and it improves 2%, you say, I fixed it, but you need a 10% fix. Yeah, it got better, but that didn't hit the threshold we need. So I think knowing how much improvement is desired is part of that project management, the control that you want to have. You want to control time, but also the performance threshold. And if they played for two weeks and got, you know, 4%, but you really need ten, we know we're not done. But now we've learned enough to know the levels to be testing and what variables to include or not include, probably a little bit better.

Etienne Nichols: Yeah. You mentioned some tools, I think, at one point. What are the tools that you use for setting up a DOE? Are there tools for this?

Perry Parendo: Yeah, there are a lot of software tools out there, and I would argue that for the designs themselves, they're all pretty comparable. Certain tools have a couple more, I'll call them, exotic designs, but I haven't tended to find those the most useful from a pure software point of view. But as a category of tool, the theoretical base, I shouldn't say the workhorse, is something called a full factorial. Those are useful, and they're good knowledge to have. But as soon as you get to a handful of variables, say five, the size of them gets a little bit big and probably unnecessary. So there's something called a fractional factorial, which makes some assumptions, but if you're smart with them, they're not bad assumptions. That's how you can get up to six, seven, eight, and nine input variables without being two to the 9th power, because two to the 9th power is an ungodly number that is not affordable. Those fractional factorials are the true workhorse tools. The full factorial is kind of the theoretical base, but the fractional factorials are almost where I start every single one of my projects when I get called in. But there are advanced tools, and the core two, there's way more beyond this, are Box-Behnken designs and central composite designs. They don't limit you to two conditions; they can be three conditions or five conditions for each input variable, so they get some really funky nonlinear behavior defined. Sometimes I start with them, but usually that's a second phase, dependent upon the experience the team has. I try to look at every situation and not have a total rule of thumb, but those two are the core advanced, nonlinear tools that I'll use, again, as appropriate.
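As a rough illustration of the run-count savings Perry describes (hypothetical code, not any specific software package he named): a full factorial in seven two-level factors needs 2^7 = 128 runs, but a classic 2^(7-4) fractional factorial covers all seven main effects in eight runs by aliasing the extra factors onto interaction columns of a 2^3 base design. The assumption being made, as Perry notes, is that the aliased higher-order interactions are negligible.

```python
import numpy as np
from itertools import product

# Base 2^3 full factorial in factors A, B, C: eight runs.
base = np.array(list(product([-1, 1], repeat=3)))
A, B, C = base.T

# Classic 2^(7-4) generators: alias factors D..G onto interaction columns.
# This buys seven factors in eight runs, at the cost of confounding main
# effects with two-way interactions (a resolution III design).
D, E, F, G = A * B, A * C, B * C, A * B * C

design = np.column_stack([A, B, C, D, E, F, G])
print(design.shape)                                    # (8, 7)
print(np.allclose(design.T @ design, 8 * np.eye(7)))   # columns stay orthogonal
```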

Etienne Nichols: Yeah, and I would guess that you would need to know quite a bit, or at least have some firm understanding of what you're doing, before you just jump into a tool like that. I think about Minitab when I come up with a Capability Sixpack for something, for example; there are a lot of ways to shoot yourself in the foot with some of those. If I were to ask one more question: what are some other issues you've seen people get into when they're using this or trying to set up a design of experiments?

Perry Parendo: Yeah, I think one thing that comes to mind, particularly for something that's already in manufacturing, is taking, basically, the manufacturing criteria as the responses, the outputs, to your DOE. A good example of that: let's say we've had an issue in the field with scratches, which are unacceptable for our product. So scratches are bad; no scratches are good. And we basically do a yes/no workmanship-criteria type assessment for the DOE. Well, that's really hard to know if you're making progress. If there's a scratch, whether it's a millimeter long or a foot long, they're both bad, but they're a different bad. And so I often have to force people, even though it's part of their manufacturing standard or part of their inspection plan: we need to get a new measurement system. I will measure those scratches, and if it's a twelve-inch scratch versus a one-inch scratch, both are bad, but the one inch is better, and it points us in the direction of where the best is. So I use the language: if we know what bad is and we know what good is, we can point to what the best is.
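A tiny numerical sketch of that point (hypothetical data, Python): the same four runs scored pass/fail give no direction at all, while a measured scratch length immediately shows which settings are headed toward "best."

```python
import numpy as np

# Hypothetical scratch lengths (mm) from four DOE runs; any visible scratch fails.
scratch_mm = np.array([300.0, 150.0, 25.0, 80.0])

# Pass/fail view: every run fails, so nothing points anywhere.
print((scratch_mm > 0).astype(int))           # [1 1 1 1]

# Measured view: run index 2 is the least bad, showing the direction to pursue.
print(scratch_mm.argmin(), scratch_mm.min())  # 2 25.0
```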

Etienne Nichols: I really like that.

Perry Parendo: And so it's really making that measurement system numerical instead of pass/fail. And the other thing: when you're looking again upstream in R&D at reliability, we don't have time to do a two-year simulated life, let's say, in our R&D testing. I kind of borrow a clinical word here, but I ask, what surrogate measures can we have that indicate reliability? Let's say it's wear on a round shaft. If setting one reduces the shaft by, and these are extreme numbers, 1 mm, and the next one reduces it by 3 mm, I don't know what the life is going to be, but I know one's better than the other. So if the 3 mm is our baseline, 1 mm is going to be better. I don't know how much better, but at least I have a flavor that I'm going the right way before I run a full-duration reliability-type life test.

Etienne Nichols: How long does it take to set up a DOE?

Perry Parendo: I shouldn't say this, but I can actually do it pretty fast.

Etienne Nichols: I guess getting all the right people in the room, too. I assume that's a big part of it.

Perry Parendo: Yeah. I'll kind of walk through the typical project when I get called in. I'll usually come in and we'll have basically a one-hour meeting with the team, and it's a pure marketing meeting, hearing what the situation is. Then I'll get my scope of, I think we need to focus on, again, tool, process, design, or whatever that breakdown might be, so I can at least scope it and give my hours estimate to them. Then I come back and we dig a little bit deeper into those variables. We go to the shop floor, we look at the tools, we look at existing data, we look at the test method; we dig into that for a certain amount of time. Maybe that's one meeting, maybe there's a little bit extra, but usually it's a fairly short amount of time to get our arms around the problem. Then I'll basically, offline, use the software to create a matrix, which is the DOE. And I don't recommend what to do; I give them usually three to five different matrix options, different approaches, and I'll review the goods and bads of all the different options. And as they basically fling arrows at each of them, usually it's a sixth option that I'll come up with: I think this balances all the needs you just said. So the options really expose more needs. And then we get our matrix together, so it's affordable, covers the technical range, and covers the measurement methods. Then they'll take that matrix and put it into a detailed test plan. I just did the matrix piece; they write the rest of the test plan, execute it, collect data, and feed the data back to me. I'll analyze it either on my own or with them or both. And then: did we get there? Do we need to go through another iteration? But once you've done one iteration, the second iteration is quick to get up to speed.

Etienne Nichols: Yeah.

Perry Parendo: So that's kind of the timeline on those things. And then, of course, if we think we got there, we want to confirm it. I call it a verification test; we call it a confirmation test, too. Can we really get it a second time, or were we just lucky during the DOE because of the material lot or the humidity? There can be some accidental things that happen, so proving it twice is useful.
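A sketch of that predict-then-confirm step with made-up numbers (Python, illustrative only): fit a main-effects model to the DOE results, predict the response at the chosen settings, then compare a couple of confirmation runs against that prediction before moving on to validation.

```python
import numpy as np

# Hypothetical 2^2 DOE in coded units: columns are intercept, factor A, factor B.
X = np.array([[1, -1, -1],
              [1,  1, -1],
              [1, -1,  1],
              [1,  1,  1]])
y = np.array([2.1, 1.4, 2.6, 1.3])   # measured responses for the four runs

# Least-squares fit of a main-effects model: y ~ b0 + b1*A + b2*B.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict at the proposed "best" settings, say A = +1, B = -1.
pred = coef @ np.array([1, 1, -1])
print(f"Predicted response at chosen settings: {pred:.2f}")

# Confirmation runs at those settings (hypothetical); if they land near the
# prediction, the DOE math did not fool us and we can proceed to validation.
confirmation = np.array([1.31, 1.22])
print(f"Gap between confirmation mean and prediction: {abs(confirmation.mean() - pred):.2f}")
```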

Etienne Nichols: And actually, that brings up a good point. If I go to a different example, like verification testing of the actual design itself: I've had different people talk about different environments in which those products should be made. Should the engineers be the ones to build them? Should it be the production line? Should it be three different lots with three different operators, and so on? There's no hard and fast rule; it's, I suppose, product dependent. But do you have any suggestions about that when you're building out that DOE or those products as well?

Perry Parendo: Yeah, that's a great question, and one I get a lot. My answer is always pretty consistent, in the sense of: what are you trying to learn? Because if one of your concern areas is that the new operators seem to be struggling with this, well, then you need a design or tooling that works for an experienced and a brand-new operator. And you've got to be careful how you label them; you don't want to say good operator, bad operator, but new and experienced, or something like that. Sometimes that's part of the DOE. But if it's finicky now but will be automated someday, maybe it's okay to have your best operator do it, from a learning point of view, so that new-person noise isn't in your test data. Because if it is, then it's hard to extract what you're trying to learn.

Etienne Nichols: So, strength in material, or geometry?

Perry Parendo: Yeah, yeah, yeah. Because if that new person is variable like this, you're not going to see the material difference, whereas with a more consistent, more experienced operator, you'll see that material difference. But then in that verification, maybe you bring it back to, here's someone we just hired: they're trained, they're capable, but they've only been here a month. Let's make sure they can do it, because you don't want to release it and not be able to scale. Or if you're too finely tuned to, say, the technicians or operators that have to run it, that's not a very robust process that you set up.

Etienne Nichols: You set up, but that's a really good answer. It depends on what you're looking for, because if it's material or design related versus process related, that all can be part of your doe, I suppose, depending on if that's a variable you want to test for.

Perry Parendo: Absolutely. Yeah.

Etienne Nichols: Any other tips that you have for people? Any other thoughts?

Perry Parendo: You know, one of the things I see sometimes when people think about design of experiments: we've been experimenting since college, we've had classes in experimentation, so what do I really need? There's software that does the heavy lifting; I'll just go through a tutorial and I'll be good, or I'll watch a couple of videos online and I'll be able to do my own design of experiments matrix. I would argue that that's probably a little bit foolish, because there's a lot to these tools, a lot of nuance behind some of the comments I've made here today. So get some real training and some experience with someone watching you. Every class I've ever done in DOE always has a work-related project in parallel with it. You learn some stuff, but then you work on it and you actually use it. And by doing it in a class format, even if it's a small class, which I tend to run, ten to 20 people, I make them all present their projects to each other. Usually there's one in ten that maybe doesn't go so well, but then they'll see nine others that did go well. They see that it really did work, and they can learn some of the nuances from each other, and then they have kind of a core group that's knowledgeable and experienced enough for those basic DOEs to get rolling. They maybe don't understand all the strategy, but they can learn it together as they go, and I think that's key. And I hate to make it sound self-promoting, but I started teaching DOE because I'd heard so many complaints of, I've been trained before but don't know how to use it; I've been trained before and it didn't stick, or it doesn't work. And I go, but it does work, and it shouldn't be confusing. It should be straightforward. We talked about this before we started recording, but traditionally, because I told you my first DOE was on a mainframe, you get programmers or statisticians that know the tools, and if they do it in a vacuum and then just hand it to the engineer or the scientist running it hands-on, there's a disconnect there. I found a lot of the people training early on were the numbers-only people, so they didn't have the practical side: how do you apply it, how do you think about it, how do you interpret it, how do you make project management decisions? They were in love with the Greek symbols. One of the phrases that just kills me is "the elegance of the mathematics." If you ever hear a trainer say "elegance of the mathematics," run away. Don't even ask for your money back; it takes too long. Just run away, because you really need practical training, and I found that those are really, really hard to find. Frankly, it's why I started my business. I've actually had people in class that said, I've been trained two or three times before, and it's never worked, but this class, it does. And I don't know exactly what I do different, other than the project, and other than being, so called, one of you. I've failed at everything. I've been the project manager under the gun, the manufacturing guy under the gun, the design guy under the gun. I've been in these places these people have been; I know what they're dealing with.
And being able to bring that real life, that practicality, to it, that's what I think is essential, because literally, the tool isn't that hard. The math and statistics aren't that hard. Saying statistics isn't hard is maybe an oxymoron, but if you boil it down to what's going on, it's pretty simple. But to do it well does take some thinking and some strategizing; there's an art to it. And you can't get an art off a tutorial or off of YouTube.

Etienne Nichols: Yeah.

Etienne Nichols: Well, cool.

Etienne Nichols: We'll put links in the show notes so that people are able to find you and your previous episode, as well as the presentations you've done, because I really appreciate what you're doing to help the industry. We talked last time, on the project management episode, about how there's a lot the medical device industry could learn from other industries, and I think the same goes for things like design of experiments as well. So, all right. Thank you so much, Perry. I really appreciate you coming on the show today.

Perry Parendo: Thank you as well. I knew we'd have a fun conversation again. We're two for two now.

Etienne Nichols: All right, everybody, thank you so much for listening. We'll see you all next time. Take care.

Etienne Nichols: Thank you so much for listening. If you enjoyed this episode, can I ask a special favor from you? Can you leave us a review on iTunes? I know most of us have never done that before, but if you're listening on the phone, look at the iTunes app, scroll down to the bottom where it says leave a review, and it's actually really easy. Same thing with a computer: look for that leave a review button. This helps others find us, and it lets us know how we're doing. Also, I'd personally love to hear from you on LinkedIn. Reach out to me. I read and respond to every message, because hearing your feedback is the only way I'm going to get better. Thanks again for listening, and we'll see you next time.
