Nectar: Innovation Development
Nectar is an award-winning product development company based in southern California. Our product design and development process is an interdisciplinary approach combining industrial design, user experience design, mechanical engineering, and electrical engineering that ensures product designs are successfully executed into production. We've been helping clients design products that connect to their users and expand their markets for over 30 years. We are firm believers in the team approach to product development, which is why we've created this podcast to share our wisdom with like-minded individuals. Join us as we take a deep dive into common product development topics and share our views.
#8 Strategic Risk Management in MedTech Innovation
In this episode of MedTech Innovation 360, we delve into risk assessment and management within medical device development. Exploring the definitions of risk, we differentiate between risk assessment and risk management, stressing the importance of early identification and mitigation. Key points include the relationship between risks and hazards, the role of Failure Modes and Effects Analysis (FMEA) in identifying potential failures, the significance of ISO 14971 as a guiding standard, and the integration of user and design perspectives in comprehensive risk analysis. Practical examples, such as sidewalk cracks, illustrate risk analysis criteria. We also emphasize considering manufacturing processes and material properties for product safety and functionality. Throughout, we employ real-world analogies and detailed examples to make complex concepts accessible, aiming to help medical device professionals enhance product safety and efficacy through meticulous risk analysis and proactive design adjustments.
Speaker 1 0:12
Welcome to The Product Development Playbook, a series where we explore the world of medical devices and development. This show is your go-to source for news and best practices in the rapidly changing medical device industry. We'll discuss the latest trends and innovations, as well as how to stay ahead of the curve and stay competitive in the medical device space. Our guests are experts in their field, and they'll provide invaluable insights on the topics that matter to you. So get ready to get informed, stay nimble, and join the conversation. Today's episode is the second part of our regulatory series. If you missed the first episode, I would suggest going back and listening to that one. Today, we will focus on risk as it relates to medical devices. I'm your host, Sarah, and today we have the same team members joining us from our first episode: Elaine, James, and Jonathan. Elaine, where would you like to start?
Speaker 2 1:11
Well, one of the reasons we thought talking about risk now would be a good thing to do is that early in the development program, you may have a better opportunity to introduce mitigations to your potential risks: identify them early and either work around them, or perhaps even consider whether or not you can change your requirements. And I like to think in terms of the difference between risk assessment and risk management. You hear a lot in medical devices about risk management, and I prefer to think of that as more of the downstream work you do. Once you think you have your risk under control, you need to continue to manage it, and we can talk about that another time. But risk assessment early in a development program can be extremely useful. First, though, I want to define risk, because I see that thrown around a lot without due process. People will say, oh, that's risky business, and they don't know what they mean by risk. As engineers, we have to have a little more discipline about that. So I work from the perspective of hazard analysis and risk assessment: I want to be able to define the hazards and then determine the risk from that. The definition of risk is a hazard multiplied by its potential. So if I don't have a hazard, I don't have a risk. And yet folks will throw out the word risk even when they haven't sat down to understand whether or not they have a true hazard. Now, some folks would define hazard as the potential to do harm. Other folks will define hazard as the failure of the element to meet its requirement. And really, they're both more or less correct at the design level, because the first thing we want to do is understand if we have a failure mode. Engineers often talk in terms of failure modes and effects analysis; that's one technical way of doing a risk analysis, and it's its own discipline. So when you do a failure modes and effects analysis, you're trying to determine if your product or your output has failed to meet its requirements.
Sometimes, then, we look at the mode of failure. For example, is it a fast crack, or is it a crush? So we want to look at how it is failing, and then we want to look at the effects of that failure. We see a lot in medical device design of folks creating a big spreadsheet, and they're often doing a version of failure modes and effects analysis that varies, and I don't think it's necessary to be a purist with that type of technique. But it's useful to understand, when something is failing, whether or not that actually creates a hazard. So when you look at the pure sense of failure modes and effects analysis, you're trying to understand what the mode of that failure is and what the effect of that failure is. Now, what we do a lot in medical device risk analysis is we just more or less assume that a hazard could do harm, or that a hazard may have a bad effect on the product. And we do that partly because there is an international standard, ISO 14971, that's very applicable to medical device development, and the examples in that standard, and probably best practices through the years, have caused folks to basically look at the potential for the hazard if the element has failed to meet its requirements. So that sort of shortcuts the concept; you just immediately go directly to the question of a hazard if you have a potential design flaw. Now, we can't get into these big spreadsheets here, and sometimes they're two yards long in an Excel spreadsheet. But what we really want to look at is: if I have the potential that a hazard could do harm, how am I going to understand the mitigation of that? Before I get into that, I want to take you to the concept that if I have a crack in the sidewalk, I technically have a failure of the sidewalk to meet its requirement to be smooth and not have a crack. Now, a crack in the sidewalk where granny walks every day may have a higher potential to do harm, and therefore I may have a higher risk factor associated with that sidewalk crack.
If that same sidewalk is out in an abandoned area of town where nobody walks, do I have the same risk as I do when the crack is in the main part of town that granny walks on? So we look at the hazard and the potential to determine our risk. And I think it's really important for medical device developers to be able to go through their input and design requirements and analyze what would happen, potentially, to a patient or the performance of the product if a hazard occurs, and what is the potential, the opportunity, for that hazard to occur. That allows us to assess these risks and try to mitigate them relative to their severity. Now, there's another element that we bring into the picture: the severity of that potential harm. If someone simply has to start the tool over again, that may sound pretty simple, just push the button and start again. But if it happens to be a heart defibrillator and you can't find the button, that could be a very serious hazard. If it's just a massage device and you have to start over, it may not be the same sort of level of risk, of potential for harm. So we do this early in the design development program, and we try to incorporate a number of talents in our group. Sometimes people will try to do this risk analysis all by themselves, and they'll sit down with a piece of paper and start writing. And that might be a good way to get a draft going. But I like to see the people who are likely to use the product, like physicians, and the people who are designing the product discuss that draft and see if they are coming to the same conclusions. Now, sometimes a design-level hazard analysis needs to be backed up with a user-level hazard analysis. We want to look at: can our users follow our instructions? Can our users see when something is going wrong? Will they use it with other equipment?
Well, this was one of the areas where, for a long time, we had more concerns about whether or not turning on our device might fritz the monitor. If our electricity wasn't clean, if we were giving off electromagnetic energy, we might be turning on a simple device but fritzing out the monitor, and someone wouldn't know that a patient had gone into fibrillation. So those are the kinds of well-known potential hazards of interaction between devices, when users may be putting more than one device in the same area. In those cases, we're not just mindful of our own design, but of how our design is being used. A third important risk analysis is process-level risk analysis, when we're looking at how we're manufacturing the product. Where could our hazards lie in trying to make the product properly? Sometimes process analysis will help us be alert to properties of a material. For example, I might buy a material with a certain pellet size; if that same material comes in with a larger pellet size, is it going to go through my extruder in the same way and come out as the same product? So there are many process variables that can affect the performance of the product or the function of the product, and doing a process hazard analysis is a very important thing to do before a product goes to full-on manufacturing. So we've mentioned user hazard analysis, design-level hazard analysis, and process-level hazard analysis. Now, as I mentioned, risk management in the future goes further into reviewing whether you were right about your guesses about potential, for example, or about how users are going to possibly misuse the product. But again, at the design level, we're more focused on what happens if my product fails to meet its requirements statement, and could there be more than one of those potential hazards, which could even happen simultaneously?
And so that's why it's really a good idea to take, as a first step, your early basic requirements list and run through it, even if it's only a very quick draft. Where are the potentials for the product to fail to meet its design requirements? How would that failure manifest itself? And what can we do about it?
Speaker 3 11:45
So I really liked your analogy of the sidewalk crack, and I'd like to kind of stick with that for a minute. Okay, so if we think of the sidewalk crack as a problem that we're trying to solve, well, product development, and design in general, is problem solving. So if we think about this as a project, our team is going to work together to figure out how to solve this problem. Now we have a few options: are we going to patch it up, are we going to tear up the whole sidewalk and replace it, are we going to re-lay concrete, that sort of thing. We're going to look at risk, we're going to look at other issues. So for example, Jonathan, as a human factors engineer, is going to be looking at things like user error: what types of users are there? What's reasonable to expect of the users? What are the cognitive abilities of the users? We're thinking about the environment, like you mentioned before. Is this outside of a nursing home, where you've got a lot of foot traffic? Or is this outside of an abandoned factory, where there's zero foot traffic? Is it well lit at night, or is it not? You know, all these different things. It actually translates pretty well to how we tackle product development, because we're going through the same steps as we're trying to design any product, because ultimately it's just a problem that we're trying to solve. So I think the sidewalk analogy is pretty interesting.
Speaker 2 13:19
Let's use some of your examples there. It's really easy for us to want to jump to the conclusion of how we're going to solve the crack in the sidewalk. You tossed up a couple: patch it, repair the sidewalk, do it again, put some sort of cover over the crack, a secondary surface coating like they do on our highway down here. You know, put down the oil, put the rocks on top of it, and hope nobody notices the crack. And I think when we're looking at solving problems like that, it is very useful to step back and do that second step, the failure mode. Now, again, not everybody will put that into their spreadsheet. But if we consider the mode of failure, why or how it is failing, it might help direct you as to whether or not you need to pull up the whole sidewalk and start again. For example, if the mode of failure is a simple crack that has not changed the physical direction of the sidewalk, it hasn't humped it up, it hasn't added a very large barrier, you might say, oh, well, I can just put a smoothing surface over that and leave it be. On the other hand, if in your analysis you realize that the sidewalk didn't have a good underlayment in the first place, and no matter what you do it's going to fail again, I would say, okay, one of the modes was simply expansion and contraction. We get a lot of that up here in the frozen north. Or I might say the mode was that its underlayment wasn't properly laid down in the first place, with a high potential that it's going to happen again. And so that brings in the question of potential: is this highly likely to occur, or highly unlikely to occur? And one of the things we do with this tool, it's not always necessary, but we tend to score the potential, or the frequency, of when that might happen. And we'll give it a high number, five, let's say, if I believe it's definitely going to happen again. Or I might give it a one.
If I think, oh, somebody backed over it while it was curing, it's not likely to happen again, that crack was a one-off, not going to happen again. So this helps me score my hazard's potential: how often is this likely to happen again, or to happen at all? And then the other thing that can help you judge how to fix the problem is the severity. And like you said, okay, a crack in front of granny's nursing home might be scored with a higher severity if you believe that granny is more likely to break her ankle. Or you might say, oh, it's such a tiny crack, nobody is going to get hurt by this. And it's hard sometimes to separate potential and severity of the hazard, because you need to try to be analytical about what that hazard would be if it were to occur, and judge that independent of the potential for it to occur. Now, we also, not everybody does this, but you can also factor in detectability. You brought up the question of bad lighting: okay, I've got a crack in the sidewalk, and it's downtown where it's practically daylight down there in the middle of the night, or it's out in the suburbs where everybody has really dim lighting at night and you can't see it very well. So if I have a higher chance of being able to detect the hazard before it can cause harm, I'm going to give that a low number. If I have poor detectability, I might give that a five. Now, why do I care about these number scores? I'm going to multiply the potential for it to occur times the severity of the harm times the detectability of the harm. Now, none of these numbers can be zero, obviously, so the lowest score we have for anything is going to be a one. And just so our numbers don't get crazy, we're going to create a little scale of one through five. And what we see is most folks will put a word statement against those numbers. So I'm going to give it a one for good detectability; the ones are the inverse.
So I'm going to give it a one if I can see it most of the time: if it's there, I'm going to see it. And I'm going to give it a five if it could be there and I may not see it at all. Okay. Severity is going to get a high score if the hazard means granny could break her ankle, or a low severity if people are just going to pass right over it and not even notice it. The potential to do harm is going to go back to that factor of where it is: what's the frequency of people walking across it, what's the event potential in the first place? If it's downtown and a lot of people are walking on it, I'm going to give it a five. If it's out in a neighborhood alongside an abandoned factory, I'm going to give it a one, because its potential to actually do harm is low. Now I take what's known as a risk priority number, or RPN. It's a multiplication of potential, severity, and detectability. And that RPN could be a pretty high number. So I'm going to take those higher numbers and focus on them first. It's not that it's to the exclusion of others, but I like to see: where am I going to put my energy early on? Is there something I need to know about the performance of this product and how it can represent potential harm to the patient or the user, or fail to meet its requirements? And how easily might I detect a problem if it's even there? So a high RPN is basically saying it's a very significant risk that I want to put my energy toward and try to take care of early. And that's where I'm also going to put my engineering hat on and try to see the best way to either improve detectability or reduce potential. Now, severity, as we score it traditionally in medical devices, we basically say we are not going to be able to change the severity of the hazard. A broken leg is a broken leg; I'm not going to change that severity if I have the potential to cause a broken leg. So no matter what I do, I'm not going to make a broken leg easier.
So that's another reason, when we're looking at design mitigation, we want to also tackle any hazard that has a very high severity score, because those are the ones that could do the most harm to our users or patients. So that's how we try to use our resources best. And it allows you, hopefully, to focus in on the most significant issues first with your design team. Yeah, this helps a lot.
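The RPN scoring described above can be sketched as a small helper. This is an illustrative sketch only: the 1-to-5 scales and the sidewalk scores below are example values chosen for this analogy, not numbers prescribed by ISO 14971 or any FMEA standard.

```python
# Illustrative sketch of the RPN scoring described above.
# The 1-5 scales and the example scores are assumptions for the
# sidewalk analogy, not values from ISO 14971 or an FMEA standard.

def rpn(occurrence: int, severity: int, detectability: int) -> int:
    """Risk priority number: occurrence x severity x detectability.

    Each factor is scored 1 (best case) to 5 (worst case); none can
    be zero, so the minimum possible RPN is 1.
    """
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each factor must be scored 1-5")
    return occurrence * severity * detectability

# Sidewalk-crack example: downtown crack that granny walks over
# daily, poorly lit at night (hard to detect before it causes harm).
downtown = rpn(occurrence=5, severity=4, detectability=5)   # 100

# Same crack by an abandoned factory: almost no foot traffic.
abandoned = rpn(occurrence=1, severity=4, detectability=5)  # 20

# Put engineering effort into the highest RPNs (and the highest
# severities) first.
ranked = sorted([("downtown", downtown), ("abandoned", abandoned)],
                key=lambda item: item[1], reverse=True)
print(ranked)  # [('downtown', 100), ('abandoned', 20)]
```

Note the asymmetry the speaker describes: mitigation usually attacks occurrence or detectability, because the severity of a given harm (a broken ankle is a broken ankle) generally cannot be reduced by scoring.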
Speaker 4 20:57
It would also be great if you could translate the sidewalk crack analogy into more of a medical-based project, please. I really love this analogy; it breaks down into layman's terms exactly where the risks are happening, where the hazards are happening, and essentially how things can break down. But I believe it would be a great translation to figure out how this analogy can actually play into the medical world with a medical product.
Speaker 2 21:23
I've got a couple of examples here. If I have a performance requirement that says my device needs to be waterproof, then we have a possible hazard. In this case, since the device is electrical, the user might be shocked if it's not waterproof, and I need it to be waterproof precisely because my device is electrical. So a possible cause might be an insulation breakdown. This could occur because I have bad manufacturing; it could occur because the device is used past its life expectancy, or because I bought bad insulation material. Any number of those things are possible causes, and by the way, we can break those down and score them individually, but for now I'm just going to go through it one time. So I have a hazard: a user could be shocked. I have a possible cause: an insulation breakdown. And my control, or mitigation, might be that before I let the product go out the door, and at end of life, after I've done some aging studies on it, I have to pass my international electrical safety standard, IEC 60601-1. I have to show that I can pass that test. In this case, I haven't yet done it, so I'm going to score my likelihood of occurrence as a three. The severity of the hazard, I could make a four or five. But the detection of the hazard, I'm going to say is a one, because if that insulation has broken down or isn't adequate, I'm going to see it, or I'm definitely going to know it because somebody just got shocked. Now that puts me at a multiplied value of 12. Over on the side, I may have determined that, you know, that's not a score that would necessarily stop the design. But if I haven't passed my testing, with a severity of hazard of four, I'm not going to let that product go forward; I have to do testing. And I'm going to put that test report in my dossier or design history file. And if it's a submission to FDA, that testing might go into a 510(k) application.
So I can go across with this electric device and ask other questions, such as: could this electric device fail to operate below zero? Or could this electric device not be operable at a very low temperature because the battery will fail, like electric cars, anyone? But this is an example of how we have a crack in the sidewalk, except in this case it happens to be a crack in insulation, and it can lead to various failure modes. I have many options on how to test for it, detect it, or change the material to improve its resistance and performance in the environment that I'm working in.
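The waterproofing example above can be written as one row of a simple FMEA-style worksheet. The field names are illustrative assumptions, not terms mandated by ISO 14971; the scores are the ones given in the example.

```python
# One row of a simple FMEA-style worksheet for the insulation example
# above. Field names and layout are illustrative assumptions; the
# scores (3, 4, 1) are the ones from the spoken example.

fmea_row = {
    "requirement": "Device must be waterproof (device is electrical)",
    "hazard": "User could be shocked",
    "possible_cause": ("Insulation breakdown: bad manufacturing, use "
                       "past life expectancy, or bad insulation material"),
    "mitigation": ("Pass IEC 60601-1 electrical safety testing, including "
                   "after aging studies; file the test report"),
    "occurrence": 3,     # testing not yet performed
    "severity": 4,       # electric shock
    "detectability": 1,  # breakdown would be seen, or someone gets shocked
}

fmea_row["rpn"] = (fmea_row["occurrence"]
                   * fmea_row["severity"]
                   * fmea_row["detectability"])
print(fmea_row["rpn"])  # 12
```

An RPN of 12 alone might not stop the design, but as the speaker notes, a severity of four means the product does not go forward until the testing passes, regardless of the multiplied score.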
Speaker 5 24:29
Now, that definitely did put it in the medical scope, so that helped. Yeah.
Speaker 3 24:34
I was just going to ask: have you ever been in a situation where, when you get to, like, the verification phase, and we haven't talked much about verification yet, but at some point along the course of the project, have you ever gotten to a point where it's like, okay, this has become too high of a risk? We need to change the requirements, or we need to soften the requirements so they're not so rigid, because this is really preventing us from making a great product, or a safe product, or something that we can verify works.
Speaker 2 25:09
Let me give you an example. It may not be exactly like that, but lots of times we look at what's called intended use, or the intended user. And when we're working with a professional who's using our device, we may be able to score, for example, the potential occurrence of the hazard, or the detectability of the hazard, differently based on who the intended user is. So a professional who's had certain kinds of training might detect this particular hazard much more readily. If it's someone at home using the device, they may not have the experience about how the device should perform, and may not realize that you have a problem, that you have a failure mode in front of you. And this is one way that we can sometimes mitigate the risk: we're not necessarily changing the requirement of the performance, but we may be mitigating the risk by either warning our user or making sure we're only selling to trained professionals, and we're not putting this out at the pharmacy counter where you can pick it up, check it out, and never have any training on it.
Unknown Speaker 26:38
Yeah, that makes a lot of sense. But sometimes
Speaker 2 26:41
we do have to change the requirement to make the device work in the environment. Now, there are some examples where devices have worked perfectly well. I mentioned the electrical interference: the devices may work perfectly well, but then in the presence of another device, they may not work as well. And electrical interference is one of the great examples of that. So in that case, you might have to, I'll call it harden, set up better resistance, insulation, or whatever around your device so that it is not susceptible to other stray magnetic fields or electrical fields. We have good standards in medical products now, referred to as EMI and EMC: electromagnetic interference and electromagnetic compatibility. So when we put out a device, we're expected to have a device that's clean in the environment, but also is resistant to certain energies in the environment. So we see less of this situation where you turn on a device and it fritzes all the other products within four feet. So that's where, if you would like to say, well, I don't need to meet that requirement, I'm going to just skip over that one, again, I would look at the question: is this being used at home, or is this being used in a highly energetic environment, like an OR? If it's going to go into an OR, I'd better have darn good compatibility at the electrical level, so I'm not fritzing other very important devices. If I'm at home and the TV goes off, that's not so big a problem. So, what else have you got?
Speaker 3 28:28
So what I was just going to mention is that sometimes this is a little bit less on the risk side, I think, and more on the product feasibility side, but to me they go hand in hand. Sometimes what happens is clients come to us, and they're perhaps overly optimistic about what their goals are for the project. I can think of several examples of that. We get into the project and we enter the research phase, and we start doing feasibility studies, and we determine that, okay, yes, it is possible to accomplish exactly what you're asking, but it's going to cost this much in development effort, and your end product is going to cost this much. And then they kind of come to terms and realize, okay, if I want to hit this spec. Let's say, for example, there's a sensor that's detecting something, maybe it's a camera detecting some kind of visual anomaly on something. And let's say, okay, if you really want to be able to detect down to two thousandths of an inch, you're going to need a really powerful camera. Those cameras are, like, two grand or more. Can you sell your product for at least that? And they say, oh no, we wanted to sell this thing for $500 or something like that. Then that's a scenario where you would have to relax your requirements a bit and say, okay, well, it's kind of a give and take at that point. You either have to relax the requirement, or you have to increase the retail price of the product. So those are the kinds of things that we encounter. Typically, early on, a lot of that gets sussed out in the research phase when we're doing these feasibility studies. But I'm just thinking of whether there have been any scenarios where there's been a hard requirement from the beginning, and it goes all the way through research and concept design, all the way through to when you're in V&V.
And it's like, this is too hard; this is going to take another six months of testing and software bug fixing, or whatever the scenario is that you have to go through to correct the issue. It's not worth it; let's go back and just relax that requirement. And it's different if it's a safety thing. If it's a safety thing, I totally get it, there's no compromise, really; you can't put out something that's unsafe. But I think there's more flexibility when it comes to other features.
Speaker 2 30:57
That's another aspect of risk analysis, from an engineering point of view, that we often don't get into like we probably ought to, and that's really business risk. I can produce the device, but the business risk is that it's not meeting the customer's requirement. And although I can make one that almost meets the customer's requirements, am I sort of providing my competition with a roadmap of how to make it better? So we have to be mindful sometimes that, oh yeah, we can compromise on that performance, but gee whiz, we just showed them how to make, maybe, a desktop computer, and now they're coming back and saying, okay, I'm going to make my desktop computer faster than yours. So we do have a tendency sometimes to build on the performance of the other guy, or even of our own products. But I would say, early in the game, if you've set a requirement and you find out that the technology won't allow you to meet that requirement, I think you've got some soul searching to do. And that's not necessarily a hazard of the design; that's a hazard at the business level: do I really want to make a claim that I can't meet?
Speaker 3 32:14
No, and that's a really good point. So at Nectar, we call that an MRD, a market requirements document. And sometimes customers come to us and they have a 50-page MRD, and they've got it all specced out, and they know exactly what they want to do, and it's usually pretty accurate. Sometimes they come to us and say, oh, I need your help with that. And Jonathan, me, and other people on the team will work on that, and we'll put together something that's reasonable. But kind of the running joke at Nectar is, if you're talking to a customer, you know, and we get into it, and we're working on their MRD, and we're doing this feasibility study, and we're saying, okay, we need to talk about priorities here. Let's take the optical sensor, for example. Do you want to have this optical sensing capability that can detect to ten thousandths of an inch, or do you want to be able to sell this for less than $500? They're going to say yes.
Speaker 3 33:13
It's always yes. And also this, and that. So that's a big part of the process that we haven't talked too much about, because we tend to separate the engineering, more safety-focused requirements from the softer, customer market requirements. And Jonathan and I are the ones that are usually working on the market requirements documents and figuring that out. There's a project right now where someone's coming to us with an idea, and we're in the research phase, and we've started talking to users and doing our own investigations about what might make sense for how to tackle the project. And, I mean, where are we now, Jonathan? You guys have, what, five or six different, I'm talking about the mobile one, where are we now on that one? I don't know how many different scenarios you have.
Speaker 4 34:08
The funny thing with that project is I'm currently working with the user experience team to get more into who our niche user is, but at the same time kind of taking a step back, because our client has all these extravagant ideas, but we are still trying to figure out the who, what, when, where, and why of this product, and how it's going to be used. At the moment, I believe we've created about three broad categories, and some of our categories can even be broken off into a few more niche categories. But we're still trying to figure out exactly what the client wants, so we are trying to plan out a meeting with them to kind of align our research with their views.
Speaker 2 34:51
One of the things I see a lot in that kind of dilemma is trying to help them find their minimum viable product. And I'm sure you've heard terms like that, but it's basically the lowest common denominator that can get the product on the market. And that might sound compromising, and perhaps it is. But when I'm faced with that, I'm usually looking at: can I build on top of my minimum viable product? So recently I was involved in a project where someone wanted a particular, I'll call it a bag, to be reusable. And I said, well, let's make it first and make it work once. Then, while you're getting FDA to agree that your product is safe and works as intended, you can be working in parallel on seeing how many times you can reuse the outer structure, the bag. How many times can you reuse it before it no longer performs as intended? Don't set your project up to have five or ten uses when you don't even know that the first one will work. So oftentimes that enables me to focus the team on finding that minimum requirement and spelling it out. Because it's sometimes really hard to go from, oh, I wanted it to work five times, to, I want to know what environment it's going to be working in one time. Is it going to be really hot? Is there going to be a lot of pressure? Is there going to be wear? What's that going to look like? Let's do that once; then we can come back and do it five more times. And that's not really a risk analysis in the classic sense of the ISO 14971 standard; it's more of a classic business risk, where I want to know that I can do it once before I spend a lot of money to prove I can do it five times, when my user might be perfectly satisfied to use it once and throw it away. And when I get competition, when I want to reduce the price to the user, that's when I can come in and throw in that economy of use, that they can reuse it again. But first I prove I can meet that initial minimum requirement that the customer has to have.
So I'm kind of reducing my business risk. And in a sense, my engineering risk because I'm instead of going all the way to I need 10 uses out of this product, let's make sure we know what the environment is, is going to be working in even one time.
Speaker 3 37:35
You mentioned MVP, or minimum viable product. That's kind of a trigger word for our design team. A lot of times on projects, we start out and we discover all these incredible opportunities, all these amazing things and holes in the market and ways you could make your product great. We propose this. And sometimes we get to do it, and it's awesome. But other times it's like, well, you know, I think it'd be better if we just do a minimum viable product and cut all those cool features, just do something basic, get it on the market, generate some revenue, and then do what we really want to do later. And sometimes you've got to do it for the economics and for the business. But it's a lot more fun for the design team to do something really creative, push the boundaries, and see what can be done.
Speaker 2 38:27
Well, another practical aspect of that, one that goes hand in glove with it, is really understanding what the user need is. And this goes to the question of, oh, the designer wants five uses. But then I go and ask a very simple question: where is the user going to put this gizmo in between use two and use three? Is their environment even set up for them to store this product for the next use? If their environment is on the move, if they don't have good storage, if they don't have control over their storage, then a disposable device is meeting the unmet need, whereas a reusable product creates a new need, and they don't have the luxury of recycling that product. You might think it would be good for the environment, but hey, that means the hospital has got to build another room to store your product. And so that's why we take it back to that user need early and make sure it's built into our input requirements, and we're not ignoring what the user's environment is, because otherwise we can actually create a hazard. While I'm trying to store my container for four more uses, what if something crawls into the container while I'm not looking, and now I've got a bug inside? I've got to know that there could be a bug inside, and I've got to clean the bug out before I can use it. I've just introduced a new hazard by thinking I'm so clever, by forcing reuse on a customer that's not ready for it. Yeah.
Oh, and I think that's one of the reasons we're very lucky in medical technology that we have a lot of guidance documents and standards. We have a lot of experience we're standing on, including the experience of how to describe the work environment I'm working in. Am I taking my product home and using it with my little daughter? Can I get replacement batteries if it's a home device? One of the things I started using during the COVID concerns was a pulse oximeter I bought online. It was 40 bucks, because I had read that if your oxygen level starts to decrease, you're at greater risk, and it might be more likely COVID than just a cold. So I was literally monitoring the family with my little pulse oximeter, and it was 40 bucks. Otherwise, you'd have to go into the doctor, say you're sick, and then they'll put a pulse oximeter on you, and now you're out $200, and you didn't even know you were sick enough to need to go to the doctor. So little things like our environment, the price, the user, the expectations for longevity, and how it works with other devices. What's its intended use, but also what's the environment it's working in, including other devices? All of that is important for us to at least take a survey of early in the development phase. We may not have it all worked up; we may not have our final risk analysis document sent in to the FDA. But the earlier you can march through some of those requirements, your ability to meet those requirements, and the hazard that could occur if you fail to meet a requirement, early, early, early, the sooner you can start to move the chessmen around the board and give yourself more opportunity to either minimize some of the requirements, specify your product differently, like what components you're going to buy, harden it, or even put labeling limits on the product.
Don't use this in the bathtub, don't use this in the freezer, don't take this out in the yard and let your dog play with it. There's labeling and cautions and training that, if we can learn some of these hazards early enough, we can build right into the labeling and the structure, the hardening of the product.
Speaker 3 42:46
Wow. Well, I think that's a great note to end this podcast on.
Speaker 2 42:50
Okay. Now you call me back if you want some more good stuff.
Speaker 3 42:55
We're gonna try to do one more of these. So we'll definitely be checking up with you. Thanks. Yeah, this was awesome. Thanks.
Unknown Speaker 43:03
Thanks so much. Fun working with you guys.
Unknown Speaker 43:06
Yeah.
Unknown Speaker 43:06
Thank you.
Speaker 1 43:07
Yeah, thank you so much, Elaine. This has been very informative for me, and I'm sure for our listeners out there as well. Talk to you next time. All right, thanks. Bye-bye. And that wraps up today's episode of the Product Development Playbook. Remember, staying informed, collaborating with experts, and embracing human-centered design principles are essential for success in this industry. Keep innovating, and we'll be back with more insightful discussions in future episodes. Thank you for tuning in.