Quality during Design

Harnessing Team Insights for Risk Analysis using Probabilities

March 08, 2024 Dianna Deeney Season 5 Episode 3

Navigating the common roadblocks of team consensus on severity ratings during FMEA or hazard analysis can be challenging. But with the right strategies, your team can capture uncertainty and avoid the pitfall of too many conservative estimates that skew prioritization. Learn how a probability mass function can revolutionize your risk assessment, ensuring a smoother, more accurate process for all stakeholders involved.

Your voice matters to us, and it's your feedback that fuels the journey of 'Quality During Design'. Share your thoughts on PodChaser or your podcast player of choice. Your reviews do more than just support us—they help others discover the insights we offer and contribute to our collective success. Tune in, engage, and be a part of something bigger with Quality during Design, where every listener's perspective is valued and every episode is a step towards mastering the art of using quality during product design.

Other episodes you might like:
Use FMEA to Choose Critical Design Features

Reliability Engineering during Design, with Adam Bahret (A Chat with Cross-Functional Experts)

Give us a Rating & Review

**NEW COURSE**
FMEA in Practice: from Plan to Risk-Based Decision Making is enrolling students now. Visit the course page for more information and to sign up today! Click Here

**FREE RESOURCES**
Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com.

About me
Dianna Deeney helps product designers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods.

She consults with businesses to incorporate quality within their product development processes. She also coaches individuals in using Quality during Design for their projects.

She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe, made possible by using Quality during Design engineering and product development.

Speaker 1:

Hello there. If you've been doing FMEA or any kind of risk analysis for your product design project, you've probably had some discussions, long discussions, and maybe some problems and obstacles in getting your team to just agree about what the severity should be. How can we approach rating the severity of potential failures in a way that makes sense and helps our team move forward toward action? I have some ideas. Let's talk about it after the brief introduction. Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. I'm your host, Dianna Deeney. I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. I consult with businesses and coach individuals on how to apply quality during design to their processes. Listen in and then join us. Visit qualityduringdesign.com.

Speaker 1:

Risk analyses can be a big benefit to a project team, letting them make decisions based on risk. It could be a bottom-up approach if you're using FMEA, or a top-down approach if you're using hazard analysis. In both situations, we're examining what sort of bad things could happen and then applying a rating, trying to quantify how bad it could be. Whether we qualify it with a rating from 1 to 10 or actually quantify it with measures of time or money, our teams can get stuck. It usually goes like this: one person thinks the most likely situation is a seven (let's say, for our discussion, we're using a scale of one to 10), and someone else says it could get bad, as high as a nine. Wanting to be conservative with your risk analysis, your team decides to take the conservative approach and list the highest rating. Now you have a nine, and you end up with a risk analysis that has 10 line items at a severity level of nine. How do you prioritize with that? I know there are other measures in these risk analyses, such as occurrence, sometimes detection, and other summary and priority numbers that we can use to help us prioritize for action, but one of the metrics we look at all on its own is the severity of something.

Speaker 1:

So now we have an analysis that doesn't discriminate very well between the different bad things that could happen. We have a lot of bad things that could be very bad, a team that isn't so confident in how we are describing the risks, and we still need to prioritize. What we're failing to capture is our team's level of uncertainty about the risk that we're evaluating. We have some information that lets us say this is a potentially bad thing that could happen. We're basing that on something: our historical knowledge, a comparison with something else and the bad things that happen with that other thing, or field data. We are making an assessment about how bad things could be and assigning a measure to it based on our knowledge. But we're still a little bit uncertain, like we started out with: most likely it's going to be a severity of seven, but worst case it could be a nine.

Speaker 1:

A way to capture this is using probabilities, and we're going to assign them in a way that doesn't require us to collect a bunch of data and do calculations. We can use the knowledge and information we already have. Here's how this would look in the scenario we've been talking about, where we're trying to quantify how bad something is using a rating scale of one to 10: we can assign probabilities using a discrete probability mass function. I know you're familiar with the famous bell curve of probabilities. Well, a probability mass function uses discrete information. In our example, we're going to map our severity rating scale on the x-axis, and on the y-axis will be the probabilities that we assign to it. With these 10 different line items that are all a severity nine, and that we're not quite sure how to prioritize, we can go back with our team, revisit some of that uncertainty, and ask them about the worst case we recorded: for this kind of effect, we said it was a nine.

Speaker 1:

With this effect being a severity of nine, what's the probability that it would get that bad? You can assign a probability like you would on a bar chart: you would draw a line up to maybe a probability of 20%. The team had also talked about the severity rating being a seven. It's more likely to be a seven, but we chose not to list that in our FMEA because we were being conservative and wanted to list the worst case. We'll go back to the team and ask them: what is the likelihood that this bad event is going to be associated with a severity of seven? In our example, they might say there's a 60% chance that the event is going to be related to a severity of seven. So at that severity seven on our x-axis, we're going to draw a bar chart line up to 60%.
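To make the idea concrete, here is a minimal sketch in Python of one effect's severity captured as a discrete probability mass function. The 20% at severity nine and 60% at severity seven echo the example above; the remaining 20% at severity five is a hypothetical filler value so the probabilities sum to one.

```python
# One failure effect's severity captured as a discrete probability mass
# function (PMF) instead of a single point estimate.
# The 60% at severity 7 and 20% at severity 9 come from the episode's
# example; the 20% at severity 5 is a hypothetical filler value so the
# probabilities sum to 1.
severity_pmf = {5: 0.20, 7: 0.60, 9: 0.20}

# Sanity check: a PMF's probabilities must sum to 1 (within rounding).
assert abs(sum(severity_pmf.values()) - 1.0) < 1e-9

# A quick text "bar chart" of the PMF, severity scale 1 to 10 on the x-axis.
for severity in range(1, 11):
    p = severity_pmf.get(severity, 0.0)
    bar = "#" * int(p * 50)
    print(f"severity {severity:2d} | {bar:<30} {p:.0%}")
```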

Speaker 1:

Now you can start to see that your team is assigning probabilities to the bad events, to their quantification of these bad events that could happen. It's no longer just a point estimate; it's an estimate of a probability distribution. The team is assigning probabilities associated with each severity level. What does this do for us? Quite a few things, actually. For one, the team sees that the information they know about this particular event is captured more accurately and more fully. Instead of just trying to pick a point estimate, they are giving you all the information they have, not only how bad it could be but also the likelihood that it is that bad. And you may be surprised at how much more comfortable your team is in making that risk estimation, because now they don't feel like they're being pigeonholed into giving just one answer. They can give you a more complete answer. When we do this for all 10 of our items that are a severity of nine, the ones we were having trouble prioritizing, we have more information against which to prioritize them. In our first example, we chose a severity of nine and there's a 20% chance it could get that bad. In other scenarios, we picked a severity nine as worst case, but maybe there's only a 5% likelihood that it would happen. We can use that information, along with the rest of our risk analysis information, to help us make a decision.
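As an illustration of how those probabilities help with prioritization, here is a small sketch that ranks items sharing the same worst-case severity. The item names and most of the probabilities are hypothetical; only the 20% and 5% worst-case figures echo the example above.

```python
# Prioritizing line items that all share a worst-case severity of 9 by using
# the team's probabilities. Item names and most probabilities are
# hypothetical; the 20% and 5% worst-case figures echo the example above.
items = {
    "seal leak":        {9: 0.20, 7: 0.60, 5: 0.20},
    "connector damage": {9: 0.05, 7: 0.35, 5: 0.60},
    "display lockup":   {9: 0.10, 6: 0.50, 4: 0.40},
}

def expected_severity(pmf):
    """Probability-weighted average severity, one way to compare items."""
    return sum(sev * p for sev, p in pmf.items())

# Rank by expected severity; ranking by P(severity == 9) alone also works.
ranked = sorted(items.items(), key=lambda kv: expected_severity(kv[1]), reverse=True)
for name, pmf in ranked:
    print(f"{name:18s} P(worst case 9) = {pmf.get(9, 0.0):.0%}   "
          f"expected severity = {expected_severity(pmf):.1f}")
```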

Speaker 1:

This kind of thing doesn't fit neatly into an FMEA table, so you may have to create an addendum: add a column to your analysis with an asterisk or a footnote and include a quick, short diagram of the probability mass function that's associated with that risk. You are allowed to do this sort of thing with your risk analyses. In fact, people in other industries highly encourage it. Douglas Hubbard authored The Failure of Risk Management, and here's what he says: we use probabilities because we are uncertain, not in spite of being uncertain. You can use this method to help your team work through these kinds of assignments: the assignment of a probability against a bad event.
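One hedged sketch of what such an addendum row might look like, assuming a hypothetical item and column names: the severity column keeps the conservative worst-case number, and a footnoted column stores the team's probability mass function in compact form alongside the diagram.

```python
# A hypothetical FMEA addendum row: keep the conservative point value in the
# severity column and store the team's PMF as a compact, footnoted string.
import csv
import io

pmf = {9: 0.20, 7: 0.60, 5: 0.20}
row = {
    "item": "seal leak",  # hypothetical line item
    "severity (worst case)": 9,
    "severity PMF*": "; ".join(f"{s}: {p:.0%}" for s, p in pmf.items()),
}

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)
print(buffer.getvalue())
print("* see attached probability mass function diagram for this item")
```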

Speaker 1:

I know a lot of our risk analyses also include an occurrence rating, and I want to say that this occurrence rating is not the same as the probability of a bad event. The occurrence rating in our risk analyses is associated with the likelihood that the failure mode and its associated cause are present in the item being analyzed. Assigning a probability for a severity rating is different than assigning an occurrence rating for the failure mode and cause combination. Just as a reference, I got that definition of occurrence from Carl Carlson's book Effective FMEAs. So what's today's insight to action? Try assigning probabilities to severities to get over this obstacle of having to choose a single severity rating to add to our risk analyses. Whether we use time, money, or a qualitative rating like a scale from one to 10, draw out a probability mass function and see how your team responds to it. You will likely be surprised at how much more comfortable your team is in helping you make these risk estimations, and it more clearly defines what they understand about the problem, so you can prioritize.

Speaker 1:

If you like the Quality During Design podcast, please consider rating and reviewing us. This gives us information and it also helps other people find the podcast. Our favorite place for podcast ratings is PodChaser. If you prefer to leave us a review on your favorite podcast player, feel free; we would appreciate it there, too. This has been a production of Deeney Enterprises. Thanks for listening.

Chapter Markers:
Effective Risk Analysis With Probability Ratings
Design Podcast Rating and Review Request
