Quality during Design

Exposing The Hidden Flaws of FMEA and Risk Matrices: Advancing Your Risk Assessment

Dianna Deeney Season 5 Episode 4



Unveil the hidden flaws of FMEA and risk matrices that could be skewing your analysis and decision-making. 

In the realm of risk assessment and management, traditional tools like Failure Mode and Effects Analysis (FMEA) and risk matrices have been widely accepted as the norm. However, beneath the surface of these established methods lie hidden flaws that can significantly impact the effectiveness of risk analysis and decision-making processes. The latest podcast episode takes a deep dive into these issues, offering listeners an exploration of the challenges posed by conventional risk assessment techniques.

We take our conversation a step further, emphasizing not just the identification of such critiques but the vital role understanding them plays in fortifying our decision-making frameworks. 

The episode emphasizes the importance of staying informed and adapting to new methods in the ever-evolving landscape of risk management. By doing so, professionals can ensure that they are not only equipped to handle current challenges but also prepared to meet the demands of the future.

And for those of you eager to translate this newfound knowledge into practice, we spotlight an exceptional resource: "FMEA in Practice from plan to risk-based decision making," an Udemy course that promises to elevate your risk-based decision-making abilities.

Visit the podcast blog for a list of resources and more links.


If your team is still catching problems too late — let's talk.
→ Schedule a free discovery call: Dianna's calendar

Want insights like this?
→ Subscribe to my newsletter: qualityduringdesign.substack.com

Get the full framework.
→ Pierce the Design Fog 

ABOUT DIANNA
Dianna Deeney is a quality advocate for product development with over 25 years of experience in manufacturing. She is president of Deeney Enterprises, LLC, which helps organizations and people improve engineering design.

Criticisms of Traditional Risk Assessment

Speaker 1

When we're designing something, whether it's a product, a manufacturing process, or even a service, we're thinking about the bad things that could happen, the risks, and we try to design those out of whatever it is we're developing. Part of designing them out, of course, is identifying what they are, and we're going to have to make some trade-off decisions, so there's going to be some prioritization going on. A typical way we do this in development is with a failure mode and effects analysis. It's traditionally a reliability engineering tool, but it can be used by a team of people to identify those potential risks, those potential bad things that could happen, that we want to reduce, eliminate, or otherwise control. Traditional methods include rating scales, on a scale from 1 to 10, for example, and they can also include things like risk matrices or risk indices, where we map out our risks on a matrix with traffic light symbols: green, yellow, and red. There are a lot of criticisms and critiques of these traditional methods, and they're not wrong. Let's talk more about it after this brief introduction. Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less.

Speaker 1

I'm your host, Dianna Deeney. I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. I consult with businesses and coach individuals in how to apply quality during design to their processes. Listen in and then join us. Visit qualityduringdesign.com.

Speaker 1

I'm happy to share that I recently checked something off my bucket list, which was a course about FMEA. I've done a lot of FMEA in my career. I stem from the medical device manufacturing industry, where it's required, and I was heavily involved in creating these for teams, so I learned best practices and I learned what not to do. I took my best practices and the things that weren't so good and combined them into a course for you on Udemy. At the time this podcast airs, it will have been out for a little while, and you may have seen something about it already. I hope so. If not, please check it out. I really did create it to help people navigate and get the most out of FMEA that they can. I based it on my experiences, but I also looked outward at what other people were doing and saying about FMEA and risk management in general.

Speaker 1

In my social feeds and in the blogs that I read, I routinely come across posts about risk matrices, how bad they are, and how I need to stop using them. Now, a risk matrix is a two-dimensional matrix with severity on one axis and occurrence on the other. We map out our risks within the matrix, and it's filled with boxes of green, yellow, and red: green meaning, hey, this is great; yellow is caution; and red is beware. If you work in the automotive industry, you may be using the latest standard, which uses an action priority that is mapped out in a matrix, but it considers three different things, severity, occurrence, and detection, and has the same tri-colors associated with it. Generally, there is a lot of risk index hate going on out there, and it's not unfounded.

Speaker 1

I wanted to explore these critiques and incorporate them into the FMEA course that I offered, so I created a lecture that focused on the criticisms of these traditional methods that we use, which are the ordinal rating scales, the risk indexes, and some of the other summary priority numbers that FMEA has traditionally used. I put on my reporter hat, did a little bit of digging, and came up with some pretty good resources. In this episode, I want to share with you that one lecture I included in the FMEA course. I talk about what the criticisms are, the things we need to watch for, why each is considered a problem, and I do it from a place of what we can do instead. Please take a listen to this lecture, and I'll see you on the other side of it with some of my wrap-up thoughts. Traditional FMEA rating criteria and prioritization were transplanted into other areas of the business, where we now have a larger pool of diverse people using similar methods. There are criticisms of the way we've always done things. Let's review what these criticisms are so we can make our FMEA process more robust.

Speaker 1

Using ordinal rating scales introduces many errors in risk analysis, and ordinal rating scales are what are traditionally used in FMEA. Some practitioners call them subjective weighted scores with arbitrary point values based on categories. When they're described that way, it does sound less than ideal. Let's take a closer look at why using ordinal rating scales can introduce errors. Douglas W. Hubbard, who authored How to Measure Anything, says that scores, in this case rating scales, are methods of attempting to express relative worth, preference, and so on without employing a real unit of measure, and that they introduce additional errors. So what kinds of errors are introduced by using these traditional, ordinal rating scales?

Speaker 1

There can be several, and it's good to be aware of them so that we can adjust our way of thinking, or, when we're building out our rating scales, do things to address them. One of the errors involves differentiating values: how we choose to differentiate between different ordinal values or categories has a large effect on responses. Studies have shown that the way we set up our rating scale has a significant effect on what people choose. If we have a rating scale that runs from negative five to five, people are going to choose different scores than if we use a rating scale of zero to ten; we get different responses. We can also fail to use the data that we have: we may have perfectly good quantitative measures, but then we simplify them to fit into an ordinal score, and we can lose track of the data we had.

Speaker 1

Ambiguous labels don't help the decision maker and can add errors, because now we're confused and we don't know which one to choose. Some of the criteria, or the ways we describe some of these rating scales, are really not enlightening. They don't accurately describe the situation, our level of uncertainty, or the consequences of the effects. So after we've given a score, it doesn't really give us any more information or insight into the risks. Users can also treat ordinal scales as a real quantity, which adds errors. For example, an occurrence rating of 2 doesn't mean it's twice as bad or twice as likely as a 1, but sometimes people make that assumption, and it leads to errors. Also, multiplying and adding ordinal scores has other unintended consequences. Range compression is when significantly different quantitative values are placed in the same category, and it makes our decision-making harder. Another thing the critics don't like about traditional FMEA methods is that they're not mathematically consistent, which can then lead to decisions that are ill-considered. Back to Douglas W. Hubbard, who also authored The Failure of Risk Management.
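To make those two pitfalls concrete, here's a small Python sketch. The occurrence-rating-to-failure-rate bands below are illustrative assumptions for the sake of the example, not values from any standard:

```python
# Illustrative (made-up) occurrence scale: ordinal rating -> the upper
# bound of an underlying failure-rate band, loosely modeled on the shape
# of typical FMEA occurrence tables.
OCCURRENCE_BANDS = {
    1: 1e-6,   # "remote": about 1 in 1,000,000
    2: 1e-5,   # "very low": about 1 in 100,000
    3: 1e-4,
    4: 5e-4,
    5: 1e-3,
}

# A rating of 2 is NOT "twice as likely" as a rating of 1 --
# on this scale the underlying rate is ~10x higher, not 2x.
ratio = OCCURRENCE_BANDS[2] / OCCURRENCE_BANDS[1]
print(ratio)  # ~10x, not 2x

def to_rating(rate):
    """Map a quantitative failure rate to the smallest rating whose band covers it."""
    for rating, band in OCCURRENCE_BANDS.items():
        if rate <= band:
            return rating
    return max(OCCURRENCE_BANDS)

# Range compression: quite different quantitative rates collapse into
# the same ordinal bin once we round to a category.
print(to_rating(1.2e-5), to_rating(9.9e-5))  # both land in rating 3
```

Once the two rates above are both recorded as a "3," the nearly tenfold difference between them is gone from the analysis, which is exactly the loss of information the critics are pointing at.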

Speaker 1

It can be shown that mathematically irregular methods may actually lead to dangerously misguided decisions. For example, we shouldn't be adding and multiplying ordinal scales, as is done in many risk assessment methods. When analyzing our ordinal data, it's mathematically valid for us to describe it using mode, median, range, percentile ranges, and even bar charts. It's not valid for us to add, subtract, multiply, divide, take proportions, or describe it in mean and standard deviation terms. When it comes to RPN, our risk priority number, which is usually the product of severity, occurrence, and detection, or just severity and occurrence multiplied together, that's not a recommended practice when we're using ordinal scales.

Speaker 1

We shouldn't rely on RPN alone to help us prioritize our risks. Let's take a look at an example of an RPN and how we use it to prioritize. If we have an FMEA and we just calculate the RPN, we may see that we have ten line items with an RPN of 10. But behind that are different severity, occurrence, and detection ratings that lead to that same number. Different combinations of severity, occurrence, and detection can produce exactly the same value of RPN, yet their hidden risk implications may be totally different. If we consider additional line items with RPNs of 9 and 12, we may decide that the item with an RPN of 12 is the priority, when in reality it may not be. When we only look at RPN, we may be hiding risk implications that could lead us to a different conclusion. The last thing to note is that RPNs are not continuous, meaning that if you were to create every combination of RPN possible, it would range from 1 to 1,000, but there would be gaps and holes within that range. Now you may be thinking, okay, what if we just avoid the math altogether and don't multiply the RPN? We plot the severity and occurrence on a matrix, or plot the severity, occurrence, and detection on a matrix. I introduced this to you already as a risk index and an action priority.
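The RPN collisions and gaps described above are easy to check by brute force in Python:

```python
from itertools import product

# Enumerate every RPN reachable from 1-10 severity, occurrence, and
# detection ratings, grouping the (S, O, D) combinations behind each value.
rpn_combos = {}
for s, o, d in product(range(1, 11), repeat=3):
    rpn_combos.setdefault(s * o * d, []).append((s, o, d))

# Many different risk profiles collapse to the same number: RPN 10
# covers severity-10 line items and severity-1 line items alike.
print(sorted({c[0] for c in rpn_combos[10]}))  # [1, 2, 5, 10]

# And the scale is full of holes: not every value from 1 to 1,000 is
# reachable (e.g. 11 can never be a product of ratings <= 10).
gaps = [n for n in range(1, 1001) if n not in rpn_combos]
print(len(rpn_combos), "distinct RPNs;", len(gaps), "unreachable values")
```

The first print shows the collision problem: a single RPN value hides line items whose severities range from 1 to 10, which is why a severity-10 item can look no more urgent than a nuisance.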

Speaker 1

Tony Cox has a PhD in risk analysis, and he did a pretty influential study. Dr. Cox studied axioms, mathematical rules, that risk matrix designers might ideally want their matrices to satisfy. He put forth an example of a 5x5 risk matrix that satisfies some of those axioms, namely weak consistency, betweenness, and consistent coloring, under a quantitative risk interpretation. Betweenness means that arbitrarily small increases in occurrence and severity shouldn't create discontinuous jumps in risk categorization from green to red. Dr. Cox's matrices also tried to use consistent coloring: points with equal quantitative risk should ideally get the same qualitative color rating, green, yellow, or red. And finally, the weak consistency axiom means that points in the top risk category should represent higher quantitative risks than points in the bottom category.
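Here's a sketch, in Python, of what checking two of those axioms could look like on a toy 5x5 matrix. The green/red thresholds are illustrative assumptions, not Dr. Cox's own example, and quantitative risk is taken as severity times occurrence:

```python
GREEN, YELLOW, RED = "G", "Y", "R"

def color(s, o):
    """Toy coloring rule: risk = severity x occurrence, with assumed thresholds."""
    risk = s * o
    if risk <= 4:
        return GREEN
    if risk >= 20:
        return RED
    return YELLOW

cells = [(s, o) for s in range(1, 6) for o in range(1, 6)]

# Weak consistency: every red cell carries a strictly higher quantitative
# risk than every green cell.
weak_consistency = (
    min(s * o for s, o in cells if color(s, o) == RED)
    > max(s * o for s, o in cells if color(s, o) == GREEN)
)

def neighbors(s, o):
    """One-step increases in severity or occurrence, staying inside the 5x5 grid."""
    for ds, do in ((1, 0), (0, 1)):
        if s + ds <= 5 and o + do <= 5:
            yield s + ds, o + do

# Betweenness: a single step up never jumps straight from green to red
# without passing through yellow.
betweenness = all(
    not (color(s, o) == GREEN and color(ns, no) == RED)
    for s, o in cells
    for ns, no in neighbors(s, o)
)

print(weak_consistency, betweenness)  # True True
```

The point of the exercise is that even a coloring that passes these checks still compresses a continuous risk quantity into three buckets, which is where Dr. Cox's deeper objections come in.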

Speaker 1

Despite all of those studies and applying those different axioms to risk matrices, Dr. Cox thinks that risk matrices are not a great thing to use. The meaning of a risk matrix may be far from transparent despite its simple appearance, and risk matrices do not necessarily support good (for example, better-than-random) risk management decisions. After Dr. Cox's study, other people continued to study risk matrices and similar risk quantification methods. Others say that there are inconsistencies and arbitrariness embedded in risk matrices and that, given these problems, it seems clear that risk matrices shouldn't be used for decisions of any consequence.

Speaker 1

Some equate risk matrices to a simplification of risk the way astrology is a simplification of astronomy. Here's the bottom line: FMEA is not a math problem. It requires people making decisions: trade-off decisions, management decisions, and other risk-based decisions. Much of the problem practitioners have with things like RPN and risk matrices is that they become the only way people evaluate risk from these assessments.

Speaker 1

Analyzing and evaluating risk is difficult because of all the uncertainty around it, and risk-based decisions require us to understand what it is we know and then to make a decision based on that information. We look for ways to simplify these decisions for ourselves by using a color-coded matrix or a cutoff value on a prioritization number, but in doing so, and only looking at that to make decisions, we're eliminating too much of the information that went into the risk rating itself. It's like an oversimplified management dashboard. Some management dashboards use a red light and a green light: things are good or things are bad. When things are good, nothing's done. When things are bad, then we need to start doing something. But this eliminates a lot of the information being gathered and assessed behind the scenes. We're missing trends, we're missing connections between things, and we'll end up being surprised by that type of management metric. If we just use a risk matrix or a numeric cutoff value, we're basically making ourselves a green light-red light situation, when instead we want to have the information to be able to make decisions. To help us address these criticisms, let's reframe our thinking of risk from point estimates to probabilities.

Speaker 1

That's the end of the lecture from the FMEA course that I wanted to share with you today. I know some of you out there listening may not be able to get away from using a risk matrix or rating scales. They're either ingrained within the company you're working in, or they're part of the quality management system; it may be specified that you have to use them. But I want us to take a lesson from improv, which is the "yes, and" lesson. Yes, we may need to use rating scales and report our results in a risk matrix, and we're going to further analyze our risks with these other methods so that we can fully understand and prioritize what it is we need to do with whatever we're developing. Yes, we're going to use these rating scales that we have to use, and we're going to take a meeting with our cross-functional team that's doing the risk analysis to get alignment and understanding on what these rating scales mean, so we can apply them consistently throughout our analysis.

Speaker 1

Understanding other people's criticisms and the challenges that these traditional methods introduce to our risk analyses is a great step toward minimizing their effect on our decisions. If you appreciated this lecture and you want to take things to the next step with FMEA, then consider signing up for the FMEA course. It's on Udemy, and it's titled FMEA in Practice: from plan to risk-based decision making. There'll be a link to it in the show notes. This has been a production of Deeney Enterprises. Thanks for listening.
