Quality during Design

Next Steps after Surprising Test Results

December 14, 2022 · Dianna Deeney · Episode 80

During product development, we're consistently looking for ways to learn more about the product in order to make design decisions. Some of that learning comes from testing.

What do we do when our test results are...surprising?

We talk about some next steps I typically take when faced with surprising results: revisiting the purpose of the test, comparing the failure modes seen at test against what we expected (requirements and FMEA), finding a root cause (with the help of the FMEA), and deciding what to do next.

Visit the podcast blog.

Other Quality during Design podcast episodes you might like:

How to Handle Competing Failure Modes

Remaking Risk-Based Decisions: Allowing Ourselves to Change our Minds

5 Aspects of Good Reliability Goals and Requirements

The Way We Test Matters

Give us a Rating & Review

**NEW COURSE**
FMEA in Practice: from Plan to Risk-Based Decision Making is enrolling students now. Visit the course page for more information and to sign up today!

**FREE RESOURCES**
Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com.

About me
Dianna Deeney helps product designers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods.

She consults with businesses to incorporate quality within their product development processes. She also coaches individuals in using Quality during Design for their projects.

She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe, made possible by using Quality during Design engineering and product development.

Dianna Deeney:

We're in new product development, and we've decided we want to learn more about our product through testing. We get our test results back, and they're not quite what we expected. What does that mean, and how can we move forward? Let's talk more about some steps we can take, after this brief introduction.

Hello, and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. Each week we talk about ways to use quality during design, engineering, and product development. My name is Dianna Deeney. I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in, and then join us. Visit qualityduringdesign.com.

Do you know what 12 things you should have before a design concept makes it to the engineering drawing board, where you're setting specifications? I've got a free checklist for you, and you can do some assessments of your own. Where do you stack up with the checklist? You can log into a learning portal to access the checklist and an introduction to more information about how to get those 12 things. To get this free information, just sign up at qualityduringdesign.com. On the homepage, there's a link in the middle of the page. Just click it and say, "I want it."

Something standard that we do during new product development, or any development, is run some tests. It's just part of the engineering cycle. It's part of the scientific method: we use our creative energies to come up with new ideas and design new things, then we develop tests and requirements against which to test them, and then we test and look at the results. The results are not usually so clean-cut and straightforward. Despite our best efforts at defining clear requirements from the beginning, sometimes our test results are a little messy. Now, we can get frustrated about this, or we can look at it as an opportunity to learn more about our product. When I'm working with test results that are messy, or someone approaches me with test results they're not sure what to do with next, there are some standard things that I tend to do, so let me share what those are and why I look into them.

The first thing I do is go back and revisit the purpose of the test. Sometimes tests are based on product requirements or user needs. What is it that we were really trying to test against, to verify or to learn? Did we learn what we intended to learn from this test, or did something new come up? If we're performing a reliability test, what were our reliability goals? They should define what a failure mode is, even if it's worded as a success, in our requirements. Our full requirement might be spread out across different sections of our requirements document, but it should be there, and we should have based our test method on it. If it's a reliability requirement, we could be looking at a measure of time: reliability at specific points in time. And no matter what we're testing, we're going to be defining a desired confidence level, what a failure would be (the definition of the failure), and what kind of operating and environmental conditions we expect our product to be able to perform within. When we're looking at the results of a test, we always go back to what the original intent was.

The next thing I look at is: what was the failure at test, exactly? Does how the product failed match what we expected it to do, or did something unexpected happen?
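To make the reliability-goal idea above concrete: one common way to size a zero-failure "success run" test uses the relationship n = ln(1 - C) / ln(R), where R is the reliability to demonstrate and C is the desired confidence level. Here's a minimal sketch in Python; the 90% reliability and 95% confidence figures are illustrative assumptions, not numbers from the episode.

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Units to test with zero allowed failures to demonstrate
    `reliability` at `confidence` (success-run theorem)."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

# Hypothetical goal: demonstrate 90% reliability at the end of the
# defined test duration, with 95% confidence.
print(success_run_sample_size(reliability=0.90, confidence=0.95))  # -> 29
```

If even one unit fails during such a test, the goal is not demonstrated at that confidence, which is exactly the kind of surprising result the rest of the episode walks through.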
We may also be dealing with something called competing failure modes, something I get into in a different episode (I'll link to that). Even if this failure mode isn't exactly what we were expecting when we started the test, did we understand that it was even a possibility? We can go back to the FMEAs that we did in our preliminary concept development and see if we listed that failure mode within the FMEA, and what effect was listed. What was associated with that failure mode, and how severe was it? We're not even really looking at the numbers yet, and we're already starting to learn a lot about the results of this test. We're thinking about the original intent, and we're looking at the different failure modes that have occurred and comparing them against what we expected the product to do.

Now is where we can start taking it to the next step, which is verifying the root cause. We want to get to the root cause so that we can understand the most about our product. Why did it fail the way it did, or why was it performing the way it was? Is it because of the product itself? Was there variation introduced in making the product that we hadn't accounted for? Or was it even the test method? Sometimes the way that we choose to test the product, or handle it, or store it between tests can introduce variables that produce failure modes we didn't expect. While we're investigating the root cause, we can go back to the FMEA again: what causes are associated with the failure mode that we saw in our test results? Referencing the FMEA this way may help us with our root cause analysis. Once we've gotten to the root cause, we can check that yes, we did have it in there, or no, we didn't and we have to add it. We can also look at the occurrence rating: now that we've tested, we better understand what the occurrence of that failure mode, due to that cause, could be. I understand that it's easy to say "go find the root cause." It usually takes several iterations of different investigations and different test confirmations to verify that you did get the root cause, but getting to the root cause is an important part of investigating a test failure.

So where we are now: we've had a test with results. We've revisited the purpose of the test. Did our results line up with the purpose of our test, or did we learn something new? We looked at the failure mode. Was it something we expected, or did we learn something new? And then we've gotten to the root cause, which is probably where we've learned the most from our test.

After all of this, we can decide what to do next. What did we learn about the product through the test and the test results? Is it acceptable or not? Did it meet the requirement, and is it going to meet the need? Or did we find a new failure mode that we need to address? There's a whole product development world of different questions we could be asking ourselves around "is this acceptable or not?" We should have clearly defined acceptance criteria at the beginning of the test, and where we are in the product development cycle will determine how we can react to this question. We want to learn as much about the product as early in the development process as we can. So if this is an early test, then we may have lots of options to redesign things, to reconfigure things, to choose different components. Another prevention method we could use would be changing the manufacturing process.
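To make the FMEA cross-check concrete, here is a hypothetical sketch (not a tool from the episode; the row fields, failure modes, and 1-10 ratings are illustrative assumptions) of pulling up the causes and ratings recorded for a failure mode observed at test:

```python
from dataclasses import dataclass

@dataclass
class FmeaRow:
    failure_mode: str
    effect: str
    severity: int    # 1-10 rating, 10 most severe
    cause: str
    occurrence: int  # 1-10 rating, 10 most frequent

# Illustrative rows only; a real FMEA comes from the team's analysis.
fmea = [
    FmeaRow("seal leak", "fluid loss during use", 8, "seal material degrades", 3),
    FmeaRow("seal leak", "fluid loss during use", 8, "housing warps at temperature", 2),
    FmeaRow("button sticks", "delayed activation", 5, "tolerance stack-up", 4),
]

def rows_for(observed_mode: str) -> list[FmeaRow]:
    """Return the FMEA rows matching a failure mode seen at test, so the
    expected effects, causes, and ratings can be compared to the result."""
    return [row for row in fmea if row.failure_mode == observed_mode]

for row in rows_for("seal leak"):
    print(f"{row.cause} | severity {row.severity} | occurrence {row.occurrence}")
```

An empty result is itself informative: if the observed failure mode isn't in the FMEA, the episode's advice applies, that is, add it, and revisit the occurrence rating now that there is test data behind it.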
We could also decide to add detection controls, something like in-process testing or inspection, to actively look for the root cause that we've discovered. If we're learning lots of new things after the product's already developed, then that can be more challenging; it will be more difficult, or impossible, to make a design change late in the development process. Sometimes our tests pan out this way, and our project may get scrapped.

What's today's insight to action? No matter where in the development process we're working, we can look at product tests as learning about the product itself. If we strive to learn as much as we can about the product early in development, we'll have the best chance of making the product right. We can work with our quality and reliability engineering friends to help us develop tests early in development. And exploring potential issues through FMEA (failure mode and effects analysis) with our team can highlight where to test, and can also help us decide how to react to test results and what next steps to take.

If you like this topic or the content in this episode, there's much more on our website, including information about how to join our signature coaching program, the Quality during Design Journey. Consistency is important, so subscribe to the weekly newsletter. This has been a production of Deeney Enterprises. Thanks for listening.
