Quality during Design
Quality during Design is the podcast for engineers and product developers navigating the messy front end of product development. Each episode gives you practical quality and reliability tools you can use during the design phase — so your team catches problems early, avoids costly rework, and ships products people can depend on.
You'll hear solo episodes on early-stage clarity, risk-based decision-making, and quality thinking, along with conversations with cross-functional experts in the series A Chat with Cross-Functional Experts.
If you want to design products people love for less time, less cost, and a whole lot fewer headaches — this is your place.
Hosted by Dianna Deeney, consultant, coach, and author of Pierce the Design Fog. Subscribe on Substack for monthly guides, templates, and Q&A.
Raise Your Confidence by Strategically Stacking Evidence
Late-stage design just hit a snag—now comes the moment that separates guesswork from great engineering. We walk through a clear, repeatable method to investigate unexpected failures and make high-impact decisions with confidence. Instead of hunting for a perfect test, we set a confidence target and stack multiple forms of imperfect evidence until we close the gap.
If you’re navigating late-stage product development and want a calm, methodical way to move from 40% to 90% confidence, this framework will help you choose the next best step, allocate limited time and budget, and know when to stop.
Join the Substack for monthly guides, templates, and Q&A where I help you apply these to your specific projects. Visit qualityduringdesign.substack.com.
If your team is still catching problems too late — let's talk.
→ Schedule a free discovery call: Dianna's calendar
Want insights like this?
→ Subscribe to my newsletter: qualityduringdesign.substack.com
Get the full framework.
→ Pierce the Design Fog
ABOUT DIANNA
Dianna Deeney is a quality advocate for product development with over 25 years of experience in manufacturing. She is president of Deeney Enterprises, LLC, which helps organizations and people improve engineering design.
Setting The Three-Month Arc
SPEAKER_00: Hello, welcome to the Quality During Design Podcast. I'm your host, Dianna Deeney. We're in month two of a three-month arc. Month one was in October, where we talked about late-stage design decisions. We've gone through our product development process, everything has been going as expected, and then we hit a glitch. An unexpected thing happened. It could be a failure in the test lab, or results we weren't expecting. And now we're faced with a decision: what do we do? What design decision should we make?

In the last episode, we talked about framing the problem better, identifying what's giving us heartburn about it, and assigning a confidence level to our design decision. That led us to understand that the particular problem example from last month is a critical unknown. We need to do more testing or more investigation to really understand the problem and improve our confidence about it. So that's where we're heading this month. Last month was called Frame It. This month is called Investigate It. And next month, in December, will be Choose It. So here in phase two, we're investigating more about our problem, deciding what to do about it, and maybe when to stop. Let's talk more about it after this brief introduction.

Welcome to Quality During Design, the place to use quality thinking to create products others love, for less time, less money, and a lot less headache. I'm your host, Dianna Deeney. I'm a senior quality engineer with over 20 years in manufacturing and product development and author of Pierce the Design Fog. I help design engineers apply quality and reliability thinking throughout product development, from early concepts through technical execution. Each episode gives you frameworks and tools you can use. Want a little more? Join the Substack for monthly guides, templates, and Q&A where I help you apply these to your specific projects. Visit qualityduringdesign.substack.com.
Let's dive in. Just a note about these methods: it might seem like, you know, I don't need a system to help me think through this. I'm an engineer, I have training, I'm good to go. But what happens is, when you're in a project facing deadlines and scrutiny, and it's an important decision, one that could literally make or break the product, we don't always operate with cool heads. And sometimes there is so much information, and so many decision points and conversations going on about it, that we get confused, we get a little bit lost. So that's why I turn to these systematic approaches: okay, let's stop, let's take a step back, let's frame our problem.

So now that we've done that and we've decided we need to do more investigating, we don't want to just throw spaghetti at the wall and hope something sticks by doing everything we can. We can still be strategic about it, even when we're under time and cost limitations. Especially when we're under time and cost limitations. Here's the basic idea and approach: you want to stack your evidence. We can call it the stacking principle, because great engineers don't find one perfect test that answers everything. They strategically stack imperfect evidence until their confidence exceeds their decision threshold.

So, what do I mean by that? We're heading into this problem with 40% confidence in our decision, which is low. And it's a high-impact problem, so we want our confidence level to be pretty high. We want it to be 80 to 90%. So, how do we fill the gap between where we are now, where we're getting heartburn and not feeling good about it, and where we want to be? We were able to assign a confidence because we defined the problem and framed it well. Now we have a gap of about 50% confidence that we need to fill. We don't want to just start doing any old tests. We can be strategic about what we do and when.
So the key with evidence stacking is to stack up different methods of tests. It could be literature searches, expert consultation, analysis, component testing, system-level testing, and field data if you have it. Sometimes we do. All of these are sources of information that we can use to boost our confidence. The key with evidence stacking is that we want to evaluate our boost in confidence after each iteration.

So let's say we had an opportunity to talk with an expert. It didn't take a lot of time, the cost was low to moderate, and that boosted our confidence in our decision and our understanding by about 10%. Then we decide, well, okay, let's do some computer-aided analysis. We do some reliability life analysis, and that boosts us another 15%. So now that we've talked with an expert and gone through some reliability analysis, we've boosted our confidence by 25%. That's great, and we haven't even tested anything yet.

But let's say we do want to test, and we run an accelerated life test. And what we thought or expected was going to be the failure mode is not. Something else failed first. So we have more information, and now we may need to pivot. We can't just ignore this new failure mode; this is something we have to analyze. So our confidence in our decision has gone down a little bit, maybe by 5%, depending on what it is. As we're marching toward figuring out this problem and being able to make a decision, these hiccups are going to happen. They happen all the time. We think we're going one way, and then we learn something new and have to pivot. That's what makes product design so fun and challenging and frustrating, but also rewarding.

So here's a rule of thumb: you must decide the confidence level you need before you start, based on the decision's impact. I mentioned before that if it's a high-impact decision, you'll probably want your confidence to be 80 to 90%.
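The running tally described above can be sketched as a simple ledger. This is a hypothetical illustration, not a tool from the episode; the step names and percentage boosts are the example numbers mentioned (expert +10%, reliability analysis +15%, surprise failure mode -5%), starting from the 40% baseline with a 90% target.

```python
# Hypothetical sketch of the "evidence stacking" ledger described in the
# episode. Step names and confidence deltas are illustrative assumptions.

def stack_evidence(baseline, target, steps):
    """Apply each evidence step's confidence delta and report progress."""
    confidence = baseline
    for name, delta in steps:
        confidence += delta  # confidence can move down as well as up
        gap = max(target - confidence, 0)
        print(f"{name:>22}: {delta:+d}% -> {confidence}% (gap to target: {gap}%)")
    return confidence

steps = [
    ("expert consultation", +10),   # quick, low-to-moderate cost
    ("reliability analysis", +15),  # computer-aided life analysis
    ("accelerated life test", -5),  # surprise failure mode: pivot needed
]
final = stack_evidence(baseline=40, target=90, steps=steps)
# final == 60: still short of the 90% target, so keep gathering evidence
```

Re-evaluating the gap after each step is the whole point: it tells you whether the next (cheap) source of evidence is worth running, or whether you need a bigger investment like a full test.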
The other thing is, if you do reliability-type tests, like accelerated life testing and reliability analysis such as Weibull analysis, you can build your confidence into your test setup. You'll be able to adjust the level of stress and the number of parts you're testing to match the confidence you're aiming for in the results.

So when do you stop? You stop investigating when you either hit the target or when the next test costs more than the value of the information gained. We'll talk more about that next month in the Choose It episodes.

So, what's today's insight to action? You're probably not going to find the perfect test to solve your problem and give you the answer. But you can stack different types of evidence. That evidence can vary in time, cost, and the level of confidence boost it gives you. Adding the confidence boost and re-evaluating where you are after each step is a useful way to determine if you're on the right track, if you need to pivot, how far away you are, and to what extent you need to gather more evidence. And it all goes back to the baseline we set when we framed the problem.

So remember: the goal isn't the perfect test, it's strategic confidence stacking. Start treating confidence as a measurable metric that moves up, and sometimes down, based on evidence. That's the stacking principle. Now you have the framework, but frameworks are only useful when you can execute them consistently. So if you're serious about approaching your problems differently in late-stage engineering product development, and you want to get more into defining confidence thresholds for your project, then you'll want to join us on Substack. As a subscriber, you'll get the full posts, which are deeper dives into these topics. You'll have opportunities to ask questions particular to your own project, and you'll get a swipe file with the basics.
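One standard example of building confidence into a test setup, in the spirit of the paragraph above (the episode doesn't specify this particular formula), is success-run testing: the zero-failure relation C = 1 - R^n links the number of parts tested to the reliability you want to demonstrate at a given confidence.

```python
import math

# Success-run (zero-failure) sample size, from the standard relation
# C = 1 - R**n. Choosing n is one way to "build confidence into the
# test setup", as discussed above. Values below are illustrative.

def success_run_sample_size(reliability, confidence):
    """Parts to test, all passing, to claim `reliability` at `confidence`."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 90% reliability with 90% confidence:
print(success_run_sample_size(0.90, 0.90))  # 22 parts, zero failures
```

Notice how the required sample size grows quickly as either target rises, which is exactly why the cost of the next test can start to exceed the value of the information it would give you.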
So when you hit these emergencies in your projects, you can pull out the swipe file, get reminded of some of these techniques, and apply them to your project. Just visit us on Substack at qualityduringdesign.substack.com. And as always, this episode and show notes will be at deeneyenterprises.com. This has been a production of Deeney Enterprises. Thanks for listening.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Speaking Of Reliability: Friends Discussing Reliability Engineering Topics | Warranty | Plant Maintenance
Reliability.FM: Accendo Reliability, focused on improving your reliability program and career
Reliability Hero
MAINSTREAM Community
Manufacturers Make Strides
Martin Griffiths
The Manufacturing Executive
Joe Sullivan
The Antifragility Reframe
Dr. Frank L. Douglas
The SAFE Leader with Mark McBride-Wright
Mark McBride-Wright
Coaching for Leaders
Dave Stachowiak