Quality during Design
Quality During Design helps engineers build products people love—faster, smarter, and with less stress. Host Dianna Deeney, author of Pierce the Design Fog, shares practical tools and quality thinking from concept to execution. Subscribe on Substack for monthly guides, templates, and Q&A.
Expected Value Makes Uncertainty Manageable
Ever face a late-stage design decision where your gut says “maybe,” finance says “no,” and the schedule says “hurry”?
We unpack a simple way to make those calls with more clarity: using expected value to connect confidence, upside, and downside into one sober view of net benefit. No jargon, no spreadsheets required—just a clear framework that helps you see when a $50,000 test buys real certainty, and when the right move is to ship.
Still, numbers don’t get the final say. The goal isn’t to pick the biggest EV; it’s to choose the most balanced, actionable, project-aligned option.
If this approach helps you navigate the gray areas between risk and reward, follow the show, share it with a teammate, and leave a quick review so others can find it. Got a decision you’re wrestling with? Send it our way—we’ll feature it in a future breakdown.
This blogpost: https://deeneyenterprises.com/qdd/podcast/expected-value-makes-uncertainty-manageable/
Facing a really complicated and nuanced decision? Try this Method to Help with Complex Decisions (DMRCS)
JOIN ME ON SUBSTACK Subscribe today.
GET THE BOOK Pierce the Design Fog
ABOUT DIANNA
Dianna Deeney is a quality advocate for product development with over 25 years of experience in manufacturing. She is president of Deeney Enterprises, LLC, which helps organizations and people improve engineering design.
You're trying to make a decision and you're 75% confident, but then your manager says, "Well, can we run another test?" The test would be $50,000. Do you do it? Now we're talking about combining uncertainty with money and the costs of failure. I'm going to share a lightweight decision tool that can help, called expected value. More after this brief introduction.

Welcome to Quality During Design, the place to use quality thinking to create products others love, for less time, less money, and a lot less headache. I'm your host, Dianna Deeney. I'm a senior quality engineer with over 20 years in manufacturing and product development and author of Pierce the Design Fog. I help design engineers apply quality and reliability thinking throughout product development, from early concepts through technical execution. Each episode gives you frameworks and tools you can use. Want a little more? Join the Substack for monthly guides, templates, and Q&A, where I help you apply these to your specific projects. Visit qualityduringdesign.com. Let's dive in.

Welcome back. We are in the third and final phase of an arc we've been on about decision making in engineering. We're faced with a late-stage design problem, and we need to improve our confidence in it and then make a decision. The framework we've been using is frame it, then investigate it, and now, finally, choose it. I've been writing in depth and providing examples on Substack with this system; you can find the articles there. You can also listen to the previous five episodes of this podcast, which cover similar topics. And now we're wrapping it up with a final podcast episode related to this series, for now, anyway.

When you're working in a regulated industry, we're usually focused on performance: doing what it takes to make sure a product meets the performance it needs to meet. I'm thinking of things like automotive and medical device applications. Even in those environments, we're still designing for customers, and there are costs to consider and business decisions we may need to make.

If you have an MBA or work in accounting, you're probably familiar with expected value. Expected value measures the average outcome you can expect from a decision. What I especially like about it is that it takes our uncertainty about a decision, or our confidence in a success, and combines it with the value of that success and the cost if we should fail, which are two different values. Expected value combines these into a net benefit or loss for a choice: the probability of success times its upside value, minus the probability of failure times its downside cost.

The probabilities we're talking about here are what we've been developing in the last two phases, frame it and investigate it. That's our confidence level in the decision we need to make. If we are 75% confident that it's going to work, then that's our probability of success in our expected value calculation. And the value of success might be related to things working as intended: maybe reduced costs, faster time to market, or increased revenue. So then what if it doesn't work? If we're assigning a probability of success of 75%, that's our confidence in our success, then the probability of failure is one minus that, which is 25%. So we've already got that figured out. Now, what would be the cost of our failure? That would be not meeting our requirements: rework, delays, lost opportunities, those sorts of things.
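To make the calculation concrete, here's a minimal sketch in Python of the expected value formula as described above. The function and parameter names are mine, not from the episode:

```python
def expected_value(p_success: float, upside: float, downside: float) -> float:
    """Net expected benefit of a decision.

    p_success: confidence that the choice works (0.0 to 1.0)
    upside:    value if it works (e.g., reduced cost, faster time to market)
    downside:  cost if it fails (e.g., rework, delays, lost opportunities)
    """
    p_failure = 1.0 - p_success  # the two probabilities must sum to 1
    return p_success * upside - p_failure * downside
```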
Expected value is a lightweight decision tool that helps us combine these things together: we think our success is going to be this, and we're this confident in it; but then again, if we fail, this is what we could lose.

Now, how do we know what kind of costs to put into this equation? Well, that's another beautiful part about our framework of frame it, investigate it, choose it. Through this cycle, we're learning more about our problem and what effect it has on the project. We're quantifying the impact that this decision has on the project at the beginning, and we're learning more about what the solution should be. So heading into this last phase of choose it, we have a lot of information and data to help us assign costs for expected value.

Let me run through an example, the same one I use on Substack. We had a problem with a part: we weren't sure if we should injection mold it or not, and we weren't feeling confident about our decisions on how the design should look or what material it should be made of. Coming out of our investigative work, we decided we wanted to take a closer look at nylon. That was one of the choices we picked. Nylon had a lot of benefits over the other materials we were looking at; we used paired comparison to flesh that out. But nylon was also the material we were least confident in: we had a 60% confidence in nylon performing like we wanted it to. Should we run an additional $50,000 test to increase our confidence to 75%?

Success is defined as the mold or the product working as intended, and we estimated that's going to add $500,000 in value through things like reduced cost, faster time to market, and increased revenue. Failure we define as the mold or product not meeting requirements, which would incur a $165,000 cost through rework of the mold, delays to market, and lost opportunities. So the value of success is $500,000 and the cost of failure is $165,000. Our probability of success is based on our updated confidence levels from our investigative work: our confidence in nylon is 60%, so the probability of failure is simply 1 minus 60%, which is 40%.

Now let's plug that into our expected value equation. The expected value is the probability of success times the upside, minus the probability of failure times the downside. The probability of success times the upside is 60% times $500,000, which is $300,000. The probability of failure times the downside is 40% times $165,000, which is $66,000. So our expected value, if we proceed with nylon, is $234,000. It's interesting to see that it's a positive number.

But our real question is: should we invest in the $50,000 of testing to raise our confidence in this material selection to 75%? Looking only at expected value for now, we run the numbers again. We use the same monetary values, except now our probability of success, we're thinking, will be 75%, and the probability of failure is, of course, 25%. We also subtract the cost of the test, the $50,000. Our expected value, rounded up, comes out to $284,000. If we just proceed with what we've got, the expected value is $234,000; if we invest $50,000 to raise our confidence to 75%, even after subtracting the cost of the test, our expected value is higher, $284,000. So yes, it's worth testing, if we only care about expected value.
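Plugging the episode's numbers into the sketch from earlier reproduces both scenarios; the $50,000 test cost is simply subtracted from the second result:

```python
# Proceed with nylon at today's 60% confidence.
ev_now = expected_value(0.60, 500_000, 165_000)
print(f"EV without test: ${ev_now:,.0f}")     # $234,000

# Spend $50,000 on testing to raise confidence to 75%.
TEST_COST = 50_000
ev_tested = expected_value(0.75, 500_000, 165_000) - TEST_COST
print(f"EV with test:    ${ev_tested:,.0f}")  # $283,750, ~$284,000 rounded up
```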
In our ultimate decision-making portfolio, we're not only considering expected value, but it is an aspect of it. And doesn't that provide a lot more clarity? I mean, we barely knew what the problem was in the first place: we had a molding problem and we're trying to make a material decision. What's it for? Where's our project at? Just having this net value, and considering our confidence in our decision, gives us better information to make a better-informed decision, don't you think? I think so, which is why I like it, and why I'm telling you about it.

But it is really only one point of data. Projects are more complicated than that; products are more complicated. We know that if we pursue additional testing, that's not going to cost us just the testing, it's also additional time. We were still considering two other materials; is it still worth it? So there are more things we need to look into, more points of interest to consider. And depending on your project, maybe the impact to your project wouldn't be acceptable: you couldn't take any risks or accept a confidence of 60%. Still, this is worth doing, especially as engineers: it helps us wrap the business side into our decision making, and it helps involve us in the conversation with our project managers.

We don't want to use expected value when the downside risk is existential, meaning it's company killing. Some risks you just don't take, even if the expected value is positive. Don't rely on it alone if you're in a regulated industry, where best effort has legal meaning beyond expected value. And if the uncertainty is so high, like below 30% confidence, that the numbers are meaningless, then we need to do some more investigation to really figure out more options for our decision.

But here's the thing: this is a math equation, but it's not a math problem, is it? These are complicated decisions with real impact, which is why we get a stomachache about them. This framework guides you from knee-jerk reactions and gut instinct to a systematic approach to the data and information you need to make a better-informed decision. So stop, evaluate, gather information, and focus on what matters for your project. That goes a long way toward making the right decision.

And document how you got there: your phase one framing, your phase two evidence stack, your phase three rationale. That document is your learning asset. When a systematically made decision doesn't work out, you have something valuable: a documented trail of what evidence you had, what you expected, and where reality diverged. This isn't a CYA document; it turns failure into organizational learning. What did we do, or what did we assume, this time that we now know better than to assume next time? And when using expected value, don't just choose the option with the highest expected value. Choose the most balanced, actionable, and project-aligned option. That's what makes it smart, not just mathy.
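Looping back to those guardrails: as a rough illustration, a screen like this could gate whether an expected value number should drive the decision at all. The threshold and flag names are my own, not from the episode:

```python
def ev_is_meaningful(p_success: float,
                     downside_is_existential: bool,
                     regulated_best_effort: bool) -> bool:
    """Screen out cases where expected value shouldn't drive the decision."""
    if downside_is_existential:   # some risks you don't take at any EV
        return False
    if regulated_best_effort:     # "best effort" has legal meaning beyond EV
        return False
    if p_success < 0.30:          # uncertainty too high; the numbers are
        return False              # meaningless, so investigate more first
    return True
```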
If you want to go deeper into the concepts we've been talking about these last few months, visit qualityduringdesign.substack.com. There are six posts total related to this three-phase system. The most instructional posts are the Ask Me Anythings, and they're boosted by the Strategic Insights, which look at how people actually apply these in the real world. For example, this month we looked at Tesla's battery design decisions. Not that we have the inside scoop on the financials and decision making that Tesla uses, but we did look at the changes they made, like why they pivoted away from pursuing the most technologically advanced option. We used these lightweight decision frameworks to demonstrate how it all works together and how they could have come to those decisions. So, visit qualityduringdesign.com. I'll be posting more articles like that throughout the next year.

And this is our final episode of 2025, so I wanted to thank you for being a listener, for finding Quality During Design, and for joining me this last year. I look forward to more of it in 2026. Happy New Year.