Quality during Design

Always Plot the Data

August 10, 2022 Dianna Deeney Episode 63

We don't just rely on the numbers - we always plot the data! We review how to use plots to look past the numbers, and the common gotchas to watch for.

Visit the podcast blog to comment.

Related, previous QDD episodes:
How to Handle Competing Failure Modes - Quality During Design 

Discrete Data vs. Continuous Data - Quality During Design 

Give us a Rating & Review

**NEW COURSE**
FMEA in Practice: from Plan to Risk-Based Decision Making is enrolling students now. Visit the course page for more information and to sign up today!

**FREE RESOURCES**
Quality during Design engineering and new product development is actionable. It's also a mindset. Subscribe for consistency, inspiration, and ideas at www.qualityduringdesign.com.

About me
Dianna Deeney helps product designers work with their cross-functional team to reduce concept design time and increase product success, using quality and reliability methods.

She consults with businesses to incorporate quality within their product development processes. She also coaches individuals in using Quality during Design for their projects.

She founded Quality during Design through her company Deeney Enterprises, LLC. Her vision is a world of products that are easy to use, dependable, and safe – made possible by using Quality during Design engineering and product development.

Speaker 1:

We're getting test data back from the lab, and the numbers are looking pretty good. Our test results are within our requirement limits, so let's write it up and call it good. But hold on, let's plot it out first. Let's talk about plots, why they're important, and what we can do with them, after this brief introduction.

Hello, and welcome to Quality during Design, the place to use quality thinking to create products others love, for less. My name is Dianna. I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in, and then join the conversation at qualityduringdesign.com.

I attended a conference last week for reliability engineers. It was hosted by the ASQ Reliability and Risk Division: a conference on reliability, maintenance, and managed risk. While at the conference, I met some very interesting, very friendly people, and I sat in on presentations of useful case studies and interesting ideas about reliability. One of the presenters was Dr. Wayne Nelson. He is an expert on reliability and statistical methods. He's won several awards and has published books and papers on statistical methods, and he was a highly respected contributor to the conference. I sat in on a couple of his presentations, and they had a particular theme, which he stated outright: always plot your data.

As a reliability and quality practitioner, I think my go-to is to plot the data, but I never really thought of it as something I would mention that I do. I just kind of do it. I thought it was a good reminder for the reliability engineers at the conference, but it's also something good to talk about with design engineers. So let's talk about why we want to always plot our data.

It doesn't matter if our data is discrete or continuous, or if it's counts or measures: we can still plot it. It helps us understand what we're looking at. The first thing I look at in a plot is how uniform the results are. If they're not uniform, that gives me some clue as to what's happening behind the data. Is there natural variation in our product? It could come from the materials themselves, from the way the product is made, or even the way it's assembled. It could also be a stack-up of design tolerances, where everything is made within spec but the design allows for variation in the end product. Or is it our test method introducing issues? Could it be that the way we hold the part isn't ideal, or that the way we're holding or positioning it during test isn't really stressing the area we're looking to test, but is instead putting stress on a different part and skewing our results? And does it look like we're dealing with multiple failure modes? Are they competing failure modes? We talk about what competing failure modes are, what they look like, and how to deal with them in a previous episode of the Quality during Design podcast; I'll link to it in the show notes.

One type of plot may not be enough. I've found that plotting the data once starts a breadcrumb trail: one plot shows me the data and may highlight something interesting. Then we follow the breadcrumbs and dig a little deeper. We may add more inputs into the dataset and generate another plot that helps us investigate what we're looking at.
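As a minimal sketch of that idea (not from the episode; the data, spec limits, and numbers are all invented for illustration), here's how a quick run chart and histogram in Python can expose structure that summary statistics hide:

```python
# Minimal sketch (invented data): results that pass their spec limits on
# summary statistics alone, but a simple plot reveals two clusters -- a
# hint of competing failure modes, mixed lots, or a process shift.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=63)

# Two subpopulations mixed together, e.g. parts from two cavities or lots.
group_a = rng.normal(loc=9.2, scale=0.3, size=30)
group_b = rng.normal(loc=10.6, scale=0.3, size=30)
results = np.concatenate([group_a, group_b])

spec_low, spec_high = 8.0, 12.0
within = (results > spec_low).all() and (results < spec_high).all()
print(f"mean = {results.mean():.2f}, all within spec: {within}")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Run chart: results in test order, with requirement limits for context.
ax1.plot(results, marker="o")
ax1.axhline(spec_low, color="red", linestyle="--", label="requirement limits")
ax1.axhline(spec_high, color="red", linestyle="--")
ax1.set_xlabel("test order")
ax1.set_ylabel("measurement")
ax1.set_title("Run chart")
ax1.legend()

# Histogram: two humps that the mean alone never shows.
ax2.hist(results, bins=15)
ax2.set_xlabel("measurement")
ax2.set_ylabel("count")
ax2.set_title("Histogram")

plt.tight_layout()
plt.show()
```

If the histogram shows two humps like this, that's the breadcrumb: go back, tag each result with lot, fixture, or operator, and plot again.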
Now, things to watch out for in our plots. These are some common gotchas. One of the things to watch out for is outliers. I know they're pesky and don't make for a pretty plot, but we don't delete or eliminate them; they're rather a source of something interesting, or a telltale sign of something wrong. An outlier could be another failure mode, maybe a new one, so we're going to check the test results or reexamine our samples. It could be those test method issues that we talked about, or a special cause of failure in a manufacturing method. Maybe it's not the natural variation of our process; maybe something happened during the production of the parts that we tested. What was it? And do we need to prevent it from happening again in the future?

Another gotcha with plotting is understanding the nature of your data before you start plotting it. Sometimes the data will inform how we plot it, so that's one thing to consider. And some plots, like probability plots, assume that your data meets certain criteria. You may have to plot or test your data against those criteria before you can assume the results of the plot are correct or accurate, and before you can start making decisions with it. There are lots of other plots that don't require this, like run plots and scatter plots. We just need to be aware of which plots make assumptions based on the equations used to generate them. Something I learned from a coworker, and honestly through some software training, is to go through the preferences and assumptions of whatever software you're using to generate the plot. Go through each window and look at the inputs and the assumptions you're making when you generate the plot. That will give you some indication of whether there's something else you need to account for.

Another gotcha with plots, and this is something Dr. Nelson pointed out: he worked with an engineer who was comparing four different things, but the plots he was comparing didn't have the same axes. As engineers, we know to be careful to use calculations with the same units of measure, and it's the same thing with plots. When we create multiple plots and compare them, we want to make sure that we're using the same units of measure, that the zero location is the same for each plot, and that we're using the same limits and range on each axis. We want to compare like with like; otherwise it could skew our view of the data and lead us to some wrong decisions.

There are two common plots that reliability engineers look at: the probability density function and the cumulative distribution function. We get into these two plots more in another episode of Quality during Design, and that one's a video episode, so you can see a picture of what I'm talking about. I'll link to it in the show notes. Also, there are lots of different plots out there, and you may not be familiar with all the types you could look at. I'm familiar with a lot of plots, and I'm sure I don't have all of them covered. Just try one that you think will help you. Plots are a tool to help us decipher important information from data so that we can make decisions. So if the plot we choose to try first doesn't help, then try it a different way: try a different plot.

So what's today's insight to action? When you get data, go ahead and preview it to see how things might be looking, and then plot it out. You'll really get to see how things are looking, and you'll be able to make better decisions from your data.
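As a hedged sketch of two of those gotchas (not from the episode; scipy and matplotlib are my choice here, and all data and limits are invented for illustration): check a probability plot's distribution assumption before trusting it, and force identical axes when comparing plots side by side.

```python
# Hedged sketch (invented data) of two gotchas:
# 1) a normal probability plot assumes roughly normal data -- check that
#    assumption before leaning on the fitted line;
# 2) when comparing datasets side by side, force identical axis limits so
#    differences in scale don't skew the comparison.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(seed=63)
sample = rng.normal(loc=10.0, scale=0.5, size=40)

# Gotcha 1: test the normality assumption before trusting the plot.
stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk p-value = {p_value:.3f} "
      "(a small p-value suggests the normal assumption is doubtful)")

fig, ax = plt.subplots()
stats.probplot(sample, dist="norm", plot=ax)  # normal probability plot
ax.set_title("Normal probability plot")

# Gotcha 2: compare like with like -- same units, zero, limits, and range.
lot_a = rng.normal(loc=9.8, scale=0.4, size=40)
lot_b = rng.normal(loc=10.4, scale=0.6, size=40)

fig2, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10, 4))
for ax_i, (name, data) in zip(axes, [("Lot A", lot_a), ("Lot B", lot_b)]):
    ax_i.hist(data, bins=12)
    ax_i.set_title(name)
    ax_i.set_xlabel("measurement (same units)")
axes[0].set_ylabel("count")
axes[0].set_xlim(8.0, 12.0)  # shared axes give both plots the same range

plt.tight_layout()
plt.show()
```

The `sharex`/`sharey` options do the "same limits and range" work automatically; without them, each histogram would autoscale to its own data and invite an apples-to-oranges comparison.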
If you like the content in this episode, visit qualityduringdesign.com, where you can subscribe to the weekly newsletter to keep in touch. This has been a production of Deeney Enterprises. Thanks for listening.
