Michelle Bangert, Managing Editor of Quality, talks with Matt Noonan, Quality Manager of Pratt & Whitney Measurement Systems, Inc.

Michelle: So you recently worked with quality on an article related to measurement uncertainty. Can you start off for our listeners who might not be familiar, just explaining a little bit about what measurement uncertainty is?

Matt: Measurement uncertainty is a way to quantify how certain or uncertain you are about the result of a measurement. It's definitely an important topic, especially with such tight tolerances in some cases.

Michelle: And so with the article, we kind of talked about automation and measurement uncertainty. Could you delve into that a little bit?

Matt: Regarding that question, I thought about how we use automation here for positioning the artifacts that are to be measured. That's our LabMaster Universal Model 1000A, which has automated table controls driven by software that also serves as a readout, so you can view your tabletop's position and orientation. It has basically four controls: raise, lower, center, tilt and swivel.

So you use those four controls to put your measurement artifacts in the correct orientation and position, which is important because whatever you're measuring is going to have some shape, and you have to position it according to that geometry. How does that relate to uncertainty? Well, in determining the uncertainty of your measurement, you certainly want to consider how far off you might be from your target dimension.

And what is that going to equate to in terms of how far off your reading might be? Also, the automated tabletop controls will probably allow better repeatability of positioning when you're doing repeated measurements, which generally you want to do, right? I'm sure people are familiar with measure twice, cut once.

So the more times you make a measurement, the more certain you can be of that measurement. And if all those measurements are made in the same orientation and the same position, they'll be more repeatable, you'll get more consistent results, and you'll be more confident in those results. I like the example someone mentioned in the article: you can be very precise if you measure something that's 12 inches many different times but read 11 every time, and you're still not where you want to be. So it's important; there are a lot of different things to look at here.
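The point about repeated measurements can be sketched in a few lines of Python. The readings below are made-up illustrative values, not data from the interview; the sketch just shows how the standard error of the mean shrinks with more repeats, while a consistent-but-wrong reading stays wrong.

```python
import statistics

# Illustrative repeated readings of the same dimension (assumed values).
readings = [1.0002, 1.0001, 1.0003, 1.0002, 1.0001,
            1.0002, 1.0004, 1.0002, 1.0001, 1.0003]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)      # spread of individual readings
sem = stdev / len(readings) ** 0.5      # standard error of the mean

# More repeats shrink the standard error, so you can be more confident
# in the average -- but repeatability is not accuracy: a consistent
# 11-inch reading of a 12-inch part is precise, not correct.
print(f"mean={mean:.5f}, stdev={stdev:.6f}, sem={sem:.6f}")
```

Because the standard error divides by the square root of the number of readings, ten repeats cut the uncertainty of the average to roughly a third of a single reading's scatter.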

Michelle: So for people who want to improve their use of measurement uncertainty, is there anything you would suggest they do, or resources they should look into?

Matt: Well, I think it starts with having a good understanding of the measurement you're making. You have to have a mental model of the measurement setup so you can visualize all the different interactions that are taking place. That's how it starts: you begin by understanding what you're doing as, say, a three-dimensional model. Then for every interface, every factor, you have to make an estimate: okay, how bad could this be? And then you have to translate that into what it means for your result. Let's say I want to measure gauge blocks, and I figure the parallelism of the gauging faces might be about four micrometers over the face of the gauge block. And how well can I reposition my gauge blocks so that I'm measuring the same point every time? Well, maybe I might be off by a thousandth. How much would that equate to in terms of millionths on the gauge block? And if I'm going to measure this thing ten times, what would I expect my variation to be, based on that parallelism and my ability to repeat my positioning on this artifact?
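The translation step Matt describes can be sketched numerically. All the specific numbers below (the face width, the slope model) are illustrative assumptions, not values from the interview; the idea is just to show how a positioning error combines with a parallelism error to shift the reading.

```python
# Hedged sketch: if a gauging face is out of parallel by some amount
# across its width, the surface effectively slopes, and repositioning
# error moves the contact point along that slope.
face_width_um = 9000.0       # assumed face width: 9 mm, in micrometers
parallelism_um = 4.0         # parallelism error across the face (from the example)
positioning_error_um = 25.4  # off by a thousandth of an inch, in micrometers

slope = parallelism_um / face_width_um          # micrometers of height per micrometer of travel
reading_shift_um = slope * positioning_error_um

# Convert to millionths of an inch (microinches), the unit Matt mentions.
reading_shift_uin = reading_shift_um / 0.0254

print(f"Estimated reading shift: {reading_shift_um:.4f} um "
      f"({reading_shift_uin:.2f} microinch)")
```

Under these assumed numbers the repositioning error contributes well under a microinch, which is exactly the kind of "how much does this factor matter?" estimate the mental model is for.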

So yeah: start with the mental model, then make estimates for how far things could be off, and then translate that into what it means for your end result. Do that for all the major factors, which might be five or six items, and maybe for some of the less significant factors too, so you end up with a dozen factors and a dozen estimates. Then combine those values, usually using root sum squared: take the square root of the sum of the squares of all these values and see what you get. See if that leaves you with something that's acceptable. If it's too small to believe or too big to be useful, you have to look back at it and figure out if you've done something wrong.
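The root-sum-square combination Matt describes is straightforward to compute. The factor names and magnitudes below are illustrative placeholders (assumed to already be in one common unit), not figures from the interview:

```python
import math

# Hypothetical uncertainty contributions for one measurement,
# all expressed in the same unit (e.g. microinches).
contributions = {
    "parallelism of gauging faces": 4.0,
    "repositioning repeatability": 2.5,
    "thermal effects": 1.5,
    "readout resolution": 1.0,
    "probe force variation": 0.8,
}

# Root sum squared (RSS): the square root of the sum of the
# squares of all the individual estimates.
combined = math.sqrt(sum(v ** 2 for v in contributions.values()))

print(f"Combined uncertainty: {combined:.2f}")
```

Note that RSS is dominated by the largest terms: here the 4.0 contribution alone accounts for most of the combined value, which is why the major factors deserve the most careful estimates.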
