Recently I’ve had discussions with folks regarding the application of measurement uncertainty to general measurement results and to the calibration of gages and instruments. In summary, I think we’ve agreed to disagree on some key aspects of it all, and no one has declared outright war on anyone. But then I haven’t checked the property around my company for gun emplacements or land mines, so I may be in for stronger expressions of opinion in the future.

Before I get into hand-to-hand combat on this, gentle reader, I thought it might be helpful to explain a few things about what uncertainty is, and what it is not.

Simplistically speaking—the only way I know—measurement uncertainty is part of every measurement made by any source. It is an estimate of how far from perfection, or the true value, a measurement may be. The operative word here is ‘may.’ A given measurement may have an uncertainty of plus or minus one unit, or less, but in one case the measured value may be exactly the same as the true value while in another it may be as much as one unit away from it. The problem is, we don’t know what the true value is and have to allow for that reality.

This subject pops up frequently when results from calibration laboratories are being discussed or, in some cases, fought over. For many users of such services, the fact that they are paying someone with a fancy lab and the latest toys to measure something means, in their view, that they should get an exact value without qualifiers messing things up. Sadly, this is not the case, and it wouldn’t be any different if the work were done by NIST. What would be different is that the uncertainty noted by NIST for its work would be a lot less than that of most, if not all, commercial labs, but its report would note that uncertainty just the same.

All of this becomes a call to arms when the user wants the lab to make an acceptance decision on, for example, the gage being calibrated. Setting aside the fight over what the tolerances should be, which I covered in my last column, a decision has to be made about how uncertainty will be applied when that call is made.

One of my customers said we were bound to provide that decision and cited ISO 17025 as the source for the claim. But the section he referred to notes that if a lab is providing such a decision, uncertainty will have to be taken into account; it doesn’t stipulate how this is to be done.

Some years ago we automatically issued accept/reject statements on reports and caught a lot of flak from customers. The reason was that once a quality auditor saw a reject statement on a report, the customer got written up for using a rejected gage, even though it might have been suitable for their application. To avoid engineering discussions with the auditor, customers told us they did not want such statements on their reports.

In my lab we do not offer opinions on acceptance as a standard rule. We note the measured value and the related uncertainty and leave it up to the user to decide how it will be applied. If we see something ‘off the wall,’ we call the user’s attention to it, but we don’t make a reject decision.

So what does ‘off the wall’ mean? In our case, if the measured value plus the uncertainty produces a number outside of the specified tolerance, we advise the customer to review that element before using the item. If the measured value is outside the tolerance but, once the uncertainty is applied, the adjusted value could just as easily be within it, we do not flag it.
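To make that rule concrete, here is a minimal sketch, in Python, of the kind of logic described above. It assumes a symmetric expanded uncertainty and simple two-sided tolerance limits, and the names (Result, classify and the wording of the categories) are my own illustration rather than any lab’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Result:
    measured: float     # reported value, e.g. gage deviation in micrometers
    uncertainty: float  # expanded uncertainty (plus or minus U)
    lower: float        # lower tolerance limit
    upper: float        # upper tolerance limit

def classify(r: Result) -> str:
    """Flag results the way the column describes: advise a review when the
    uncertainty band crosses a tolerance limit, but leave the accept/reject
    call to the user."""
    band_low = r.measured - r.uncertainty
    band_high = r.measured + r.uncertainty
    in_tol = r.lower <= r.measured <= r.upper

    if band_low >= r.lower and band_high <= r.upper:
        return "within tolerance even at worst case"   # clear accept
    if band_high < r.lower or band_low > r.upper:
        return "outside tolerance even at best case"   # clear reject
    # The uncertainty band straddles a limit: no accept/reject opinion.
    if in_tol:
        return "measured value in tolerance, but review advised"
    return "measured value out of tolerance, but could be in; not flagged"

# Example: a value of +1.8 with U = 0.5 against tolerance limits of plus or minus 2
print(classify(Result(measured=1.8, uncertainty=0.5, lower=-2.0, upper=2.0)))
```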

In a perfect world, the uncertainty would be added to the measured value and the resulting number used to make the call. An alternative is to note the uncertainty but use the measured value on its own for the decision, assuming the uncertainty will not make much of a difference. Most labs seem to follow this latter practice.
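For what it’s worth, the two approaches can disagree on the same result. A short sketch, again with illustrative numbers and function names of my own choosing, shows where they part company:

```python
def accept_conservative(measured: float, u: float, lower: float, upper: float) -> bool:
    """'Perfect world' rule: apply the uncertainty to the measured value and
    accept only if the worst-case value still falls inside the tolerance."""
    return (measured - u) >= lower and (measured + u) <= upper

def accept_simple(measured: float, u: float, lower: float, upper: float) -> bool:
    """Common practice: note the uncertainty on the report but base the
    decision on the measured value alone."""
    return lower <= measured <= upper

m, u, lo, hi = 1.8, 0.5, -2.0, 2.0
print(accept_conservative(m, u, lo, hi))  # False: 1.8 + 0.5 lands outside +2.0
print(accept_simple(m, u, lo, hi))        # True: 1.8 itself is inside the limits
```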

If the user wants an opinion on acceptance, it’s best to discuss the ramifications involved before the numbers start flying.

As in the political world, cooperation and discussion before the fact go a long way toward eliminating the need to call out the artillery.