You would hope that once a calibration laboratory has been accredited by a recognized agency, you could take the uncertainties shown on their scope of calibration at face value. In theory, an assessor from the agency will have reviewed their procedures and uncertainty budgets to bring such a state of bliss into existence. Unfortunately, I'm too often reminded that this is not the case: while the majority of labs are relatively consistent in the uncertainties or CMC values shown on their accredited scopes, an increasing number are not.
Like anyone in the gage-making and/or calibration field, I get involved in measurement disputes from time to time, the majority of which are resolved to everyone's satisfaction. I also get emails from readers of this column looking for answers to ongoing measurement disputes within their own companies. These requests are usually prompted by one party looking for verification of what they believe is the proper way to do the calibration or, alternatively, looking to shoot down another party's idea of how the world works.
I try to be as diplomatic as I can under the circumstances, and so far no one has threatened my life or mounted a campaign to have me barred from visiting the USA for business or pleasure. But that could change after this column.
The good news is that most calibration facilities worthy of the name are accredited. The bad news is that the technical expertise of some of the assessors appears to be declining in some areas.
This is reflected in uncertainty values on some scopes that are simply not realistic. But because they appear on the lab's official scope, arguing otherwise is a futile effort, especially if the other party to a dispute knows even less.
What concerns me most is seeing scopes for major firms in this field, companies I've known for years, that contain some iffy uncertainty values.
This happened recently when I was checking scopes for two companies I've done business with for a long time. As you've probably guessed, the matter in dispute was simple pitch diameter. Both companies are accredited by the same agency, but one was showing an uncertainty about 40% lower than the other. What made it interesting is that both were using the same techniques and hardware, and both have skilled staff to do the work. Some differences were to be expected, but not 40%. The technique these two labs use is common in the business, and other labs' uncertainties for it fall within a narrow margin of the higher of the two labs I've been discussing.
The lab with the lower uncertainty was claiming a value better than NIST's uncertainty for the same measurement, which I find difficult to believe. It appeared to me that they needed to rework their uncertainty budgets for simple PD measurements. In fact, I also noted an unrealistic uncertainty claim for measuring linear pitch on a thread plug gage, considering they were using an optical comparator for the measurement, which the ASME B1.25 standard notes is not suitable for the job.
I also found it interesting that the same company shows the same uncertainty for adjustable thread ring gage calibration as for simple pitch diameter on a thread plug gage. They use setting plugs for this work and indicate that the uncertainty for the setting plug is transferred to the ring being reset. This ignores the fact that the setting plug's PD is a 'simple PD,' but when the plug is used in this manner it becomes a functional PD, which is a different value.
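To put rough numbers on the point (these are purely illustrative, not taken from either lab's budget): suppose the setting plug's simple PD uncertainty is 30 microinches, and the contributors added by actually setting the ring, the plug's lead and flank angle deviations that make up its functional PD, the fit of the ring on the plug, operator feel and temperature, come to another 40 microinches combined. Added in quadrature, the ring's uncertainty works out to sqrt(30² + 40²) = 50 microinches. Whatever the real numbers are, the combined value has to be larger than the setting plug's simple PD uncertainty alone, so a scope showing the two as equal doesn't add up.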
Maybe I’m just picky, but when battles start over millionths of an inch or parts of a micron, you have to be.
Accrediting agencies should review the scopes they endorse to pick up such discrepancies. Peer review by assessors would also be a good idea, with each assessor accredited by the agency for specific types of calibrations/measurements rather than broad categories such as "dimensional," in the same way the labs they assess have to train and control their staff.