Digital gaging, encompassing the software and hardware used to capture and process digital measurements, has grown dramatically in the past two decades. It has also made quality operations more efficient and effective.
While some calibration challenges are ever-present, calibration, like so many things this past year, has changed because of the pandemic. Manufacturers may have extended their calibration cycles for gages that were not being used, or put calibration off altogether.
It’s not what you’re thinking. It stands for ‘overdue,’ and nothing puts quality people into panic mode faster than realizing they have items in their system that are overdue for calibration, with a quality audit, of course, happening within a day or so.
In keeping with my recent columns summarizing the calibration of various gages, I offer this one as a catchall for the many precision hand tools used for measurements.
This column is the fourth in a series of overviews on gage calibration, intended to give you some idea of what is involved at this level of measurement. A book could probably be written on each subject and still not do the job very well, so I will continue keeping it simple to avoid completely ruining your day.
Starrett is offering a comprehensive white paper that discusses why traditional approaches to measurement data collection are inefficient, error-prone, and unable to support IoT/Industry 4.0.
This situation pops up quite regularly when a relatively simple feature, such as the diameter of a hole in a machined part, doesn’t appear to be right once the part reaches the assembly stage of manufacture. As with similar disputes, the finger-pointing begins and compromises are made, but the problem doesn’t go away.
I’m a Type-A personality with a sense of urgency to explain everything. Give me a little data, and I will apply every statistical tool I can to rationalize and explain an observation. But here is something that I cannot explain: why do we tolerate such poor gages?