One capability of machine vision that goes beyond human vision is its ability to make measurements with high precision and accuracy. This first of three parts discusses techniques allowing vision systems to achieve high-precision measurements. The following two parts discuss challenges to achieving high precision and accuracy and how to deal with them.
We should start by agreeing on terms relating to measurement accuracy. The ISO 5725 specification defines accuracy using terms this article will use. (See Figure 1.)
The standard defines trueness as the fixed error between the average of a series of measurements and the accepted true dimensional value. Trueness is a systematic error that better calibration could reduce, at least in principle.
Precision is the random variation of individual measurements around the trueness. It is generally attributed to unpredictable noise. Most often, precision is assumed to be Gaussian and is expressed by its standard deviation.
Accuracy is the combination of trueness and some number of standard deviations, typically three, of the precision.
This definition addresses the end user’s question of how accurate the measurement is. That is, what is the maximum expected error between any measured dimension and the dimension’s true value?
High Precision Measurement Techniques
The techniques used in machine vision to achieve high-precision measurements rely on a principle in statistics: given a series of N values (x₁ … x_N) representing repeated measurements, where each value in the series has an uncertainty (variation) with standard deviation σ, the average of the N values (x̄) will have a reduced uncertainty σ̄:

σ̄ = σ / √N
Therefore, image processing in machine vision improves the precision, not the trueness. Since calibration is also performed using sub-pixel techniques, it is common for trueness to be a small fraction of a pixel as well. Remember, though: machine vision image processing only provides sub-pixel precision. It does not automatically provide trueness or accuracy at a sub-pixel level.
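As a rough numerical check of the averaging principle (a sketch only; the values of σ and N are arbitrary assumptions), the following simulation averages N noisy measurements many times and compares the observed spread of the averages against the predicted σ/√N:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5          # per-measurement standard deviation, in pixels (assumed)
N = 100              # number of measurements averaged together

# Repeat the experiment many times: each trial averages N noisy measurements
# of a true position of 0.0; we then look at the spread of those averages.
trials = rng.normal(loc=0.0, scale=sigma, size=(10_000, N))
sigma_of_mean = trials.mean(axis=1).std()

print(f"predicted sigma/sqrt(N): {sigma / np.sqrt(N):.4f}")
print(f"observed spread of mean: {sigma_of_mean:.4f}")
```

With N = 100, the uncertainty of the average is about one tenth of the per-measurement uncertainty, in line with the formula.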
Multiple Points Along an Edge
One way to achieve sub-pixel precision is to average independent measurement points. An example of this is fitting points along an edge, as shown in Figure 2. As the number of data points along the edge increases, the uncertainty of the equation fit to the data is reduced below that of any individual measurement point. In this case, the software fits the data points to a model of the edge, typically using a least-squares fit, also known as a regression. The least-squares fit has the effect of averaging.
In many machine vision software systems, this type of measurement is accomplished with a tool often called a rake as shown in Figure 3. The rake has a span along the edge and several tines specifying where to take the individual measurements. The improvement, though, is realized at the center of the measurement range and degrades when moving out from the center toward either end of the measurement range.
The rake can have parallel tines to fit to a straight edge, or radial tines to fit to an arc or a circle. It is possible in principle to fit data points to any shape represented mathematically to reduce the measurement uncertainty.
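A rake with parallel tines can be sketched as a least-squares line fit over a set of noisy edge points. In this minimal example (the tine spacing, edge geometry, and noise level are all assumed values, not from any particular tool), each tine contributes one noisy point, and the fitted line averages out the per-point noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rake: 25 tines spaced 4 px apart along a nearly horizontal edge.
x = np.arange(25) * 4.0
true_slope, true_intercept = 0.02, 120.0   # assumed edge geometry (px)
sigma = 0.3                                # assumed per-tine measurement noise (px)

# Each tine returns one noisy edge position; the least-squares line fit
# (regression) averages the per-point noise across all tines.
y = true_slope * x + true_intercept + rng.normal(0.0, sigma, size=x.size)
slope, intercept = np.polyfit(x, y, deg=1)

print(f"fitted slope: {slope:.4f}, intercept: {intercept:.3f} px")
```

The fitted parameters land much closer to the true edge than any single tine's measurement, which is the averaging effect described above. A circle or arc fit works the same way with radial tines, just with a different model equation.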
Gray-Scale Edge Profile
Another technique to improve measurement precision is to fit the edge point to the gray-scale values crossing the edge – the edge profile (see Figure 4). The true edge position is taken to be the inflection of the gray-scale values along the edge. The inflection corresponds to a peak in the first derivative, or the zero crossing in the second derivative, of the gray-scale values. While it is computationally easier to find a zero crossing than a peak, the second derivative is usually noisier than the first, which decreases precision.
There is a general notion that the image should not reach saturation or be clipped at the bottom end by the camera's black-level setting. This may be true if the edge profile inflection is near the top or the bottom of the gray-scale range. In general, though, if the clipping does not distort the waveform in the vicinity of the inflection, saturation or black-level clipping will not affect the measurement precision.
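A common way to locate the first-derivative peak to sub-pixel precision is three-point parabolic interpolation around the strongest difference in the profile. The sketch below uses a synthetic logistic edge (the edge position, contrast, and profile model are assumptions for illustration, not from the article):

```python
import numpy as np

# Synthetic gray-scale edge profile: a smooth dark-to-bright transition
# whose inflection sits at an assumed sub-pixel position of 10.3.
true_edge = 10.3
x = np.arange(21, dtype=float)
profile = 50.0 + 150.0 / (1.0 + np.exp(-(x - true_edge)))  # logistic edge model

# First derivative of the profile; its peak marks the edge inflection.
d = np.diff(profile)
k = int(np.argmax(d))

# Three-point parabolic interpolation refines the peak to sub-pixel.
offset = 0.5 * (d[k - 1] - d[k + 1]) / (d[k - 1] - 2.0 * d[k] + d[k + 1])
edge_pos = k + 0.5 + offset   # +0.5 because diff[k] lies between samples k and k+1

print(f"estimated edge at {edge_pos:.3f} px (true position {true_edge})")
```

The parabola is only an approximation of the true derivative shape, so a small residual error remains; in practice this residual is one contribution to the trueness error discussed above.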
Calibration Accuracy: Part of Trueness in Measurement
You should realize that calibration itself is subject to errors in both trueness and precision.
Typical calibration is performed with a grid of dots or squares as shown in Figure 6 and Figure 5. The centers of dots or the corners of squares, determined using sub-pixel techniques, are fit to equations that correct for distortion and scaling. Because calibration involves such a large number of data points, its precision should be very good. Most calibration targets are made to very stringent quality and dimensional standards. Even so, the accuracy and long-term stability of the target must be major factors in its selection.
The equation for correcting radial lens distortion is typically a polynomial. See Figure 7 for different types of lens distortion. Software packages use polynomials of different degrees – usually somewhere in the range of 3 to 7 – to correct for lens distortion. A third-degree polynomial corrects well for pincushion or barrel distortion. Some "low-distortion" lenses, such as telecentric lenses, exhibit a distortion called wave or mustache distortion, which requires a fifth- or seventh-degree polynomial to correct.
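The idea can be sketched as follows: given known target radii and their measured (distorted) radii, fit a polynomial that maps distorted radius back to true radius. The barrel-distortion model and its coefficient below are assumptions for illustration only:

```python
import numpy as np

# Hypothetical barrel distortion: r_d = r * (1 + k1 * r**2), with k1 < 0
# pulling points toward the image center (normalized radial coordinates).
k1 = -0.08

def distort(r):
    return r * (1.0 + k1 * r**2)

# Calibration in reverse: fit a third-degree polynomial mapping the measured
# (distorted) radii of target features back to their known true radii.
r_true = np.linspace(0.0, 1.0, 50)
r_dist = distort(r_true)
coeffs = np.polyfit(r_dist, r_true, deg=3)

# Apply the correction to a new distorted radius and check the result.
r_corrected = np.polyval(coeffs, distort(0.8))
print(f"distorted radius {distort(0.8):.4f} corrected to {r_corrected:.4f}")
```

For mild barrel distortion like this, a third-degree polynomial recovers the true radius to well within the sub-pixel errors discussed earlier; wave/mustache distortion would need the higher-degree fit described above.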
You need to make sure your software’s calibration approach is appropriate for the lens you are planning to use.
Any errors in calibration contribute to trueness in the measurement. Calibration errors come from the deviation of the calibration target from perfect dimensions, the sub-pixel precision achievable in resolving calibration target features, and the ability of the generated equation to correct for distortion.
Conclusion
The techniques allowing high-precision measurements are well understood and based on solid principles. Calibration is critical to accurate measurements.
The next part of this series will deal with challenges that can detract from accurate measurements and how these challenges can be overcome.