Like the fighter pilots in the classic movie “Top Gun,” machine vision applications have always had a “need for speed.” This is understandable, as faster-moving conveyor belts and motion control systems mean more items per hour can be inspected, measured, aligned, packed, or otherwise processed down the line. The result is higher productivity, and that is the name of the game.
In the early days, vision systems based on analog video standards were limited to 25-30 triggers per second—the maximum frame rate of those cameras (25 fps for PAL, 30 fps for NTSC). Fast shutters (short exposure times) could be used to reduce motion blur, but the fundamental speed of operation could never exceed the camera’s frame rate.
In recent years, new imagers, interface technologies, and region-of-interest (ROI) readout functions have dramatically increased the speed at which vision systems can operate. Combined with sufficient processing power, this makes it possible to trigger hundreds or even thousands of images per second in a high-speed inspection setup.
How speed affects quality
The high speed of modern machine vision systems, however, brings a significant challenge: maintaining image quality. Higher frame rates (or line rates, in the case of line scan applications) mean shorter exposure windows. At the same time, faster movement of the items under inspection requires extremely fast shutter times to avoid motion blur. The net result is very little time to capture light reflected from the scene.
This “light-starved” condition can degrade image quality in several ways. The short exposure means a darker image—fewer photons captured by each pixel, hence fewer electrons produced. More importantly, this “signal” sits much closer in level to the random “noise” produced by the pixels themselves. The result is an image that looks “grainy” to the eye and can be especially problematic for machine vision processing routines looking for small defects or trying to discern specific edges or features within the image.
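To see how quickly the numbers deteriorate, consider the rough sketch below, which models a pixel’s signal as collected photoelectrons and its noise floor as shot noise plus read noise. All constants (photon flux, pixel size, quantum efficiency, read noise) are illustrative assumptions, not figures for any particular sensor:

```python
# A minimal sketch of why short exposures are "light-starved": the
# collected signal shrinks linearly with exposure time, while the
# noise floor does not shrink as fast. All constants are assumptions.
import math

PHOTON_FLUX = 1.0e6   # assumed photons / um^2 / s reaching the sensor
PIXEL_AREA  = 49.0    # um^2 (a 7 um x 7 um pixel)
QE          = 0.5     # assumed quantum efficiency
READ_NOISE  = 10.0    # assumed read noise, electrons (rms)

for exposure_s in (1e-3, 1e-4, 1e-5):
    signal = PHOTON_FLUX * PIXEL_AREA * QE * exposure_s   # electrons
    noise  = math.sqrt(signal + READ_NOISE**2)            # shot + read noise
    snr_db = 20 * math.log10(signal / noise)
    print(f"{exposure_s * 1e6:6.0f} us exposure: {signal:7.0f} e-, SNR {snr_db:4.1f} dB")
# ->  1000 us: ~24,500 e-, ~43.9 dB
# ->   100 us:  ~2,450 e-, ~33.7 dB
# ->    10 us:    ~245 e-, ~22.4 dB
```

In this simple model, cutting the exposure from 1 ms to 10 µs costs more than 20 dB of signal-to-noise ratio—exactly the graininess described above.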
While this might lead some to shy away from pushing the speed limit, there are a few basic strategies you can use to achieve both high quantity and high quality in your machine vision application.
Use larger pixels for line scan imaging
Line scan applications have wrestled with the “exposure time” issue for years. That is because the nature of a line scan camera means that tens of thousands of individual lines need to be captured every second in order to maintain continuous coverage of a fast-moving “web” of paper or steel rolls, or a conveyor belt filled with fruit, rice, cotton, or industrial parts. Hence, the integration time for each line is very short and can easily lead to the image quality issues described above.
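As a rough illustration, the required line rate—and therefore the hard ceiling on integration time—falls directly out of the web speed and the desired resolution along the direction of travel. The numbers below are assumptions chosen only to show the arithmetic:

```python
# Hedged sketch: line rate and maximum per-line exposure for a moving
# web, under assumed (illustrative) speed and resolution requirements.
web_speed_m_s = 2.0    # assumed web/conveyor speed
resolution_mm = 0.1    # assumed object-plane pixel size along travel

line_rate_hz   = web_speed_m_s * 1000.0 / resolution_mm  # lines per second
line_period_us = 1e6 / line_rate_hz                      # max integration time

print(f"Required line rate: {line_rate_hz:,.0f} lines/s")
print(f"Max exposure per line: {line_period_us:.1f} us")
# -> 20,000 lines/s, so each line gets at most 50 us of light
```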
For most, the only option has been to increase the brightness of the lighting, a costly solution that isn’t always practical. An alternative is to look for a line scan camera offering larger pixels, provided it can still meet your speed requirements.
For example, some 2K line scan cameras (2048 pixels per line) offer pixels that are only 7 microns square (7 µm x 7 µm). Moving to a camera with 14 µm pixels offers 4X the collection area (196 µm² vs. 49 µm²), improving the light sensitivity and the signal-to-noise ratio. A few new line scan cameras are even available with 20 µm pixels, offering more than 8X the sensitivity of the 7 µm cameras, yet still offering a top speed of 80,000 lines per second.
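The sensitivity claim is simple geometry—collection area grows with the square of the pixel pitch—as this quick calculation shows:

```python
# Pixel light-collection area scales with the square of the pixel pitch.
BASELINE_UM = 7.0
for pitch_um in (7.0, 14.0, 20.0):
    area = pitch_um ** 2
    print(f"{pitch_um:4.1f} um pixel: {area:5.0f} um^2, "
          f"{area / BASELINE_UM**2:.1f}x the 7 um baseline")
# ->  7.0 um:  49 um^2 (1.0x)
# -> 14.0 um: 196 um^2 (4.0x)
# -> 20.0 um: 400 um^2 (8.2x)
```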
By capturing more light in the same amount of time, large pixels provide a bigger signal charge and a better image. Some cameras even provide a switchable function where the user can adjust the “depth” or “fill level” of the pixels to maximize dynamic range when light is plentiful, or contrast when light levels are low.
Minimize CMOS shutter leakage
Much of the increase in camera speed—both line scan and area scan—has come from a new generation of CMOS imagers. These imagers not only run much faster than comparable CCDs but are also equipped with global shutters, enabling them to capture images of fast-moving objects without the spatial distortion seen in older “rolling shutter” CMOS imagers.
As with CCDs, however, a global electronic shutter requires transferring the image information to a buffer, where it is held for readout while light continues to fall on the imager for the remainder of the frame period. The shorter the shutter time, the longer the charge must wait in the buffer for the frame period to end and readout to begin.
Shutter leakage refers to electrical charge that “leaks” into the image buffer during this waiting period. The more leakage, the more “washed out” the image becomes, which can reduce the effectiveness of machine vision processing software.
The short shutter times used with fast-moving objects not only mean more buffering time, but also require brighter lighting to achieve proper exposure—making it imperative to minimize shutter leakage for the best image quality. Look for cameras with good parasitic light sensitivity (PLS) ratios, the term commonly used to quantify CMOS shutter leakage. Lower-quality imagers typically have PLS ratios of 1:1000 or worse, meaning at least one out of every 1,000 photons that strike the sensor while the shutter is closed “leaks” into the stored image information. Higher-quality sensors have PLS ratios of 1:3000 or better, while the best can achieve ratings as high as 1:50,000.
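The impact of a given PLS ratio depends on how long the stored charge must wait relative to the exposure itself. The back-of-the-envelope model below assumes constant illumination for the whole frame period and is purely illustrative:

```python
# Hedged estimate of shutter leakage: light keeps arriving while a
# short exposure waits out the rest of the frame period, and a 1:PLS
# fraction of it leaks into the stored charge. Numbers are assumptions.
frame_period_s = 1e-3    # assumed 1,000 fps
exposure_s     = 20e-6   # assumed fast shutter to freeze motion
wait_s         = frame_period_s - exposure_s

for pls in (1_000, 3_000, 50_000):
    leakage_pct = (wait_s / exposure_s) / pls * 100.0
    print(f"PLS 1:{pls:,}: leakage ~ {leakage_pct:.2f}% of the real signal")
# -> 1:1,000 ~4.90%   1:3,000 ~1.63%   1:50,000 ~0.10%
```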
Apply analog front-end gain when possible
The best image quality is achieved when little or no gain is applied to an image. This is because adding gain amplifies both the signal and the noise, making the noise increasingly visible in the image.
But sometimes, adding a brighter light source is not possible or desirable. In these instances, applying gain may be the only way to achieve a “normal” exposure due to the high frame rate and fast shutter being used.
Most modern cameras offer only digital gain. This is applied after the electrical voltage from each pixel on the imager has been converted into a numeric value on a scale set by the image’s bit depth (8-bit = 0-255, 10-bit = 0-1023, etc.). Since only whole numbers are allowed, amplifying these values introduces rounding errors known as quantization errors. This added quantization noise further degrades image quality.
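The effect is easy to demonstrate. The short sketch below applies +7 dB of purely digital gain to every possible 8-bit code and counts how many output codes survive the rounding:

```python
import numpy as np

# Demo of quantization from purely digital gain: amplifying 8-bit
# codes and re-rounding leaves unreachable gaps in the output values.
codes = np.arange(256, dtype=np.float64)       # every possible 8-bit value
gain  = 10 ** (7 / 20)                         # +7 dB as a linear factor (~2.24x)
out   = np.clip(np.round(codes * gain), 0, 255)

print(f"Output codes actually used: {np.unique(out).size} of 256")
# -> ~115 of 256; the gaps created by rounding show up as
#    quantization noise / posterization in the final image.
```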
Quality can be improved, however, by selecting a camera that allows some measure of analog gain to be applied before the sensor output is digitized. Because analog gain operates on the raw voltage signal rather than on digital approximations, the signal can be amplified far more precisely, without introducing the quantization errors described above.
To keep things manageable, one such technique allows the user to first apply one of several base levels of analog gain to an image (+6 dB or +12 dB), then use only a small amount of digital gain, if needed, to fine-tune the exposure.
For example, to add +7 dB of gain, the user might first apply +6 dB of analog gain and then only an additional +1 dB of digital gain. This keeps quantization noise to a minimum, enabling gain to be applied with the least possible degradation of image quality.
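A minimal sketch of this split, assuming a hypothetical camera that offers +6 dB and +12 dB analog base steps:

```python
def split_gain(total_db, analog_steps=(0.0, 6.0, 12.0)):
    """Put as much of the requested gain as possible in the analog
    domain, leaving only a small digital remainder for fine-tuning.
    The +6/+12 dB analog steps are an assumed camera feature set."""
    analog = max(step for step in analog_steps if step <= total_db)
    return analog, total_db - analog

analog_db, digital_db = split_gain(7.0)
print(f"+7 dB total -> +{analog_db:.0f} dB analog + +{digital_db:.0f} dB digital")
```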
This same principle applies to color images, where gain is often used for white balancing. High-frame-rate cameras that can apply analog gain during white balancing—especially on a per-color-channel basis—minimize noise issues for the best possible color image quality.
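As a hedged illustration, per-channel white balance gains can be derived from the mean response of a neutral gray reference; expressing them in dB shows how much of each correction could be pushed into the analog domain. The channel means below are made-up values:

```python
import math

# Illustrative gray-patch channel means (assumed, not measured data).
r_mean, g_mean, b_mean = 96.0, 128.0, 140.0
target = max(r_mean, g_mean, b_mean)   # gain weaker channels up to the strongest

for name, mean in (("R", r_mean), ("G", g_mean), ("B", b_mean)):
    gain = target / mean
    print(f"{name}: x{gain:.2f} ({20 * math.log10(gain):+.1f} dB)")
# -> R: x1.46 (+3.3 dB), G: x1.09 (+0.8 dB), B: x1.00 (+0.0 dB)
```

Each channel’s correction could then be divided into an analog base step plus a small digital trim, exactly as in the gain-splitting sketch above.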
Conclusion
New cameras and interfaces have created the potential for higher throughput in machine vision applications. This potential can best be realized if system developers take the proper steps to maintain high image quality.