You may not have noticed it, but there’s been a trend creeping into most of our lives. Its origins are rooted in consumer expectation. Consumers want to hold up their cell phones and snap the perfect selfie with the sun setting over the beach behind them. And they want to create panorama shots that challenge the best of wide angle lenses.
What do these two scenarios have in common? The answer lies in how engineers solve the hard problems these common imaging conditions create.
In the case of the setting sun, the sun and sky are often thousands of times brighter than the subject of the selfie. The dynamic range of the small imagers used in cell phones is easily exceeded, yet the consumer expects to clearly see both their face and the brilliantly lit sky. To meet this expectation, many of the top cell phones now snap two or three pictures in rapid succession, each at a different exposure level. Invisible to the user, an algorithm in the background picks the most usable pixels in each image, weights them relative to each exposure level, and combines them into a single high dynamic range (HDR) image with a total dynamic range greater than is possible with a single capture.
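As a rough illustration, here is a minimal Python sketch of this kind of multi-exposure merge using OpenCV's Mertens exposure fusion, which weights each pixel by how well exposed it is. The file names are placeholders for three bracketed shots of the same scene; this is one way to do the merge, not necessarily the method any particular phone uses.

```python
import cv2
import numpy as np

# Placeholder file names: three bracketed shots of the same scene.
exposures = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens fusion scores each pixel for contrast, saturation, and
# well-exposedness, then blends the stack -- no exposure metadata needed.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float32 output, roughly in [0, 1]

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```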
Wide angle panorama images are created by grabbing many shots as the camera is swept left to right. An algorithm aligns each image to the previous one, blends the seam between the two, and then repeats this for the next image in the sequence. The final wide angle panorama is a composite built from the best pixels of all the aligned images, with a wider field of view (FOV) than would be possible with a single shot from a fixed imager and lens.
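A sketch of that stitching step, using OpenCV's high-level Stitcher, which handles the feature matching, alignment, and seam blending described above; the frame file names are hypothetical.

```python
import cv2

# Hypothetical frames captured while sweeping the camera left to right.
frames = [cv2.imread(f"frame_{i}.jpg") for i in range(5)]

# Stitcher finds matching features, aligns each frame to its neighbors,
# and blends the seams into one wide field-of-view composite.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```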
These techniques, and many more, are part of a trend toward using multiple images to compute a single output image, a field known as computational imaging. Computational imaging has slowly crept its way into the cameras on smartphones and other portable devices. Often unaware that the great image they just “snapped” is composed from multiple captures taken in rapid succession, device users enjoy the exceptional images they expect in difficult situations without ever confronting the technical limitations of the small format imagers used in most portable devices.
Advances in technology and the latest high speed CMOS cameras are making many computational imaging techniques viable for machine vision applications. System designers can approach difficult imaging problems in new ways, capturing multiple images and processing the computed “super image” to build more robust solutions. Computational imaging can improve the capabilities of a camera or add capabilities that were not possible at all, producing better, or previously impossible, images for machine vision systems at lower cost.
A number of factors make computational imaging more attractive for machine vision than ever before. The starting point is computing power. Increasing computing power is steadily expanding the capabilities of smart cameras and embedded computing, and many manufacturers now offer vision-specific embedded devices. With more processing capability and speed, it is now practical to implement computational imaging techniques directly in the front end processing. System builders can choose the right amount of processing power, and additional capacity is cheap.
The other main factor is higher performance CMOS imagers. Now commonplace in the market, the latest crop of high speed machine vision cameras ranges from VGA-resolution models running at thousands of frames per second (fps) to 12 megapixel cameras capable of more than 160 fps. With trends toward higher speeds and resolutions coupled with improvements in sensitivity, high speed CMOS is well suited to the needs of the developing computational imaging market.
Computational imaging covers a wide range of techniques and processes. These include photometric stereo (shape from shading); HDR (high dynamic range) imaging; full color, full resolution image acquisition with monochrome cameras using sequential RGB lighting; extended depth of field (DOF) using sequences of images with varying focal points (Z-axis slicing); multi-spectral imaging with sequences of images timed to lights with varying spectrums; singly triggered, multiple camera acquisition for 360° object capture; and defect detection with structured lighting. These and many more possibilities allow vision system builders to choose the process that yields the most beneficial image for the application at hand.
Computational imaging is easier than ever to implement in almost any vision system. Not sure of the benefits of computational imaging in practical machine vision solutions? Consider these three straightforward, easy-to-implement techniques next time you face a tough machine vision application.
Computational Imaging Technique 1 – Photometric Stereo (PMS)
Photometric stereo allows the user to separate the shape of an object from its 2D texture. It works by firing segmented light arrays from multiple angles and then processing the resulting shadows in a process called “shape from shading.” It is useful for detecting small surface defects and for 3D surface reconstruction. PMS is a height-driven process that can enhance surface details such as scratches, dents, pin holes, raised printing, or engraved characters. Because the final image is a computed surface based on shading information, surface coloring and features without height are removed. This can make visually noisy or highly reflective surfaces easier to inspect. The capability is rapidly gaining popularity in the machine vision market, and numerous machine vision suppliers now offer photometric stereo tools.
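To make the “shape from shading” step concrete, here is a minimal sketch of the classic Lambertian photometric stereo solve in Python with NumPy. It assumes a matte (Lambertian) surface, distant point lights, and known light directions; the function name and inputs are illustrative, not a vendor API.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from shading.

    images:     list of K grayscale float arrays (H x W), one per light angle.
    light_dirs: K x 3 array of unit vectors pointing toward each light.
    """
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])   # K x (H*W) intensities
    L = np.asarray(light_dirs, dtype=np.float64)      # K x 3 light matrix

    # Lambertian model: I = L @ (albedo * normal). Solve for every pixel
    # in one batched least-squares call; g holds the scaled normals.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)         # 3 x (H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)            # unit surface normals

    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

The recovered normal map carries the height-driven detail (scratches, dents, raised print), while the albedo image isolates the flat 2D texture that PMS removes.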
Computational Imaging Technique 2 – High Resolution Color
Using a monochrome camera with a CCS full-color ring light, which has three-channel control of red, green, and blue output, the user can generate full resolution RGB color images at practical data rates. By grabbing a sequence of three monochrome images correlated to red, green, and blue strobes, a full color composite at the full monochrome resolution can be created by aligning the images and combining the red, green, and blue values at each pixel. The resulting composite color images are much sharper than those from a single capture with a Bayer or mosaic color camera. The images are similar to those from three-CCD cameras, but without the expense, special prism, or lens limitations, and at much higher resolutions than available three-CCD cameras offer.
The advantage of this method is the ability to have the best of both worlds: complete color information at the full pixel resolution of the imager. Bayer color imagers capture the color information, but the interpolation used to demosaic it spreads each color sample across several pixels, costing spatial resolution.
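A minimal sketch of the compositing step, assuming the part is stationary between strobes so the three frames are already registered; the file names are placeholders.

```python
import cv2

# Placeholder files: three monochrome captures, one per LED strobe color.
r = cv2.imread("strobe_red.png", cv2.IMREAD_GRAYSCALE)
g = cv2.imread("strobe_green.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("strobe_blue.png", cv2.IMREAD_GRAYSCALE)

# Every channel is sampled at every pixel, so no Bayer interpolation is
# needed and the composite keeps the imager's full spatial resolution.
color = cv2.merge([b, g, r])  # OpenCV stores color images in BGR order
cv2.imwrite("full_res_color.png", color)
```

If the part moves between strobes, an alignment step (for example, a translation estimated by cross-correlation) would be needed before merging, as described above.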
Computational Imaging Technique 3 – High Dynamic Range (HDR) Imaging
All imagers have a limit on the ratio of the brightest object to the darkest object that can be distinguished in a single image; this is called the dynamic range. Many machine vision applications involve bright, shiny, or dark objects that challenge the dynamic range of the camera. In these cases, a series of images at different exposure levels can be captured and merged into a single HDR image containing all the detail the inspection requires.
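Because machine vision cameras typically have a near-linear response, a simple radiance-style merge is often sufficient. Here is an illustrative sketch that scales each bracketed frame by its exposure time and trusts mid-range pixels most; the function name and the linear-sensor assumption are mine, not a specific camera API.

```python
import numpy as np

def merge_hdr_linear(images, exposure_times):
    """Merge bracketed frames from a linear-response sensor into one
    HDR radiance map.

    images:         list of float arrays scaled to [0, 1]
    exposure_times: matching exposure times (e.g., in milliseconds)
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    weights = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        # Hat-shaped weight: trust mid-range pixels, down-weight pixels
        # that are nearly black or nearly saturated in this exposure.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)        # divide by exposure time -> radiance
        weights += w
    return acc / np.maximum(weights, 1e-8)
```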
So next time you’re faced with a difficult imaging scenario, think not only about getting the right image, but about which computational imaging techniques might get you the best possible image. The examples mentioned here are only a few of the many possibilities. Where possible, think not in single images, but in what a series of images makes possible. Then choose a lighting system and a camera, built around the latest generation of high speed CMOS imagers, that meet your computational imaging and application needs.