Frame grabbers are essential components in machine vision systems, providing ultra high-speed, high-bandwidth image capture from one camera or from multiple cameras simultaneously. A new generation of frame grabbers can now aggregate images and transfer the data to PCs for analysis with minimal or no latency, helping manufacturers meet zero-defect inspection goals with greater precision while still reducing the overall time and cost of inspection. Defects are caught earlier in the manufacturing process, eliminating costly waste and maximizing production yield.
Before diving deeper into the state of frame grabber technology, let’s review the use of machine vision systems in manufacturing and quality control. These applications generally fall into three categories: measurement, identification, and detection.
Measurement is the inspection of component dimensions and shapes, in 2D or, increasingly, 3D imaging. Identification covers reading tracking barcodes, alignment marks, and 2D codes on parts, or performing OCR, among other applications. Detection means finding defects in a part. In flat panel displays, for example, this could involve locating dead columns, mura, dead pixels, or scratches. Product flaws are normally random, so algorithms seek out any changes in colors, textures, connections, or patterns. If a part is rejected by algorithmic parameters, it can be removed before it moves further down the assembly process and wastes additional resources. If the system locates and identifies repairable defects, the part can be re-routed for possible repair. If the defect is not repairable, such as a cracked panel screen, it is of critical importance to trace its source in the upstream production process.
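As a rough illustration of how such detection logic can work, the sketch below compares a captured frame against a “golden” reference image and rejects the part when enough pixels deviate beyond a tolerance. This is a minimal, hypothetical example using NumPy only; real inspection systems use far more sophisticated algorithms, and the thresholds, image sizes, and defect criteria here are illustrative assumptions, not a description of any specific product.

```python
import numpy as np

def find_defects(frame: np.ndarray, reference: np.ndarray,
                 diff_threshold: int = 30, min_defect_pixels: int = 50):
    """Toy defect check: compare a grayscale frame to a golden reference.

    Returns (is_defective, defect_mask). Thresholds are illustrative only;
    production systems tune them per product and lighting setup.
    """
    # Absolute per-pixel difference between the captured frame and the reference
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))

    # Mark pixels whose deviation exceeds the allowed tolerance
    defect_mask = diff > diff_threshold

    # Reject the part only if enough pixels deviate (ignores isolated noise)
    is_defective = defect_mask.sum() >= min_defect_pixels
    return is_defective, defect_mask

# Example with synthetic data: a uniform panel with a small simulated scratch
reference = np.full((480, 640), 128, dtype=np.uint8)
frame = reference.copy()
frame[200:203, 100:220] = 40          # dark streak standing in for a scratch
bad, mask = find_defects(frame, reference)
print("reject part:", bad, "- defective pixels:", int(mask.sum()))
```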
In addition to those three categories are vision applications oriented towards troubleshooting production machinery. Conveyance transfer points, filling, capping, labeling, stamping, indexing, stacking, inserting, cutting and trimming are just a few of the operations that can occur in milliseconds and require hundreds to thousands of frames per second to capture and analyze the motion components of the machines.
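To make those numbers concrete, the following back-of-the-envelope sketch estimates the frame rate and exposure time needed to observe a fast machine motion without excessive blur. The formulas are common rules of thumb, and the speed, field of view, blur tolerance, and event duration are assumed values chosen purely for illustration.

```python
def capture_requirements(object_speed_mm_s: float,
                         field_of_view_mm: float,
                         sensor_width_px: int,
                         max_blur_px: float = 1.0,
                         frames_per_event: int = 10,
                         event_duration_s: float = 0.005):
    """Rule-of-thumb estimates for imaging a fast mechanical event.

    All inputs are illustrative assumptions, not values from any standard.
    """
    px_per_mm = sensor_width_px / field_of_view_mm
    speed_px_s = object_speed_mm_s * px_per_mm

    # Exposure short enough that motion smears across at most max_blur_px pixels
    max_exposure_s = max_blur_px / speed_px_s

    # Frame rate high enough to capture several frames within the event
    min_fps = frames_per_event / event_duration_s
    return max_exposure_s, min_fps

# Example: a mechanism moving at 2 m/s across a 100 mm field of view,
# imaged on a 2048-pixel-wide sensor, for a 5 ms stroke
exposure, fps = capture_requirements(2000, 100, 2048)
print(f"max exposure ~{exposure * 1e6:.1f} us, frame rate >= {fps:.0f} fps")
```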
Despite the amazing advances in machine vision, some industries still cling to the oldest form of inspection: the human eye. Machine vision holds a number of advantages over human inspection. Going back to the example of a flat panel display, the human eye is subjective when it comes to evaluating brightness and color. Human inspectors typically miss subtle variations in brightness and color when given only a short time to look for defects, yet those variations will be visible to a consumer viewing the display for an extended period. Only machine vision can perform truly objective analysis. With the advent of larger sensors and more powerful frame grabbers and PCs, more data can be processed faster than could ever be achieved by a human inspector.
Introduced in 1993, the BitFlow Raptor was capable of interfacing to analog and digital industrial cameras.
Frame Grabbers: Then and Now
Frame grabbers have a long history dating back to the early days of machine vision in the 1960s when they performed the essential task of acquiring NTSC and PAL output signals from analog machine vision cameras. Analog output would be converted to digital for memory storage, with the frame grabber doing the de-interlacing and re-formatting.
In the 1990s the industry all but wrote off frame grabbers with the advent of USB, FireWire, and Ethernet direct-to-PC interfaces. More powerful processors inside host computers allowed the PC to do both the buffering and the processing. Major advancements certainly… but not enough to keep pace with bandwidth-hungry megapixel cameras acquiring 120 or more frames per second. System integrators re-discovered frame grabbers as the ideal solution for buffering huge megapixel images and for offloading from the host PC the reconstruction and enhancement of images, along with the pre-processing needed to minimize latency.
Anything that vision system integrators can do to limit the considerable burden on the host PC contributes directly to manufacturing throughput. Mostly based on the Camera Link interface, the reborn frame grabber delivered the higher sustained throughput that integrators needed, while providing longer cable distances, less heat generation, and a more compact footprint than computer peripheral interfaces could deliver.
Since then the frame grabber has continued to evolve, predominantly following the path of new machine vision interfaces. The 1990s and 2000s saw upgraded versions of Camera Link, from full to deca mode, eventually enabling data rates up to 850 MB/s. However, sensor development continued to accelerate and demanded a revisit. With no room left for expansion, in 2008 a consortium of six companies from different sectors of the vision industry, working under the guidance of the Japan Industrial Imaging Association, addressed integrators’ requirements for higher bandwidth and longer cable lengths with a new interface design: CoaXPress (CXP). Over the next few years this interface was developed into a standard with input from several additional companies, and in 2011 CXP 1.0 was adopted as the newest vision standard.
CoaXPress versions 1.1 and 2.0 both allow multiple cameras to be linked to a single CXP frame grabber over long, inexpensive, and very robust coaxial cables with very low latency and exact synchronization. Cameras of various resolutions, set at high or low frame rates, can be connected to a single CXP frame grabber, each performing a different inspection task. Even CMOS and CCD cameras can be mixed in the configuration. In some instances, additional sensors are required for hyperspectral imaging of the parts under inspection. To the frame grabber, that’s simply more data: CXP frame grabbers do not discern between image colors or wavelengths; it’s all bits and bytes. The capability to handle large volumes of data is where a CXP frame grabber excels in the high-speed, high-data market. CXP 1.1 supports a maximum data rate of 6.25 Gbps per link, approximately six times faster than GigE Vision and 40 percent faster than USB3 Vision, while CXP 2.0 adds two more speeds: 10 Gbps (CXP-10) and 12.5 Gbps (CXP-12). The higher speed, usually dictated by a camera’s data rate, allows customers to inspect more products per minute and/or helps freeze the motion of fast-moving objects while limiting blur.
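As a rough illustration of what those link rates mean in practice, the sketch below estimates the frame rate a camera could sustain over an aggregate CXP connection. The encoding-efficiency figures (roughly 80 percent for the 8b/10b coding of CXP 1.x speeds, roughly 97 percent for the 64b/66b coding of CXP-10/CXP-12) and the example resolution are assumptions made for this calculation; packet and protocol overhead is ignored, so real throughput will be somewhat lower.

```python
def approx_max_fps(width_px: int, height_px: int, bits_per_px: int,
                   link_gbps: float, num_links: int = 1,
                   encoding_efficiency: float = 0.8) -> float:
    """Approximate sustained frame rate over a CoaXPress connection.

    encoding_efficiency ~0.8 reflects 8b/10b coding (CXP 1.x speeds);
    use ~0.97 for the 64b/66b coding of CXP-10/CXP-12 (CXP 2.0).
    Packet/protocol overhead is ignored, so treat results as upper bounds.
    """
    payload_bps = link_gbps * 1e9 * num_links * encoding_efficiency
    bits_per_frame = width_px * height_px * bits_per_px
    return payload_bps / bits_per_frame

# Example: a hypothetical 2048 x 1088, 8-bit camera aggregated over four links
print(f"4 x CXP-6:  ~{approx_max_fps(2048, 1088, 8, 6.25, 4, 0.8):.0f} fps")
print(f"4 x CXP-12: ~{approx_max_fps(2048, 1088, 8, 12.5, 4, 0.97):.0f} fps")
```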
One of the arguments against CXP is that it requires a frame grabber, an expense that USB3 Vision and GigE Vision dodge. This mistakenly gives the impression that a multiple-camera system based on CXP is more complex and expensive. Yet it ignores the fact that the load on the PC increases significantly with USB3 Vision and GigE Vision. Whatever savings are realized with USB3 Vision are quickly negated by the cost of additional computing resources, and an expensive network interface card must be purchased for GigE Vision to achieve consistently reliable operation. In a high-speed inspection system, manufacturers cannot afford downtime to constantly repair or replace items, nor can they afford to miss images due to CPU interrupts or missed triggers. This is where the frame grabber excels.
Ever faster machine vision cameras using modern interfaces are needed to ensure consistent quality in high-speed production lines. Source: Mikrotron
Moving Forward: Smart Frame Grabbers
The Internet of Things is made up of smart devices, which are essentially electronics made intelligent with computing and connectivity to the Internet. Likewise, the Industrial Internet of Things, or IIoT, consists of smart devices using sensors that gather and share production data from the edge of the network. Manufacturers gain unprecedented visibility into their operations with two-way data mobility between local and remote assets, as well as suppliers.
Machine vision is an ideal technology for the IIoT since its very purpose is to supply greater awareness of its surroundings. In the context of the frame grabber, a “smart” frame grabber featuring embedded or board-level image processing could be integrated directly into a quality inspection line, where it would make decisions at the edge that are currently performed by the host PC and its software, rather than transferring only raw image data. Valuable operational data could then be shared in real time across the rest of the network directly from the frame grabber. Such a device could also convert cameras into GigE Vision devices, so that a uniform data set could be used across the network.
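A conceptual sketch of that edge-processing idea is shown below: instead of shipping every raw frame to the host, an on-board routine reduces each frame to a compact pass/fail record that can be shared across the network. This is purely illustrative Python running on a PC; the inspection rule, record format, and size comparison are assumptions, not a description of any particular smart frame grabber’s firmware.

```python
import json
import numpy as np

def inspect_at_edge(frame: np.ndarray, frame_id: int,
                    brightness_limits=(100, 160)) -> bytes:
    """Reduce a raw frame to a small JSON result record.

    The pass/fail rule (mean brightness within limits) is a placeholder
    for whatever logic an embedded frame grabber might actually run.
    """
    mean_level = float(frame.mean())
    passed = brightness_limits[0] <= mean_level <= brightness_limits[1]
    record = {"frame": frame_id, "mean_level": round(mean_level, 1),
              "result": "pass" if passed else "fail"}
    return json.dumps(record).encode("utf-8")

# Synthetic 2048 x 1088 8-bit frame standing in for a camera acquisition
frame = np.random.default_rng(0).integers(0, 256, (1088, 2048), dtype=np.uint8)
result = inspect_at_edge(frame, frame_id=42)
print(result.decode())
print(f"raw frame: {frame.nbytes} bytes, result record: {len(result)} bytes")
```

The point of the comparison is that only a few dozen bytes of operational data need to traverse the network per frame, rather than megabytes of raw pixels.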
Frame grabber technology is accelerating in adoption and diversity of standards, making it possible for integrators to solve high-speed, complex vision applications and break new ground in vision-guided robotics, autonomous navigation, and other demanding applications. V&S