Learn the fundamentals of vision guided robotics.

Vision guided robotics is a significant factor in flexible factory floor automation. Source: Aptúra


In the time since the first mechanical arm was installed in an automotive plant in 1961, robots have revolutionized industrial automation. The programmable robotic arm is the keystone of many efficient, cost-effective and flexible automation processes. As valuable as robotics has become in many diverse applications, the value, capability and flexibility of this technology have been dramatically increased by the introduction of machine vision for guidance. Vision guided robotics (VGR) is the use of machine vision technology to provide coordinates that allow a robot, with an appropriate gripper, to pick up a part with some degree of accuracy when the original position of the part is not exactly known, or conversely, to move a captured part precisely to a target location. Without vision, a robot moves blindly but repeatably to a known position within the arm's reach.

With VGR, machine vision provides the robot with a position for its motion. This added capability represents the next level of flexible automation: the reduction, or even elimination, of the need for hard fixturing of parts.

Some of the ways in which VGR is successfully implemented include:

• Discrete part pickup and placement, such as unloading parts from a conveyor, placing parts and assembly.

• Process guidance, including sealant application, welding, cutting, trimming, de-flashing, de-gating, painting, finishing and repair.

• Picking randomly or semi-randomly oriented parts from bins or boxes.

VGR has long been a technology driver for automation in the electronics/electrical components industry, where, for example, machine vision guides the placement of small components during printed circuit board assembly. But with continuing advances in machine vision and robotic technologies, VGR now is in widespread use in the automotive, pharmaceutical, medical device, semiconductor, consumer product, and food and beverage industries.

Products and systems are readily available that deliver varying levels of robotic guidance for applications of differing complexity. Most machine vision systems, smart cameras and smart sensors have the ability, through proper calibration, to perform 2-D VGR. Some machine vision manufacturers and software providers are beginning to include 3-D calibration and guidance capabilities as part of their standard packages. Certain industrial robot manufacturers offer their own machine vision solutions for 2-D and 3-D guidance. And, there are third-party standard solutions available that perform specific guidance tasks for applications ranging from simple to very complex.

Vision guided robotics is a plant-floor reality, but the technology is not without its challenges and limitations. Fortunately, one key part of vision guided robotics is easy: robots go almost exactly where they are programmed to go. That is not to say that the robotics portion of a VGR application is without challenges.

Specifying the correct robot with adequate payload and speed for a particular application is always a key project design issue. In particular, the design of a flexible, compliant and robust part gripper often presents a mechanical challenge. Ultimately, the implementation of machine vision technology is often the limiting factor in potential VGR applications, and as such, it is important to understand the fundamentals and challenges of machine vision for guidance.



Vision guided robotics now is in widespread use in the automotive, pharmaceutical, medical device, semiconductor, consumer product, and food and beverage industries. Source: Aptúra

The Technology

A machine vision system for robotic guidance typically delivers a single coordinate, sometimes called a pick point, to a robot for each part or process. The point data may be 2-D or 3-D depending on the needs of the application and the capability of the robot. Any point data provided to a robot will be a world coordinate relative to a known (pre-defined or physical) plane. A 2-D coordinate will be an X, Y point on the plane, often with an angular rotation. With additional processing, a 2-D point may also provide the Z-axis height of the object; this type of coordinate is sometimes referred to as 2½-D. A full 3-D point contains an X, Y and Z position as well as rotation about the three axes, often referred to as yaw, pitch and roll. A 3-D point is said to represent all six degrees, or axes, of motion.
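As an illustration of these point types, here is a minimal Python sketch; the class and field names are hypothetical and do not reflect any particular vendor's interface.

from dataclasses import dataclass

@dataclass
class PickPoint2D:
    """A 2-D pick point: position on the working plane plus rotation."""
    x: float          # mm along the plane's X axis
    y: float          # mm along the plane's Y axis
    angle: float      # rotation about the plane normal, degrees

@dataclass
class PickPoint2HalfD(PickPoint2D):
    """A 2-1/2-D point: a 2-D point plus the object's height."""
    z: float = 0.0    # height above the reference plane, mm

@dataclass
class PickPoint3D:
    """A full 3-D (six-axis) pick point: position plus orientation."""
    x: float
    y: float
    z: float          # position, mm
    yaw: float
    pitch: float
    roll: float       # rotation about the three axes, degrees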

VGR applications might use one, two or multiple cameras depending on the application and the required point data. For 2-D applications, often only one camera is required, but in some cases, for example where the position and angle of a large part must be detected, two or more cameras might be used. A wider variety of architectures is used to provide 3-D position information. Application limitations and accuracy requirements will dictate the most appropriate method for capturing the pick point. Note that cameras may be stand-alone or mounted on the robot arm.
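As a hedged example of the two-camera case, the sketch below assumes each camera has located one reference feature on a large part in a shared world frame; the part's center and angle then follow from simple geometry. All names and values are illustrative only.

import math

def part_pose_from_two_features(p_left, p_right):
    """Compute a large part's center and angle from two reference
    features, each measured by its own calibrated camera. Assumes both
    cameras report world coordinates in the same frame and units."""
    (x1, y1), (x2, y2) = p_left, p_right
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0           # part center
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))  # part rotation
    return cx, cy, angle

# Features found at (100.0, 50.0) mm and (400.0, 62.0) mm:
print(part_pose_from_two_features((100.0, 50.0), (400.0, 62.0)))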



Implementing machine vision for vision guided robotics is fundamentally a calibration task. Source: Aptúra

How It Works

At the lowest level, implementing machine vision for VGR is conceptually simple: it is fundamentally a calibration task. For most applications, each camera must extract a target feature and convert the position of that feature from camera coordinates (pixels) to a real-world coordinate. Virtually all machine vision systems provide a method to calibrate a single camera to standard measurement units (inches or millimeters). Usually this requires the camera to capture and process an image of a known calibration grid. For VGR applications, additional steps are necessary.
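To make the grid step concrete, the following sketch uses the open source OpenCV library with a chessboard-style target; the pattern size, square spacing and file name are assumptions to be replaced with the actual target's values.

import cv2
import numpy as np

PATTERN = (9, 6)    # inner corners of the assumed chessboard target
SQUARE_MM = 10.0    # assumed spacing between corners, mm

# World coordinates of the grid corners, in mm, on the Z = 0 plane.
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

img = cv2.imread("calibration_grid.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN)
if found:
    # Refine corner positions to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    # Solve for the camera intrinsics and the grid's pose from one view.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [obj_pts], [corners], img.shape[::-1], None, None)
    print("RMS reprojection error:", rms)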

The position of the calibration grid features must be correlated to the position of the robot arm, keeping in mind that the robot normally will be operating in a specific nominal plane, often called a tool frame. This step is sometimes called mapping the camera coordinates to the robot coordinates. Mapping techniques range from manually entering valid robot coordinates that match one or more points on the calibration grid, to calibration routines in which the robot and vision system work together to perform a full system calibration automatically or semi-automatically.
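The simplest mapping case, a fixed camera over a planar workspace, can be sketched as fitting a 2-D transform from a few points whose positions are known in both frames. In the example below the robot has been jogged to four grid points and its reported coordinates recorded; all numbers are placeholders.

import numpy as np
import cv2

# The same four points expressed in the camera's calibrated world
# frame (mm) and in the robot's frame (mm), as taught by jogging.
cam_pts = np.array([[0, 0], [90, 0], [0, 50], [90, 50]], np.float32)
rob_pts = np.array([[412.1, 96.5], [502.0, 97.1],
                    [411.6, 146.4], [501.5, 147.0]], np.float32)

# Fit a 2-D affine map (rotation, translation, scale) between frames.
M, _ = cv2.estimateAffine2D(cam_pts, rob_pts)

def cam_to_robot(x, y):
    """Map a camera-frame point into the robot's frame."""
    return tuple(M @ np.array([x, y, 1.0]))

print(cam_to_robot(45.0, 25.0))  # grid center, in robot coordinates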

For 2½-D or 3-D applications, calibration of multiple planes or levels often is necessary to fully map the pixels to world coordinates. Again, this process frequently is automated by machine vision software systems and packages designed for 3-D robot guidance.
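To illustrate the idea only, the hypothetical helper below approximates the world position of a pixel for a part at a known height by interpolating between two planes, each calibrated as a homography; production 3-D guidance packages use fuller photogrammetric camera models.

import numpy as np

def interp_world_point(px_pt, H_low, H_high, z, z_low, z_high):
    """Approximate the world (X, Y) of a pixel for a part at height z,
    given homographies calibrated at two known plane heights."""
    def apply_h(H, pt):
        v = H @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]                 # perspective divide
    lo, hi = apply_h(H_low, px_pt), apply_h(H_high, px_pt)
    t = (z - z_low) / (z_high - z_low)      # fractional height
    return (1.0 - t) * lo + t * hi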

Ultimately, however, the main task of any VGR application is extracting the correct and accurate position of a target feature to be passed along to the robot. As with any machine vision application, correct lighting and imaging are critical to success. However, the very premise of vision guided robotics, providing flexible automation where parts are not necessarily in repeatable positions, creates inherent lighting and imaging challenges. Simply put, when parts move around, it is more difficult to provide consistent illumination that creates adequate contrast between target features and background. Robot gripper compliance also is important for tolerating the residual position errors that result.
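Feature extraction itself can rely on any standard machine vision tool, such as pattern matching or blob analysis. As one hedged example, the sketch below finds a feature by normalized-correlation template matching with OpenCV; the file names and acceptance threshold are assumptions.

import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("feature_template.png", cv2.IMREAD_GRAYSCALE)

# Score every position of the template over the scene.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.8:  # acceptance threshold, tuned per application
    h, w = template.shape
    # Feature center in pixels; from here it would be mapped through
    # the calibration to world coordinates and on to the robot frame.
    px = (top_left[0] + w / 2.0, top_left[1] + h / 2.0)
    print("feature at pixel", px, "score", score)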

One of the key considerations related to lighting and imaging is the way a part will rest on the surface from which it is to be picked. Some parts can be presented to a VGR system in only one stable resting state, varying only in position and/or rotation. Other parts may have multiple possible resting states, and some, with shapes such as a cylinder or sphere, may have infinite resting positions. Parts that may come to rest in varying positions are more difficult to illuminate consistently, and particularly in 3-D applications, the variation in resting position may obscure target features. No single solution exists to overcome the effect of extreme part variation on imaging for machine vision, but recognizing the potential obstacle in advance is an important step toward a correct solution for a specific application.

One type of imaging technology offers some potential to overcome certain lighting and imaging issues found in VGR: point cloud imaging, or 3-D scanning. Using a structured light source, such as a single laser line, multiple laser lines or a laser dot array, a camera captures a geometric representation of a part. In some cases the imaging system is able to reproduce a full 3-D representation of the part relative to the resting or a nominal plane.
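A simplified sketch of the single-laser-line case: the line's vertical displacement in the image, divided by the tangent of the triangulation angle, yields the height at each image column. The model below ignores lens distortion and assumes a known vertical scale.

import numpy as np

def laser_profile(image, mm_per_px, tri_angle_deg):
    """Extract one height profile from a grayscale image of a laser line.
    The brightest row in each column locates the line; its shift from
    the baseline, over tan(triangulation angle), gives height in mm."""
    rows = np.argmax(image, axis=0).astype(float)   # line row per column
    baseline = np.median(rows)                      # assumes a mostly flat scene
    disp_mm = (baseline - rows) * mm_per_px
    return disp_mm / np.tan(np.radians(tri_angle_deg))

# Stepping the part (or the line) and stacking successive profiles
# builds a point cloud of the surface.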

Finally, some sensors based on time of flight are being introduced for VGR that provide 3-D height data. Currently the resolution and accuracy of these sensors are limited, but they show promise for applications where millimeter position resolution is not required.
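The underlying principle is simple: range is half the round-trip travel time of light. A minimal sketch of that arithmetic:

C_MM_PER_NS = 299.792458  # speed of light, in mm per nanosecond

def tof_distance_mm(round_trip_ns):
    """Light travels out and back, so range is half the round-trip
    time multiplied by the speed of light."""
    return C_MM_PER_NS * round_trip_ns / 2.0

print(tof_distance_mm(6.67))  # a 6.67 ns round trip is roughly 1000 mm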



Demands for automation improvement and enhanced manufacturing flexibility will continue to drive technology that advances the use of vision guided robotics. Source: Aptúra

High Demand

Dubbed the holy grail of vision guided robotics, automated bin picking is in high demand in flexible automation environments. The bin-picking task typically requires that a robot grasp a single randomly oriented part from the top of a pile of identical parts stacked randomly at varying depths.

As one can imagine, reliably locating such a part in 3-D poses a formidable challenge, and a variety of methods have been used to address it. It is not unusual for a bin-picking application to require two different sensing methods: one for coarse part location and another for more accurate position reporting.

Point cloud imaging also has been demonstrated to be a viable technique for bin picking. Dual-pick methods are used as well, where a robot first picks coarsely located parts from the bin, then places those parts on an intermediate surface where individual parts can be located more accurately for correct pick points.
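The dual-pick cycle can be sketched structurally as follows; every object, method and constant here is a hypothetical placeholder for vendor-specific interfaces, not a real API.

INTERMEDIATE_STATION = "station_pose"  # placeholder taught pose
TARGET_LOCATION = "target_pose"        # placeholder taught pose

def dual_pick_cycle(robot, coarse_sensor, fine_camera):
    # 1. Coarse 3-D location: good enough to grasp, not to place.
    rough_pose = coarse_sensor.locate_topmost_part()
    if rough_pose is None:
        return False                   # bin empty or no part found
    robot.pick(rough_pose)

    # 2. Re-present the part on an intermediate surface with
    #    controlled lighting and a known resting state.
    robot.place(INTERMEDIATE_STATION)

    # 3. Fine 2-D location yields an accurate final pick point.
    precise_pose = fine_camera.locate_part(INTERMEDIATE_STATION)
    robot.pick(precise_pose)
    robot.place(TARGET_LOCATION)
    return True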

Demands for automation improvement and enhanced manufacturing flexibility will continue to drive technology that advances the use of vision guided robotics. Keys to successful implementation of this valuable technology include an understanding of the capabilities and methods used for VGR, and a thorough analysis of the parts and application specification. VGR is certainly destined to be a significant factor in flexible factory floor automation. V&S

David Dechow is a member of the Vision & Sensors advisory board and the president of Aptúra Machine Vision Solutions (Lansing, MI). For more information, call (517) 272-7820, e-mail [email protected] or visit www.aptura.com.

VISION & SENSORS ONLINE

For more articles from David Dechow, visit www.visionsensorsmag.com to read the following:

• Reintroducing Machine Vision

• Machine Vision Lighting Demystified



Industrial Robots

The term robot commonly is used to describe the familiar multi-jointed robotic arm widely implemented in manufacturing and scientific applications. Although walking, talking semi-humanoid androids and planet-roving autonomous rovers do exist, those are not the type of robot discussed here.

An industrial robot is a programmable, multi-jointed arm able to move in three (X, Y, Z) or more axes. Also called articulated arms, these robots commonly provide rotation around all three of these axes and are referred to as having six degrees of motion (X, Y, Z, roll, pitch, yaw), or as having 3-D motion. Sometimes the robot controller can accommodate additional axes of motion. These axes usually are external to the robot, for example, a linear slide that moves the whole robot from side to side, or the tracking of a conveyor as a linear position relative to the robot arm.

Other robot arms widely in use are gantry or Cartesian robots and SCARA (Selective Compliance Assembly Robot Arm) robots. These robots usually are able to move in the X, Y and Z axes, and also may provide some rotation about the Z axis. They are not fully articulated in that they normally do not provide rotation about the X or Y axes, and therefore are not fully 3-D. In addition, some unconventional kinematic robot designs are found in various applications.