Sophisticated technologies for vision-guided robotics (VGR) are emerging at a rapid pace, expanding robot functionality across diverse markets. Next-generation imaging systems, combined with more compact, highly efficient and less expensive robots and sensors, are making robotic solutions practical for a wider range of applications, especially for small- to mid-size manufacturers.
Traditionally, picking and placing a random part from a bin in a timely manner has not been an easy task for a robot. That limitation, combined with varying part shapes and poor lighting conditions on the factory floor, left manufacturers with little confidence that robotic automation could efficiently meet their unique requirements.
Robots can now perform these tasks without operator intervention or reprogramming, making them well suited to specific requirements and high-demand production. Engineers have created imaging systems capable of finding an object and then providing a precise, reliable location to the robot, overcoming many manufacturing hurdles. Manufacturers that dismissed robotics in the past are now taking a fresh look because of advancements in machine vision precision and accuracy.
2D Machine Vision
From part alignment to part inspection, 2D vision is the most common form of machine vision in manufacturing today. Working in a single plane (X, Y) where part depth data is not required, 2D machine vision uses grayscale or color imaging to build a two-dimensional map that reveals anomalies or variations in part contrast.
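As an illustration of that contrast-map idea, the sketch below compares a camera frame against a known-good reference image and flags the regions that differ. It is a minimal sketch assuming Python with OpenCV; the image file names are hypothetical stand-ins for frames captured by the 2D camera.

```python
# Minimal sketch of a 2D contrast-based inspection step using OpenCV.
# "reference.png" and "part.png" are hypothetical files: a known-good part
# image and the frame of the part under test, both fixtured in the X-Y plane.
import cv2

reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise contrast difference between the sample and the golden image.
diff = cv2.absdiff(reference, sample)

# Threshold the difference map: anything above ~30 gray levels is an anomaly.
_, anomaly_map = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Group anomalous pixels into blobs and report their 2D (X, Y) locations.
contours, _ = cv2.findContours(anomaly_map, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 50:  # ignore single-pixel noise
        x, y, w, h = cv2.boundingRect(c)
        print(f"Anomaly near X={x + w // 2}, Y={y + h // 2} ({w}x{h} px)")
```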
Advancements in computer and camera technology have paved the way for vision technology, increasing pixel counts and processing speeds for 2D cameras and sensors. These improvements have greatly enhanced capabilities for a wide variety of applications, including barcode reading, label inspection, basic positional verification and surface marking detection, making 2D machine vision more reliable while increasing functionality and quality.
Manufacturers looking to utilize VGR, especially for a pick-and-place application, are encouraged to consider 2D technology first, as it is often more affordable than 3D vision. However, there are times when 2D vision simply will not do the job. Even with its advances, this type of vision system still 1) requires a traditional lighting source, 2) depends on the presence of contrast and color, and 3) offers slower processing speeds than 3D imaging.
3D Machine Vision Technology
Manufacturers are discovering that the extra dimension provided by 3D vision can offer game-changing benefits. The next generation of vision technology, 3D imaging uses height measurement to learn more about an object so that a robot can quickly and easily recognize a part and handle it as programmed. This type of VGR technology eliminates the need to pre-arrange parts or objects before the robot locates and moves them, making it a viable asset for many applications today.
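To make the height-measurement idea concrete, here is a minimal sketch, assuming Python with NumPy, of how a depth image from a 3D sensor might be converted into a height map and a candidate pick point. The bin-floor distance, image size and the 40 mm test part are hypothetical values, not vendor specifics.

```python
# Sketch: turn a depth image (millimeters from the sensor) into a height map
# and pick the tallest point in the bin as a grasp candidate.
import numpy as np

BIN_FLOOR_MM = 600.0  # hypothetical distance from sensor to the empty bin floor

def highest_point(depth_mm: np.ndarray):
    """Return (row, col, height_mm) of the tallest surface in the bin."""
    height = BIN_FLOOR_MM - depth_mm   # convert depth to height above the floor
    height[depth_mm <= 0] = 0          # mask out invalid / no-return pixels
    r, c = np.unravel_index(np.argmax(height), height.shape)
    return r, c, float(height[r, c])

# Example with synthetic data: a flat floor plus one 40 mm-tall part.
depth = np.full((480, 640), BIN_FLOOR_MM)
depth[200:260, 300:360] = BIN_FLOOR_MM - 40.0
print(highest_point(depth))  # -> (200, 300, 40.0)
```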
Material Handling with 3D Vision
While some manufacturers have found success with robotic arms equipped with two cameras for picking and placing, these systems can be challenging to program and set up. Three-dimensional vision technologies can now overcome these obstacles and more. Today’s 3D bin picking systems provide the added benefit of actually doing something with the part once it is picked, not just moving it. Because 3D imaging lets the system fully understand the shape and orientation of a given part, the part can be manipulated or precisely placed into the next machining step or assembly phase of the production process.
Many industrial manufacturers have turned to an easy-to-use combination of 3D recognition hardware and software for robots and controllers to identify and work with a wide array of parts. This complete solution uses a single 3D camera with lighting integrated inside the unit, giving a robot the ability to work anywhere it is needed on the production line.
Implementing this technology has been helpful for manufacturers because there is no programming at the system level; the solution automatically matches a pre-loaded 3D CAD model to any part, enabling the robot to move randomly placed parts to the desired location as programmed. Three-dimensional CAD matching provides simplified, accurate part registration, allowing even complicated parts to be identified. A Windows-based app running on a high-performance PC also adds to the appeal of this type of system.
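The CAD-matching step can be pictured as a point cloud registration problem. The sketch below uses the open-source Open3D library’s ICP routine as a stand-in for a vendor’s proprietary matching engine, with hypothetical file names for the sampled CAD model and the camera scan; it is illustrative, not a description of any particular product.

```python
# Sketch: align a pre-loaded CAD model (sampled as a point cloud) to a 3D scan
# of one part, yielding the part's 6-DOF pose for the robot pick.
import numpy as np
import open3d as o3d

cad = o3d.io.read_point_cloud("part_cad.ply")      # reference model (hypothetical file)
scan = o3d.io.read_point_cloud("scene_scan.ply")   # measured points around one part

# Refine an initial guess (identity here) with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    cad, scan,
    max_correspondence_distance=5.0,   # mm; depends on sensor noise
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print("Fitness (fraction of matched points):", result.fitness)
print("Pose of part in camera frame:\n", result.transformation)
# The 4x4 transform would then be converted into robot coordinates for the pick.
```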
When it comes to bin picking applications, most 3D vision systems have been designed to work well when parts sit in the middle of the container, but as the technology progresses, so does what these systems can do. Standard interfaces are now being designed to let robots retrieve randomly placed parts from the edges of the bin, recognizing shapes as never before.
AI-Driven Automation
Technical innovations in artificial intelligence (AI) have unlocked massive potential for retailers to automate the handling of diverse stock keeping units (SKUs) in the order fulfillment process. Building on traditional bin picking methods, these advances have opened the door to flexible, high-speed order fulfillment picking, moving the complexity of the process from the hardware to the software.
Adaptive picking solutions of this nature combine intelligent 3D vision and interactive motion control to identify and handle unsorted items with a level of speed and accuracy that exceeds human ability. This matters because meeting today’s supply chain variability would otherwise require physical modifications to the line. As the push for on-time delivery accelerates, there will be more need for robotic solutions that can handle environmental changes in real time.
Integrating AI, sensing and robotics into an automated solution enables the robot (or robotic cell) to handle item diversity, container variation and process changes required for piece-picking order fulfillment without additional engineering or programming. This type of solution is ideal for targeting high-mix / high-volume applications scaled to a human form factor.
Deep-learning Vision
A newer concept gaining traction is the use of deep-learning 3D vision that enables industrial robots to depalletize multi-SKU, single-SKU and randomly stacked pallets. This type of system is easy to configure through a browser-based graphical user interface (GUI), and it pairs a high-resolution 3D sensor (or a 2D sensor in some cases) with advanced motion planning. A system like this requires minimal employee training, keeping integration time to a minimum.
Visual Guidance
A breakthrough technology, visual guidance combines the capacity to see and identify an object with the ability to locate it. With one camera, one cable and zero engineering, this system can upgrade nearly any automated workforce within a few hours and keep it running for decades, making it a very cost-efficient option.
With a visual guidance system, an industrial robot or a collaborative robot is given the ability to mimic the human visual process. This is accomplished with a unique algorithm that gives the robot true hand-eye coordination, distinguishing among up to 100 unique objects, two- and three-dimensional shapes and highly similar parts (regardless of orientation) with accuracy and speed.
A “New World” of Solutions
High-quality vision technology has evolved rapidly, creating a resurgence of vision-guided robotics in the manufacturing of parts and the packaging of goods. For industrial and collaborative robots alike to add value across the production chain, the technology must continue to evolve and be applied. Because one size does not fit all, best practice suggests that manufacturers looking to utilize vision technology understand their project requirements along with the capabilities of the proposed solution. There is an entire “new world” of solutions available to manufacturers for implementing machine vision and robot technologies for flexible automation, and the growth of VGR is proof that when a robot is made more perceptive, more applications will follow. V&S