
The integration of machine vision into manufacturing processes for part inspections and handling has become increasingly popular. Vision systems are used to inspect parts for defects, ensure traceability, and automate repetitive or complex tasks that would be difficult for human operators. Automated systems powered by machine vision can increase production efficiency while maintaining high-quality output. As the use of vision systems continues to grow, having easy-to-program software is crucial to minimize implementation times and reduce downtime for troubleshooting. Two key software trends that are becoming critical to ease of use in machine vision are AI tools and vision-guided robotics software.

AI in Machine Vision: Revolutionizing Defect Detection

AI in manufacturing is more than just a buzzword; it’s a transformative technology that can significantly streamline operations. While AI is often touted as cutting-edge, its practical application is helping manufacturers reduce time spent on repetitive tasks and automate decision-making processes. This mimics the way humans would make decisions, but without the need for complex, manually written code to account for every possible scenario.

In the past, defect detection in machine vision involved capturing multiple images using different lighting techniques and programming multiple tools to identify various defects. Engineers would need to predefine all potential defect scenarios, which could be time-consuming and didn't always allow for flexibility when new types of defects emerged.

Today, AI-based learning tools are revolutionizing defect detection. Instead of spending time programming for every conceivable defect, engineers can simply teach the vision system what a “good” part looks like. By showing the system images of acceptable parts, it can then identify and flag parts that deviate from this standard with a specified level of certainty. Deep learning tools now enable machine vision systems to learn and adapt quickly: programming a system now takes minutes instead of days. After the system processes the initial training images, engineers can set thresholds for anomalies and test real-time defect detection. As the system encounters more defective parts, engineers can expand the dataset by adding new samples, improving the system's accuracy over time.
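The idea behind this workflow can be illustrated with a deliberately simple sketch. Real systems learn deep image features, but the logic is the same: build a reference from known-good samples, score each new part by how far it deviates, and flag anything beyond an engineer-set threshold. The feature vectors and threshold below are hypothetical stand-ins for what a vision tool would extract from images.

```python
import math

def train_good_model(good_samples):
    """Average the feature vectors of known-good parts into a reference."""
    n = len(good_samples)
    dim = len(good_samples[0])
    return [sum(s[i] for s in good_samples) / n for i in range(dim)]

def anomaly_score(reference, sample):
    """Euclidean distance from the learned 'good' reference."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(reference, sample)))

def inspect(reference, sample, threshold):
    """Flag the part if its deviation exceeds the engineer-set threshold."""
    return "NG" if anomaly_score(reference, sample) > threshold else "OK"

# Hypothetical features, e.g. brightness/texture statistics per image.
good = [[0.9, 0.1], [0.92, 0.12], [0.88, 0.09]]
ref = train_good_model(good)
print(inspect(ref, [0.91, 0.11], threshold=0.1))  # near the good parts -> OK
print(inspect(ref, [0.4, 0.6], threshold=0.1))    # deviates strongly -> NG
```

Note that only good parts were needed for training; expanding the dataset with new samples, as described above, simply refines the reference and the threshold.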

If categorizing specific defects remains an important part of the inspection process, AI classification tools can first learn what a defect-free part looks like and then be taught examples of each defect type so that flagged parts are sorted into categories. To distinguish between a bump and a scratch, programmers would train the classification tool with known defective samples of each. In the past, separate tools would need to be programmed for each specific defect. With modern AI classification tools, however, the only programming required is capturing images and specifying whether the part should pass or fail.
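A minimal way to picture this teach-by-example classification is a nearest-centroid classifier: each defect class is represented by the average of its taught samples, and a new part is assigned to the closest class. The two-dimensional features here (deviation height, aspect ratio) are hypothetical; a production tool would learn far richer image features.

```python
import math

def centroid(samples):
    """Average the taught feature vectors of one defect class."""
    dim = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]

def train_classifier(labeled_samples):
    """labeled_samples: {defect_name: [feature_vector, ...]}"""
    return {label: centroid(vecs) for label, vecs in labeled_samples.items()}

def classify(model, sample):
    """Assign the defect class whose trained centroid is nearest."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, sample)))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical features: (height of surface deviation, aspect ratio).
# Bumps are tall and roughly round; scratches are shallow and elongated.
model = train_classifier({
    "bump":    [[0.8, 1.0], [0.7, 1.1]],
    "scratch": [[0.1, 8.0], [0.2, 9.5]],
})
print(classify(model, [0.75, 1.05]))  # prints "bump"
```

Adding a new defect category is just another entry in the training dictionary, which mirrors how these tools are extended on the line: show examples, label them, done.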

This eliminates the “black-box” nature of older vision systems, where troubleshooting was difficult. With these AI advancements, software tools are easier to implement, more efficient, and much more transparent, saving valuable time during both the programming and troubleshooting phases.

Car Door
Image Source: KEYENCE America

Advancements in Vision-Guided Robotics

Machine vision is also transforming the world of vision-guided robotics. Cameras are integrated into robotic systems when parts cannot be consistently picked or placed from fixed locations and require real-time feedback from vision systems to guide the robot's movements. Initially, the communication between the camera and the robot was indirect, requiring a logic layer to pass data such as angle, coordinates, and height information. Engineers would manually program these systems to enable the robot to adjust its path accordingly. Communication issues could arise, however, when the robot programmer knew how to modify the robot program but not how the logic layer tied the vision system and robot together. In such cases, another engineer or programmer would be needed to bridge the gap and ensure proper functionality.

However, recent advancements in machine vision software have enabled a direct 1-to-1 communication link between the camera and the robot. This seamless integration allows for more efficient control and calibration, and in some cases, operators can even jog the robot directly from within the vision software itself. The result is faster implementation times for vision-guided robotics systems, with greater confidence that the robots will be directed to the correct pick points, improving overall system performance.
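Vendor APIs for these direct links differ, but the data flowing over them is essentially the pose information mentioned above: coordinates, height, and gripper angle, already expressed in the robot's frame. The comma-separated message format below is a hypothetical illustration of that idea, not any specific vendor's protocol.

```python
from dataclasses import dataclass

@dataclass
class PickPose:
    x: float      # mm, in the camera-calibrated robot frame
    y: float      # mm
    z: float      # mm (part height)
    angle: float  # degrees of gripper rotation

def parse_vision_result(message: str) -> PickPose:
    """Parse a hypothetical 'x,y,z,angle' string from the vision system.

    With a direct 1-to-1 camera-to-robot link, the controller receives
    these values already in its own coordinate frame, so no intermediate
    logic layer has to translate them.
    """
    x, y, z, angle = (float(v) for v in message.split(","))
    return PickPose(x, y, z, angle)

pose = parse_vision_result("152.4,-38.1,12.0,45.0")
print(pose.x, pose.y, pose.z, pose.angle)
```

The point of the sketch is how little sits between camera and robot: one calibrated message, no hand-written glue logic for a third system to maintain.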

Connector OK
Connector NG
Image source: KEYENCE America

3D Bin Picking: A Game-Changer for Robotic Systems

One of the most notable advancements in vision-guided robotics is the rise of 3D robotic bin-picking systems. These systems are especially useful in applications where robots need to pick parts from bins with random orientations. Rather than relying on fixed or predictable part positions, 3D vision systems provide detailed spatial data that allows robots to pick parts with much greater accuracy and flexibility.

3D vision systems enable robots to image parts in three dimensions and calculate the best pick points based on the part's orientation and the gripper's geometry. These systems can also use CAD data from both the part and the robot's end effector to generate multiple possible pick points, ensuring that the robot can pick parts efficiently, even from complex or cluttered bins.
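As a rough sketch of that pick-point selection, one simple (assumed) strategy is to discard candidates without enough clearance for the gripper to close, then prefer the topmost remaining part so the pile is disturbed as little as possible. Real systems evaluate full 3D geometry from CAD data; the flat candidate list here is illustrative.

```python
def best_pick_point(candidates, gripper_radius):
    """Pick the highest candidate whose clearance fits the gripper.

    candidates: list of dicts with 'x', 'y', 'z' (part height in the bin)
    and 'clearance' (free space around the point, same units as radius).
    Returns None when no point can be gripped without collision.
    """
    reachable = [c for c in candidates if c["clearance"] >= gripper_radius]
    if not reachable:
        return None
    # Prefer the topmost part: less risk of colliding with neighbors.
    return max(reachable, key=lambda c: c["z"])

points = [
    {"x": 10, "y": 5, "z": 40, "clearance": 4},   # highest, but too tight
    {"x": 22, "y": 9, "z": 35, "clearance": 12},  # grippable
    {"x": 31, "y": 2, "z": 20, "clearance": 15},  # grippable, but lower
]
print(best_pick_point(points, gripper_radius=8))
```

Generating multiple candidates per part, as the CAD-based tools do, matters precisely because the best point on a part may be blocked in a cluttered bin.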

Software advancements now allow for advanced simulators that can generate a variety of bin configurations and simulate the entire picking process offline. These simulators can test different picking strategies, identify unpicked parts, and even suggest additional pick points to improve the system's efficiency. By performing this programming work in advance, manufacturers can reduce the downtime typically associated with system installations and ensure that the vision-guided robotic system will function as expected before it is even deployed.
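The value of such offline simulation can be shown with a toy Monte Carlo run, under the simplifying (assumed) model that each part in a randomly generated bin has some clearance around it and is left unpicked when that clearance is too tight for the gripper. Commercial simulators model full 3D physics; this sketch only demonstrates the measure-before-deploying idea.

```python
import random

def simulate_bin(num_parts, rng):
    """One random bin: each part gets a random clearance in [0, 20) mm."""
    return [rng.uniform(0, 20) for _ in range(num_parts)]

def unpicked_count(bin_parts, gripper_radius):
    """Parts whose clearance is too tight to grip are left behind."""
    return sum(1 for clearance in bin_parts if clearance < gripper_radius)

def run_simulation(trials, num_parts, gripper_radius, seed=0):
    """Average number of parts left unpicked per bin over many random bins."""
    rng = random.Random(seed)
    leftovers = [
        unpicked_count(simulate_bin(num_parts, rng), gripper_radius)
        for _ in range(trials)
    ]
    return sum(leftovers) / trials

# Estimate leftovers for a 30-part bin with an 8 mm gripper radius.
print(run_simulation(trials=1000, num_parts=30, gripper_radius=8.0))
```

Comparing this average across gripper sizes or picking strategies offline is exactly the kind of question the paragraph above describes answering before the system is ever installed.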

Conclusion: Making Vision Systems Easier to Implement and Faster to Deploy

The integration of AI into machine vision software, coupled with advancements in vision-guided robotics and 3D bin picking, represents a significant shift toward more intuitive, efficient, and adaptable vision systems. The common theme among these trends is that software developments are making vision systems easier to program, quicker to install, and more adaptable to real-world challenges. These innovations not only save time during the programming and setup phases but also improve the overall performance of automated systems in manufacturing environments. As the technology continues to evolve, manufacturers can expect even greater improvements in flexibility, efficiency, and ease of use in machine vision systems.