Artificial intelligence (AI) is one of the most hyped technologies of recent years, and while it promises new cost and process benefits for inspection applications, deployment remains a challenge.
Part of the technology trepidation stems from uncertainty around the terms and definitions of ‘AI’ and ‘machine learning.’ Organizations are also unsure how to deploy new AI capabilities alongside existing infrastructure and processes. This is especially true in inspection systems, where there are significant investments in cameras, specialized sensors, and analysis software with well-established processes for end-users. The cost and complexity of algorithm training is also a concern for businesses evaluating AI.
At its most basic, AI is the ability of a machine to perform cognitive functions we associate with the human mind, such as recognizing and learning. Machine learning, a subset of AI, involves programming a computer to process structured data and make decisions without constant human supervision. Once programmed with machine learning capabilities, a system can choose between categories of answers (classification) and predict continuous values (regression). Machine learning programs become progressively better as they access more data, but still require human oversight to correct their mistakes.
While AI is often seen as an emerging technology (you may well have an email in your inbox right now titled “Get Ready for AI”), it already surrounds us in our consumer lives. A “smart” thermostat, for example, combines user-entered settings with monitored household activity to determine when we’re home, away, or inactive, and sets what it estimates to be the ideal temperature. Occasionally, the homeowner still needs to correct the thermostat settings manually.
Machine learning still requires human input to make informed decisions and needs further programming to fix its mistakes. Deep learning goes one step further: its algorithms draw on a wider range of structured and unstructured data to make independent decisions, and they can learn from mistakes and adapt without additional human programming. Autonomous applications, including vehicles and factory robotics, use deep learning to navigate constantly changing situations.
Bringing Hybrid AI to Inspection
AI is complex, but adopting a hybrid approach that marries classic computer vision and machine learning techniques for quality inspection can simplify deployment.
In a classic computer vision application, a developer manually tunes an algorithm for the job at hand. This can require significant customization if customers or products A and B have different thresholds for what counts as a defect. Inaccuracies may generate excessive false positives that stop production and force costly manual secondary inspection, or missed defects that allow poor-quality products to reach the market.
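To picture what that manual tuning looks like, the sketch below compares a part against a golden reference image using hand-set blur, threshold, and defect-size parameters. The parameter values, product names, and rules are hypothetical; the point is that every new product or customer tolerance means another round of tuning by a developer.

```python
# Illustrative only: a classic, hand-tuned inspection check. The blur,
# threshold, and defect-size parameters are hypothetical values that a
# developer would re-tune for each product or customer.
import cv2
import numpy as np

# Manually tuned parameters: products A and B tolerate defects differently.
PRODUCT_PARAMS = {
    "product_a": {"blur": 5, "threshold": 40, "max_defect_px": 150},
    "product_b": {"blur": 3, "threshold": 25, "max_defect_px": 60},
}

def inspect(image: np.ndarray, reference: np.ndarray, product: str) -> bool:
    """Return True if a grayscale part image passes the fixed rules for a product."""
    p = PRODUCT_PARAMS[product]
    k = (p["blur"], p["blur"])
    # Compare the part against a golden reference and threshold the difference.
    diff = cv2.absdiff(cv2.GaussianBlur(image, k, 0), cv2.GaussianBlur(reference, k, 0))
    _, mask = cv2.threshold(diff, p["threshold"], 255, cv2.THRESH_BINARY)
    # Pass only if the number of differing pixels stays under the tuned limit.
    return int(np.count_nonzero(mask)) <= p["max_defect_px"]
```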
Similarly, AI algorithm training has traditionally required multiple time-consuming steps and dedicated coding to input images, label defects, fine-tune detection, and optimize models. More recently, companies are developing a “no code” approach to training that allows users to upload images and data captured during traditional inspection to software that automatically generates plug-in AI skills with minimal human input.
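To make the contrast concrete, the following is a minimal sketch of the code-heavy workflow that no-code training tools automate: load labeled images, extract features, fit a model, and export it for deployment. The folder layout, class labels, and the choice of a scikit-learn classifier are illustrative assumptions, not the workflow of any particular training software.

```python
# A minimal sketch of the coding-heavy training steps that "no code" tools
# automate. The folder layout ("training_images/<label>/*.png"), the class
# labels, and the scikit-learn classifier are illustrative assumptions.
from pathlib import Path

import cv2
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def load_dataset(root: str):
    """Load grayscale images from <root>/<label>/*.png as flat feature vectors."""
    features, labels = [], []
    for label_dir in sorted(Path(root).iterdir()):
        for img_path in label_dir.glob("*.png"):
            img = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, (64, 64))
            features.append(img.flatten() / 255.0)
            labels.append(label_dir.name)
    return np.array(features), np.array(labels)

# Fit a classifier on e.g. "good" vs. "scratch" folders and export the model
# so it can later be loaded as a deployable inspection "skill".
X, y = load_dataset("training_images")
model = RandomForestClassifier(n_estimators=200).fit(X, y)
joblib.dump(model, "defect_classifier.joblib")
```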
For example, plug-in AI skills can be generated for machine learning-based classification, sorting, detecting, and hyperspectral capabilities. AI for inspection excels at locating, identifying, and classifying objects and segmenting scenes and defects, with less sensitivity to image variability or distortion. AI algorithms are also more easily adapted to identify different types of defects or meet unique pass/fail tolerances based on requirements for different customers without rewriting code.
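As a simple illustration of adapting pass/fail tolerances without rewriting code, per-customer rules can live in configuration that the inspection logic reads at run time. The customer names, field names, and detection format below are hypothetical.

```python
# Sketch of per-customer pass/fail tolerances held in configuration rather
# than code. Customer names, fields, and the detection format are hypothetical.
import json

CONFIG = json.loads("""
{
  "customer_a": {"min_confidence": 0.80, "max_defects_per_part": 2},
  "customer_b": {"min_confidence": 0.60, "max_defects_per_part": 0}
}
""")

def passes(detections: list, customer: str) -> bool:
    """Apply a customer's tolerance to AI detections of the form
    {"label": ..., "confidence": ...}."""
    rules = CONFIG[customer]
    defects = [d for d in detections if d["confidence"] >= rules["min_confidence"]]
    return len(defects) <= rules["max_defects_per_part"]

detections = [{"label": "scratch", "confidence": 0.85}]
print(passes(detections, "customer_a"))  # True: one defect is within tolerance
print(passes(detections, "customer_b"))  # False: this customer accepts no defects
```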
From an infrastructure perspective, AI capabilities can be integrated into existing applications without changing hardware or software. In an inspection application, a gateway device intercepts the camera image feed and applies the selected plug-in AI skills. Users can also develop AI skills for custom requirements and upload them to the gateway. The gateway sends the AI-processed data over a GigE Vision connection to the inspection and analysis application, which seamlessly receives the video as if it were still connected directly to the camera.
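Conceptually, the gateway's role can be pictured as a loop that grabs each frame, runs the loaded skills, and forwards video plus results downstream. In the sketch below, the acquisition and forwarding functions are hypothetical stand-ins for the camera feed and the GigE Vision link; a real gateway performs these steps in its own hardware and firmware.

```python
# Conceptual sketch of a gateway sitting between camera and inspection software.
# acquire_frame() and forward_to_inspection() are hypothetical stand-ins for the
# camera feed and the GigE Vision link to the existing analysis application.
from typing import Callable, Dict

import numpy as np

def acquire_frame() -> np.ndarray:
    """Stand-in: grab the next frame from the intercepted camera feed."""
    return np.zeros((1080, 1920), dtype=np.uint8)

def forward_to_inspection(frame: np.ndarray, results: Dict[str, object]) -> None:
    """Stand-in: send video plus AI results downstream; the inspection
    application receives the stream as if connected directly to the camera."""
    pass

def run_gateway(skills: Dict[str, Callable[[np.ndarray], object]], frames: int) -> None:
    """Apply every loaded plug-in skill to each frame, then forward the result."""
    for _ in range(frames):
        frame = acquire_frame()
        results = {name: skill(frame) for name, skill in skills.items()}
        forward_to_inspection(frame, results)

# Example: a custom skill uploaded to the gateway alongside generated ones.
run_gateway({"bright_spot_check": lambda f: bool(f.max() > 200)}, frames=3)
```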
The device could also be used as a secondary inspection tool by processing imaging data with the loaded plug-in skills in parallel with traditional processing tools. If a defect is detected, processed video from the gateway can confirm or reject the result as a secondary inspection. For applications requiring distributed vision processing, additional gateways can be added to the system to build an AI mesh network. For example, individual gateways can be configured for different defect types, with a master device combining the output of each skill and transmitting the data over GigE Vision to the processing application.
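To illustrate the mesh idea, a master device might merge the per-defect-type verdicts from individual gateways into a single result before forwarding it to the processing application. The result structure below is a hypothetical example, not a defined GigE Vision format.

```python
# Sketch of a master device merging results from gateways that each handle a
# different defect type. The GatewayResult structure is a hypothetical example.
from dataclasses import dataclass
from typing import List

@dataclass
class GatewayResult:
    defect_type: str      # e.g. "scratch", "dent", "discoloration"
    defect_found: bool
    confidence: float

def combine(results: List[GatewayResult]) -> dict:
    """A part fails if any gateway reports its assigned defect type."""
    failures = [r for r in results if r.defect_found]
    return {"pass": not failures,
            "defects": [(r.defect_type, r.confidence) for r in failures]}

verdict = combine([GatewayResult("scratch", False, 0.12),
                   GatewayResult("dent", True, 0.93)])
print(verdict)  # {'pass': False, 'defects': [('dent', 0.93)]}
```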
Image 2: AI plug-ins simplify training for algorithms deployed on hardware that sits seamlessly between image sources and processing platforms. In this example, integrators can deploy machine learning hyperspectral capabilities without any additional programming knowledge. Images and data are uploaded to training software on a host PC, which automatically generates AI models that are deployed on a gateway in a production environment.
Benefits in Inspection
AI and machine learning are set to help organizations reduce costly inspection errors, false positives, and secondary screenings that waste human resources and slow processes. The key is simplifying AI algorithm training and ensuring that new machine learning-based inspection capabilities integrate seamlessly with existing infrastructure and processes. V&S