For some time now, the term Industry 4.0 has been mentioned frequently in connection with image processing, and it is predicted to turn our habits upside down. Together with the World Wide Web, Industry 4.0 connects the real production world with the virtual one, raising the flexibility of production to a new level.
However, application engineers often object that, to realize the self-learning image processing systems that Industry 4.0 scenarios demand, several standards have to be developed first. These should not only cover the inspection routines, but also define the interfaces between the image processing and the overall system with its decentralized communication.
Even if the implementation is not complete yet, image processing is going to become more flexible, and it will lay the groundwork for new business models.
A look at today's assembly lines shows that the production process focuses on low material and energy consumption. In addition, 100% inspection has replaced random sampling, because with the faultless products demanded today, random sampling can no longer keep up. For some time now it has not been customary to insert samples into a measuring machine; image processing has taken over the lion's share of automation and quality control.
Machine vision systems measure, inspect, count and direct the robot gripper to the desired position. Compared to the past, quantities are increasing, yet cycle times must not grow longer. With the newest camera and software technology, image processing offers a big advantage:
The measurement is carried out extremely fast and within the production cycle. Among producers and sales people, the talk is that machine vision can be carried out in real time. However, "real time" is a term that can be interpreted in many different ways. There are hard, soft and firm real-time requirements that have to be fulfilled by the system. Hard real time means that missing a deadline is a total system failure, whereas soft real time means that the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service. Firm real time means that a result arriving after its deadline is useless and is discarded, although infrequent misses are tolerable.
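As a rough software illustration, not taken from the article, a firm real-time constraint can be sketched as follows: a result that misses its cycle-time budget is simply discarded. The 10 ms budget and the placeholder acquisition and inspection functions are hypothetical.

```python
import time

CYCLE_BUDGET_S = 0.010  # hypothetical 10 ms cycle-time budget


def grab_image():
    # placeholder for the camera acquisition call of a real system
    return b"raw image data"


def inspect(image):
    # placeholder for the actual measurement / inspection routine
    time.sleep(0.004)
    return {"ok": True}


def firm_real_time_cycle():
    start = time.monotonic()
    result = inspect(grab_image())
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_BUDGET_S:
        # firm real time: a late result is useless and is discarded;
        # occasional misses degrade quality of service but are not fatal
        return None
    return result


if __name__ == "__main__":
    print(firm_real_time_cycle())
```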
Of course, with the help of various technical achievements, image processing today can pursue the goal of processing image information from automation systems in real time. In particular, the description, modeling and design of efficiently implementable algorithms for microelectronic, resource-limited circuitry such as FPGAs and FPGA- and GPU-based systems have come to the fore. These are used to take on parts of the processing tasks and to relieve the CPU.
An FPGA (Field Programmable Gate Array) can process tasks in parallel, whereas an ordinary processor would complete them one after another. This is due to the structure of FPGA chips, which contain many logic blocks that can be configured individually. The blocks can each be assigned a concrete task, so the performance of one task is maintained even when further processing tasks are added.
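The parallelism described here lives in the FPGA fabric itself. Purely as a software analogy, not FPGA code and not taken from the article, the following sketch processes independent image strips in parallel worker processes so that the main thread is relieved, similar in spirit to offloading pre-processing steps.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def threshold_tile(tile):
    # hypothetical per-tile pre-processing step (simple binarization)
    return (tile > 128).astype(np.uint8)


def process_frame_parallel(frame, n_tiles=4):
    # split the frame into horizontal strips and process them concurrently;
    # on an FPGA the equivalent work would run in dedicated logic blocks
    tiles = np.array_split(frame, n_tiles, axis=0)
    with ProcessPoolExecutor(max_workers=n_tiles) as pool:
        results = list(pool.map(threshold_tile, tiles))
    return np.vstack(results)


if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    binary = process_frame_parallel(frame)
    print(binary.shape, binary.dtype)
```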
Some smart cameras are characterized by their FPGA, and with an optimal implementation the FPGA is not even noticed by the user. On these cameras the FPGA executes the commands of an image processing software, depending on which inspection program the user has composed. An inspection program can consist of many commands or, depending on the application, of only the minimum number necessary.
One of the basic functions of some image processing software is the "object counter," which is based on Blob analysis. The Blob (Binary Large Object) algorithm is used to detect the parameters of individual objects inside an image. In plain terms, a Blob is an area of a digital image in which certain characteristics, such as brightness or color, are constant and differ from the background. In figure 5, for example, each chocolate drop forms its own Blob, which stands out from the background by its gray and color values. Blob analysis allows the relevant objects to be separated from the background (so-called binarization) and then classified by their size, geometry, position and orientation.
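The article does not name a specific library; as a minimal sketch assuming OpenCV and NumPy, binarization followed by connected-component labeling yields the Blobs together with their size and position statistics.

```python
import cv2
import numpy as np


def find_blobs(gray, thresh=128, min_area=50):
    # binarization: separate bright objects from the darker background
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # label connected regions and collect per-Blob statistics
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    blobs = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area:  # classify by size, drop small noise specks
            x, y = centroids[i]
            blobs.append({"area": int(area), "center": (float(x), float(y))})
    return blobs


# usage sketch: gray = cv2.imread("drops.png", cv2.IMREAD_GRAYSCALE)
#               print(find_blobs(gray))
```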
In the early days of Blob analysis, it was used to obtain an image region (region of interest) for further processing. Such image regions could signal the presence of an object or parts of an object in an image, with the task of recognizing or tracking objects. One of the first and most popular Blob detectors is based on the Laplacian of Gaussian (LoG), a special form of the discrete Laplace filter that is used for detecting edges. In newer work, Blob descriptors are increasingly used as interest operators. These algorithms extract distinctive areas in images and at the same time deliver one or more parameters. Distinctive areas are points whose local neighborhood is as unique as possible.
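Again assuming OpenCV, a minimal sketch of the Laplacian of Gaussian mentioned above: Gaussian smoothing followed by the discrete Laplace filter highlights blob-like and edge-like structures at a chosen scale.

```python
import cv2
import numpy as np


def laplacian_of_gaussian(gray, sigma=2.0):
    # smooth first so the Laplacian responds to structures of scale ~sigma
    blurred = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)
    # discrete Laplace filter; strong responses mark blob centers and edges
    return cv2.Laplacian(blurred, cv2.CV_32F)


# usage sketch: response = laplacian_of_gaussian(gray)
# candidate blob centers are local extrema of abs(response)
```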
Today Blob analysis can be used in many applications with time-consuming calculations. It can exclude connected regions whose characteristics are not of interest, and it can determine statistical information such as the size or number of Blobs and the position and presence of Blob regions.
In this way many visual inspections can be carried out, such as the detection of contamination, scratches, holes and other defects on surfaces such as wood, foil, wafers and paper. The "Object Counter" command of some image processing software is used, for example, in the production of sugar cones to detect errors in coatings, in the production of air conditioners in the automotive industry, or in the production of credit cards. When producing credit cards, the IC chips have to be soldered to the card. The solder paste must always have a certain shape and a certain amount. With the Blob command of the software, it is inspected whether the correct amount and shape are applied to the card at the correct position. (Figure 1)
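The article does not spell out the actual solder-paste check; as an illustration only, a tolerance test on Blob statistics (as returned by the earlier find_blobs sketch) could look like the following, with the area range, expected position and maximum offset all being hypothetical values.

```python
def solder_paste_ok(blob, min_area=400, max_area=900,
                    expected_center=(120.0, 80.0), max_offset=5.0):
    # amount: the deposit's area must lie inside the allowed range
    if not (min_area <= blob["area"] <= max_area):
        return False
    # position: the deposit's centroid must sit close to the pad center
    cx, cy = blob["center"]
    ex, ey = expected_center
    return ((cx - ex) ** 2 + (cy - ey) ** 2) ** 0.5 <= max_offset


# usage sketch: all(solder_paste_ok(b) for b in find_blobs(gray))
```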
A Blob command can also be used to count colored chocolate drops. Figures 2 through 5 show how the image processing software detects only the blue candies with a color filter and counts them with a Blob command.
A solution based on Blob analysis can look as follows:
- Extraction: in the first step a threshold technique is applied to the image to obtain an area corresponding to the object (or objects) to be inspected. Figure 2 shows the use of a color filter command to detect only the blue candies.
- Refinement: the area obtained in the extraction step is often impaired by small errors such as noise or uneven illumination. These can be corrected in the refinement step. Figure 3 shows the separated image regions.
- Analysis: in the last step the Blob analysis is performed and the result is calculated. Figure 4 shows how the "Object Counting" command is used.
- Figure 5 shows how each blue chocolate drop has been assigned a Blob through the analysis. The application counts 10 candies; a short code sketch of this workflow follows below.
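A minimal end-to-end sketch of the three steps above, again assuming OpenCV; the blue HSV range and the minimum Blob area are guesses that would have to be tuned to the real images and lighting.

```python
import cv2
import numpy as np


def count_blue_drops(bgr):
    # Extraction: color filter keeps only blue pixels (HSV range is a guess)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))
    # Refinement: morphological opening removes noise and small speckles
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Analysis: label connected regions and count sufficiently large Blobs
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= 100)


# usage sketch: print(count_blue_drops(cv2.imread("candies.png")))
```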
For such applications, Blob analysis is a powerful and flexible method for finding an object. The various measurements obtained from Blob analysis can be used to determine the characteristics that distinctly define the object, and so it will continue to be used in the future as well. V&S