
Machine vision has always played a critical role in ensuring safe, efficient, and reliable operations in many industrial settings. However, as vision-enabled machines become more numerous and the types and volume of data they collect expand, emerging challenges are forcing system makers to look at new approaches to efficiently acquire, process, and utilize visual data.

These challenges span the spectrum of operational efficiency, accuracy, and reliability:

Data overload and processing bottlenecks that limit throughput are major issues as industries move toward more advanced, faster automation, tasking vision systems with capturing and analyzing vast amounts of data. Traditional vision systems often struggle with the sheer volume of images they capture, much of which can be redundant. The requirement now is not just to capture high-resolution images but to do so in a way that first and foremost accelerates throughput (in part by minimizing irrelevant data), while maximizing the precision and relevance of the information captured.

Real-time processing is becoming increasingly important, especially in environments where machines need to make instantaneous decisions, such as in quality control or defect detection on production lines. This demands more efficient processing methods and data reduction techniques.

High-speed and high-precision demands intensify as production lines get faster. High-speed processing, low latency, and the ability to capture minute changes in a scene in real time are critical. Traditional frame-based systems struggle with motion blur and data overload when capturing fast-moving objects. For example, in applications like high-speed counting, even the slightest delay in image acquisition and processing can lead to errors.

Sustainability is a growing priority as many industrial systems operate in environments where power efficiency is key. Vision systems need to operate for extended periods without consuming significant amounts of energy. Traditional image processing systems, especially those that capture entire frames at a fixed rate, can be power-intensive and require sophisticated cooling or energy management.

Complex lighting and environmental conditions arise in many settings, including extreme brightness, low light, and dynamic lighting scenarios. Vision systems need to cope with high dynamic range (HDR) requirements to capture high-quality data without losing detail in either the darkest or brightest areas. Conventional frame-based systems have struggled in such conditions, leading to the need for more adaptable and sensitive vision technologies.

Predictive maintenance and condition monitoring are a growing need. Vision systems must not only react to issues but also help predict potential problems before they occur. Predictive maintenance requires vision systems that can monitor machine vibrations, detect wear and tear, and identify early signs of equipment failure.

As a result of these growing challenges, developers of vision-enabled systems for monitoring, counting, inspecting, automating, controlling, and enabling autonomous processes are seeking new approaches to vision sensing and data acquisition.

Figure 1: The Prophesee EVK4 industrial vision evaluation kit. (Image source: Prophesee)

Event-based vision addresses these challenges

Event-based vision, inspired by the human eye and brain, is increasingly used in industrial machine vision to address these challenges. By mimicking biological vision, this technology utilizes efficient sensing and collection techniques that capture changes within a specific scene. This reduces processing requirements compared to traditional frame-based methods while revealing details that conventional systems miss, opening new possibilities for precision and performance in industrial applications.

Event-based vision is particularly suited for industrial automation, IoT, automotive, and mobile applications that demand high performance, low power consumption, and operation in challenging lighting conditions. The technology offers significant advantages in speed, power efficiency, dynamic range, and low latency, driving use cases like high-speed counting, preventive maintenance, and inspection.

What is an Event-Based Sensor (EVS)?

In conventional video systems, entire images (i.e., the light intensity at each pixel) are recorded at fixed intervals, known as the frame rate. Standard movies are recorded at 24 frames per second (fps), with some videos using higher frame rates like 60 fps (16.7 ms intervals). While effective for representing the “real world” on a screen, this method oversamples unchanged parts of an image, especially at high frame rates, while undersampling the most dynamic areas.

Event-based sensing offers a biologically inspired solution to this under- and over-sampling. Unlike traditional cameras, event sensors don’t use a uniform acquisition rate (frame rate) for all pixels. Instead, each individual pixel defines its own sampling points by reacting to changes in the amount of light it detects. Information about relative light changes (temporal contrast) is encoded in “events”: data packets containing the pixel’s coordinates and the precise time at which the change occurred. This mode of operation enables continuous acquisition of essential motion information at the pixel level. The pixels operate asynchronously (unlike those of traditional CMOS cameras) and at much higher speeds, and they don’t need to wait for a complete frame before reading out data.
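To make the contrast with frame-based capture concrete, the sketch below shows what an event stream can look like in code. It is a minimal, vendor-neutral Python illustration: the field names, the microsecond timestamp, and the polarity flag are assumptions about a typical event format, not the output of any specific sensor or SDK.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Event:
    """A single change event, as produced by one pixel of an event sensor."""
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 = brightness increased, -1 = brightness decreased

def filter_region(events: Iterable[Event], x0: int, y0: int, x1: int, y1: int) -> List[Event]:
    """Keep only events that fall inside a region of interest.

    Because the stream already contains only changed pixels, a simple
    coordinate test replaces the per-frame cropping and differencing a
    frame-based pipeline would need.
    """
    return [e for e in events if x0 <= e.x < x1 and y0 <= e.y < y1]

# Example: three events; only the last two fall inside the ROI (10,10)-(20,20)
stream = [Event(2, 3, 1000, +1), Event(12, 15, 1004, -1), Event(13, 15, 1010, +1)]
print(filter_region(stream, 10, 10, 20, 20))
```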

The advantages of event sensors include high-speed operation (equivalent to 10,000 fps and more), extremely efficient power consumption (milliwatt to microwatt range), reduced data processing requirements (10 to 10,000 times less than frame-based systems) with correspondingly low end-to-end latency, and high dynamic range (more than 120 dB). These attributes make event sensors ideal for a wide range of applications and products.

One Sensor, Many Applications

What are some examples of typical use cases where event-based vision excels? Here are a few.

Safety: Object Tracking

Event-based sensors excel at tracking moving objects, leveraging their low data rate and sparse information capabilities. This approach allows for precise object tracking with minimal computational resources, eliminating traditional “blind spots” between frame acquisitions. Additionally, event sensors offer native segmentation, focusing solely on movement and disregarding static backgrounds for improved tracking accuracy and efficiency.

Event-based vision enhances safety by monitoring worker and machine interactions in real time, even in complex lighting, without capturing images.
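As a rough illustration of the tracking approach described above, the following sketch groups events into short time windows and follows the centroid of activity. It is a simplified example under the assumption that events arrive as (x, y, timestamp) triplets; production trackers use more robust clustering and association.

```python
import numpy as np

def track_centroids(events, window_us=1000):
    """Group events into short time windows and return the centroid of
    event activity in each window, a crude stand-in for event-based tracking.

    `events` is an array-like of (x, y, t_us) rows. Because a static
    background produces no events, the centroid follows the moving object
    directly, with no background-subtraction step.
    """
    events = np.asarray(events, dtype=np.float64)
    t0 = events[:, 2].min()
    bins = ((events[:, 2] - t0) // window_us).astype(int)
    centroids = []
    for b in np.unique(bins):
        win = events[bins == b]
        centroids.append((t0 + b * window_us, win[:, 0].mean(), win[:, 1].mean()))
    return centroids  # list of (window start time, mean x, mean y)
```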

Productivity: High-Speed Counting

With event-based vision technology, small, fast-moving objects can be counted at unprecedented speeds and with high accuracy. Objects independently trigger each pixel as they pass through the field of view of the event camera at speeds of many meters per second and at rates of over 1,000 objects per second. Counting accuracies of more than 99.5% are achieved.
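A toy version of such a counting scheme is sketched below: events on a virtual line are grouped into bursts, and each burst separated by a quiet gap is counted as one object. The line position and gap threshold are illustrative assumptions, not parameters from a real deployment.

```python
import numpy as np

def count_line_crossings(events, line_y, quiet_us=200):
    """Count objects passing a virtual line at pixel row `line_y`.

    Each passing object produces a burst of events on that row; bursts
    separated by more than `quiet_us` microseconds of silence are counted
    as separate objects. A toy heuristic, not a production algorithm.
    """
    events = np.asarray(events)          # rows of (x, y, t_us), sorted by time
    ts = events[events[:, 1] == line_y][:, 2]
    if ts.size == 0:
        return 0
    gaps = np.diff(ts)
    return 1 + int(np.sum(gaps > quiet_us))
```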

Predictive Maintenance: Vibration Monitoring

Event-based sensors enable contact-less, multi-channel vibration monitoring with pixel-level precision. By tracking the temporal evolution of each pixel in the scene, the sensors can measure vibration frequencies at many points simultaneously. These data provide valuable insights into vibration patterns across frequencies from 1 Hz up to tens of kHz, aiding, for example, in predictive maintenance.
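One simple way to recover a vibration frequency from a single pixel's event timestamps is to bin them into an activity signal and pick the strongest peak in its spectrum, as sketched below. This is an idealized example; the bin width and FFT-based peak picking are assumptions, and real systems apply analysis of this kind per pixel across the whole scene.

```python
import numpy as np

def dominant_frequency(timestamps_us, bin_us=100):
    """Estimate the dominant vibration frequency seen by one pixel.

    Event timestamps (in microseconds) are binned into a regular activity
    signal, and the peak of its FFT magnitude gives the strongest frequency
    component in hertz.
    """
    ts = np.asarray(timestamps_us, dtype=np.float64)
    t = ts - ts.min()
    n_bins = int(t.max() // bin_us) + 1
    activity, _ = np.histogram(t, bins=n_bins, range=(0, n_bins * bin_us))
    spectrum = np.abs(np.fft.rfft(activity - activity.mean()))
    freqs = np.fft.rfftfreq(n_bins, d=bin_us * 1e-6)   # frequency axis in Hz
    return freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin
```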

Quality: Particle/Object Size Monitoring

In high-speed production environments, event-based sensors allow for real-time control, counting, and measurement of particle or object sizes, for example on conveyors or in fluidic channels. The sensors capture instantaneous quality statistics, ensuring accurate process control at speeds of up to 500,000 pixels per second on the sensor plane with a counting precision of over 99%, optimizing quality assurance in production lines.
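For illustration, size measurement can be as simple as taking the spatial extent of the event cloud produced by one passing object, as in the sketch below. It assumes the events for a single object have already been segmented, and that the optical magnification is known so pixel extents can be converted to physical units.

```python
import numpy as np

def object_extent(events):
    """Return the bounding-box width and height (in pixels) of the event
    cloud produced by one object passing through the field of view.

    `events` holds (x, y, t_us) rows for a single, already-segmented object;
    with known magnification, the pixel extents map to physical size.
    """
    events = np.asarray(events)
    width = events[:, 0].max() - events[:, 0].min() + 1
    height = events[:, 1].max() - events[:, 1].min() + 1
    return width, height
```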

Quality Control

Event cameras help lower reject rates with real-time feedback and advanced processing, offering time resolution down to 5 µs and blur-free, asynchronous event output. One specific use case is the automatic detection and classification of the finest imperfections in manufacturing materials, for example on automotive parts for paint defect inspection, scratch detection, and planarity testing (see Figure 2).

As event-based vision continues to evolve and address diverse market needs, it is establishing itself as a new industry standard. Over the past several years, the technology has expanded to serve a wide array of applications. Thousands of product developers are now adopting event-based vision for sophisticated camera systems, supported by open-source technology and a growing community of inventors. These advancements are transforming how machines perceive, process, and react to visual information, bringing greater precision, efficiency, and intelligence to industrial automation operations.