Handling data from many sensors that scan a single target, a capability known as multi-sensor networking, is essential for large-object applications. Multiple sensors are needed when the target is larger than a single sensor can scan, or when multiple views are required to capture an object’s key features.
In this article, we present four main challenges that arise in multi-sensor networks: (1) sensor wiring; (2) sensor discovery, assignment, and mapping; (3) sensor alignment to a common coordinate system; and (4) sparse or dense network processing.
Sensor Wiring
Creating a successful, robust network of sensors requires careful design around a star topology. Unlike bus topologies such as USB or FireWire, a star-type network such as Ethernet continues to operate even if a node fails. In a bus topology, a fault anywhere along the main wire connecting the nodes brings the entire network down.
In a star topology, the cabling that connects each sensor to the central computer must carry power, data, synchronization, and laser safety signals. This can be achieved using a master controller and simple CAT5e cabling.
A master controller distributes power, microsecond-level synchronization, and laser safety signals across the multi-sensor network. Synchronization data is broadcast to all sensors and includes a timestamp, an encoder stamp, and the status of direct factory inputs (such as photocells) wired to the controller. Sensor data is streamed through a Gigabit Ethernet switch to the central PC for processing.
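To make this concrete, the sketch below shows how such a synchronization broadcast might be packed and unpacked in Python. The field layout, sizes, and byte order are illustrative assumptions, not the controller’s actual wire protocol.

import struct

# Hypothetical broadcast layout: a 64-bit microsecond timestamp, a
# 64-bit signed encoder count, and a 16-bit bitmask for the
# controller's direct factory inputs (photocells and the like).
SYNC_FORMAT = "<QqH"  # little-endian: uint64, int64, uint16

def pack_sync(timestamp_us, encoder_count, input_bits):
    # Build one packet, broadcast identically to every sensor.
    return struct.pack(SYNC_FORMAT, timestamp_us, encoder_count, input_bits)

def unpack_sync(packet):
    # Decode the packet on the sensor side.
    return struct.unpack(SYNC_FORMAT, packet)

# Example: photocell 0 active (bit 0 set) at encoder position 123456.
pkt = pack_sync(1_700_000_000_000_000, 123456, 0b01)
print(unpack_sync(pkt))  # (1700000000000000, 123456, 1)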
Sensor Discovery, Assignment, and Mapping
Once a network of sensors is wired and powered up, software is needed to enumerate each sensor on the network, physically identify it (usually by serial number), and enable it for use in a layup.
Common layups are “wide,” “ring,” or “opposite.” Each sensor is assigned a location in a layup, and its physical-to-logical mapping is recorded. This mapping is essential for later aligning and stitching data, or simply for replacing a sensor that is no longer functioning.
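As an illustration, the sketch below shows one way such a physical-to-logical mapping could be recorded in software. The slot names, layup labels, and serial numbers are hypothetical, and the discovery step itself (which is sensor-specific) is omitted.

from dataclasses import dataclass

@dataclass
class SensorAssignment:
    serial: str    # physical identity reported during discovery
    layup: str     # "wide", "ring", or "opposite"
    position: int  # logical slot within the layup (e.g., 0 = first)

# Physical-to-logical mapping recorded at commissioning time.
layup_map = {
    "top-0":    SensorAssignment(serial="SN-10432", layup="opposite", position=0),
    "bottom-0": SensorAssignment(serial="SN-10433", layup="opposite", position=0),
}

def replace_sensor(slot, new_serial):
    # Swap in a replacement sensor without touching the rest of the layup.
    layup_map[slot].serial = new_serial

replace_sensor("bottom-0", "SN-11871")
print(layup_map["bottom-0"])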
The following are examples of the most common sensor layups.
Wide Layup for Scanning Large Objects (Fig. 2) - Multiple sensors are used to measure large objects that are wider than a single sensor’s field of view, such as automotive parts and assemblies.
Opposite Layup for Determining Object Thickness (Fig. 3) - Two sensors perform top and bottom differential measurements to calculate true thickness when the object cannot be referenced to a known surface such as a conveyor.
Angled or Ring Layup for Measuring Entire Object Circumference (Fig. 4) - Multiple sensors are set up at an angle or in a ring to eliminate occlusions and scan the entire circumference of an object, as in log scanning applications.
Sensor Alignment
All sensors in a network must be aligned so that measurements from any sensor can be related to an absolute position on the object. This requires alignment transformations that convert from each sensor’s coordinates to a common coordinate system (i.e., world coordinates).
This alignment can be achieved in many ways. One common approach uses a known artifact with precise dimensions that all sensors can scan. The scans must contain one or more unique artifact features that can be used to determine each sensor’s position in world coordinates. Another method is to use a laser tracker with retroreflectors attached to each sensor housing, which allows the tracker to determine the orientation of each sensor and position it in world coordinates.
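As a sketch of the artifact-based approach, the following function recovers a sensor-to-world rigid transform from three or more matched artifact features using a standard least-squares fit (the Kabsch algorithm). The feature coordinates below are invented for illustration; any non-collinear set of unique features would serve.

import numpy as np

def fit_sensor_to_world(sensor_pts, world_pts):
    # Least-squares rigid transform mapping Nx3 artifact features seen
    # in sensor coordinates onto their known world coordinates.
    # Returns (R, t) such that world ~= sensor @ R.T + t.
    cs, cw = sensor_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (world_pts - cw)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cw - R @ cs

# Three known artifact features (world) as one sensor would see them.
world = np.array([[0, 0, 0], [100, 0, 0], [0, 50, 10]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90-degree yaw
sensor = (world - [5, 2, 0]) @ true_R
R, t = fit_sensor_to_world(sensor, world)
print(np.allclose(sensor @ R.T + t, world))  # True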
Sparse and Dense Network Processing
In a dense network, data from multiple sensors is stitched into a single 3D point cloud. For example, a wood optimizer is a dense network where data from overlapping top and bottom sensors is stitched to produce a single 3D board model with two surfaces (top and bottom). These surfaces are then analyzed to maximize volume recovery. A second example is food portioning, where data is stitched together into a 3D model representing the volume of an object, which is then optimized for cutting in order to minimize waste.
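A minimal sketch of dense stitching, assuming each sensor’s alignment transform (R, t) is already known from the calibration step: every cloud is mapped into world coordinates, then all clouds are concatenated into one model.

import numpy as np

def stitch(clouds, transforms):
    # Merge per-sensor point clouds (each Nx3, in sensor coordinates)
    # into one world-coordinate cloud using each sensor's (R, t).
    return np.vstack([pts @ R.T + t for pts, (R, t) in zip(clouds, transforms)])

# Two overlapping sensors contributing parts of the same surface.
identity = (np.eye(3), np.zeros(3))
shifted = (np.eye(3), np.array([80.0, 0.0, 0.0]))  # mounted 80 mm along x
cloud_a = np.random.rand(1000, 3) * [100, 50, 5]
cloud_b = np.random.rand(1000, 3) * [100, 50, 5]
model = stitch([cloud_a, cloud_b], [identity, shifted])
print(model.shape)  # (2000, 3): one combined 3D point cloud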
In contrast, a sparse sensor network is one where sensors do not overlap and there are gaps in the coverage; data is processed from each sensor’s individual view instead. Because of these gaps, no combined 3D point cloud is generated. An example of a sparse network can be found in automotive body-in-white inspection, where fixed sensors are strategically placed around the metal body to measure key features. In this application, the sensors do not cover every millimeter of the body surface, only those regions of interest that require feature verification.
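In practice, sparse processing can be as simple as a per-sensor pass/fail check against nominal feature positions, as in the hypothetical sketch below. The feature names, coordinates, and tolerances are invented for illustration.

import numpy as np

# Nominal world-coordinate positions of key body-in-white features,
# one per fixed sensor (values in mm, with a radial tolerance).
nominal = {
    "door-hinge-hole": (np.array([1520.0, 410.0, 870.0]), 0.5),
    "roof-seam-edge":  (np.array([2210.0,   0.0, 1390.0]), 0.8),
}

def verify_feature(name, measured_world):
    # Pass/fail check on one sensor's measurement; no stitching involved.
    target, tol = nominal[name]
    return float(np.linalg.norm(measured_world - target)) <= tol

print(verify_feature("door-hinge-hole", np.array([1520.2, 410.1, 870.0])))  # True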
In both dense and sparse networks, the sensors produce data in world coordinates so that measurements can be related back to positions on the object.
The Simplest Network: Dual Sensor Networking
The simplest form of network is made up of two sensors that we call a main and a buddy. These networks are used to calculate thickness or to find two edges on a wide web of material, such as strips on a tire.
In a 3D smart sensor the main/buddy configuration is a built-in feature. In this setup, the first sensor (main) is paired with a second sensor (buddy), and the main sensor is able to automatically “recognize” the buddy sensor when each is connected to the same network.
After pairing is complete, the buddy sensor sends its data to the main sensor. Both datasets are then merged into a common coordinate system and used for measurements. For ease of use, dual sensor systems in a 3D smart sensor use a single GUI to configure, measure, make decisions, and display results.
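Once the buddy’s data has been merged into the main sensor’s coordinate system, a differential measurement is straightforward. The sketch below shows the thickness calculation on two toy profiles; the pairing and merging steps are handled inside the sensor and are not shown.

import numpy as np

def thickness_profile(top_z, bottom_z):
    # Differential thickness from a main (top) and buddy (bottom) sensor
    # once both profiles share a common coordinate system: the top
    # surface height minus the bottom, independent of any conveyor.
    return top_z - bottom_z

# Toy profiles sampled at the same x positions, in millimeters.
top = np.array([25.10, 25.12, 25.09, 25.11])
bottom = np.array([0.05, 0.07, 0.04, 0.06])
print(thickness_profile(top, bottom))  # [25.05 25.05 25.05 25.05]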
Larger Network Support
In many cases, more than one buddy is needed. For example, imagine a case where 20 sensors (10 top, 10 bottom) are required to scan a very long object, from which a large 3D point cloud is generated in order to calculate the object’s volume.
Today’s 3D smart sensors support the discovery, assignment, mapping, alignment, and stitching of large multi-sensor networks to solve these types of applications using an “opposite” layup scheme. Because multi-sensor networks often generate a great deal of data, sensor acceleration is used to redirect all sensor data streams to a PC, where processing power and memory are sufficient to handle such an application.
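As a rough illustration of the kind of processing the accelerated PC performs, the sketch below estimates an object’s volume from a stitched world-coordinate cloud by binning points into an XY grid and integrating the height between the top and bottom surfaces in each cell. The grid resolution and slab dimensions are arbitrary.

import numpy as np

def volume_from_cloud(points, cell=1.0):
    # Estimate volume (cloud units cubed) from a stitched Nx3 cloud by
    # gridding XY and summing the per-cell height range (top - bottom).
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                    # shift indices to start at 0
    shape = ij.max(axis=0) + 1
    flat = ij[:, 0] * shape[1] + ij[:, 1]   # flatten 2D cell index
    top = np.full(shape[0] * shape[1], -np.inf)
    bot = np.full(shape[0] * shape[1], np.inf)
    np.maximum.at(top, flat, points[:, 2])  # highest z seen in each cell
    np.minimum.at(bot, flat, points[:, 2])  # lowest z seen in each cell
    heights = np.where(np.isfinite(top), top - bot, 0.0)
    return float(heights.sum() * cell * cell)

# A 100 x 50 x 20 mm slab as seen by top and bottom sensor rows.
xy = np.random.rand(20000, 2) * [100, 50]
slab = np.vstack([np.c_[xy, np.full(len(xy), 20.0)],
                  np.c_[xy, np.zeros(len(xy))]])
print(volume_from_cloud(slab))  # close to 100 * 50 * 20 = 100,000 mm^3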