Dear readers, dear machine vision fans. Well, I guess you are fans, since you are taking the time to read this ;)

In this very first blog article by EMSYS Visiongeek, I would like you to help me challenge the question posed in the title: “Laser triangulation: will it ever be outperformed?”

Figure 1
A laser triangulation or SOL setup with one laser line and one camera. The laser and camera are mounted at a fixed angle to each other, i.e. a fixed angle between the light sheet of the laser and the optical axis of the camera. The SOL setup is mounted on a linear guide and is then translated along this guide such that the space between the bricks on the table is scanned.

Figure 2
The 3D reconstruction of the scan from Figure 1.

The expression “laser triangulation” is most likely familiar to you and has become almost a synonym for measuring distances using a sensor (e.g. a camera) and a laser. In this article, however, I will switch over to the expression “Sheet Of Light”, abbreviated SOL. SOL is a special version of laser triangulation, and I will give a basic description of this technique a bit further down in this text, but first some examples.

In Figure 1 you will find three photos of a SOL setup, and Figure 2 shows the 3D result generated by that setup. This setup uses one line laser and one camera. The setup is mounted on a linear guide, which allows the camera-laser combination to translate along this guide while an encoder generates pulses equidistantly. In this way, camera and laser move over and along the “wally” of some bricks. This scanning results in a 3D model (or point cloud) that can capture the shape and defects of the side of the bricks. Figure 3 shows a sketch of another SOL setup.

Here, two cameras and two line lasers are used. The two setups are fixed with respect to each other, but the objects are translated relative to the setups on a conveyor. Here too, an encoder (integrated in the conveyor) generates pulses equidistantly as the conveyor sets the objects in motion. Each of the two setups generates a 3D model of the objects passing under it. Figure 4 shows the 3D model from one of these two setups.

OK, why the question in the title? Well, it is only a reflection, but although new and improved 3D technology is being presented at a steadily increasing pace, one of the oldest 3D technologies, laser triangulation or SOL, seems to keep up remarkably well. Here follows a bit more background.


SOL is one of a whole series of techniques for extracting 3D information from objects with cameras. Other techniques available in the machine vision world are “Stereo Vision”, “Time Of Flight” and “Structured Light”. There are a few more out there, like “Pattern Triangulation”, “Depth From Focus” and “Photometric Stereo”, but the first three mentioned are the most common ones in machine vision, and I will spend a few more words on these. However, I am not going to give a complete overview of the pros and cons of these different techniques. Your reactions might, however, give fuel for such a comparison in a later article. Here I will only highlight some facts, in order to eventually get back to my title: a very short description of each working principle, how long each technique has been around, and what the major requirement is to make it work. And one more thing: I will point out the typical value of the depth resolution compared to the working distance of each system.

Figure 3    
Principle of a double laser triangulation setup, using two line lasers and two cameras. By doubling the setup and choosing the positions well, rectangular objects can be measured on 5 out of 6 sides.

Figure 4
A result of a scan using the laser triangulation (SOL) setup from Figure 3. The real part was a polyurethane insulation board with an aluminum foil at the top. This specific object has a clear defect at one corner and a company name engraved in the side.

Here is what I mean with that last point. A system that can measure a distance (or distances) with a resolution of 2 mm at a working distance (WD) of 10 cm has a depth resolution of WD/50 (100 mm / 50 = 2 mm). A very small system, e.g. a microscope, could have an absolute depth resolution of, let's say, 1 µm at a working distance of 0.5 mm. Such a system will hence have a depth resolution of WD/500 (500 µm / 500 = 1 µm). Comparing the depth resolution with the working distance, and looking at the size of the denominator, thus quickly gives us an idea of how “powerful” or “useful” a system is when it comes to measuring distances accurately. As you will notice, the value of this denominator varies between different 3D systems. The values I give each system in this article are based on my experience with different systems. For each system, these values can be improved by fine-tuning the system or, e.g., using it under ideal circumstances. However, this is true for all of the 3D systems I will describe below. I have chosen values that correspond to a “practical” system, i.e. what you could expect to reach when applying a system in a true industrial context.
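If you like code, this figure of merit fits in a tiny Python helper. A minimal sketch; the function name and the choice of millimetres are mine, as the article defines no code:

```python
def depth_resolution_denominator(working_distance_mm: float,
                                 depth_resolution_mm: float) -> float:
    """Return the denominator N such that depth resolution = WD / N."""
    return working_distance_mm / depth_resolution_mm

# The two examples from the text (everything in mm):
print(depth_resolution_denominator(100.0, 2.0))    # 50.0  -> WD/50
print(depth_resolution_denominator(0.5, 0.001))    # 500.0 -> WD/500
```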

Now a quick look at four 3D systems.

“Stereo Vision”

Uses the principle of human stereo vision. With our two eyes, placed at a certain distance from each other and viewing the same scene at two slightly different angles, we are able to judge how far away objects in the scene are.
Using the same principle to measure distances, and hence record 3D information of natural scenes, has intrigued people for a very long time. The working principle was recognized already by the Greek mathematician Euclid around 300 BC, so the mathematics, or at least the basic part of it, has been around for a very long time. With stereo photography, the way of recording and displaying images such that the human viewer experiences depth has been popular for almost 100 years now. However, using a computer to reconstruct depth from a stereo image pair is a totally different ballgame. One part is a trigonometric calculation (triangulation), which is a trivial activity for a computer. But before this calculation can take place, one has to determine “corresponding pixels” in the two camera images. This means finding the part of the left image that corresponds to a part in the right image. This is one of the many image processing tasks that our brains are excellent at. Today there is a series of algorithms which, together with quite powerful PCs, can perform this 3D reconstruction sufficiently fast and stably. However, this has not been true for very long. I don't have an exact year here, but digital stereo vision originates from the early 1980s. Both PCs and algorithms have evolved quite a bit since, and there are both algorithms and applicable hardware to make stereo vision useful in the industry of today.
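To make the triangulation step concrete, here is a minimal sketch of the textbook relation for a rectified stereo pair. This is the standard pinhole-stereo formula, not code from any particular product, and all names and values are illustrative:

```python
def stereo_depth(focal_length_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("no valid correspondence for this pixel")
    return focal_length_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline 10 cm, disparity 16 px -> 5 m
print(stereo_depth(800.0, 0.10, 16.0))
```

The hard part, as described above, is not this formula but finding the disparity (the corresponding pixels) in the first place.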

A stereo vision system typically reaches a depth resolution of WD/100. The frame rate is typically lower than 20 fps for 1.3 Mpixel image sensors. Since stereo vision can typically only deliver a 3D point for about 80% of the sensor pixel pairs, a stereo vision system delivers about 20 × 1.3×10^6 × 0.8 ≈ 2.1×10^7 3D points/s.
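For completeness, the point-rate arithmetic above in a few lines, using the values as stated:

```python
# Reproducing the stereo vision 3D point-rate estimate from the text.
fps = 20          # typical frame rate
pixels = 1.3e6    # sensor resolution
valid = 0.8       # fraction of pixel pairs that yield a 3D point
print(fps * pixels * valid)   # 20800000.0, i.e. ~2.1e7 3D points/s
```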

Figure 5
SOL or Stereo Vision.
(Source MVTec Halcon12)

“Sheet Of Light” (SOL)

The SOL technique is based on triangulation, as is partly true for stereo vision as well, but instead of using two cameras, SOL uses one camera and one (laser) light source. The light source has its rays (ideally) located in only one plane, or one sheet, hence the name Sheet Of Light. The simplest way is to use a spot laser and let it shine through a cylindrical lens. The narrow, cylindrically shaped beam is, when passing through the cylindrical lens, transformed (re-shaped) into a narrow triangular sheet (see Figure 1). Today, special optics are available which can perform this transformation such that we get a high-quality light sheet: narrow and flat and with homogeneous intensity. The light sheet is directed towards the object we want to measure, and we can observe a typically bright line on that object. Knowing the location and orientation of the light sheet relative to the location and orientation of the camera, it is easy to reconstruct where in space the different parts of this line are located (using triangulation). Hence, we can reconstruct the 3D shape of that object, however, temporarily only along that very line. In order to get the shape of the whole visible part of the object, we need to repeat the same action after e.g. translating the object (or the laser and camera setup) over a series of short distances. These short distances are typically measured very accurately by means of an encoder.
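For the curious, here is a minimal sketch of the per-line triangulation under one simple geometry that I am assuming for illustration (the article does not fix one): the camera looks straight down, the laser sheet is tilted by a known angle from the vertical, and the optics are reduced to a single mm-per-pixel scale factor. A real system would use a full calibration instead:

```python
import math

def height_from_shift(shift_px: float, mm_per_px: float,
                      laser_angle_deg: float) -> float:
    """A surface raised by h shifts the laser line sideways by
    h * tan(angle), so h = shift / tan(angle)."""
    shift_mm = shift_px * mm_per_px
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# One scanned profile: detected line shift (in pixels) per sensor column.
profile_px = [0.0, 0.0, 12.4, 12.6, 12.5, 0.0]
heights_mm = [height_from_shift(s, mm_per_px=0.05, laser_angle_deg=30.0)
              for s in profile_px]
print([round(h, 2) for h in heights_mm])
# [0.0, 0.0, 1.07, 1.09, 1.08, 0.0]  -> ~1.1 mm where the object is
```

Repeating this for every encoder pulse, and stacking the profiles, gives the point cloud.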

With this background, it might not be surprising that SOL typically reaches a depth resolution of about WD/1000 (!), and with a data rate of 10k 3D profiles/s, where each profile contains 2k 3D points, a speed of 2×10^7 3D points/s is achieved.

“Time Of Flight” (TOF)

Time Of Flight, or TOF, is the youngest of the four techniques discussed here. The first TOF products came on the market a few years after the millennium shift. TOF uses the principle of measuring the time it takes for light to travel from the camera to an object and back to the camera. Since it takes only about 3.3 ns for light to travel 1 m in air, the electronics used to handle this “measurement” need to be very fast. It is fast electronics for the controllable LED illumination and fast electronics for the signal processing, in combination with dedicated CMOS image chips, which have enabled this technique. The principle is based on the technique used in LIDARs (Light Detection And Ranging), but instead of needing to scan a single laser beam and use a single detector, TOF utilizes a matrix sensor and an LED source, illuminating and capturing the complete scene at once. TOF thus eliminates the need for scanning.
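As a small illustration of the numbers involved, here is the basic pulsed-TOF relation in Python. A minimal sketch of my own; real TOF cameras usually measure the phase shift of modulated light rather than a raw pulse time:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance = c * t / 2 (the light travels out and back)."""
    return C * round_trip_s / 2.0

# Light returning from a 1 m distant object arrives after ~6.67 ns
# (2 x the ~3.3 ns per metre mentioned above):
print(tof_distance_m(6.67e-9))  # ~1.0 m
```

Resolving millimetres this way means resolving picoseconds, which is why the electronics are the enabling factor.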

The illuminating source for TOF is typically a NIR LED (Near InfraRed LED), and TOF is able to measure distances to the various objects in the scene (3D) as well as generate gray value images (2D).

A TOF system reaches a depth resolution of about WD/100 and a frame rate typically lower than 20 fps for 0.3 Mpixel images, or about 0.6×10^7 3D points/s.

Figure 6
Illustration of the principle of a Time Of Flight system (Source Hyunjung Shim, Seungkyu Lee).

“Structured Light”

This principle is closely related to SOL, but instead of using a light sheet, a projector (typically a DLP projector) projects stripes on the object, alternating bright and dark. It is powerful DLP projectors which have made this technique a valid 3D system candidate. Such projectors have been on the market since around the year 2000, and hence so has this technique. Here it is the transition between a bright and a dark stripe (the edge) which the algorithms search for, instead of the light line in the case of SOL. The originally straight edge is deformed by the shape of the object and, as with SOL, triangulation is used for the 3D reconstruction. One benefit of using a DLP projector rather than a fixed light line is that the projector, being able to project almost any pattern on the target, can move this edge over the object; the mechanical translation or motion required by SOL is not needed for structured light. Due to the flexibility that the DLP projector introduces, structured light uses much smarter patterns than a simple edge that appears to translate over the object. Typically, a series of patterns is projected one after the other, starting with wide bright and dark stripes resulting in a few edges equidistantly distributed over the object, and systematically increasing the number of stripes, and hence edges, still equidistantly distributed over the object. In this way, it is possible to extract 3D information with a low spatial resolution first (only large variations), and then increase the spatial resolution up to the required level. By applying clever patterns from the projector, and matching algorithms for the detection of edges, structured light can detect the edges (transitions) with sub-pixel precision.
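Here is a minimal sketch of that coarse-to-fine coding idea, with all names my own. It uses plain binary coding for simplicity; real systems typically prefer Gray codes, which are more robust at the stripe transitions:

```python
def stripe_patterns(width: int, levels: int) -> list[list[int]]:
    """One pattern per level; pattern[x] is 1 (bright) or 0 (dark).
    Each level doubles the number of stripes: 2, 4, 8, ..."""
    patterns = []
    for level in range(levels):
        stripes = 2 ** (level + 1)
        stripe_w = width // stripes
        patterns.append([(x // stripe_w) % 2 for x in range(width)])
    return patterns

def decode(bits: list[int]) -> int:
    """Recover a pixel's stripe index from its bright/dark sequence."""
    index = 0
    for b in bits:
        index = index * 2 + b
    return index

# Observe, per camera pixel, whether it was bright or dark in each
# projected pattern, then decode its stripe index:
pats = stripe_patterns(width=16, levels=3)
x = 11
print(decode([p[x] for p in pats]))  # 5, the stripe index of column 11
```

Knowing the stripe index per pixel ties each camera pixel to a known projector plane, after which the reconstruction is triangulation, just as with SOL.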

A structured light system typically reaches a depth resolution of about WD/500. For a sensor resolution of 1.3 Mpixels and a 3D frame rate of 20 fps, the 3D point rate is 2.6×10^7 3D points/s.

Figure 7 
Illustration of a structured light system.
(Source MVTec Halcon12)

Summing up the performance values for the different 3D techniques, we get the table below.
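Technique          Depth resolution   3D point rate
Stereo Vision      WD/100             2.1×10^7 3D points/s
Sheet Of Light     WD/1000            2.0×10^7 3D points/s
Time Of Flight     WD/100             0.6×10^7 3D points/s
Structured Light   WD/500             2.6×10^7 3D points/s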

From these values, it seems to me that SOL is still the most “powerful” 3D system type. What do you think?
