
High dynamic range 3D laser scanning with the single-shot raw image of a color camera

Open Access

Abstract

As a typical technique of optical three-dimensional (3D) shape measurement, laser scanning can provide good measurement accuracy with a simple and low-cost optical configuration. The performance of 3D laser scanning greatly depends on the center detection of the laser stripe. In general, laser stripe detection algorithms expect the intensity of the laser stripe to remain moderate and stable. To deal with the negative impact of dramatic changes in the intensity of the laser stripe, a high dynamic range (HDR) laser scanning technique with a concise algorithm and a simple hardware configuration is proposed in this paper. The Bayer filter in the sensor chip of a color camera is exploited to provide different intensity responses to the laser. Then the sub-images of the laser stripe, which correspond to different color channels and have different intensity levels, can be decomposed from the raw image captured by the color camera. A dedicated algorithm is proposed to achieve HDR laser stripe detection, which collects the coordinates of the best quality from the different sub-images. Finally, a 3D surface of improved quality can be reconstructed from the detected laser stripe. The proposed HDR laser scanning technique works on a single-shot raw image by trading pixel resolution for time efficiency. Its validity is demonstrated in comparative experiments.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) imaging provides a bridge between the physical world and the digital world. Various techniques can achieve 3D imaging; among them, optical 3D imaging plays an increasingly important role because it is non-contact, non-destructive, and easy to operate [1,2]. Optical 3D imaging is widely used in fields including industrial inspection, surveying and mapping, medical treatment, and cultural heritage. As a typical technique of optical 3D shape measurement, laser scanning provides moderate measurement accuracy with a simple and low-cost optical configuration, and has attracted much attention over the past decades [3–10].

From the working principle of 3D laser scanning, it can be inferred that the performance of 3D measurement greatly depends on the center detection of the laser stripe. When designing or selecting an algorithm for center detection, both accuracy and robustness to noise are considered [11–17]. Moreover, the improvement of speed [18] and the suppression of spurious laser stripes [19,20] have been investigated.

For optical 3D imaging with active illumination, e.g., structured-light or laser scanning techniques, the change of the reflected light is determined by both the intensity distribution of the illumination and the variation of the surface reflectance. Moreover, because the laser is monochromatic, the color of the surface texture also greatly changes the intensity of the reflected light. Compared with standard imaging, high dynamic range imaging (HDRI) responds to a much greater range of intensity levels within the field of view. Therefore, when the object has widely varying surface reflectance, HDRI is necessary for laser scanning. Inherited from the HDRI technique in computer vision, the multi-exposure method, also referred to as exposure bracketing, has been exploited in laser scanning [20]. Obviously, the multi-exposure method has low time efficiency in scanning because multiple images must be captured at each position. Therefore, an exposure coding technique was proposed to achieve HDRI in laser scanning [21]. With the assistance of a liquid crystal on silicon (LCoS) device, the brightness of the captured image can be adjusted pixel by pixel, which enhances the environmental adaptability of the system. However, this system is complex in both device configuration and algorithm, which restricts its popularity.

In this paper, a high dynamic range (HDR) laser scanning technique inspired by our previous work [22] is proposed. To capture color information with a single image sensor, a color camera employs a Bayer filter, which is essentially a color filter array (CFA) mosaicking red, green, and blue (RGB) filters pixel by pixel. Because the laser is a monochromatic light source, it is attenuated only slightly by the filter of the matching color but strongly by the filters of the other colors. Therefore, the laser stripe has different intensities in the images corresponding to different color channels, which implies that we can obtain images of different intensity levels by decomposing the raw image of a color camera. Subsequently, single-shot HDRI can be achieved by exploiting the decomposed images, which increases neither the capture time nor the device complexity. Since each decomposed image is down-sampled from the raw image, the proposed method essentially trades spatial resolution for time efficiency.

This paper is organized as follows: Section 2 presents the principle of laser stripe detection in 3D laser scanning. Section 3 explains the motivation, principle, and method of the proposed HDR laser scanning technique. Section 4 presents and analyzes the experimental results. Section 5 provides a discussion, and Section 6 concludes the paper.

2. Principle of laser stripe detection in 3D laser scanning

2.1 System model of 3D laser scanning

A typical 3D laser scanning system consists of a camera, a line laser source, and a mechanical device for scanning. When the system is working, the line laser source emits a laser plane, which intersects the surface of the object, and the camera records the image of the resulting laser stripe on the surface. Intuitively, a 3D point is illuminated by the line laser and captured by the camera. The light emitted from the laser source always lies on a certain plane in 3D space. Therefore, 3D reconstruction of the point can be achieved by calculating the intersection between the laser plane and the imaging ray, as shown in Fig. 1. Strictly speaking, lens distortion cannot be ignored in vision-based measurement [23,24]. However, the current technique of camera calibration is quite mature, and the image can easily be undistorted after camera calibration. To keep the system model concise, we assume here that the image has been undistorted and consider only the linear camera model. The imaging ray of the camera can be described with the well-known pinhole camera model, and the laser plane can be described with the general equation of a plane in 3D space, which leads to the following model of the system

$$\left\{ \begin{array}{l} s\tilde{\mathbf{m}} = \mathbf{K}\mathbf{X}_c\\ \left[ {a\;\;b\;\;c} \right]\mathbf{X}_c = d \end{array} \right..$$

Fig. 1. The schematic diagram of the 3D laser scanning system.

Here $\mathbf{X}_c = ({x_c},{y_c},{z_c})^T$ is a 3D point in the camera coordinate system (CCS), $\mathbf{m} = ({u,v})^T$ is the image point of $\mathbf{X}_c$, and $\tilde{\mathbf{m}} = ({u,v,1})^T$ is its homogeneous coordinate. $\mathbf{K}$ is the intrinsic parameter matrix of the camera, and $s$ is a scale factor. The parameters $(a, b, c, d)$ of the laser plane and the intrinsic matrix $\mathbf{K}$ are estimated during system calibration. Once the image coordinate $\mathbf{m}$ is determined, the 3D coordinate $\mathbf{X}_c$ can be calculated with the system model.
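To make the triangulation concrete, the following minimal Python sketch solves Eq. (1) for $\mathbf{X}_c$; the function name and interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruct_point(m, K, plane):
    """Intersect the imaging ray of pixel m = (u, v) with the laser
    plane a*x + b*y + c*z = d, following Eq. (1)."""
    a, b, c, d = plane
    # Back-project the pixel into a ray direction: X_c = s * K^{-1} * m~.
    ray = np.linalg.solve(K, np.array([m[0], m[1], 1.0]))
    # The plane equation [a b c] X_c = d fixes the scale factor s.
    s = d / (np.array([a, b, c]) @ ray)
    return s * ray  # 3D point X_c in the camera coordinate system
```

Substituting the back-projected ray $s\mathbf{K}^{-1}\tilde{\mathbf{m}}$ into the plane equation yields the scale factor $s$, so each detected stripe pixel maps to exactly one 3D point.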

2.2 Laser stripe detection with sub-pixel accuracy

From the system model, we can see that detecting the image coordinate $\mathbf{m}$ of the laser stripe is the prerequisite of 3D reconstruction, and the accuracy of $\mathbf{m}$ largely determines the accuracy of 3D reconstruction. To detect the laser stripe with sub-pixel accuracy, the well-known unbiased detector based on the derivatives of the Gaussian smoothing kernel [12] is employed. For the laser stripe in a 2D image I, the direction perpendicular to the stripe is determined first, and then the sub-pixel position of the stripe is calculated by analyzing the directional derivative along that perpendicular direction.

Considering the noise in real images, Gaussian smoothing kernels are selected as the basic convolution kernels for image processing

$${g_\sigma }(x )= \frac{1}{{\sqrt {2\pi } \sigma }}{e^{ - \frac{{{x^2}}}{{2{\sigma ^2}}}}},\quad {g^{\prime}_\sigma }(x )= \frac{{ - x}}{{\sqrt {2\pi } {\sigma ^3}}}{e^{ - \frac{{{x^2}}}{{2{\sigma ^2}}}}},\quad {g^{\prime\prime}_\sigma }(x )= \frac{{{x^2} - {\sigma ^2}}}{{\sqrt {2\pi } {\sigma ^5}}}{e^{ - \frac{{{x^2}}}{{2{\sigma ^2}}}}}.$$

The convolution kernels for the first and second partial derivatives of the image can be generated by combining the basic kernels in the 2D coordinate system, as plotted with a mesh style in Fig. 2.

$$\left\{ {\begin{array}{*{20}{l}} {{g_{x,\sigma }}({x,y} )= {{g^{\prime}}_\sigma }(x ){g_\sigma }(y )}\\ {{g_{y,\sigma }}({x,y} )= {g_\sigma }(x ){{g^{\prime}}_\sigma }(y )}\\ {{g_{xx,\sigma }}({x,y} )= {{g^{\prime\prime}}_\sigma }(x ){g_\sigma }(y )}\\ {{g_{xy,\sigma }}({x,y} )= {{g^{\prime}}_\sigma }(x ){{g^{\prime}}_\sigma }(y )}\\ {{g_{yy,\sigma }}({x,y} )= {g_\sigma }(x ){{g^{\prime\prime}}_\sigma }(y )} \end{array}} \right..$$
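As an illustration, a short Python sketch of Eqs. (2) and (3) is given below; the truncation radius of 4σ is our assumption, since the paper does not specify the kernel support.

```python
import numpy as np

def gaussian_kernels_1d(sigma):
    """The 1D Gaussian and its first two derivatives from Eq. (2)."""
    r = int(4 * sigma)  # assumed truncation radius
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return g, -x / sigma**2 * g, (x**2 - sigma**2) / sigma**4 * g

def kernels_2d(sigma):
    """The separable 2D kernels of Eq. (3); arrays are indexed [y, x]."""
    g, g1, g2 = gaussian_kernels_1d(sigma)
    return {"gx":  np.outer(g, g1),  "gy":  np.outer(g1, g),
            "gxx": np.outer(g, g2),  "gxy": np.outer(g1, g1),
            "gyy": np.outer(g2, g)}
```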

Fig. 2. The convolution kernels for the first and second partial derivatives of the image (σ = 3). (a) gx, σ; (b) gy, σ; (c) gxx, σ; (d) gxy, σ; (e) gyy, σ.

Then the first and second partial derivatives of the image can be obtained by convolving the image with these combined kernels

$$\left\{ {\begin{array}{l} {{I_x}({x,y} )= I({x,y} )\ast {g_{x,\sigma }}({x,y} )}\\ {{I_y}({x,y} )= I({x,y} )\ast {g_{y,\sigma }}({x,y} )}\\ {{I_{xx}}({x,y} )= I({x,y} )\ast {g_{xx,\sigma }}({x,y} )}\\ {{I_{xy}}({x,y} )= I({x,y} )\ast {g_{xy,\sigma }}({x,y} )}\\ {{I_{yy}}({x,y} )= I({x,y} )\ast {g_{yy,\sigma }}({x,y} )} \end{array}} \right..$$

Here the operator * denotes convolution. The direction perpendicular to the stripe can be determined by calculating the eigenvalues and eigenvectors of the Hessian matrix

$${\mathbf H} = \left[ {\begin{array}{*{20}{c}} {{I_{xx}}({x,y} )}&{{I_{xy}}({x,y} )}\\ {{I_{xy}}({x,y} )}&{{I_{yy}}({x,y} )} \end{array}} \right].$$

For each pixel (x, y) in the laser stripe, the Hessian matrix has two eigenvalues. The eigenvector corresponding to the eigenvalue with the larger absolute value indicates the direction perpendicular to the stripe, which is denoted by (nx, ny) after normalization. The exact location of the laser stripe is the point where the first directional derivative of I along (nx, ny) vanishes, which is obtained from a second-order Taylor expansion as

$$\left\{ \begin{array}{l} t ={-} \frac{{{I_x}{n_x} + {I_y}{n_y}}}{{{I_{xx}}n_x^2 + 2{I_{xy}}{n_x}{n_y} + {I_{yy}}n_y^2}}\\ {\mathbf m} = ({x,y} )+ t({{n_x},{n_y}} )\end{array} \right..$$
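A compact Python sketch of Eqs. (5) and (6) follows, assuming the derivative images of Eq. (4) have been precomputed (e.g., with the kernels above); it also returns the dominant eigenvalue, which is reused in Section 3.3 as a quality measure. The interface is an illustrative assumption.

```python
import numpy as np

def stripe_center(Ix, Iy, Ixx, Ixy, Iyy, x, y):
    """Sub-pixel stripe center at mask pixel (x, y), per Eqs. (5)-(6).
    The inputs are the derivative images of Eq. (4), indexed [y, x]."""
    ix, iy = Ix[y, x], Iy[y, x]
    ixx, ixy, iyy = Ixx[y, x], Ixy[y, x], Iyy[y, x]
    H = np.array([[ixx, ixy], [ixy, iyy]])   # Hessian of Eq. (5)
    w, v = np.linalg.eigh(H)
    k = np.argmax(np.abs(w))                 # eigenvalue of larger magnitude
    nx, ny = v[:, k]                         # direction perpendicular to the stripe
    t = -(ix * nx + iy * ny) / (ixx * nx**2 + 2 * ixy * nx * ny + iyy * ny**2)
    return (x + t * nx, y + t * ny), w[k]    # coordinate m and its eigenvalue
```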

2.3 Mask generation for detection speedup

In general, the laser stripe detection described above would be executed pixel by pixel, which leads to low time efficiency. A better approach is to first perform a coarse localization of the laser stripe. Here we generate a mask indicating its approximate location.

Considering that the laser stripe has a peak intensity while the background is dark, the mask is generated with image segmentation and peak search. Figure 3(a) is the image of the laser stripe smoothed with Gaussian filtering; it is then segmented into the stripe area and the background area by image thresholding, as shown in Fig. 3(b). As a rough mask, Fig. 3(b) is multiplied with Fig. 3(a) pixel by pixel, which generates the masked image of the laser stripe shown in Fig. 3(c). Then the laser stripe is detected by a maximum search line by line, which provides the coordinates of the laser stripe with pixel accuracy. The cross section corresponding to the blue line in Fig. 3(c) is plotted in Fig. 3(d), and the red cross indicates the result of the maximum search, i.e., the pixel location of the laser stripe on this line. The coordinates of the whole laser stripe are plotted in Fig. 3(c) with red dots. Finally, the refined mask of the laser stripe is generated from these pixel-accurate coordinates and their neighborhood, as shown in Fig. 3(e).
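The sketch below implements this mask generation in Python with SciPy; the thresholding rule (mean plus two standard deviations) and the neighborhood half-width are our assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy import ndimage

def stripe_mask(image, sigma=3.0, half_width=5):
    """Coarse localization: smooth, threshold, per-row peak search,
    then grow each peak into a refined mask of the laser stripe."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    rough = smoothed > smoothed.mean() + 2.0 * smoothed.std()  # assumed rule
    masked = smoothed * rough            # masked image of the laser stripe
    mask = np.zeros(image.shape, dtype=bool)
    for row in range(masked.shape[0]):
        if masked[row].any():
            peak = int(masked[row].argmax())   # pixel-accurate stripe position
            mask[row, max(peak - half_width, 0):peak + half_width + 1] = True
    return mask
```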

Fig. 3. Mask generation for the laser stripe. (a) The image of the laser stripe smoothed with Gaussian filtering. (b) The result of image thresholding. (c) The masked image of the laser stripe, in which the red dots indicate the pixel coordinates of the laser stripe. (d) The plot of the cross section corresponding to the blue line in Fig. 3(c). (e) The refined mask of the laser stripe.

Finally, the flow chart of laser stripe detection is shown in Fig. 4. The sub-pixel localization algorithm described by Eqs. (2)–(6) is executed only within the mask.

Fig. 4. Flow chart of laser stripe detection with sub-pixel accuracy.

3. HDR laser scanning with a color camera

3.1 Motivation

The intensity value of the laser stripe in an image captured by a camera is proportional to the reflectance of the surface to be scanned. The factors that affect the reflectance are complex, and it is difficult to list them exhaustively or to discuss their effects thoroughly, because there are many potential physical factors and the mechanisms of some interactions are not yet clear. Meanwhile, for a surface deviating from the Lambertian assumption, e.g., a shiny or specular surface, variation in the directions of illumination and observation introduces significant anisotropy and corresponding geometric-optical effects.

In the field of computer vision, the bidirectional reflectance distribution function (BRDF) is employed to describe the scattering of light from an incident direction to an exiting direction [25]. The advantage of the BRDF is that it hides the underlying (possibly unknown) physical mechanism and provides an intuitive description of the reflectance. In a 3D laser scanning system, light is always emitted from the laser source and reflected to the camera, which is a typical scenario for the BRDF. For a certain point on the surface, the directions of both the incident and the reflected light can be specified in the spherical coordinate system by the zenith angle θ and the azimuthal angle φ. Therefore, the BRDF can be expressed as a four-dimensional function $r({{\theta_i},{\varphi_i},{\theta_o},{\varphi_o}} )$, in which the subscripts i and o denote incidence and reflection, respectively. Different points $({x,y} )$ on the surface have different BRDFs $r({{\theta_i},{\varphi_i},{\theta_o},{\varphi_o};\;x,y} )$. During laser scanning, each point $({x,y} )$ corresponds to certain incident and reflected directions $({{\theta_i},{\varphi_i},{\theta_o},{\varphi_o}} )$ and thus has a certain reflectance r, which means we can use a two-dimensional function $r({x,y} )$ to express the distribution of the reflectance over the surface.

A diffuse surface is usually treated as Lambertian; its BRDF is constant, which means a point on a diffuse surface has the same reflectance in all directions. The reflectance of a diffuse surface mainly depends on its color and texture, so a surface with different colors or rich texture has significantly different reflectance in different areas. A shiny surface, in contrast, has an anisotropic BRDF, which means the reflectance changes drastically with direction. When the reflectance is low or high, the intensity of the laser stripe in the image may be quite low or saturated, as shown in Fig. 5. If the laser stripe is strongly saturated, as shown in Fig. 5(b), it introduces more error due to the increased width of the stripe [26]. Moreover, high intensity may cause obvious interreflections in concavities. If the laser stripe has quite low intensity, as shown in Fig. 5(c), the detection error also increases due to the decreased signal-to-noise ratio [14]. In general, laser stripe detection algorithms expect the laser stripe to have moderate intensity, neither too low nor too high. Therefore, an HDRI technique is desired to tolerate large changes in the surface reflectance. The well-known approach to HDRI is to first capture multiple images with different exposure times and then synthesize them into one image with high dynamic range (HDR). Essentially, this approach trades time efficiency for dynamic range, which is intuitive and works well in a stationary scene. However, laser scanning requires a scanning process that moves either the object or the laser line, which violates the stationary condition. Therefore, a new approach to HDRI that exploits the Bayer filter on the color image sensor is proposed, which trades pixel resolution for time efficiency.

Fig. 5. Various intensity of the laser stripe in image. (a) An image of the laser stripe on the surface having different reflectance. (b) The enlarged mesh plot of the area marked with red square in (a), which has high surface reflectance. (c) The enlarged mesh plot of the area marked with green square in (a), which has low surface reflectance.

3.2 Principle

The intensity value of a pixel in the laser stripe image captured by a camera can be determined by the following equation

$$I({x,y} )= \alpha {q_e}\tau ir({x,y} ).$$

Here qe is the quantum efficiency of the image sensor, τ is the exposure time, i is the irradiance of the laser, and α is a factor of energy utilization related to the configuration of the camera; all of them are invariant for a given image. As discussed in Section 3.1, the surface reflectance $r({x,y} )$ may change greatly over different areas of the surface. HDRI requires a group of images of different intensity levels for data synthesis. The multi-exposure method changes the exposure time between captures. Here we instead exploit the Bayer filter on the color image sensor to obtain different quantum efficiencies, which generates images of different intensity levels from a single-shot raw image.

A typical image sensor, e.g., a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, cannot perceive different colors by itself; an ordinary CMOS chip can only serve as a monochromatic image sensor. To capture a color image with a single chip, the Bayer filter is used to separate colors. A Bayer filter is a color filter array (CFA) mosaicking red, green, and blue (RGB) color filters on a square grid to provide the corresponding color channels. The size of each filter unit exactly matches the size of each pixel on the CMOS chip, so red, green, and blue light is recorded in different pixels corresponding to the different RGB filters. Filters of different colors have distinct transmittances at different wavelengths, which leads to different quantum efficiencies for the color channels. For example, Fig. 6 displays the quantum efficiency curves of the Sony IMX273 chip, from which we can see that the blue channel is sensitive to wavelengths in the range of 400-500 nm, the green channel 480-600 nm, and the red channel 580-750 nm. Therefore, when we employ a laser source of 520 nm for 3D scanning and capture the image of the laser stripe using a color image sensor with the Bayer filter, the green channel has the highest intensity in the image, the blue channel a lower intensity, and the red channel an even lower intensity. This is a reasonable inference from Eq. (7) and the fact that, at a wavelength of 520 nm, the quantum efficiencies of the three channels satisfy

$$q_e^G > q_e^B > q_e^R.$$

Fig. 6. Bayer filter and its quantum efficiency. (a) The distribution of the Bayer filter, which separates pixels on the CMOS chip into red, green, and blue channels. (b) The quantum efficiency curves of each channel. The marked values $q_e^R$, $q_e^G$, and $q_e^B$ are the quantum efficiencies of the three channels corresponding to the wavelength of 650 nm.

Here we present a typical scene desiring HDRI, in which the laser stripe falls on a plate with a chessboard pattern, as shown in Fig. 7. The black area has low reflectance, while the white area has high reflectance. Figure 7(a) is the true color image of the laser stripe, which is generated with a demosaicing algorithm from the raw image of the camera shown in Fig. 7(b). Figure 7(c) is a partial enlarged view of the area marked with a yellow square in the raw image, where we can see the various intensities corresponding to the different color channels. Figures 7(d)–7(f) are the low dynamic range (LDR) sub-images decomposed from the raw image, corresponding to the red, green, and blue channels, respectively. It is obvious that the laser stripe of the red channel has moderate intensity in the white area, while that of the green channel has moderate intensity in the black area. By selecting and combining the laser stripe from different channels, we can in principle generate an HDR image of the laser stripe.

Fig. 7. Laser stripe desiring HDRI. (a) The true color image of the laser stripe generated with a demosaicing algorithm. (b) The raw image of the laser stripe captured by the camera with the Bayer filter. (c) The partial enlarged view of the area marked with a yellow square in (b). (d), (e) and (f) The LDR sub-images decomposed from (b), which correspond to the red, green, and blue channels, respectively.

3.3 Method

In a 3D laser scanning system, the HDRI technique serves the accuracy and reliability of laser stripe detection. In practice, we do not need to synthesize an HDR image of the laser stripe; instead, we detect the stripe in the sub-image of each channel and then select the coordinate with the best accuracy and reliability. Based on this consideration, HDR laser stripe detection can be realized as shown in Fig. 8.

Fig. 8. Flow chart of HDR laser stripe detection.

The input raw image captured by a color camera is decomposed into four sub-images corresponding to the channels R, G1, G2, and B, respectively. For each sub-image, the laser stripe is independently detected with the algorithm described in Section 2.2. Thus, for any point on the laser stripe we obtain four coordinates from the different sub-images. These four coordinates have different reliability and accuracy, depending on the intensity level of the laser stripe at this point. Since the dynamic range of a sub-image is relatively low, in the sub-image of channel R the intensity level of the laser stripe may be moderate in areas of high surface reflectance but too low in areas of low surface reflectance, while in the sub-image of channel G1 the intensity level may be moderate in areas of low surface reflectance but too high in areas of high surface reflectance.
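For a sensor with the RGGB arrangement mentioned in Section 5, the decomposition is a simple sub-sampling, sketched below in Python; the CFA phase (channel R in the top-left corner) is an assumption that depends on the specific sensor.

```python
def decompose_raw(raw):
    """Split an RGGB raw image into four half-resolution sub-images.
    The phase (R in the top-left corner) is assumed here."""
    return (raw[0::2, 0::2],   # R
            raw[0::2, 1::2],   # G1
            raw[1::2, 0::2],   # G2
            raw[1::2, 1::2])   # B
```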

In practice, it is hard to formulate a universal criterion to judge whether the intensity level is suitable. Thus, we use the eigenvalue to evaluate the quality of the coordinates. Recalling the laser stripe detection algorithm in Section 2.2, each coordinate has an associated eigenvalue. If we regard the intensity distribution of the laser stripe as a 3D surface, as shown in Figs. 5(b) and 5(c), the magnitude of the eigenvalue is just the curvature in the direction perpendicular to the laser stripe [27], and the curvature describes how small movements change the surface normal. If the intensity level is high, there are oversaturated pixels on the laser stripe, so the 3D surface of the laser stripe has a high but flat top, which leads to a small curvature. If the intensity level is low, the 3D surface of the laser stripe has a low top, which also leads to a small curvature. Only when the intensity level is moderate does the 3D surface of the laser stripe have a high and sharp top, leading to a large curvature. In summary, a larger absolute eigenvalue corresponds to a larger curvature, which implies that the related sub-image has a moderate intensity level and its coordinate has the best reliability and accuracy compared with the other three sub-images. Therefore, we select the best coordinate by comparing the four eigenvalues of the four sub-images.

Finally, we should compensate for the coordinate offsets between the different sub-images. From the arrangement of the CFA on the image sensor, it can be inferred that there is a coordinate offset between the sub-images. If we choose the sub-image of channel R as the reference, the coordinate offsets of the sub-images of channels G1, G2, and B are (0, 0.5), (0.5, 0), and (0.5, 0.5), respectively.
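Combining the eigenvalue comparison with the offset compensation, the per-point selection can be sketched as follows; the data layout (four coordinate/eigenvalue pairs per stripe point, ordered R, G1, G2, B) is our assumption.

```python
# Offsets of channels (R, G1, G2, B) relative to the reference channel R,
# in sub-image pixel units, as stated in the text.
OFFSETS = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]

def select_coordinate(coords, eigvals):
    """For one stripe point, keep the coordinate from the channel whose
    Hessian eigenvalue has the largest magnitude, then compensate the
    offset of that channel relative to channel R."""
    best = max(range(4), key=lambda k: abs(eigvals[k]))
    (x, y), (dx, dy) = coords[best], OFFSETS[best]
    return x + dx, y + dy
```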

4. Experiments

4.1 System calibration

When scanning with the system, 3D data are reconstructed with the system model. Before that, the unknown parameters of the system model must be estimated through system calibration. According to the model expressed in Eq. (1), we should calibrate the parameters of the camera and the laser plane. Additionally, the proposed system uses a one-dimensional translation stage for scanning, so the translation vector of the stage should also be calibrated. The well-known Zhang's method [28] is applied for camera calibration, and a plate with a chessboard pattern is employed as the calibration target. The calibration of both the laser plane and the translation vector is completed with the assistance of the camera calibration. During image acquisition for system calibration, the calibration target is placed in ten different poses. In each target pose, the camera first captures an image of the target under uniform illumination, as shown in Fig. 9(a). After that, the illumination is turned off and the laser source is turned on, and the camera captures another image of the target without illumination but containing a laser stripe, as shown in Fig. 9(b). It should be emphasized that both images correspond to the same pose of the calibration target.

Fig. 9. System calibration. (a) An image of the calibration target under uniform illumination. (b) An image of the calibration target without illumination but containing a laser stripe, which has the same target pose as (a). (c) The visualization figure of the system calibration plotted in CCS, which intuitively displays the laser plane, the translation vector, and different poses of the calibration target.

When the image acquisition is completed, we obtain two image sequences. Sequence A has ten images of different target poses captured under uniform illumination. Sequence B has ten images of the corresponding target poses, captured without uniform illumination but containing the laser stripe. Sequence A is employed to calibrate the camera using the MATLAB Computer Vision Toolbox, from which we get the intrinsic and lens distortion parameters of the camera and the extrinsic parameters of each target pose. Sequence B is employed to calibrate the laser plane. The image coordinates of the laser stripes are first extracted using the proposed HDR laser stripe detection algorithm and then corrected with the lens distortion parameters to obtain undistorted coordinates. The 3D coordinates of the points on the laser stripes are calculated in the world coordinate system (WCS) with the intrinsic and extrinsic parameters and then transformed into the CCS. Finally, the parameters of the laser plane are estimated by plane fitting to all of the 3D points on the laser stripes. To calibrate the translation vector of scanning, the calibration target is placed on the translation stage and two images are captured before and after a translation. These two images are added to Sequence A and participate in the camera calibration, and the translation vector is then calculated from the extrinsic parameters of these two poses. The relative poses of the camera, the laser plane, and the translation vector for a typical calibration result are visualized in Fig. 9(c).
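The plane-fitting step can be realized with a standard total least-squares fit; the following Python sketch is one common way to do it, since the paper does not specify the fitting algorithm.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a*x + b*y + c*z = d to an N-by-3 array of
    3D stripe points in the CCS. The unit normal is the right singular
    vector of the centered data with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    a, b, c = vt[-1]          # unit normal of the best-fit plane
    d = vt[-1] @ centroid     # plane offset so the centroid lies on the plane
    return a, b, c, d
```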

4.2 Comparative results

To demonstrate the validity of the proposed HDR 3D laser scanning method, we choose a composite scene as the example. As shown in Fig. 10(a), the scene contains two objects. One is a toy house made of plaster, which is white and diffuse and thus has high surface reflectance. The other is a headphone made of plastic, which is black, diffuse, and slightly shiny, and thus has low surface reflectance. During scanning, each captured image with a laser stripe is decomposed into four sub-images corresponding to the channels R, G1, G2, and B, respectively. Since the sub-images of the two channels G1 and G2 are almost the same, here and in the following we select channel G1 to represent the green channel. The three LDR sub-images of the laser stripe corresponding to the red, green, and blue channels are shown in Figs. 10(c)–10(e), and the profiles reconstructed from the detected laser stripe of each sub-image are shown in Figs. 10(f)–10(h). The red-channel sub-image has a low intensity level, which leads to a missing profile in the area of low reflectance, as shown in Figs. 10(c) and 10(f). The green-channel sub-image has the highest intensity level, which leads to strong saturation and interreflections; in this case, an artificial profile appears in the concave area, as marked with an ellipse in Figs. 10(d) and 10(g). With the proposed HDR laser stripe detection algorithm, the detected coordinates of the laser stripe are plotted with magenta dots in Fig. 10(a), and the corresponding profile is shown in Fig. 10(b). Since the algorithm selects the coordinate of the best accuracy and reliability from the different sub-images, the reconstructed profile in Fig. 10(b) combines the high-quality profile segments from Figs. 10(f)–10(h).

Fig. 10. Comparative results of laser stripes with different intensity levels and the corresponding surface profiles. (a) A photograph of the scene to be scanned; the magenta dots are coordinates detected with the proposed HDR laser stripe detection algorithm. (b) The corresponding profile reconstructed from the detected coordinates in (a). (c)-(e) The LDR sub-images of the laser stripe decomposed from a raw image, which correspond to the red, green, and blue channels, respectively. (f)-(h) The reconstructed profiles corresponding to the sub-images (c)-(e).

The scanned 3D surfaces reconstructed from the different channels are compared in Fig. 11. Similar to the analysis of Fig. 10, the surface from the red channel lacks the parts of low reflectance. Moreover, this surface has wavelike errors in smooth areas of high reflectance, as marked with an ellipse in Fig. 11(a). In the green channel, due to the strong saturation of the laser stripe on the surface of the toy house, the intensity of the scattered light is high. Thereby the interreflections in the concavities cannot be neglected, which leads to outliers in the 3D reconstruction, as marked with ellipses in Figs. 10(d) and 10(g). The reconstructed point cloud from 3D laser scanning is triangulated to generate the surface meshes shown in Fig. 11, with a threshold controlling the longest edge in the triangulation. A point whose distances from its neighbors exceed the threshold is judged to be an outlier and does not participate in the triangulation, which leads to the discontinuities on the surface mesh, as marked with an ellipse in Fig. 11(b). The blue channel has a moderate intensity level, and therefore its surface has fewer discontinuities. The synthetic surface obtained with the proposed HDR laser scanning technique has the best quality in the areas of both high and low reflectance, as shown in Fig. 11(d).

Fig. 11. Comparative results of scanned 3D surface. (a)-(c) The surfaces corresponding to the red, green, and blue channels, respectively. (d) The surface reconstructed with proposed HDR laser scanning technique.

5. Discussion

The main advantage of the proposed HDR laser scanning technique is that the whole processing is based on only one raw image captured with a commercial color camera, which increases neither the capture time nor the device complexity. However, the proposed technique detects the laser stripe in the decomposed sub-images, which have reduced spatial resolution compared with the raw image. This is the necessary cost of a single-shot HDRI method. Due to the RGGB mosaic arrangement of the Bayer filter, there are two green channels in the camera of our system. If the tiny displacement between the two channels is ignored, they provide the same intensity level for the same field of view. That means that 1/4 of the pixels of the raw image, corresponding to one of the two green channels, do not play a role in the proposed technique. A feasible way to make use of these remaining pixels is to select a CFA with four different color filters instead of RGGB, e.g., RGBE or CYGM [29]. In this case, we may get four channels of different intensity levels if the wavelength of the laser is selected carefully, considering the quantum efficiency of each color filter.

Compared with the difference in reflectance between white and black diffuse surfaces, the variation of reflectance on a shiny surface may be much more drastic, depending on the directions of the incident and reflected light. When handling scenes containing shiny surfaces, a much smaller quantum efficiency is required to avoid oversaturation in the image of the laser stripe. From Fig. 6 we can see that the minimum quantum efficiency of the Bayer filter is approximately 1%. If the required quantum efficiency is much smaller than 1%, the performance of the proposed method will inevitably decrease. A potential solution is to design a dedicated filter array to replace the CFA of the Bayer filter; a filter can provide a very small quantum efficiency if its optical density is increased.

Because the HDR method aims to improve the robustness against variation of the surface reflectance, we do not emphasize the measurement accuracy in the experiments. Compared with the scanning result of each color channel, the proposed method should not reduce the measurement accuracy, because the output coordinate of the laser stripe is selected from one of the color channels without changing its value. However, compared with scanning using a monochromatic camera, the proposed method does reduce the measurement accuracy. Due to the mosaic arrangement of the Bayer filter, the actual pixel resolution of the proposed method is only 1/4 of that of the monochromatic camera. The corresponding increase in sampling distance reduces both the accuracy and the precision.

6. Conclusion

In this paper, we propose an HDR laser scanning technique with a concise algorithm and a simple hardware configuration. First, the system model of a conventional laser scanning system is presented. Then the classical unbiased detector is employed to detect the center of the laser stripe, and an indicating mask generated via image segmentation and peak search is introduced to speed up the laser stripe detection. To deal with dramatic changes in the intensity of the laser stripe, the Bayer filter in the sensor chip of a color camera is exploited to provide different intensity responses. The sub-images of the laser stripe, which correspond to different color channels and have different intensity levels, are decomposed from the raw image captured by the color camera. A dedicated algorithm is proposed to achieve HDR laser stripe detection, which collects the coordinate of the best quality from the sub-images rather than synthesizing an HDR image of the laser stripe directly. In the experiments, the sub-images of the different color channels are regarded as LDR images of different intensity levels and are used to generate results for comparison. From the comparative analysis of the reconstructed 3D data, we find that the surfaces reconstructed from the LDR images have missing parts and artificial discontinuities, while the synthetic surface obtained with the proposed HDR laser scanning technique has the best quality.

Funding

National Natural Science Foundation of China (61775121); Natural Science Foundation of Shandong Province (ZR2019QF006); The Ministry of Education-China Mobile for Project of Scientific Research Fund (MCM20200401); Sino-German Center Mobility Programs (M-0044); Key Technology Research and Development Program of Shandong (2018GGX101002); Key R&D Program of Hubei Province of China (2020BAB120).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. Kulkarni and P. Rastogi, “Optical measurement techniques – A push for digitization,” Opt. Lasers Eng. 87, 1–17 (2016). [CrossRef]  

2. A. G. Marrugo, F. Gao, and S. Zhang, “State-of-the-art active optical techniques for three-dimensional surface metrology: a review [Invited],” J. Opt. Soc. Am. A 37(9), B60–B77 (2020). [CrossRef]  

3. E. Trucco, R. B. Fisher, A. W. Fitzgibbon, and D. K. Naidu, “Calibration, data consistency and model acquisition with laser stripers,” Int. J. Comput. Integr. Manuf. 11(4), 293–310 (1998). [CrossRef]  

4. G. Zhang and Z. Wei, “A novel calibration approach to structured light 3D vision inspection,” Opt. Laser Technol. 34(5), 373–380 (2002). [CrossRef]  

5. F. Blais, “Review of 20 Years of Range Sensor Development,” J. Electron. Imaging 13(1), 231–240 (2004). [CrossRef]  

6. J. Santolaria, D. Guillomía, C. Cajal, J. A. Albajez, and J. J. Aguilar, “Modelling and Calibration Technique of Laser Triangulation Sensors for Integration in Robot Arms and Articulated Arm Coordinate Measuring Machines,” Sensors 9(9), 7374–7396 (2009). [CrossRef]  

7. G. Genta, P. Minetola, and G. Barbato, “Calibration procedure for a laser triangulation scanner with uncertainty evaluation,” Opt. Lasers Eng. 86, 11–19 (2016). [CrossRef]  

8. J. Mei and L.-J. Lai, “Development of a novel line structured light measurement instrument for complex manufactured parts,” Rev. Sci. Instrum. 90(11), 115106 (2019). [CrossRef]  

9. Z. Zhang and L. Yuan, “Building a 3D scanner system based on monocular vision,” Appl. Opt. 51(11), 1638–1644 (2012). [CrossRef]  

10. A. Bodenmann, B. Thornton, and T. Ura, “Generation of High-resolution Three-dimensional Reconstructions of the Seafloor in Color using a Single Camera and Structured Light,” J. Field Robot. 34(5), 833–851 (2017). [CrossRef]  

11. R. Usamentiaga, J. Molleda, and D. F. García, “Fast and robust laser stripe extraction for 3D reconstruction in industrial environments,” Mach. Vis. Appl. 23(1), 179–196 (2012). [CrossRef]  

12. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998). [CrossRef]  

13. L. Qi, Y. Zhang, X. Zhang, S. Wang, and F. Xie, “Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm,” Opt. Express 21(11), 13442–13449 (2013). [CrossRef]  

14. Q. Sun, J. Chen, and C. Li, “A robust method to extract a laser stripe centre based on grey level moment,” Opt. Lasers Eng. 67, 122–127 (2015). [CrossRef]  

15. Y. Li, J. Zhou, F. Huang, and L. Liu, “Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method,” Sensors 17(4), 814 (2017). [CrossRef]  

16. X.-Q. Yin, W. Tao, Y.-Y. Feng, Q. Gao, Q.-Z. He, and H. Zhao, “Laser stripe extraction method in industrial environments utilizing self-adaptive convolution technique,” Appl. Opt. 56(10), 2653–2660 (2017). [CrossRef]  

17. H. Wang, Y. Wang, J. Zhang, and J. Cao, “Laser Stripe Center Detection Under the Condition of Uneven Scattering Metal Surface for Geometric Measurement,” IEEE Trans. Instrum. Meas. 69(5), 2182–2192 (2020). [CrossRef]

18. X. Chen, G. Zhang, and J. Sun, “An Efficient and Accurate Method for Real-Time Processing of Light Stripe Images,” Adv. Mech. Eng. 5, 456927 (2013). [CrossRef]  

19. J. Clark, E. Trucco, and L. B. Wolff, “Using light polarization in laser scanning,” Image Vis. Comput. 15(2), 107–117 (1997). [CrossRef]  

20. Y. M. Amir and B. Thörnberg, “High Precision Laser Scanning of Metallic Surfaces,” Int. J. Opt. 2017, 1–13 (2017). [CrossRef]  

21. Z. Yang, P. Wang, X. Li, and C. Sun, “3D laser scanner system using high dynamic range imaging,” Opt. Lasers Eng. 54, 31–41 (2014). [CrossRef]  

22. Y. Yin, Z. Cai, H. Jiang, X. Meng, J. Xi, and X. Peng, “High dynamic range imaging for fringe projection profilometry with single-shot raw data of the color camera,” Opt. Lasers Eng. 89, 138–144 (2017). [CrossRef]  

23. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37(4), 542–544 (2012). [CrossRef]  

24. Y. Yin, B. Altmann, C. Pape, and E. Reithmeier, “Machine-vision-guided rotation axis alignment for an optomechanical derotator,” Opt. Lasers Eng. 121, 456–463 (2019). [CrossRef]  

25. R. Szeliski, Computer Vision: Algorithms and Applications (Springer, 2010).

26. P. Walecki and G. Taubin, “Super-Resolution 3-D Laser Scanning Based on Interval Arithmetic,” IEEE Trans. Instrum. Meas. 69(10), 8383–8392 (2020). [CrossRef]

27. A. C. Telea, Data Visualization: Principles and Practice (CRC Press, 2014).

28. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

29. P. Amba and D. Alleysson, “LMMSE Demosaicing for multicolor CFAs,” in Proceedings of 26th Color and Imaging Conference (Society for Imaging Science and Technology, 2018), pp. 151–156.
