
Polarized-state-based coding strategy and phase image estimation method for robust 3D measurement


Abstract

Polarized structured light is a novel method for measuring shiny surfaces. However, the additional polarizing filter reduces the SNR of the captured image, amplifies the blurring caused by camera defocus, and degrades the accuracy of fringe edge detection. In this paper, a polarized-state-based structured light coding strategy and a phase image estimation method are proposed to improve measurement robustness. A special polarized-state-based coding strategy is adopted to preserve the coding message in complex environments. A phase image estimation method based on the Stokes parameters is proposed to reduce the error introduced by the additional polarizing filter and to extract as much information as possible from the saturated areas. Compared with the traditional polarization-based structured light system, the experimental setup of the proposed method is configured without any additional hardware. The experiments show that the interference from camera defocus is remarkably reduced and the robustness of fringe edge detection is improved.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With the rapid development of the manufacturing industry, 3D shape measurement techniques have been extensively applied in precision machining, automatic assembly, mapping, and so forth [1,2]. These techniques can be broadly classified into two categories: surface contact and surface noncontact. Surface contact measurement systems, such as mechanical probe-based coordinate measuring machines, have been applied in industry for many years. However, surface contact systems are relatively slow and measure only a limited number of points on an object's surface. Furthermore, they are ill-suited to soft or fragile objects, such as biological tissue or antiques.

In contrast, optical metrology is a family of noncontact 3D shape measurement techniques. These methods include time of flight [3], stereovision [4], shape from shading [5], laser range scanning [6], and structured light [7]. Among these, the structured light method is one of the most widely used 3D shape measurement techniques. Typically, a structured light measurement system consists of a camera and a projector. The projector projects a sequence of coded patterns onto the object surface while the camera captures the reflected patterns from another direction. These reflected patterns carry information about the object's depth, which can be retrieved through a phase-shifting algorithm. Finally, the 3D coordinates of the object surface are calculated from the absolute phase map [8].

However, most structured light methods are susceptible to camera defocus and the optical properties of the object surface [9]. For instance, the points on an irregular surface cannot all be in focus simultaneously. A larger triangulation angle can boost the measurement accuracy but also incurs more defocus in the image [10]. In addition, when the irradiance across a scene varies greatly, there will always be overexposed and underexposed areas in the captured image regardless of the exposure time [11,12]. These issues make optical metrology unreliable in complex environments [13]. Although many methods have been proposed to improve the performance of structured light under such undesirable conditions [14–18], their effect is limited.

Researchers have tackled these challenges through various approaches. By selecting the brightest unsaturated intensity at each pixel, a sequence of images captured at different exposure times can be combined into a single set of HDR images [19,20]. By using the color invariant property in segmentation, the effect of highlights originating from ambient light can be eliminated [21]. By adjusting the projection intensity, the influence of overexposure and underexposure can be compensated [22–26]. In addition, polarimetric imaging is a novel way to measure shiny surfaces [27–31]: by changing the angle between the transmission axes of the polarizing filters, intense specular light is effectively removed. However, the additional polarizing filter lowers the SNR of the captured image and strengthens the blurring caused by camera defocus, which seriously affects the robustness of the traditional polarization-based measurement system. Moreover, most methods focus on isolating the saturated areas and then measuring them with conventional techniques; this requires considerable extra work, and unforeseen errors easily arise in the reconstructed saturated areas.

To improve the robustness of the traditional polarization-based 3D shape measurement system, this paper proposes a polarized-state-based coding strategy and a phase image estimation method. The polarized-state-based coding strategy ensures robust transmission of the coding message, while the phase image estimation method reduces the interference of noise and camera defocus. Together, these methods extract as much information as possible from the saturated areas. The experiments show that the interference from camera defocus is remarkably reduced and the robustness of fringe edge detection is improved.

2. Principle

2.1 Polarized-state-based coding strategy

The Stokes parameters are employed to describe polarized light. Any polarized light can be represented as ${[{{S_0}, \, {S_1}, \, {S_2}, {S_3}} ]^T}$. Nevertheless, there is very little circularly polarized light in nature [32]; without loss of generality, ${S_3}$ is assumed to be 0 in this paper.

Because the polarization characteristics of light rays are relatively robust in complex environments [33], we present a polarized-state-based coding strategy whose basic structure follows the Gray code [34]. Figure 1 illustrates the coding strategy of the proposed structured light. The traditional Gray code strategy encodes the message in the intensity gray level, whereas the proposed method encodes it in the polarization direction of the light rays. The bright regions with horizontal polarization are coded as 1, and the dark regions without any polarization characteristics are coded as 0.

Fig. 1. The polarized-state-based coding structured light pattern.

In order to reduce the interference of camera defocus, the phase image is reconstructed from the Stokes parameter ${S_1}$. This process consists of three steps: (1) capture a sequence of images at different transmission-axis angles of the polarizing filter; (2) from the images captured in step 1, calculate the Stokes parameters pixel by pixel with the least-squares estimation method; (3) extract the Stokes parameter ${S_1}$ pixel by pixel and composite it into a new phase image. In the composited ${S_1}$ image, most of the noise is eliminated and the interference of camera defocus is significantly reduced.
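The following is a minimal sketch (not the authors' released code) of steps (2) and (3), assuming the captured images are stacked in a NumPy array of shape (n, H, W) and the corresponding transmission-axis angles are given in degrees; the per-pixel Stokes parameters are obtained by a least-squares fit and the ${S_1}$ component is composited into a new phase image.

```python
import numpy as np

def composite_s1(images, angles_deg):
    """Estimate S0, S1, S2 per pixel by least squares and return the S1 image."""
    n, h, w = images.shape
    theta = np.deg2rad(np.asarray(angles_deg, dtype=float))
    # Each row of the design matrix is [1, cos(2*theta_i), sin(2*theta_i)],
    # matching the intensity model k*I = S0 + S1*cos(2*theta) + S2*sin(2*theta).
    H = np.stack([np.ones(n), np.cos(2 * theta), np.sin(2 * theta)], axis=1)
    Z = images.reshape(n, -1)                  # one observation column per pixel
    X, *_ = np.linalg.lstsq(H, Z, rcond=None)  # rows of X are S0, S1, S2
    return X[1].reshape(h, w)                  # composited S1 phase image
```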

2.2 Robust transmission of the coding message

The traditional polarization-based 3D shape measurement system consists of a projector and a camera, with two polarizing filters installed in front of the projector and the camera, respectively. The light reflected from the object can be expressed as [35]:

$$\overrightarrow {{S_{re}}} = {M_o}{\ast }\overrightarrow {{S_{in}}} $$
where $\overrightarrow {{S_{re}}} $ is the Stokes vector of the light reflected from the object surface, ${M_o}$ is the Mueller matrix of the object surface, and $\overrightarrow {{S_{in}}} $ is the Stokes vector of the light projected onto the object surface. By adjusting the polarizing filter in front of the projector, the polarization state of the projected light can be controlled. In the proposed method, $\overrightarrow {{S_{in}}} = {[{1,\; 1,\; 0,\; 0} ]^T}$.

For reflection at an air-dielectric interface, the Mueller matrix of the object can be written as [32]:

$$\frac{1}{2}{\left( {\frac{{tan{\theta_ - }}}{{tan{\theta_ + }}}} \right)^2}\left[ {\begin{array}{cccc} {co{s^2}{\theta_ - } + co{s^2}{\theta_ + }} &{co{s^2}{\theta_ - } - co{s^2}{\theta_ + }} &0 &0\\ {co{s^2}{\theta_ - } - co{s^2}{\theta_ + }} &{co{s^2}{\theta_ - } + co{s^2}{\theta_ + }} &0 &0\\ 0 &0 &{ - 2co{s^2}{\theta_ - }co{s^2}{\theta_ + }} &0\\ 0 &0 &0 &{ - 2co{s^2}{\theta_ - }co{s^2}{\theta_ + }} \end{array}} \right]$$
So $\overrightarrow {{S_{re}}} $ can be deduced from Eqs. (1) and (2) as:
$$\begin{aligned} \overrightarrow {{S_{re}}} &= \frac{1}{2}{\left( {\frac{{tan{\theta_ - }}}{{tan{\theta_ + }}}} \right)^2}\left[ {\begin{array}{@{}cccc@{}} {co{s^2}{\theta_ - } + co{s^2}{\theta_ + }} &{co{s^2}{\theta_ - } - co{s^2}{\theta_ + }} &0 &0\\ {co{s^2}{\theta_ - } - co{s^2}{\theta_ + }} &{co{s^2}{\theta_ - } + co{s^2}{\theta_ + }} &0 &0\\ 0 &0 &{ - 2co{s^2}{\theta_ - }co{s^2}{\theta_ + }} &0\\ 0 &0 &{0 } &{ - 2co{s^2}{\theta_ - }co{s^2}{\theta_ + }} \end{array}} \right] \, \left[ {\begin{array}{@{}c@{}} 1\\ 1\\ 0\\ 0 \end{array}} \right]\\ &= \frac{1}{2}{\left( {\frac{{tan{\theta_ - }}}{{tan{\theta_ + }}}} \right)^2}\left[ {\begin{array}{@{}c@{}} {2co{s^2}{\theta_ - }}\\ {2co{s^2}{\theta_ - }}\\ 0\\ 0 \end{array}} \right] = {\left( {\frac{{tan{\theta_ - }}}{{tan{\theta_ + }}}} \right)^2}co{s^2}{\theta _ - }\left[ {\begin{array}{@{}c@{}} 1\\ 1\\ 0\\ 0 \end{array}} \right] \end{aligned}$$

Equation (3) shows that the horizontal polarization characteristics of the reflected light rays $\overrightarrow {{S_{re}}} $ are preserved. Therefore, the coded polarization-state information of the incident light rays $\overrightarrow {{S_{in}}} $ is preserved in the composited ${S_1}$ image. The polarization state of the light is used to distinguish code 1 (clearly horizontally polarized) from code 0 (no obvious horizontal polarization). In a complex environment, the extra information embedded in the polarization state of the light rays is relatively robust. Furthermore, the fringe edges between code 1 and code 0 remain robust in the case of camera defocus. The robustness of fringe edge detection is analyzed in Section 2.3.
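As a quick numerical check of Eq. (3), the short sketch below multiplies the Mueller matrix of Eq. (2) by the horizontally polarized input Stokes vector; the angle values are illustrative assumptions, and the check confirms that the reflected Stokes vector keeps ${S_1} = {S_0}$, i.e. the horizontal polarization that encodes code 1 survives the reflection.

```python
import numpy as np

theta_m, theta_p = np.deg2rad(20.0), np.deg2rad(70.0)   # assumed theta_-, theta_+
cm, cp = np.cos(theta_m) ** 2, np.cos(theta_p) ** 2
scale = 0.5 * (np.tan(theta_m) / np.tan(theta_p)) ** 2
M = scale * np.array([[cm + cp, cm - cp, 0, 0],          # Mueller matrix of Eq. (2)
                      [cm - cp, cm + cp, 0, 0],
                      [0, 0, -2 * cm * cp, 0],
                      [0, 0, 0, -2 * cm * cp]])
S_in = np.array([1.0, 1.0, 0.0, 0.0])                    # horizontally polarized input
S_re = M @ S_in
print(S_re)          # S_re[0] == S_re[1]: horizontal polarization is preserved
```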

2.3 Statistical characteristics of phase image estimation

For a defocused optical system, the influence of camera defocus is equivalent to filtering the captured image with a two-dimensional Gaussian spatial filter. This filter can be expressed as [36]:

$$h({x,y} )= \frac{1}{{2\pi {\sigma ^2}}}\textrm{exp}\left( { - \frac{1}{2}{\ast }\frac{{{{({x - {x_0}} )}^2} + {{({y - {y_0}} )}^2}}}{{{\sigma^2}}}} \right)$$
where $\sigma $ is the standard deviation of the Gaussian function, which depends only on the hardware, and $({x_0},{y_0})$ are the coordinates of the camera optical center.
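For reference, a brief sketch of the defocus model of Eq. (4) is given below, assuming an illustrative kernel size and $\sigma$; convolving an image with this normalized kernel simulates the low-pass effect of camera defocus.

```python
import numpy as np

def gaussian_psf(size=15, sigma=2.0):
    """Normalized 2-D Gaussian kernel centered on the patch, as in Eq. (4)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-0.5 * (xx ** 2 + yy ** 2) / sigma ** 2)
    return kernel / kernel.sum()   # sums to one, so the blur preserves mean intensity
```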

Based on the theory of the camera response function and polarization, the ideal gray level of the captured image can be expressed as [37]:

$$kI = {S_0} + {S_1}\cos 2\theta + {S_2}\sin 2\theta $$
In the case of camera defocus, Eq. (5) becomes:
$${h_{({m,n} )}}({kI} )= {S_0} + {S_1}\cos 2\theta + {S_2}\sin 2\theta $$
where I is the light intensity at the pixel with coordinates (m, n), k is the camera response coefficient, $\theta $ is the transmission-axis angle of the polarizing filter installed in front of the camera, and ${h_{({m,n} )}}$ is the camera defocus function. Assuming that n images are used to estimate the Stokes parameters, Eqs. (5) and (6) can be written as the overdetermined systems:
$$\left\{ {\begin{array}{c} {{\boldsymbol{Z}}_{\boldsymbol{o}} = \textbf{HX}}\\ {{\boldsymbol{Z}}_{\boldsymbol{d}} = \textbf{HX}} \end{array}} \right.$$
In Eq. (7), ${\boldsymbol{Z}}_{\boldsymbol{o}},{\boldsymbol{Z}}_{\boldsymbol{d}},\textbf{X},\textbf{H}$ denote respectively:
$$\left\{ \begin{array}{l} {\boldsymbol{Z}}_{\boldsymbol{o}} = {[{k{I_1},k{I_2}, \ldots ,k{I_n}} ]^T}\\ {\boldsymbol{Z}}_{\boldsymbol{d}} = {[{{h_{({m,n} )}}({k{I_1}} ),{h_{({m,n} )}}({k{I_2}} ), \ldots ,{h_{({m,n} )}}({k{I_n}} )} ]^T}\\ \textbf{X} = {[{{S_0},{S_1},{S_2},0} ]^T}\\ \textbf{H} = \left[ {\begin{array}{cccc} 1 &{\cos 2{\theta_1}} &{\sin 2{\theta_1}} &0\\ 1 &{\cos 2{\theta_2}} &{\sin 2{\theta_2}} &0\\ \vdots & \vdots & \vdots & \vdots \\ 1 &{\cos 2{\theta_n}} &{\sin 2{\theta_n}} &0 \end{array}} \right] \end{array} \right.$$
Based on the theory of overdetermined equations, the solution of Eq. (7) is given by its least-squares estimate [38]:
$$\left\{ {\begin{array}{c} {\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}} = {{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}{\boldsymbol{Z}}_{\boldsymbol{o}}}\\ {\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}} = {{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}{\boldsymbol{Z}}_{\boldsymbol{d}}} \end{array}} \right.$$
The unbiasedness and validity of the estimation are demonstrated as follows:

Unbiasedness: First, consider the case without camera defocus and assume that every point of the object surface is perfectly focused by the camera; the expectation of the least-squares estimate is:

$$E({\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}} )= E[{{{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}{\boldsymbol{Z}}_{\boldsymbol{o}}} ]= E[{{{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}\boldsymbol{HX}} ]= E(\boldsymbol{X} )$$
On the other hand, for real captured images, which are affected by camera defocus, the expectation of the least-squares estimate is:
$$E({\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}} )= E[{{{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}{\boldsymbol{Z}}_{\boldsymbol{d}}} ] = E[{{{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}{h_{({m,n} )}}({\boldsymbol{HX}} )} ]$$
$$= E\left( {{{({{\boldsymbol{H}^T}\boldsymbol{H}} )}^{ - 1}}{\boldsymbol{H}^T}\mathop {\int\!\!\!\int }\nolimits_{ - \infty }^{ + \infty } \frac{1}{{2\pi {\sigma^2}}}\textrm{exp}\left( { - \frac{1}{2}{\ast }\frac{{{{({x - {x_0}} )}^2} + {{({y - {y_0}} )}^2}}}{{{\sigma^2}}}} \right)\textbf{HX}\; \textrm{d}x\,\textrm{d}y} \right) = E(\boldsymbol{X} )$$
According to Eqs. (10) and (11), obviously:
$$E({\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}} )= E({\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}} )$$
Equation (12) shows that whether focused or defocused images are used, the expectations of the least-squares estimates are equal. Therefore, even if defocused images are used to estimate the phase image, the estimate remains unbiased.
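The toy simulation below (synthetic data assumed for illustration, not the authors' experiment) illustrates this unbiasedness argument: each ideal image is blurred with a Gaussian defocus kernel, yet the image-wide mean of the least-squares ${S_1}$ estimate is essentially unchanged, because the blur kernel integrates to one.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 64, 64
s0 = np.ones((h, w))
s1 = np.tile((np.arange(w) % 16 < 8).astype(float), (h, 1))   # fringe-like S1 map
angles = np.deg2rad(np.arange(0, 180, 10))                     # 18 polarizer angles
H = np.stack([np.ones_like(angles), np.cos(2 * angles), np.sin(2 * angles)], axis=1)

focused = s0[None] + s1[None] * np.cos(2 * angles)[:, None, None]   # S2 = 0 here
defocused = np.array([gaussian_filter(im, sigma=2.0, mode='wrap') for im in focused])

def mean_s1(stack):
    X, *_ = np.linalg.lstsq(H, stack.reshape(len(angles), -1), rcond=None)
    return X[1].mean()

print(mean_s1(focused), mean_s1(defocused))   # the two means are nearly identical
```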

Validity: Whether the camera is focused or defocused, the variance of the least-squares estimate can be written as:

$$\left\{ {\begin{array}{c} {D({\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}} )= E{{[{\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}} - E({\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}} )} ]}^2} = E[{{{({\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}} )}^2}} ]- {{[{E(\textbf{X} )} ]}^2}}\\ {D({\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}} )= E{{[{\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}} - E({\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}} )} ]}^2} = E[{{{({\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}} )}^2}} ]- {{[{E(\textbf{X} )} ]}^2}} \end{array}} \right.$$
According to Eqs. (12) and (13), we obtain $D({\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}} )= D({\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}} )$. Based on statistical theory, the validity of $\widehat {{\boldsymbol{X}}_{\boldsymbol{o}}}$ and $\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}$ is therefore equal; hence the least-squares estimate based on the defocused images, $\widehat {{\boldsymbol{X}}_{\boldsymbol{d}}}$, is also valid [39].

The above proofs demonstrate that, in a statistical sense, the composited image is closer to the focused image than the directly captured images are, so the influence of camera defocus is reduced. At the macro level, the composited image is clearer than the directly captured image.

3. Experiment

3.1 Experimental setup

The experimental scheme is shown in Fig. 2. The object is an oblique metal scrap. The coded structured light is projected by a Texas Instruments DLP LightCrafter 4500 projector, and the reflected light is captured by a FLIR BFS-U3-23S3M-C camera fitted with an HC1605 lens. The resolution of the projector is 912 × 1140 and that of the camera is 1920 × 1200.

Fig. 2. Experimental setup, consisting of a projector, a camera, and two polarizing filters.

The camera and the projector are installed with a triangulation angle of about 10°–15° and are arranged approximately symmetrically. Two polarizing filters are fixed in front of the camera and the projector, respectively. The distance from the projector-camera pair to the object is around 350 mm.

By adjusting the focal planes of both the camera and the projector until they focus on the object, a clear image is captured by the camera. To evaluate the effectiveness of the proposed theory in the case of camera defocus, the camera is then manually defocused to a certain extent.

3.2 Experimental steps

  • 1. Install the equipment, then defocus the camera to a certain extent.
  • 2. Adjust the polarizing filter in front of the projector to 0°, so that the light projected onto the object is horizontally polarized.
  • 3. Adjust the polarizing filter in front of the camera to x degrees and capture an image at each setting (x = 0°, 10°, 20°, …, 170°), 18 images in total.
  • 4. From the 18 images captured in step 3, calculate the Stokes parameters pixel by pixel through the least-squares estimation.
  • 5. Composite the ${S_1}$ image with the method proposed in this paper (a code sketch of steps 3–5 is given below).
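A hypothetical driver for steps 3–5 is sketched below; the file names, the 18-image stack layout, and the imageio dependency are assumptions, and composite_s1 refers to the least-squares sketch given in Section 2.1.

```python
import numpy as np
import imageio.v3 as iio

angles_deg = list(range(0, 180, 10))                 # 0°, 10°, ..., 170° (18 images)
stack = np.stack([iio.imread(f"capture_{a:03d}.png").astype(float) / 255.0
                  for a in angles_deg])              # step 3: captured image stack
s1_image = composite_s1(stack, angles_deg)           # steps 4-5: composited S1 image
```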

4. Evaluation of robustness

For simplicity of discussion, the image captured at a transmission-axis angle of 130° is taken as an example (this image has medium brightness and is suitable for observation). Figures 3(a) and 3(c) show the image captured at a transmission-axis angle of 130°, and Figs. 3(b) and 3(d) show the composited images. In Fig. 3(c), the blurred transitional regions between the white and dark fringes are prominent, and many visible hot pixels appear in the dark fringes; these blurred transitional regions are induced by camera defocus. Compared with Fig. 3(c), the blurred transitional regions in Fig. 3(d) are clearly narrower, and the visible hot pixels in the dark fringes are reduced.

Fig. 3. (a) Image captured directly by the camera. (b) Composited image based on the Stokes parameter ${S_1}$. (c)(d) Corresponding images after enlargement. The red rectangles mark blurred transitional regions.

To analyze the sharpness and noise of the above images, we applied three traditional sharpness evaluation algorithms [40], briefly introduced after this paragraph, where $f({x,y} )$ is the gray level of the pixel at coordinate $({x,y} )$ and $\mu $ is the average gray value of the whole image. Table 1 provides a quantitative evaluation of the sharpness. All images are normalized before calculation with no other preprocessing. The numbers in the table indicate the degree of blur: the larger the number, the more blurred the image.

$$\textrm{Brenner}: \qquad D(f )= \mathop \sum \nolimits_y \mathop \sum \nolimits_x {[{f({x + 2,y} )- f({x,y} )} ]^2}$$
$$\textrm{SMD}: \quad D(f )= \mathop \sum \nolimits_y \mathop \sum \nolimits_x |{f({x,y} )- f({x + 1,y} )} |{\ast }|{f({x,y} )- f({x,y + 1} )} |$$
$$\textrm{Variance}: \qquad \qquad D(f )= \mathop \sum \limits_y \mathop \sum \limits_x {|{f({x,y} )- \mu } |^2}$$
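Minimal NumPy implementations of the three evaluation functions of Eqs. (14)-(16) are sketched below, assuming f is a normalized 2-D grayscale image stored as a float array with f(x, y) indexed as f[y, x].

```python
import numpy as np

def brenner(f):
    return float(np.sum((f[:, 2:] - f[:, :-2]) ** 2))        # [f(x+2,y) - f(x,y)]^2

def smd(f):
    dx = np.abs(f[:-1, :-1] - f[:-1, 1:])                     # |f(x,y) - f(x+1,y)|
    dy = np.abs(f[:-1, :-1] - f[1:, :-1])                     # |f(x,y) - f(x,y+1)|
    return float(np.sum(dx * dy))

def variance(f):
    return float(np.sum(np.abs(f - f.mean()) ** 2))           # |f(x,y) - mu|^2
```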

Table 1. Image sharpness analysis

Table 1 shows that the results of the Brenner and SMD algorithms improve only slightly. The reason is that most regions of the image are monochromatic fringes, and only the blurred transitional regions between the white and dark fringes are affected by the proposed method. However, the result of the Variance algorithm improves significantly, mainly because the Variance algorithm is sensitive to noise. Since the compositing process is unbiased, most of the noise in the captured image is eliminated, and the interference it induces is reduced. According to the difference between the Variance and the other algorithms, at least 33.2% of the noise in the captured image is eliminated.

At the pixel level, we choose one row of pixels in the image for analysis. Figure 4 shows the normalized gray-level distribution of the 150th row; the x-axis indicates the horizontal coordinate of the 150th-row pixels, and the y-axis indicates the normalized gray level. It is clear that the number of transitional pixels decreases. A transitional pixel is determined by the first-order difference: if the first-order difference at a pixel is greater than a threshold TH, that pixel is regarded as a transitional pixel.

$$if \, [{diff({image({150,y} )} )> TH} ]$$
$$then \, [{image({150,y} ) \, is \, a \, transition \, pixel} ]$$
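A short sketch of this criterion is given below; taking the absolute value of the first-order difference and the example threshold value are assumptions made for illustration.

```python
import numpy as np

def count_transitional_pixels(image, row=150, th=0.1):
    """Count pixels in one row whose first-order difference exceeds the threshold TH."""
    line = np.asarray(image, dtype=float)[row]
    return int(np.count_nonzero(np.abs(np.diff(line)) > th))
```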

Fig. 4. (a) Gray level distribution of captured image. (b) Gray level distribution of composited image.

Figure 5(a) shows the number of transitional pixels in the 150th row, and Fig. 5(b) shows the number of transitional pixels in the whole image. The x-axis indicates the value of the threshold TH, and the y-axis indicates the number of transitional pixels. It is apparent that, whatever the threshold value, the number of transitional pixels in the composited image is lower than in the directly captured image.

Fig. 5. (a) The number of transitional pixels in the 150th row. (b) The number of transitional pixels in the whole image.

Figure 6 shows the number of transitional pixels in images captured at other transmission-axis angles, and Table 2 lists the corresponding counts. Because the image captured at a transmission-axis angle of 70° is relatively dark, a large amount of noise appears in it; after normalization, many isolated hot pixels with large first-order differences are generated, which produces the abnormal number of transitional pixels. However, owing to the unbiasedness of the proposed method, the noise in the composited image is clearly reduced, and the composited image has the minimum number of transitional pixels.

Fig. 6. The number of transitional pixels in images captured at other transmission-axis angles.

Table 2. The number of transitional pixels in images captured at transmission-axis angles of 70°, 90°, 110°, and 130°

In the frequency domain, Fig. 7(a) shows the frequency distribution of the ideal rectangular wave projected onto the object surface by the projector, and Fig. 7(b) shows the frequency distribution of the 150th-row pixels of the composited image. The high-frequency components are significantly increased. Because the camera defocus function h(x, y) is equivalent to a low-pass filter, the increase of the high-frequency components means that the effect of the low-pass filter is reduced. In the composited image, the first high-frequency component of the rectangular wave is increased, while the fundamental frequency and the other low-frequency components are not significantly affected by our method. These frequency-domain characteristics prove that, while preserving the coded message, the composited image is clearer than the directly captured images and closer to what the projector projected (a rectangular wave). Consequently, the blurred transitional regions between the white and dark fringes are narrower than in the directly captured images.
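The frequency-domain comparison of Fig. 7 can be reproduced along the lines of the sketch below, which takes the one-dimensional FFT magnitude of the same row (row 150, as in the text) from the captured and the composited images; the variable names are assumptions.

```python
import numpy as np

def row_spectrum(image, row=150):
    """Magnitude spectrum of one image row (DC component removed for comparison)."""
    line = np.asarray(image, dtype=float)[row]
    return np.abs(np.fft.rfft(line - line.mean()))

# spectrum_captured = row_spectrum(captured_image)
# spectrum_composited = row_spectrum(composited_s1_image)
```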

Fig. 7. (a) Frequency distribution of ideal rectangular wave. (b) Frequency distribution of the captured image and the composited image.

To reveal the direct effect of our approach on the measurement, an elaborate analysis of fringe edge detection was conducted. Figure 8 shows the result of subpixel edge detection on the image captured at a transmission-axis angle of 130° [41]. In region A, some detection errors occur because of the noise. To evaluate our approach accurately, six fringe edges in region B were selected for analysis (as indicated by the arrows).

Fig. 8. Result of subpixel edge detection on the image captured at a transmission-axis angle of 130°.

Two indicators are used to evaluate the quality of the detected fringe edges: the correlation coefficient and the skewness. Figure 9 illustrates the meaning of these indicators for our experiment. Because the experiment was conducted on an optical platform, the horizontality of the equipment was guaranteed (as Fig. 9(a) shows), so the projected fringe pattern should be vertical and linear (the metal scrap is not a regular rectangle, which is why the top of Fig. 8 does not appear horizontal). The robustness and verticality of the fringe edges were evaluated by the correlation coefficient and the skewness, respectively. Figure 9(b) shows the ideal fringe edge, while Figs. 9(c) and 9(d) show a nonlinear fringe edge and a skewed fringe edge, respectively. The evaluation of four images captured at different transmission-axis (TA) angles and of the image processed by our approach is shown in Table 3. Compared with the directly captured images, the composited image processed by our approach is much closer to the projected fringe pattern (its correlation coefficient is relatively large, and the absolute value of its skewness is relatively small).

Fig. 9. (a) The horizontality of the experimental setup. (b) Ideal fringe edge. (c) Nonlinear fringe edge. (d) Skewed fringe edge.

Table 3. Evaluation of four images captured at different transmission-axis (TA) angles and of the processed image

5. Discussion

Compared with the state-of-the-art polarization-based methods for measuring HDR surfaces discussed in the Introduction, the proposed method requires no changes to the measurement conditions (e.g., camera exposure or projected fringe pattern), no additional hardware, and no prior knowledge of the measured object. The only requirement is to capture a sufficient number of fringe images to calculate the Stokes parameters (at least 4 images).

The idea of exploiting the polarization properties of light has been attempted before. Huang et al. [35] proposed using the polarization properties of an LCD projector to distinguish the fringe pattern, but the main idea of that method is still to suppress the influence of the complex environment. In contrast, this paper proves, for the first time, that the polarization properties of structured light can be utilized to improve the robustness of fringe edge detection.

6. Conclusion

In this paper, a polarized-state-based structured light coding strategy is proposed to improve measurement robustness, and a phase image estimation method is proposed to eliminate image noise and reduce the interference of camera defocus. The information hidden in the saturated areas can thus be extracted as much as possible. Compared with the traditional polarization-based 3D measurement system, the experimental setup of the proposed method can be configured without any additional hardware. To demonstrate the effectiveness of the method, both gray-level and frequency-domain analyses are conducted. The experiments show that the interference from camera defocus is remarkably reduced and the robustness of fringe edge detection is improved.

Our study assumes that the object surface is irregular and that the degree of camera defocus is manually adjusted. We would like to point out that for a flat surface the effect of the proposed method is not obvious. We expect that further analysis of these data by the research community will be valuable in improving the accuracy and robustness of measurement.

Funding

National Natural Science Foundation of China (61565005); 5511 Science and Technology Innovation Talent Project of Jiangxi Province (20162BCB23047).

Disclosures

The authors declare no conflicts of interest.

References

1. A. N. Belbachir, M. Hofstätter, M. Litzenberger, and P. Schön, “High-speed embedded-object analysis using a dual-line timed-address-event temporal-contrast vision sensor,” IEEE Trans. Ind. Electron. 58(3), 770–783 (2011). [CrossRef]  

2. A. Kumar, “Computer-Vision-Based Fabric Defect Detection: A Survey,” IEEE Trans. Ind. Electron. 55(1), 348–363 (2008). [CrossRef]  

3. H. Cho and S. W. Kim, “Mobile robot localization using biased chirp-spread-spectrum ranging,” IEEE Trans. Ind. Electron. 57(8), 2826–2835 (2010). [CrossRef]  

4. E. Grosso and M. Tistarelli, “Active/Dynamic Stereo Vision,” IEEE Trans. Pattern Anal. Machine Intell. 17(9), 868–879 (1995). [CrossRef]  

5. S. Cho and T. W. S. Chow, “Neural-Learning-Based Reflectance Model for 3-D Shape Reconstruction,” IEEE Trans. Ind. Electron. 47(6), 1346–1350 (2000). [CrossRef]  

6. F. Marino, P. de Ruvo, G. de Ruvo, M. Nitti, and E. Stella, “HiPER 3-D: An omnidirectional sensor for high precision environmental 3-D reconstruction,” IEEE Trans. Ind. Electron. 59(1), 579–591 (2012). [CrossRef]  

7. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Laser Eng. 48(2), 149–158 (2010). [CrossRef]  

8. H. Lin, J. Gao, G. Zhang, X. Chen, Y. He, and Y. Liu, “Review and Comparison of High-Dynamic Range Three-Dimensional Shape Measurement Techniques,” J. Sens. 2017, 1–11 (2017). [CrossRef]  

9. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23(5), 6846 (2015). [CrossRef]  

10. Z. Song, R. Chung, and X. T. Zhang, “An accurate and robust strip-edge-based structured light means for shiny surface micromeasurement in 3-D,” IEEE Trans. Ind. Electron. 60(3), 1023–1032 (2013). [CrossRef]  

11. N. Coniglio, A. Mathieu, O. Aubreton, and C. Stolz, “Characterizing weld pool surfaces from polarization state of thermal emissions,” Opt. Lett. 38(12), 2086–2088 (2013). [CrossRef]  

12. C. Stolz, N. Coniglio, A. Mathieu, and O. Aubreton, “Real time polarization imaging of weld pool surface,” in Twelfth International Conference on Quality Control by Artificial Vision, Proc. SPIE 9534 (2015).

13. S. Lemeš and N. Zaimović-Uzunović, “Study of Ambient Light Influence on Laser 3D Scanning,” in 7th International Conference on Industrial Tools and Material Processing Technologies (ICIT & MPT), pp. 327–330 (2009).

14. Z. Qi, Z. Wang, J. Huang, C. Xing, and J. Gao, “Error of image saturation in the structured-light method,” Appl. Opt. 57(1), A181 (2018). [CrossRef]  

15. J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express 24(19), 21846 (2016). [CrossRef]  

16. B. Chen and S. Zhang, “High-quality 3D shape measurement using saturated fringe patterns,” Opt. Laser Eng. 87, 83–89 (2016). [CrossRef]  

17. T. Yang, G. Zhang, H. Li, Z. Zhang, and X. Zhou, “Theoretical proof of parameter optimization for sinusoidal fringe projection profilometry,” Opt. Laser Eng. 123, 37–44 (2019). [CrossRef]  

18. Y. Hu, Q. Chen, Y. Liang, S. Feng, T. Tao, and C. Zuo, “Microscopic 3D measurement of shiny surfaces based on a multi-frequency phase-shifting scheme,” Opt. Laser Eng. 122, 1–7 (2019). [CrossRef]  

19. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces,” Opt. Laser Eng. 50(10), 1484–1493 (2012). [CrossRef]  

20. S. Zhang and S.-T. Yau, “High dynamic range scanning technique,” Proc. SPIE 7066, 70660A (2008).

21. R. Benveniste and C. Ünsalan, “Nary coded structured light-based range scanners using color invariants,” J. Real-Time Image Pr. 9(2), 359–377 (2014). [CrossRef]  

22. H. Lin, J. Gao, Q. Mei, G. Zhang, Y. He, and X. Chen, “Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment,” Opt. Laser Eng. 91, 206–215 (2017). [CrossRef]  

23. S. Li, F. Da, and L. Rao, “Adaptive fringe projection technique for high-dynamic range three-dimensional shape measurement using binary search,” Opt. Eng. 56(9), 1 (2017). [CrossRef]  

24. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive projection intensity adjustment for avoiding saturation in three-dimensional shape measurement,” Opt. Commun. 410, 694–702 (2018). [CrossRef]  

25. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Opt. Express 22(8), 9887 (2014). [CrossRef]  

26. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection,” Meas. Sci. Technol. 29(5), 055203 (2018). [CrossRef]  

27. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064 (2014). [CrossRef]  

28. O. Morel and P. Gorria, “Polarization imaging for 3D inspection of highly reflective metallic objects,” Opt. Spectrosc. 101(1), 11–17 (2006). [CrossRef]  

29. L. B. Wolff, “Polarization vision: A new sensory approach to image understanding,” Image Vision Comput. 15(2), 81–93 (1997). [CrossRef]  

30. T. Chen, H. P. A. Lensch, C. Fuchs, and H. P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2007).

31. S. Umeyama and G. Godin, “Separation of Diffuse and Specular Components of Surface Reflection by Use of Polarization and Statistical Analysis of Images,” IEEE Trans. Pattern Anal. Machine Intell. 26(5), 639–647 (2004). [CrossRef]  

32. D. Goldstein, Polarized Light (Marcel Dekker, 2003).

33. L. B. Wolff, “Using polarization to separate reflection components,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 1989).

34. R. Benveniste and C. Ünsalan, “Binary and ternary coded structured light 3D scanner for shiny objects,” Lect. Notes Electr. Eng. 62, 241–244 (2010). [CrossRef]  

35. X. Huang, J. Bai, K. Wang, Q. Liu, Y. Luo, K. Yang, and X. Zhang, “Target enhanced 3D reconstruction based on polarization-coded structured light,” Opt. Express 25(2), 1173–1184 (2017). [CrossRef]  

36. S. Lei and S. Zhang, “Digital sinusoidal fringe pattern generation: Defocusing binary patterns VS focusing sinusoidal patterns,” Opt. Laser Eng. 48(5), 561–569 (2010). [CrossRef]  

37. Z. Song, H. Jiang, H. Lin, and S. Tang, “A high dynamic range structured light means for the 3D measurement of specular surface,” Opt. Laser Eng. 95, 8–16 (2017). [CrossRef]  

38. S. G. Krein, Overdetermined Equations (Birkhäuser Boston, 1982).

39. D. Chalmers, Probability and Statistics (Cambridge University, 2018).

40. R. Hassen, Z. Wang, and M. M. A. Salama, “Image sharpness assessment based on local phase coherence,” IEEE Trans. Image Process. 22(7), 2798–2810 (2013). [CrossRef]  

41. A. Trujillo-Pino, K. Krissian, M. Alemán-Flores, and D. Santana-Cedrés, “Accurate subpixel edge location based on partial area effect,” Image Vision Comput. 31(1), 72–90 (2013). [CrossRef]  
