
RETRACTED: Single-shot 3D shape measurement based on RGB dot patterns and stereovision

Open Access

Abstract

One-shot projection structured-light 3D measurement establishes the stereo matching relationship and reconstructs the 3D shape by projecting a single pattern. However, traditional stereo matching algorithms do not solve the problems of low matching accuracy and efficiency, which fundamentally limits the accuracy of 3D measurement. As projectors and imaging systems reach ever higher resolution and imaging quality, RGB dot projection becomes increasingly attractive because it can establish a stereo matching relationship with a single projection. In this work, we propose a single-shot 3D measurement method using line clustering stereo matching and model correction. The projected RGB dots are extracted by the slope difference distribution method and an area-constrained erosion method; the latter solves the problem of connected segmented blobs caused by insufficient projector resolution. The line clustering stereo matching method coarsely matches the segmented center red points, and a model correction method restores and constrains the parts of the pattern that cannot be imaged. Experimental results demonstrate that our method achieves an accuracy of about 0.089 mm, better than the traditional disparity and RGB line methods, showing that the proposed method can accurately reconstruct 3D surfaces.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Retraction

This article has been retracted. Please see:
Yang Lu, Zihao Wang, Liandong Yu, Huakun Jia, Xiaozhe Chen, Rongke Gao, Haiju Li, Yeru Wang, and Chao Ma, "Single-shot 3D shape measurement based on RGB dot patterns and stereovision: retraction," Opt. Express 31, 8440-8440 (2023)
https://opg.optica.org/oe/abstract.cfm?uri=oe-31-5-8440

1. Introduction

The stereo vision technique is a non-contact 3D measurement method that has grown increasingly popular in recent years. It calculates the corresponding points between the left and right camera views to achieve stereo matching and 3D reconstruction. However, two challenging problems have remained unsolved over the past few decades: how to match the two camera views accurately and robustly [1–3], and how to reconstruct non-Lambertian surfaces in 3D [4].

To improve matching accuracy, researchers usually project designed patterns that increase the efficiency and accuracy of stereo matching. For instance, speckle projection typically uses a projector to project an irregular speckle pattern to establish correspondence [5,6]. The RGB dot pattern establishes a stereo matching relationship efficiently and is well suited to reconstructing dynamic objects [7,8]. Its major drawback, however, is that the projector's resolution directly limits the reconstruction resolution. RGB fringe projection profilometry has a continuous pixel distribution [9–11] and can therefore achieve a higher reconstruction resolution than a dot pattern, but it is not easy to establish an accurate correspondence between the two camera views. Active stereo vision based on phase coding and multi-frequency heterodyne interference is a high-precision measurement approach [12–15] that establishes stereo relationships robustly. Recently, deep learning methods have optimized stereo matching, with a strong capability for global prediction [16–18]. The above methods improve the efficiency of stereo matching to a certain extent, but the problems of insufficient spatial resolution and discontinuous phase have not been effectively solved.

On the other hand, because of the limited dynamic imaging range of the CCD camera, shiny parts cannot be imaged, which introduces errors into the final 3D reconstruction. The multiple-exposure technique fuses several images into one to avoid saturation and achieve a higher signal-to-noise ratio, and it has been used to address the high-reflectivity problem [19]. Feng et al. [20] used automated predictive exposure to reconstruct metal objects with shiny surfaces in 3D; this work pioneered the use of a mathematical camera imaging model to solve the reflectivity problem. Hu et al. [21] extracted the shiny part through a telecentric system and a multi-frequency phase-shift technique. In these studies, imaging quality is usually improved by adjusting hardware parameters or camera exposure time. Ideally, reducing human intervention as much as possible while optimizing the measurement accuracy of the system has always been the researchers' goal.

In this paper, we propose a precise and effective one-shot 3D measurement method based on line clustering stereo matching. Moreover, a model correction and restoration method is proposed to ameliorate the shiny-surface reconstruction problem. The measurement results show that the proposed 3D reconstruction method can effectively reconstruct shiny parts such as a metal gauge block and human teeth.

2. Principle

The binocular system is shown in Fig. 1. In the proposed method, a projector projects an RGB structured-light dot pattern onto the object's surface, and two cameras capture the projected pattern synchronously. The captured images are rectified by the epipolar line correction method, and the RGB points are extracted by the RGB points extraction method. The extracted RGB points are then fed directly to the line clustering and model correction stereo matching algorithm to obtain the stereo matching relationship matrix. Finally, the three-dimensional point cloud is obtained using the space-ray method.

Fig. 1. Diagram of the image acquisition system.

2.1 RGB points extraction method

The captured images are converted from the RGB domain to the HSV domain, and the slope difference distribution (SDD) threshold selection method [22] is used to obtain the region of interest (ROI) in the S channel. The ROI is then intersected with the H and V channels to filter background noise. The RGB dots are obtained by applying the SDD method to the H channel; the segmentation results and the SDD curve are shown in Fig. 2(a), where the blue curve is the calculated slope difference of the histogram. The three largest peaks of the blue curve are selected as clustering centers for the red, green, and blue dots, and the first zero crossing to the right of each clustering center is selected as the segmentation threshold. The segmentation is formulated as:

$${Q_r}\left( {i,j} \right) = \left\{ {\begin{array}{{l}} {1, T_r^l < {I_h}\left( {i,j} \right) < T_r^h }\\ {0, else} \end{array}} \right.$$
$${Q_g}\left( {i,j} \right) = \left\{ {\begin{array}{{l}} {1,T_g^l < {I_h}\left( {i,j} \right) < T_g^h}\\ {0,else} \end{array}} \right.$$
$${Q_b}\left( {i,j} \right) = \left\{ {\begin{array}{{l}} {1,T_b^l < {I_h}\left( {i,j} \right) < T_b^h}\\ {0,else} \end{array}} \right.$$
where ${I_h}({i,j} )$ denotes the H channel of the HSV image; ${Q_r}({i,j} )$, ${Q_g}({i,j} )$, ${Q_b}({i,j} )$ denote the segmentation results of the red, green, and blue dots; and $T_r^l$, $T_r^h$, $T_g^l$, $T_g^h$, $T_b^l$, $T_b^h$ are the low and high thresholds for segmenting the red, green, and blue dots, respectively.
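To make the thresholding in Eqs. (1)-(3) concrete, the following is a minimal Python sketch of the per-color hue segmentation. The numeric hue bounds in the usage comment are hypothetical placeholders; in the paper they come from the SDD method, which is not reproduced here.

```python
import cv2
import numpy as np

def segment_rgb_dots(bgr_image, thresholds):
    """Segment the projected R/G/B dots by thresholding the H channel.

    `thresholds` maps a color name to its (low, high) hue bounds,
    standing in for the SDD-derived thresholds of Eqs. (1)-(3).
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0]
    masks = {}
    for color, (t_low, t_high) in thresholds.items():
        # Q(i,j) = 1 where T^l < I_h(i,j) < T^h, else 0
        masks[color] = ((h > t_low) & (h < t_high)).astype(np.uint8)
    return masks

# Hypothetical hue bounds (OpenCV hue is 0-179; red may wrap around 0):
# masks = segment_rgb_dots(img, {"red": (0, 10), "green": (50, 70), "blue": (110, 130)})
```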

Fig. 2. RGB points extraction results. (a) Calculated SDD curve; (b) red points extraction result; (c) blue points extraction result; (d) green points extraction result.

Fig. 3. Line clustering matching process. (a)-(b) Unrestored center red points of the left and right views; (c)-(d) restored center red points of the left and right views.

The segmentation result of all projected points is obtained in the V channel. Owing to insufficient projector resolution, connected blobs often occur during segmentation. We therefore propose an area-constrained morphological erosion method to separate connected blobs (a sketch follows the steps below). The specific steps are as follows:

Step 1: Label every connected component of the segmented V-channel result ${V_i},i = 1,2, \cdots ,N$, where N is the total number of labeled blobs in the segmented image. Each labeled blob ${V_i}$ is processed independently in the following steps.

Step 2: Apply the morphological erosion operation to every blob according to the following equations, where k denotes the erosion count.

$$V_i^k = \{ z |{(P )_z} \subseteq V_i^{k - 1}\} \;\;\;\;\;\;\; ({k \ge 2} )$$
$${(P )_z} = \{ c |c = b + z,b \in P\}$$
$$k = k + 1$$
where P is the cube structuring element with side length 1, z is a translation vector, and all operations are carried out in Euclidean space.

Step 3: Repeat Step 2, and stop the erosion when the number of eroded blobs decreases, i.e., when the following condition holds:

$$label(V_i^k) < label(V_i^{k - 1})$$
where $label(V_i^k)$ denotes the number of labeled blobs after the ith blob has been eroded k times. The result after k−1 erosions is selected as the candidate seed, i.e., $V_i^{k - 1}$ is the candidate seed.

Step 4: If the area of a candidate seed is larger than the area threshold Th1, i.e., the following condition holds, repeat Steps 2-3 and sum all the candidate seeds $V_i^{k - 1}$ to obtain the final seed or seeds.

$$Area[label(V_i^k)] > Th1$$

In theory, Th1 should vary with the reflectivity of the measured surface, but Th1 = 15 is an acceptable parameter for most of the cases in this paper.

Step 5: Let $i = 1,2, \cdots ,N$ to select each labeled blob in turn, and repeat Steps 2-4 to obtain and combine all the seeds ${M_i},i = 1,2, \cdots ,N$.
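As a rough illustration of Steps 1-4, the following Python sketch performs one pass of the area-constrained erosion using SciPy, without the recursive re-erosion of oversized seeds; the stopping rule and the Th1 = 15 threshold follow the text above.

```python
import numpy as np
from scipy import ndimage

def split_connected_blobs(v_mask, area_th=15, max_iter=50):
    """Area-constrained erosion sketch: erode each labeled blob until the
    piece count is about to drop (touching dots separate first, then start
    to vanish), and keep the pieces larger than the area threshold Th1."""
    labels, n = ndimage.label(v_mask)                 # Step 1: label blobs
    seeds = np.zeros_like(v_mask, dtype=bool)
    for i in range(1, n + 1):
        prev = labels == i
        for _ in range(max_iter):
            eroded = ndimage.binary_erosion(prev)     # Step 2: erode once
            if ndimage.label(eroded)[1] < ndimage.label(prev)[1]:
                break                                 # Step 3: count dropped, keep prev
            prev = eroded
        cand, m = ndimage.label(prev)                 # Step 4: area-filter the seeds
        for j in range(1, m + 1):
            piece = cand == j
            if piece.sum() > area_th:
                seeds |= piece
    return seeds
```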

All the projected RGB points can then be obtained by the following equation:

$$C({i,j} )= {Q_r}({i,j} )\bigcap {{Q_V}({i,j} )} + {Q_g}({i,j} )\bigcap {{Q_V}({i,j} )} + {Q_b}({i,j} )\bigcap {{Q_V}({i,j} )}$$
where ${Q_r}({i,j} )$, ${Q_g}({i,j} )$, ${Q_b}({i,j} )$ denote the coordinates of the segmented red, green, and blue dots, and ${Q_V}({i,j} )$ denotes the set of all point coordinates segmented in the V channel.

2.2 Line clustering based stereo matching method

Traditional coarse matching usually relies on edge recognition or feature matching relationships, and convergence to the optimal match is slow. To reduce the convergence time of coarse matching, a line clustering matching method is proposed in this paper. The method consists of five steps:

Step 1: Extract the position coordinates of the center red points from the red-point segmentation result, and fit a rectangular line through the center red points lying on the same vertical line. The width of the line is constrained by the average abscissa of the center red points in the two adjacent lines.

$$({x_i^k,y_i^k} )= {L^k}\bigcap {({{X_i},{Y_i}} )}$$
where $({{X_i},{Y_i}} )$ are the coordinates of the red points, ${L^k}$ denotes the kth fitted rectangular line, and $({x_i^k,y_i^k} )$ denotes the coordinates of the ith center red dot on the kth fitted rectangular line.

Step 2: The ordinates of the center red points are calculated and labeled by the kth fitted rectangular line, and the ordinates of the labeled center red dots of the (k−1)th and (k+1)th fitted rectangular lines are calculated at the same time. Equations (11) and (12) compute the ordinate deviation within the kth rectangular line and the abscissa deviation with respect to the adjacent (k−1)th and (k+1)th lines; lost center red points can be recovered through these two constraints. The Huber penalty scoring mechanism, a robust loss function that judges abnormal points more smoothly than the mean square error, is used here to constrain the missing central red dots (a sketch follows the equations below).

$${d_1} = \frac{{\sqrt {{{({x_i^k - x_{i - 1}^k} )}^2} + {{({y_i^k - y_{i\textrm{ - }1}^k} )}^2}} }}{{\sum\limits_{i = 2}^N {\sqrt {{{({x_i^k - x_{i - 1}^k} )}^2} + {{({y_i^k - y_{i\textrm{ - }1}^k} )}^2}} } }}$$
$${d_2}\textrm{ = }\frac{{\sqrt {{{({x_i^k - x_i^{k\textrm{ - }1}} )}^2} + {{({y_i^k - y_i^{k\textrm{ - }1}} )}^2}} }}{{\sum\limits_{k = 2}^M {\sqrt {{{({x_i^k - x_i^{k\textrm{ - }1}} )}^2} + {{({y_i^k - y_i^{k\textrm{ - }1}} )}^2}} } }}$$
$${e_i} = {W_1}\cdot {d_1} + {W_2}\cdot {d_2}$$
$${H_L}({{e_i}} )= \left\{ {\begin{array}{{l}} {\frac{1}{2}e_i^2, ||{{e_i}} ||\le {t_e}}\\ {\frac{1}{2}{t_e}({2||{{e_i}} ||- {t_e}} ), ||{{e_i}} ||> {t_e}} \end{array}} \right.$$
where ${e_i}$ is the distance-error penalty score of the ith point in the rectangular line, jointly determined by the penalty weights ${W_1},{W_2}$ and the distance errors ${d_1},{d_2}$. Since the number of central red dots usually varies greatly between adjacent rectangular lines, the weight ${W_2}$ is kept relatively small. ${t_e}$ denotes the criterion threshold.
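The following is a minimal sketch of the scoring in Eqs. (13)-(14); the weights W1, W2 and the threshold t_e are assumed inputs, as the paper does not give their values.

```python
import numpy as np

def huber_score(d1, d2, w1, w2, t_e):
    """Weighted distance error e_i = W1*d1 + W2*d2 (Eq. (13)) passed
    through the Huber loss of Eq. (14): quadratic below the criterion
    threshold t_e, linear above it, so outlier dots are penalized
    less aggressively than by the mean square error."""
    e = np.abs(w1 * d1 + w2 * d2)
    return np.where(e <= t_e, 0.5 * e**2, 0.5 * t_e * (2 * e - t_e))
```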

Step 3: Label every fitted line and find its centroid, then calculate the abscissa differences between all adjacent rectangular lines and the average of these differences (Eq. (16)). When the abscissa difference between adjacent labeled rectangles is less than the search range, they are clustered into the same category. Because this process involves only down-sampling, the clustered fringes cannot become disordered.

$${L^k} = \left\{ {\begin{array}{{l}} {k + 1 {d_L} < {t_L}}\\ {k {d_L} > {t_L}} \end{array}} \right.$$
$${t_L} = {W_L}\cdot \frac{{\sum\limits_{i = 1}^{N - 1} {\sqrt {{{({x_{i + 1}^C - x_i^C} )}^2} + {{({y_{i + 1}^C - y_i^C} )}^2}} } }}{N}$$
where ${L^k}$ denotes the cluster number, ${d_L}$ denotes the abscissa difference between adjacent lines, ${t_L}$ is the distance threshold, N is the number of clustered fringes, and $x_{i + 1}^C$, $y_{i + 1}^C$ are the abscissa and ordinate of the centroid of the (i+1)th fringe, respectively. A sketch of this clustering follows.
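The sketch below clusters fringes by centroid spacing following the description above (adjacent lines closer than t_L share a label); the weight W_L is an assumed input, and the piecewise rule of Eq. (15) is read according to the prose.

```python
import numpy as np

def cluster_fringes(centroids, w_l=1.5):
    """Cluster fitted rectangular lines by centroid spacing (Eqs. (15)-(16)).

    `centroids` is an (N, 2) array of fringe centroids ordered by abscissa.
    t_L is W_L times the mean centroid spacing (Eq. (16)); a new cluster
    starts whenever a gap exceeds t_L.  w_l = 1.5 is an assumed value.
    """
    gaps = np.linalg.norm(np.diff(centroids, axis=0), axis=1)
    t_l = w_l * gaps.mean()                    # Eq. (16) distance threshold
    labels = np.zeros(len(centroids), dtype=int)
    for i, gap in enumerate(gaps):
        labels[i + 1] = labels[i] + (1 if gap > t_l else 0)
    return labels
```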

Step 4: After the images of the left and right cameras are clustered, the coarse matching relationship is generated from the clustering results. For each clustered fringe $Z_c^i$ in the left camera, select the fringe with the same assignment $Y_c^i$ in the right camera, and calculate the Euclidean distance as the search range for coarse matching.

Step 5: After the coarse matching of fringes is completed, all the center red dots are matched within the search range of the coarsely matched fringes, the remaining dots are matched according to the positions of the center red dots, and the matching between the left and right cameras is finally obtained. The best-matching dot in the right fringe for each dot in the left fringe is found by minimizing the following equation (a sketch follows):

$$\mathop {\min }\limits_{c \in [{1,M} ]} \sqrt {{{({x_c^R - \Delta x - x_c^L} )}^2} + {{({y_c^R - y_c^L} )}^2}}$$
where c indexes the clustered fringe, $x_c^L$, $x_c^R$ are the abscissas of the cth fringe in the left and right views, $y_c^L$, $y_c^R$ are the corresponding ordinates, and $\Delta x$ is the minimum step size of the iterative optimization.
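A minimal sketch of this nearest-dot search under Eq. (17); the disparity offset `dx` and the point arrays are assumed inputs.

```python
import numpy as np

def match_center_dots(left_pts, right_pts, dx):
    """For each center red dot in a left-view fringe, pick the right-view
    dot minimizing the shifted Euclidean distance of Eq. (17).

    left_pts, right_pts: (N, 2) and (M, 2) arrays of (x, y) coordinates
    in matched fringes; dx: disparity offset (the paper's minimum step)."""
    matches = []
    for x_l, y_l in left_pts:
        d = np.hypot(right_pts[:, 0] - dx - x_l, right_pts[:, 1] - y_l)
        matches.append(int(np.argmin(d)))   # index of best right-view dot
    return matches
```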

Finally, the point mapping relationship between the two camera views is calculated by scattered data interpolation [23]. The clustering result is shown in Fig. 3.

2.3 Model-correction method

After stereo matching is completed, the material of the measured object may exceed the dynamic imaging range of the camera (a shiny surface), which causes errors in the reconstruction results. We propose a pattern model-correction method to restore and correct patterns that cannot be imaged, based on the four-corner relationship. The least-squares fitting equation (Eq. (18)) fits the surrounding red dots according to the relative positions of the central red dots, as shown by the yellow boundaries in Fig. 4(a). The position deviation between each dot and the corresponding yellow boundary is calculated by Eq. (19); red dots with large deviations are removed, the candidate red dots are retained, and the least-squares fit is repeated to obtain the corrected yellow boundaries (Fig. 4(b)). The corrected red dots of the sub-pattern after this second fit are shown in Supplement 1, Fig. S1; the same steps are then repeated for the green and blue dots in turn until all sub-patterns are corrected. A pre-designed model restores the corrected boundary through the relative position relationship and the number of points. A sketch of the two-pass fit follows the equations below.

$$y = {a_1}x + {a_2}$$
$${\sigma _i} = \frac{{|{{a_1}{x_i} - {y_i} + {a_2}} |}}{{\sqrt {a_1^2 + 1} }}\cdot \frac{{{n_l}}}{{\sum\limits_{j = 1}^{{n_l}} {\frac{{|{{a_1}{x_j} - {y_j} + {a_2}} |}}{{\sqrt {a_1^2 + 1} }}} }}$$
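The two-pass boundary fit of Eqs. (18)-(19) can be sketched as follows; the inlier threshold of 2.0 on the normalized deviation is an assumption, since the paper does not state the cutoff for "large deviations".

```python
import numpy as np

def fit_boundary(x, y):
    """Fit y = a1*x + a2 (Eq. (18)) and return each dot's point-to-line
    distance divided by the mean distance, i.e. sigma_i of Eq. (19)."""
    a1, a2 = np.polyfit(x, y, 1)                        # least-squares line
    dist = np.abs(a1 * x - y + a2) / np.hypot(a1, 1.0)  # point-to-line distance
    return (a1, a2), dist / dist.mean()                 # Eq. (19) normalization

# Two-pass correction: drop dots with large normalized deviation, refit.
# (a1, a2), sigma = fit_boundary(x, y)
# keep = sigma < 2.0            # assumed inlier cutoff
# (a1, a2), _ = fit_boundary(x[keep], y[keep])
```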

Fig. 4. The fitted boundaries of the sub-lattice. (a) First fitted boundaries; (b) second fitted boundaries.

As shown in Supplement 1, Fig. S1, the misidentified RGB dots in the sub-lattice are constrained by Eq. (19); the RGB dots before and after correction are marked with different symbols in Supplement 1, Fig. S1.

2.4 3D reconstruction method

The space-ray-based three-dimensional coordinate calculation method (SR) is used to reconstruct all matched dots. Disparity has been the most commonly used 3D reconstruction method for stereo vision over the past few decades, yet few researchers have verified the reconstruction accuracy obtained by the feature-matching disparity method. The SR method assumes that the optical path captured by the camera propagates along an ideal straight line whose start and end points are the world coordinate of the measured object and the imaging coordinate of the camera, respectively. Hence, the initial position of the outgoing vector is determined through the mapping between the two-dimensional camera image coordinate system and the three-dimensional world coordinate system; at this point, the pixel coordinate is a known quantity. Computing the world coordinates can then be cast as finding the minimum distance between two three-dimensional rays in space. The principle of the 3D reconstruction method is shown in Supplement 1, Fig. S2, where the two space rays can be written as:

$${l_p} = {k_p}\cdot \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} + {P_0}$$
$${l_q} = {k_q}\cdot \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} + {Q_0}$$
where $\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m}$, $\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n}$ are the unit direction vectors of the two space rays, kp, kq are the coefficients of the unit vectors, and ${P_0}$, ${Q_0}$ are the starting coordinates of the space rays. The distance between the two rays can be computed by the following equation:
$$d = \sqrt {{{\left( {{P_0} - {Q_0} + {k_p}\cdot \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} - {k_q}\cdot \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right)}^2}}$$

Calculating the shortest distance between the two rays can then be treated as solving the following least-squares problem:

$$\min \left( {{{\left( {{P_0} - {Q_0} + {k_p}\cdot \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} - {k_q}\cdot \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right)}^2}} \right)$$

Space rays can only be skew, intersecting, parallel, or coincident. Given the imaging principle of the binocular measurement system built here, the latter two cases cannot occur, so they are not considered in the derivation. In the intersecting case, setting the above expression to zero yields the parameters kp and kq of the two rays:

$${k_p} = \frac{{\left( {({{P_0} - {Q_0}} )\times \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right)\cdot \left( {\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} \times \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right)}}{{{{\left||{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} \times \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right||}^2}}}$$
$${k_q} = \frac{{\left( {({{P_0} - {Q_0}} )\times \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} } \right)\cdot \left( {\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} \times \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right)}}{{{{\left||{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over m} \times \mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\rightharpoonup$}} \over n} } \right||}^2}}}$$
When the two rays do not intersect, the coordinates of the two nearest points are averaged to obtain the final world coordinate. Finally, all the coordinates are integrated to obtain the three-dimensional coordinates of the measured object. A sketch of this triangulation follows.
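The following is a sketch of the closest-point triangulation using the standard derivation; the signs of k_p and k_q follow the convention d0 = Q0 − P0, which may differ from the (P0 − Q0) ordering in Eqs. (24)-(25) depending on the ray parameterization.

```python
import numpy as np

def triangulate_rays(P0, m, Q0, n):
    """Closest-point triangulation of the two space rays of Eqs. (20)-(21).

    Returns the midpoint of the shortest segment between the rays, which
    coincides with the intersection point when the rays actually meet."""
    P0, Q0 = np.asarray(P0, float), np.asarray(Q0, float)
    m, n = np.asarray(m, float), np.asarray(n, float)
    mxn = np.cross(m, n)
    denom = np.dot(mxn, mxn)        # ||m x n||^2; zero for parallel rays
    if denom < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    d0 = Q0 - P0
    k_p = np.dot(np.cross(d0, n), mxn) / denom
    k_q = np.dot(np.cross(d0, m), mxn) / denom
    p = P0 + k_p * m                # nearest point on ray l_p
    q = Q0 + k_q * n                # nearest point on ray l_q
    return (p + q) / 2.0            # midpoint, per the averaging rule above
```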

3. Experimental results and discussion

In our experiments, a BenQ AU716N projector is used to project the RGB dot pattern onto the object, and two Hikrobot MV-CA050-12UC cameras capture the images synchronously. With the established system shown in Fig. 1, Zhang's camera calibration method [24] is used to obtain the intrinsic and extrinsic parameters.

To verify that the proposed line clustering method obtains the stereo matching relationship more efficiently, a comparative experiment against the edge-based stereo matching method [7] and NCC (normalized cross-correlation) stereo matching was designed. The experimental results are listed in Table 1.

Table 1. Computational comparison of different stereo matching methods

Several objects are measured, including a plaster ball, a human face, and a high-precision metal gauge block. The plaster ball and its reconstruction results with the different methods are shown in Figs. 5(a)-5(c). For a more intuitive qualitative comparison, the disparity map of Fig. 5(c) is converted into a depth map. The actual radius of the ball is 84.23 mm, measured with a coordinate measuring machine.

Fig. 5. Results of reconstructing a plaster ball and gauge block. (a) Reconstructed ball with the proposed method; (b) reconstructed ball with the RGB line pattern; (c) reconstructed ball with disparity; (d) reconstructed block with the proposed method; (e) reconstructed block with the RGB line pattern; (f) reconstructed block with disparity.

The accuracy of the ball reconstruction is verified with the mean distance (MD) between the reconstructed points and the fitted ball, computed as:

$$MD = \frac{1}{N}\sum\limits_{i = 1}^N {{D_i}}$$
where ${D_i}$ denotes the minimum distance between the ith reconstructed point and the fitted object surface, and $N$ denotes the total number of reconstructed points.
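For reference, the MD of Eq. (26) against a fitted sphere reduces to a few lines; the sphere center and radius are assumed to come from a prior least-squares fit.

```python
import numpy as np

def mean_distance_to_sphere(points, center, radius):
    """Eq. (26): mean absolute distance between reconstructed points
    (an (N, 3) array) and the surface of the fitted sphere."""
    d = np.linalg.norm(points - center, axis=1) - radius
    return np.abs(d).mean()
```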

The quantitative comparison of the disparity method, the RGB line 3D reconstruction method [9], and our method is listed in Table 2. Speckle projection is not included in the comparison because that method has difficulty reconstructing shiny surfaces. The repeated-measurement errors are also listed in Table 2 as mean accuracy plus and minus errors. As can be seen, our method has superior precision.

Table 2. Quantitative comparison of different 3D reconstruction methods

To quantitatively evaluate the accuracy of the shiny-surface reconstruction method, a high-precision metal gauge block stack with lengths of 80 mm and 100 mm and a man's face with exposed teeth are reconstructed. Figure 6(a) shows the reconstructed 3D profile of a stack-shaped object composed of 100 mm and 80 mm gauge blocks. The gray area in Fig. 6(b) indicates the potentially erroneous imaging regions at the edges of the two blocks caused by camera ambiguity. The repeatedly measured step height (X-Y) is shown in Fig. 6(c).

Fig. 6. Results of reconstructing a stack-shaped object. (a) 3D profiles; (b) characterization map of the height difference measured at A-B; (c) height difference at X-Y after multiple measurements.

Figure 7 shows the reconstruction results for the man's face. As the zoomed-in views show, some dots are hard to segment and match because of reflections from the teeth. However, the model correction and restoration method restores the dot pattern projected onto the teeth, and the 3D profile of the teeth is successfully reconstructed.

Fig. 7. Results of reconstructing a human face (see Visualization 1). (a) Reconstructed man's face; (b) zoomed-in ear region; (c) zoomed-in teeth region.

4. Conclusion

In this paper, we proposed a one-shot 3D measurement method using line clustering stereo matching. The SDD threshold selection method segments the projected RGB dot pattern, and an area-constrained morphological erosion method separates connected segmented blobs. To improve the efficiency of stereo matching, we proposed a coarse matching method based on line clustering, which effectively limits the mapping relationship of matching points to two sampling intervals. For shiny surfaces, the model correction method restores the reflective parts through the geometric relationship of the pre-designed pattern. The measurements of the metal gauge block and plaster ball achieve higher precision than the disparity and RGB line methods, demonstrating that the proposed method is applicable to shiny surfaces and surfaces with complex reflection characteristics.

Funding

National Key Research and Development Program of China (2019YFE0107400); National Natural Science Foundation of China (51927811, 52005147).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. L. Kou, K. Yang, L. Luo, Y. Zhang, J. Li, Y. Wang, and L. Xie, “Binocular stereo matching of real scenes based on a convolutional neural network and computer graphics,” Opt. Express 29(17), 26876–26893 (2021). [CrossRef]  

2. Y. Yin, H. Zhu, P. Yang, Z. Yang, K. Liu, and H. Fu, “High-precision and rapid binocular camera calibration method using a single image per camera,” Opt. Express 30(11), 18781–18799 (2022). [CrossRef]  

3. M. Galar, A. Jurio, C. Lopez-Molina, D. Paternain, J. Sanz, and H. Bustince, “Aggregation functions to combine RGB color channels in stereo matching,” Opt. Express 21(1), 1247–1257 (2013). [CrossRef]  

4. Z. Zhu, Y. Xie, and Y. Cen, “Polarized-state-based coding strategy and phase image estimation method for robust 3D measurement,” Opt. Express 28(3), 4307–4319 (2020). [CrossRef]  

5. K. Fu, Y. Xie, H. Jing, and J. Zhu, “Fast spatial–temporal stereo matching for 3D face reconstruction under speckle pattern projection,” Image Vis. Comput. 85, 36–45 (2019). [CrossRef]  

6. D. Khan, M. A. Shirazi, and M. Y. Kim, “Single shot laser speckle based 3D acquisition system for medical applications,” Opt. Lasers Eng. 105, 43–53 (2018). [CrossRef]  

7. Y. Shuang and Z. Wang, “Active stereo vision three-dimensional reconstruction by RGB dot pattern projection and ray intersection,” Measurement 167, 108195 (2021). [CrossRef]  

8. Z. Wang, Q. Zhou, and Y. Shuang, “Three-dimensional reconstruction with single-shot structured light dot pattern and analytic solutions,” Measurement 151, 107114 (2020). [CrossRef]  

9. Z. Wang and Y. Yang, “Single-shot three-dimensional reconstruction based on structured light line pattern,” Opt. Lasers Eng. 106, 10–16 (2018). [CrossRef]  

10. Y. Li and Z. Wang, “RGB Line Pattern-Based Stereo Vision Matching for Single-Shot 3-D Measurement,” IEEE Trans. Instrum. Meas. 70, 1–13 (2021). [CrossRef]  

11. Q. Jiaming, F. Shijie, L. Yixuan, T. Tianyang, C. Qian, and Z. Chao, “Color deep learning profilometry for single-shot 3D shape measurement,” in Proc. SPIE (2021). 

12. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

13. X. Liu and J. Kofman, “Real-time 3D surface-shape measurement using background-modulated modified Fourier transform profilometry with geometry-constraint,” Opt. Lasers Eng. 115, 217–224 (2019). [CrossRef]  

14. T. Tao, Q. Chen, S. Feng, Y. Hu, J. Da, and C. Zuo, “High-precision real-time 3D shape measurement using a bi-frequency scheme and multi-view system,” Appl. Opt. 56(13), 3646–3653 (2017). [CrossRef]  

15. H. Yan, L. Yichao, T. Tianyang, Y. Wei, Q. Jiaming, F. Shijie, Z. Chao, and C. Qian, “High dynamic range and fast 3D measurement based on a telecentric stereo-microscopic system,” in Proc. SPIE (2019). 

16. Y. Li, J. Qian, S. Feng, Q. Chen, and C. Zuo, “Composite fringe projection deep learning profilometry for single-shot absolute 3D shape measurement,” Opt. Express 30(3), 3424–3442 (2022). [CrossRef]  

17. J. Qian, S. Feng, Y. Li, T. Tao, J. Han, Q. Chen, and C. Zuo, “Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry,” Opt. Lett. 45(7), 1842–1845 (2020). [CrossRef]  

18. W. Yin, Y. Hu, S. Feng, L. Huang, Q. Kemao, Q. Chen, and C. Zuo, “Single-shot 3D shape measurement using an end-to-end stereo matching network for speckle projection profilometry,” Opt. Express 29(9), 13388–13407 (2021). [CrossRef]  

19. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012). [CrossRef]  

20. S. Feng, Y. Zhang, Q. Chen, C. Zuo, R. Li, and G. Shen, “General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique,” Opt. Lasers Eng. 59, 56–71 (2014). [CrossRef]  

21. Y. Hu, Q. Chen, Y. Liang, S. Feng, T. Tao, and C. Zuo, “Microscopic 3D measurement of shiny surfaces based on a multi-frequency phase-shifting scheme,” Opt. Lasers Eng. 122, 1–7 (2019). [CrossRef]  

22. Z. Wang, “A New Approach for Segmentation and Quantification of Cells or Nanoparticles,” IEEE Trans. Ind. Inf. 12(3), 962–971 (2016). [CrossRef]  

23. A. Isaac, “Scattered data interpolation methods for electronic imaging systems: a survey,” J. Electron Imaging. 11(2), 157–176 (2002). [CrossRef]  

24. Z. Zhang, “Camera calibration: a personal retrospective,” Machine Vis. Appl. 27(7), 963–965 (2016). [CrossRef]  

Supplementary Material (2)

Supplement 1: Supporting content for Fig. S1 and Fig. S2.
Visualization 1: Dynamic measurement of the man's face reconstruction result.


