Optica Publishing Group

High-precision dynamic three-dimensional shape measurement of specular surfaces based on deep learning

Open Access

Abstract

To resolve the trade-off between precision and speed in traditional phase measuring deflectometry (PMD), an orthogonal encoding PMD method based on deep learning is presented in this paper. We demonstrate, for what we believe is the first time, that deep learning techniques can be combined with dynamic PMD to reconstruct high-precision 3D shapes of specular surfaces from single-frame distorted orthogonal fringe patterns, enabling high-quality dynamic measurement of specular objects. The experimental results show that the phase and shape information measured by the proposed method is highly accurate, nearly matching the results obtained by the ten-step phase-shifting method. The proposed method also performs well in dynamic experiments, which is of great significance to the development of the optical measurement and fabrication fields.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

No accurate measurement, no accurate manufacturing. With the accelerating development of optical fabrication industries such as integrated circuits, optical lenses, aerospace and the automotive industry [1–5], the demand for fast, high-precision measurement of specular surfaces is increasing. In advanced manufacturing, real-time 3D shape measurement of dynamic objects is the key to successfully implementing 3D coordinate display and measurement, manufacturing control and online quality inspection [6]. Existing high-precision measuring methods, such as interferometry and contact measurement techniques, still have shortcomings. Contact measurement machines are expensive and slow, and may damage fragile surfaces. Interferometry requires an extremely strict measurement environment and has difficulty measuring free-form surfaces [7]. These problems make it hard for such methods to satisfy the requirements of advanced manufacturing.

Phase measuring deflectometry (PMD) was proposed in 2004 for the measurement of specular surfaces [8]; its principle is similar to that of phase measurement profilometry (PMP) [9]. It offers high speed, a large dynamic range, and relatively high resolution and accuracy, and can be easily and stably used for phase measurement of specular surfaces [1,8,10–13]. It should be noted that PMD obtains gradient (rather than height) information of specular surfaces. Gradient information in two mutually perpendicular directions is needed to reconstruct the 3D shape, so distorted fringe patterns must be captured in two directions. However, when a sinusoidal grating is used as the illuminant, a single distorted fringe pattern yields the gradient distribution in only one direction, which makes it difficult to reconstruct the 3D shape of the tested specular surface from a single shot.

To retrieve gradient information in two directions from single-shot patterns and achieve dynamic measurement, an orthogonal composite coding method [14] has been proposed and applied to PMD. In that work, the phase distributions in the x and y directions are extracted from one distorted composite fringe pattern by the 2D Fourier algorithm [15]. However, both the Fourier algorithm itself and spectrum aliasing lead to phase errors [6,16]. An orthogonal color fringe pattern reflection technique [17] was presented to reduce spectrum aliasing, but its accuracy is still lower than that of phase-shifting PMD and needs to be improved.

Orthogonal fringes enable dynamic measurement of specular surfaces, but traditional demodulation methods struggle to completely separate the spectra, which limits simultaneous improvement in accuracy and speed. Deep learning technology provides a new solution to this difficulty.

Deep learning has been widely used in computational imaging and has produced exciting results since 2017. In the 3D measurement area, deep learning has been successfully applied to phase retrieval [18–23], phase unwrapping [24–27], nonlinear error correction [28,29], depth retrieval [30–32], fringe pattern enhancement [33–35], phase result enhancement [36], uncertainty estimation [19] and so forth. In the field of PMD, Qiao et al. introduced a dual-neural-network method to retrieve the phase of specular objects from single-shot distorted fringe patterns [20]. Fan et al. improved the method to retrieve high-precision phase and modulation information from single-shot distorted fringe patterns [37], nearly reaching the accuracy of the ten-step phase-shifting method. The biggest advantage of Fan's and Qiao's work is higher accuracy compared with traditional single-frame PMD demodulation methods (such as the 2D Fourier transform method). Dou et al. designed a deflectometry U-Net to reconstruct 3D shape information from the slope data in the x and y directions, with lower error than the Southwell algorithm [38]. These studies demonstrate the large potential of deep learning in PMD. However, in fringe analysis, both Qiao's and Fan's networks obtain phase or modulation distributions in only one direction from a single distorted fringe pattern. As mentioned before, phase (or gradient) distributions in two directions are needed to reconstruct the 3D shape of a specular surface. Consequently, at least two distorted fringe patterns in different directions need to be captured, which makes dynamic measurement impossible. Considering the requirement of high-precision dynamic detection of specular surfaces, further research on this technology is necessary and meaningful.

Using composite fringe patterns in deep learning-based 3D surface measurement has been a hot research field in recent years [39]. Existing approaches that combine deep learning with composite encoded patterns often adopt one-dimensional composite fringes, including monochromatic multi-frequency composite gratings and color multi-frequency composite gratings. Li et al. proposed a deep learning scheme using single-frame dual-frequency composite fringe patterns to obtain high-precision unwrapped phase results [40]. Two U-Nets were built, with single-frame dual-frequency composite sinusoidal grating patterns as inputs. The first network predicted a high-precision wrapped phase; the second predicted a (lower-accuracy) unwrapped phase, which was used to assist the wrapped-phase result of the first network in completing phase unwrapping. Qian et al. improved the scheme by replacing the monochromatic composite grating with a color composite grating [41], using one network to simultaneously predict the wrapped phase and fringe orders, thereby carrying out phase unwrapping and obtaining high-precision unwrapped phase information. In these studies, one-dimensional composite gratings provide multi-frequency distorted fringes to the neural network, and more accurate unwrapped phase results are obtained by introducing the idea of temporal phase unwrapping. Composite fringe patterns provide more features for neural networks to learn from, and neural networks show higher accuracy and efficiency than traditional algorithms in demodulating composite fringe patterns.

Inspired by current deep learning research and dynamic PMD methods, in this paper we demonstrate for the first time that deep learning techniques can be combined with dynamic PMD to reconstruct high-precision 3D shapes of specular surfaces from single-frame distorted orthogonal fringe patterns, enabling high-quality dynamic measurement of specular objects. Table 1 compares relevant specular surface measuring methods in terms of efficiency (number of frames required for measurement), precision and dynamic 3D measuring capability. Compared with the traditional dynamic PMD method, the proposed method solves the spectrum aliasing problem and markedly improves precision; compared with other deep learning-based PMD methods, our work achieves phase retrieval in two directions from a single shot of a specular surface for dynamic 3D shape reconstruction. The proposed method realizes 3D measurement of dynamic specular surfaces based on structured light and deep learning for the first time, and achieves better dynamic measurement performance than traditional methods. Experiments demonstrate the ability of the proposed method to measure dynamic specular surfaces, nearly reaching the accuracy of the ten-step phase-shifting method. In Section 2, the principle of PMD and the neural network architecture are introduced. Section 3 presents the design and results of our experiments, which demonstrate the ability of the presented method to achieve high-precision, dynamic measurement of specular surfaces. Section 4 outlines the conclusions.


Table 1. The differences between relevant specular surface measuring methods.

2. Principle

2.1 Monoscopic phase measuring deflectometry system

Figure 1 shows the schematic setup of a monoscopic phase measuring deflectometry system. The system mainly consists of an LCD screen and a CCD camera. Fringe patterns are displayed on the screen; after reflection from the tested surface, the distorted fringe patterns are captured by the camera and demodulated to retrieve the phase and reconstruct the 3D shape of the tested specular surface.


Fig. 1. The schematic setup of monoscopic phase measuring deflectometry system.


When using sinusoidal phase-shifting fringes for slope extraction, the pattern intensity collected by the camera, ${I_n}$, can be mathematically expressed as:

$${I_n}(x,y) = A(x,y) + B(x,y) \cdot \cos [\varphi (x,y) + \frac{{2\pi n}}{N}], $$
where $(x,y)$ is the pixel coordinate of the camera, $A(x,y)$ is the average intensity of the fringe pattern, $B(x,y)$ is the modulation, $\varphi (x,y)$ is the absolute phase distribution, N is the number of phase-shifting steps, $\frac{{2\pi n}}{N}$ is the phase shift, and $n = 0,1,\ldots ,N - 1$.

The wrapped phase $\varphi (x,y)$ can be retrieved from an inverse trigonometric function:

$$\varphi (x,y) = - \arctan \frac{\sum\nolimits_{n = 0}^{N - 1} {I_n}(x,y) \sin (\frac{2\pi n}{N})}{\sum\nolimits_{n = 0}^{N - 1} {I_n}(x,y) \cos (\frac{2\pi n}{N})} = \arctan \frac{M(x,y)}{D(x,y)}, $$
where the numerator and the denominator of the arctangent function can be expressed as:
$$M(x,y) = - \sum\nolimits_{n = 0}^{N - 1} {I_n}(x,y)\sin (\frac{2\pi n}{N}) = \frac{N}{2}B(x,y)\sin \varphi (x,y), $$
$$D(x,y) = \sum\nolimits_{n = 0}^{N - 1} {I_n}(x,y)\cos (\frac{2\pi n}{N}) = \frac{N}{2}B(x,y)\cos \varphi (x,y). $$
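The phase-shifting retrieval above can be sketched in a few lines of NumPy. This is a minimal simulation, not the authors' code: the grid size, average intensity A, modulation B, and the smooth test phase are illustrative values.

```python
import numpy as np

N = 10                                    # phase-shifting steps (as used for the labels in this paper)
x, y = np.meshgrid(np.arange(256), np.arange(256))
A, B = 0.5, 0.4                           # illustrative average intensity and modulation
phi_true = 0.002 * ((x - 128.0)**2 + (y - 128.0)**2) / 50.0  # smooth test phase (stays below pi)

# The N phase-shifted fringe patterns seen by the camera
I = [A + B * np.cos(phi_true + 2 * np.pi * n / N) for n in range(N)]

# Numerator M and denominator D, then the arctangent for the wrapped phase
M = -sum(I[n] * np.sin(2 * np.pi * n / N) for n in range(N))   # (N/2)*B*sin(phi)
D =  sum(I[n] * np.cos(2 * np.pi * n / N) for n in range(N))   # (N/2)*B*cos(phi)
phi_wrapped = np.arctan2(M, D)            # wrapped phase in (-pi, pi]
```

Because the test phase here never exceeds pi, the wrapped result coincides with the true phase; for larger phase ranges a subsequent unwrapping step is required.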

Ghiglia and Romero's least squares method [42] is used for phase unwrapping in our work; the gradient distributions in two directions of the tested specular surface are then acquired through the gradient-phase relation of the fringe reflection technique, and the three-dimensional shape is reconstructed by integrating the gradients [43,44].
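The unweighted form of Ghiglia and Romero's least-squares unwrapping can be implemented with a discrete cosine transform, which diagonalizes the Poisson problem under Neumann boundary conditions. The sketch below is one standard DCT implementation of that idea, not the authors' code:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values to (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_lsq(psi):
    """Unweighted least-squares phase unwrapping (Ghiglia & Romero style) via DCT.
    Returns the unwrapped phase up to an additive constant."""
    M, N = psi.shape
    # Wrapped forward differences (zeros at the far edges give Neumann conditions)
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    dx[:, :-1] = wrap(psi[:, 1:] - psi[:, :-1])
    dy[:-1, :] = wrap(psi[1:, :] - psi[:-1, :])
    # Divergence of the wrapped gradient field (right-hand side of the Poisson equation)
    rho = dx.copy()
    rho[:, 1:] -= dx[:, :-1]
    rho += dy
    rho[1:, :] -= dy[:-1, :]
    # Solve the discrete Poisson equation in the DCT domain
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                      # zero mode = arbitrary constant offset
    return idctn(dctn(rho, norm='ortho') / denom, norm='ortho')
```

For residue-free wrapped phase with per-pixel gradients below pi, this recovers the absolute phase exactly up to a constant.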

The phase-shifting method can be used for high-precision shape measurement of specular surfaces, but the camera must collect at least three frames of distorted fringe images in each direction, which limits the measurement speed. The labels and ground truth in the proposed work are obtained by the phase-shifting method. Phase can also be retrieved from a one-shot distorted pattern by the Fourier algorithm for fast measurement [15,45], but this algorithm offers lower accuracy. More importantly, whether the phase-shifting or the Fourier method is used for 3D reconstruction, distorted fringe patterns must be collected in two directions to obtain the two gradient distributions at the same time. This makes dynamic 3D measurement of specular surfaces difficult.

A distorted orthogonal fringe pattern reflects the slope distributions in two directions from one shot, enabling dynamic measurement of specular surfaces. When the orthogonal fringe pattern is used for slope extraction, the pattern intensity collected by the camera, $J(x,y)$, can be mathematically expressed as:

$$J(x,y) = A(x,y) + B(x,y) \cdot \left\{ {\cos [{\varphi_x}(x,y) + 2\pi \frac{x}{{{p_x}}}] + \cos [{\varphi_y}(x,y) + 2\pi \frac{y}{{{p_y}}}]} \right\}, $$
where ${\varphi _x}(x,y)$ and ${\varphi _y}(x,y)$ are the absolute phase distributions in the x and y directions modulated by the tested surface, respectively, and ${p_x}$ and ${p_y}$ are the fringe periods in the two perpendicular directions. In our experiments, the pattern intensity displayed on the LCD screen is designed as:
$$f(x,y) = 255 \times \left[ {\frac{1}{2} + \frac{1}{4}\cos \left( {2\pi \frac{x}{{{p_x}}}} \right) + \frac{1}{4}\cos \left( {2\pi \frac{y}{{{p_y}}}} \right)} \right].$$
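The screen pattern defined above is straightforward to generate. The sketch below uses the 1920 × 1080 resolution of the screen described in Section 3; the fringe periods px and py are illustrative choices, not the paper's calibrated values:

```python
import numpy as np

W, H = 1920, 1080                          # LCD resolution from Section 3
px = py = 16                               # fringe periods in pixels (assumed values)
x, y = np.meshgrid(np.arange(W), np.arange(H))

# Orthogonal fringe pattern: DC term plus two perpendicular cosines,
# scaled so the intensity spans the 8-bit range [0, 255]
f = 255.0 * (0.5 + 0.25 * np.cos(2 * np.pi * x / px)
                 + 0.25 * np.cos(2 * np.pi * y / py))
screen = np.round(f).astype(np.uint8)      # 8-bit image to display
```

The 1/4 amplitudes keep the sum of both carriers within the displayable range, so neither fringe direction is clipped.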

The two-dimensional windowed Fourier ridges (2D WFR) method can be used to retrieve phase maps in both the horizontal and vertical directions [45]. The gradient distributions in two directions of the tested specular surface can then be acquired through the gradient-phase relation of the fringe reflection technique, and the 3D shape can be reconstructed by integrating the gradients from a single-shot distorted orthogonal fringe pattern. This makes dynamic 3D shape measurement possible. However, both the 2D WFR algorithm and spectrum aliasing introduce phase errors, lowering the accuracy of the shape measurement results.
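To illustrate how a single orthogonal pattern yields a directional phase map, the sketch below demodulates the x-direction carrier with a plain Fourier band-pass. This is a simplified stand-in for the 2D WFR method (which adds local windowing); pattern size, periods, and the Gaussian test phase are all illustrative assumptions, and the residual error near the band edges is exactly the kind of spectrum-leakage limitation discussed in the text:

```python
import numpy as np

H = W = 256
px = py = 16                                       # carrier periods in pixels (assumed)
x, y = np.meshgrid(np.arange(W), np.arange(H))
phix_true = np.exp(-((x - 128.0)**2 + (y - 128.0)**2) / (2 * 40.0**2))  # test phase (rad)
J = 0.5 + 0.25 * (np.cos(phix_true + 2 * np.pi * x / px)
                  + np.cos(2 * np.pi * y / py))    # orthogonal pattern with phi_y = 0

# Shift the +x carrier to zero frequency, then low-pass to isolate it
d = J * np.exp(-1j * 2 * np.pi * x / px)
spec = np.fft.fftshift(np.fft.fft2(d))
cu, cv = H // 2, W // 2
half = (W // px) // 2                              # half-width of the pass band in bins
lp = np.zeros((H, W))
lp[cu - half:cu + half + 1, cv - half:cv + half + 1] = 1.0
phix = np.angle(np.fft.ifft2(np.fft.ifftshift(spec * lp)))  # recovered x-direction phase
```

The pass band must exclude the DC term and the y-direction carrier; when the tested surface distorts the fringes strongly, the spectra overlap and no band choice separates them cleanly, which is the aliasing problem the proposed network avoids.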

2.2 Phase retrieval through deep neural networks

In our work, deep learning replaces the 2D WFR method for phase retrieval from single-shot distorted orthogonal patterns, achieving high-precision dynamic measurement. The flow chart of the proposed method is shown in Fig. 2: the single-shot distorted orthogonal pattern of the tested specular surface is fed to two neural networks with the same structure, and the phase distributions in the x and y directions are retrieved by the two networks separately.


Fig. 2. Flow chart of the proposed method.


It should be mentioned that the networks are trained to predict M and D in Eq. (2), from which the wrapped phase distributions are obtained. Previous studies demonstrate that this strategy effectively improves the accuracy of phase retrieval [18,37].

An improved U-Net is designed for phase retrieval, as shown in Fig. 3. The core architecture consists of a contracting path and an expansive path, which extract and integrate feature information at different scales [46]. The residual structure helps alleviate vanishing and exploding gradients while allowing a deeper network [47]. Depthwise separable convolution blocks decouple the mapping of cross-channel correlations from spatial correlations, increasing the efficiency of the convolution kernel parameters [48]. The proposed research incorporates residual structures and depthwise separable convolution blocks into the traditional U-Net, enabling the network to retrieve high-precision phase information from single-shot distorted orthogonal patterns.
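The parameter saving behind depthwise separable convolution can be made concrete with a quick count. The layer sizes below are hypothetical examples, not the network's actual dimensions:

```python
def standard_conv_params(k, c_in, c_out):
    """Weight count of a k x k standard convolution (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weight count of a k x k depthwise convolution followed by a 1 x 1 pointwise one."""
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 64)          # 3*3*64*64 = 36864
dsc = depthwise_separable_params(3, 64, 64)    # 576 + 4096 = 4672
ratio = dsc / std                              # roughly 1/c_out + 1/k^2
```

For this 3 × 3, 64-to-64-channel example, the separable block uses about one-eighth of the parameters, which is why it improves the efficiency of the kernel parameters in the improved U-Net.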


Fig. 3. Core architecture of the improved U-Net network. (a) Improved U-Net network architecture. (b) Depthwise separable residual convolution block.


3. Experiments and results

A monoscopic PMD system, comprising a 1920 × 1080 LCD screen (PHILIPS 243V5QSB) and a 1600 × 1200 CCD camera (AVT-GT1660C, 8-bit pixel depth), was constructed for data acquisition to verify the effectiveness of the proposed method, as shown in Fig. 4.


Fig. 4. Monoscopic PMD system.


As explained in the principle section, two neural networks with the same structure were built to obtain the phase distributions in the two directions separately. Thirty groups of pictures of three mirrors (two concave mirrors with different radii and a plane mirror) were captured to build a high-quality dataset for training the networks. The curvature radii of the two concave mirrors are 400 mm and 5000 mm, respectively. Each group includes one frame of the distorted orthogonal pattern and 10-step phase-shifting distorted fringe patterns, from which high-precision phase distributions can be retrieved by the phase-shifting method. Each distorted orthogonal fringe pattern corresponds to one phase map in the x direction and one in the y direction. The dataset therefore uses the distorted orthogonal fringe pattern as the input and the corresponding phase (in fact, M and D in Eq. (2)) as the label. The dataset was then expanded to 300 groups by small-angle rotations, and the maps were resized to 480 × 480 for training. Finally, the dataset was divided into training, validation, and test sets in a ratio of 12:2:1. Figure 5 shows a set of data from our experiments.
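The 12:2:1 split of the 300 augmented groups can be sketched as follows; the random seed and shuffling are illustrative, not the authors' actual partition:

```python
import numpy as np

n_total = 300                              # augmented dataset size
parts = np.array([12, 2, 1])               # train : validation : test ratio
sizes = n_total * parts // parts.sum()     # -> 240, 40, 20 groups

rng = np.random.default_rng(0)             # illustrative seed
indices = rng.permutation(n_total)
train_idx, val_idx, test_idx = np.split(indices, np.cumsum(sizes)[:-1])
```

Each index would then select one input orthogonal pattern together with its M and D label maps in both directions.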


Fig. 5. A set of data. (a) The input of U-Net 1 and U-Net 2: one shot distorted orthogonal fringe pattern of the tested mirror. (b) and (c) show the labels of U-Net 1, which correspond to M and D of phase on x direction in Eq. (2) respectively. (d) and (e) show the labels of U-Net 2, which correspond to M and D on y direction in Eq. (2) respectively.


The networks are implemented in Python with TensorFlow 1.14 on an NVIDIA GTX 2080. The loss function is the mean squared error (MSE), and the optimizer is Adam with an initial learning rate of 0.001. After 200 epochs, the learning rate is reduced to 0.0001 for another 200 epochs. Training one network on our deep learning platform takes about 5.7 hours; since the proposed method uses two networks with the same structure to obtain the phase distributions in the two directions, the total training time is about 11.4 hours. After training, predicting the phase distributions in both directions from one distorted orthogonal fringe pattern takes about 0.21 s.
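The training configuration above reduces to a piecewise-constant learning-rate schedule and an MSE objective, sketched here framework-free for clarity (the paper's actual code uses TensorFlow 1.14):

```python
import numpy as np

TOTAL_EPOCHS = 400                         # 200 epochs at each learning rate

def learning_rate(epoch):
    """Piecewise-constant schedule described in the text."""
    return 1e-3 if epoch < 200 else 1e-4

def mse(pred, label):
    """Mean squared error between predicted and label M/D maps."""
    return float(np.mean((pred - label) ** 2))
```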

It should be noticed that the proposed neural networks were used to predict phase distributions in both x and y directions from one frame distorted orthogonal fringe pattern of the tested surface. The prediction results were processed in different ways and then compared with other measuring methods to verify the performance of the proposed method.

The concave mirror sample measured in the subsequent experiments has a curvature radius of 20,000 mm and does not appear in the dataset used to train the neural networks.

3.1 Phase retrieving

To evaluate the performance of the proposed method in phase retrieval, the same single-frame distorted orthogonal fringe pattern was demodulated by the 2D WFR method and by the proposed deep-learning method to obtain unwrapped phase distributions. The phase results were compared with the ground truth obtained by the 10-step phase-shifting method, as shown in Fig. 6. Figures 6(a), 6(b) and 6(c) are the x-direction unwrapped phase distributions of a concave mirror obtained by the phase-shifting method (the ground truth), the 2D WFR method and the deep-learning method, respectively. Figures 6(d) and 6(e) are the error distributions of the unwrapped phases obtained by the 2D WFR method and the deep-learning method relative to the ground truth. The phase distributions were normalized for comparison.


Fig. 6. Single-shot phase retrieving results. (a) The ground truth obtained by 10-step phase-shifting method. (b) The unwrapped phase obtained by 2D WFR method. (c) The unwrapped phase obtained by deep-learning method. (d) The error distribution of the unwrapped phase obtained by 2D WFR method. (e) The error distribution of the unwrapped phase obtained by deep-learning method.


Figure 6 also gives the mean relative error (MRE) of the phase results obtained by the 2D WFR method and the deep-learning method. Compared with the MRE of the 2D WFR method (5.09%), the MRE of our method (0.68%) is reduced to about one-seventh. Since the proposed method and the 2D WFR method use the same input data for phase retrieval, this experiment shows the powerful information acquisition ability of the proposed deep-learning based method compared with traditional computational imaging techniques, which leads to better measuring performance.

3.2 3D shape reconstruction

To further evaluate the performance of the proposed method for shape measurement of specular surfaces, we compared the 3D shape reconstructed by the proposed method with the result obtained by the 10-step phase-shifting method (the ground truth), as shown in Fig. 7. The 3D reconstruction area is enclosed by the red box in Fig. 7(a), and Fig. 7(f) shows the error distribution of the shape obtained by the proposed method. The mean absolute error (MAE) of this 3D shape measurement is 1.3327 × 10−5 mm.


Fig. 7. 3D shape measurement results of specular surfaces. (a) Single-shot distorted orthogonal fringe pattern from a concave mirror. (b) The unwrapped phase obtained by phase-shifting method. (c) The unwrapped phase obtained by deep-learning method. (d) The 3D shape reconstructed from (b). (e) The 3D shape reconstructed from (c). (f) The error distribution of the 3D shape reconstructed by the proposed method.


Compared with the MAE of the 3D shape result in Ref. [20] (0.00075 mm), the MAE of the proposed method is about 2% of that of Qiao's work. This experiment shows that the proposed method not only performs well in single-shot phase retrieval but also achieves high-precision 3D shape measurement of specular surfaces.

3.3 Dynamic measurement experiments

An experiment measuring a vibrating plane mirror was conducted to verify the dynamic measuring ability of the proposed method. The CCD camera in the PMD system captured the distorted orthogonal patterns of the vibrating plane mirror five times per second, and six frames were gathered in about 1.2 seconds. The six distorted orthogonal fringe patterns were then demodulated by the proposed deep-learning method and used to reconstruct the dynamic 3D shapes of the vibrating plane mirror, as shown in Fig. 8.


Fig. 8. Dynamic 3D shape measurement results of a vibrating plane mirror. (a) 3D topography reconstruction results. (b) One-dimensional height results of the tested vibrating plane mirror when x is 15 mm.


Figure 8(a) shows the 3D surface measurement results at the six instants, and Fig. 8(b) shows the one-dimensional height profiles of the vibrating plane mirror at x = 15 mm. The results show that the proposed method can reconstruct the vibration information of specular surfaces, verifying its dynamic measurement ability.

A moving concave mirror (nominal curvature radius 20,000 mm) was also measured by the proposed method. The CCD camera captured the distorted orthogonal patterns of the mirror five times per second; twelve frames were gathered in about 2.4 seconds.

The 12 distorted orthogonal fringe patterns were demodulated by the proposed deep-learning method and used to reconstruct the dynamic shapes of this moving mirror, as shown in Fig. 9. Note that only six 3D shapes (the first, third, fifth, seventh, ninth and eleventh frames) are drawn in Fig. 9(a) for clarity. Figure 9(b) shows the middle-row data of frames 1-12. The results demonstrate that the 3D shape of the moving concave mirror can be reconstructed by the proposed method.


Fig. 9. Dynamic 3D shape measurement results of a concave mirror. (a) 3D topography reconstruction results of frames 1, 3, 5, 7, 9 and 11. (b) Middle rows data of frames 1-12.


It should be noted that the tested mirror moves in a plane. Ideally, its track is parallel to the x-axis; in that case, when the y-axis coordinate is fixed to extract a line of the 3D shape, the corresponding section of the mirror, and hence the measured curvature, should be constant. In our experiments, however, the mirror also moves slightly along the y-axis, so the section of the mirror being sampled is not exactly the same each time. The height change in Fig. 9(b) is caused by this difference in curvature.

Finally, to quantify the accuracy of the proposed dynamic measuring method, we obtained the curvature radius by fitting a sphere to each of the 12 reconstructed surfaces and compared the results with the nominal value of the tested concave mirror. Table 2 shows the measured curvature radii. The average of the 12 measurements is 20,040 mm, the mean deviation is 234 mm, and the mean relative error is 1.17%. This experiment shows the ability of the proposed method to measure dynamic specular surfaces with high accuracy.
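Fitting a sphere to a reconstructed surface reduces to a linear least-squares problem after rewriting the sphere equation in algebraic form. The sketch below demonstrates this on an ideal 20,000 mm concave cap; the aperture and sampling are illustrative assumptions, and the paper does not state which fitting formulation the authors used:

```python
import numpy as np

R_true = 20000.0                                   # nominal curvature radius (mm)
x, y = np.meshgrid(np.linspace(-15, 15, 64), np.linspace(-15, 15, 64))
z = R_true - np.sqrt(R_true**2 - x**2 - y**2)      # ideal concave cap (sag of a few um)

X, Y, Z = x.ravel(), y.ravel(), z.ravel()
# Sphere |p - c|^2 = R^2 rewritten as |p|^2 = 2 p . c + (R^2 - |c|^2),
# which is linear in the center c = (a, b, cz) and the scalar d
A = np.column_stack([2 * X, 2 * Y, 2 * Z, np.ones_like(X)])
rhs = X**2 + Y**2 + Z**2
(a, b, cz, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
R_fit = np.sqrt(a**2 + b**2 + cz**2 + d)           # fitted curvature radius (mm)
```

In practice the surfaces reconstructed from the 12 frames would replace the ideal cap here, and the spread of the fitted radii gives the deviation statistics reported in Table 2.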


Table 2. Measurement results of curvature radius of the tested concave mirror

4. Conclusions

Orthogonal encoding in PMD can provide the gradient distributions in two directions from only one frame of distorted fringe pattern. However, traditional PMD can hardly overcome the spectrum aliasing introduced by the demodulation process, which limits simultaneous improvement in accuracy and speed. Deep learning provides a new solution to this difficulty. In this work, an orthogonal encoding PMD method based on deep learning is presented for dynamic high-precision 3D shape measurement of specular surfaces. The experimental results show that, compared with the phase result obtained by the 2D WFR method, the error of the proposed method is reduced to about one-seventh, nearly reaching the accuracy of the ten-step phase-shifting method; the method also performs well in dynamic measurement experiments. It demonstrates clear advantages for high-precision dynamic 3D measurement of specular surfaces, which is of great significance to the development of the optical measurement and fabrication fields. Since the proposed method needs only a single-frame shot and the demodulation process is convenient, we believe it is also conducive to industrial on-line detection and high-speed measurement schemes.

Funding

National Natural Science Foundation of China (61875033, 62075032).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available but may be obtained from the authors upon reasonable request.

References

1. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Lasers Eng. 107, 247–257 (2018). [CrossRef]  

2. J. Qian, S. Feng, T. Tao, Y. Hu, K. Liu, S. Wu, Q. Chen, and C. Zuo, “High-resolution real-time 360° 3D model reconstruction of a handheld object with fringe projection profilometry,” Opt. Lett. 44(23), 5751–5754 (2019). [CrossRef]  

3. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010). [CrossRef]  

4. J. Leopold, H. Günther, and R. Leopold, “New developments in fast 3D-surface quality control,” Measurement 33(2), 179–187 (2003). [CrossRef]  

5. Y. Wu, H. Yue, J. Yi, M. Li, and Y. Liu, “Phase error analysis and reduction in phase measuring deflectometry,” Opt. Eng. 54(6), 064103 (2015). [CrossRef]  

6. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: A review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]  

7. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000). [CrossRef]  

8. Z. H. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

9. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

10. M. Knauer, J. Kaminski, and G. Hausler, “Phase measuring deflectometry: a new approach to measure specular free form surfaces,” Proc. SPIE 5457, 366–376 (2004). [CrossRef]  

11. R. Höfling, P. Aswendt, and R. Neugebauer, “Phase reflection—a new solution for the detection of shape defects on car body sheets,” Opt. Eng. 39(1), 175 (2000). [CrossRef]  

12. Y. Tang, X. Su, Y. Liu, and H. Jing, “3D shape measurement of the aspheric mirror by advanced phase measuring deflectometry,” Opt. Express 16(19), 15090–15096 (2008). [CrossRef]  

13. Y. Xu, F. Gao, and X. Jiang, “A brief review of the technological advancements of phase measuring deflectometry,” PhotoniX 1(1), 14 (2020). [CrossRef]  

14. L. Huang, C. S. Ng, and A. K. Asundi, “Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry,” Opt. Express 19(13), 12809 (2011). [CrossRef]  

15. K. Qian, “Two-dimensional windowed Fourier transform for fringe pattern analysis: principles, applications, and implementations,” Opt. Lasers Eng. 45(2), 304–317 (2007). [CrossRef]  

16. X. Su and W. Chen, “Fourier transform profilometry: a review,” Opt. Lasers Eng. 35(5), 263–284 (2001). [CrossRef]  

17. Y. Wu, H. Yue, J. Yi, M. Li, and Y. Liu, “Dynamic specular surface measurement based on color encoded fringe reflection technique,” Opt. Eng. 55(2), 024104 (2016). [CrossRef]  

18. S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(2), 1 (2019). [CrossRef]  

19. S. Feng, C. Zuo, Y. Hu, Y. Li, and Q. Chen, “Deep-learning-based fringe-pattern analysis with uncertainty estimation,” Optica 8(12), 1507–1510 (2021). [CrossRef]  

20. G. Qiao, Y. Huang, Y. Song, H. Yue, and Y. Liu, “A single-shot phase retrieval method for phase measuring deflectometry based on deep learning,” Opt. Commun. 476, 126303 (2020). [CrossRef]  

21. R. C. Machineni, G. E. Spoorthi, K. S. Vengala, S. Gorthi, and R. Gorthi, “End-to-end deep learning-based fringe projection framework for 3D profiling of objects,” Comput. Vis. Image Underst. 199, 103023 (2020). [CrossRef]  

22. W. Hu, H. Miao, K. Yan, and Y. Fu, “A Fringe Phase Extraction Method Based on Neural Network,” Sensors 21(5), 1664 (2021). [CrossRef]  

23. T. Yang, Z. Zhang, H. Li, X. Li, and X. Zhou, “Single-shot phase extraction for fringe projection profilometry using deep convolutional generative adversarial network,” Meas. Sci. Technol. 32(1), 015007 (2020). [CrossRef]  

24. W. Yin, Q. Chen, S. Feng, T. Tao, L. Huang, M. Trusiak, A. Asundi, and C. Zuo, “Temporal phase unwrapping using deep learning,” Sci. Rep. 9(1), 20175 (2019). [CrossRef]  

25. G. E. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, “PhaseNet 2.0: Phase Unwrapping of Noisy Data Based on Deep Learning Approach,” IEEE Trans. on Image Process. 29, 4862–4872 (2020). [CrossRef]  

26. J. Liang, J. Zhang, J. Shao, B. Song, B. Yao, and R. Liang, “Deep Convolutional Neural Network Phase Unwrapping for Fringe Projection 3D Imaging,” Sensors 20(13), 3691 (2020). [CrossRef]  

27. P. Yao, S. Gai, and F. Da, “Coding-Net: A multi-purpose neural network for Fringe Projection Profilometry,” Opt. Communications 489, 126887 (2021). [CrossRef]  

28. S. Feng, C. Zuo, L. Zhang, W. Yin, and Q. Chen, “Generalized framework for non-sinusoidal fringe analysis using deep learning,” Photonics Res. 9(6), 1084–1098 (2021). [CrossRef]  

29. Y. Yang, Q. Hou, Y. Li, Z. Cai, X. Liu, J. Xi, and X. Peng, “Phase error compensation based on tree-net using deep learning,” Opt. Lasers Eng. 143, 106628 (2021). [CrossRef]  

30. S. Fan, S. Liu, X. Zhang, H. Huang, W. Liu, and P. Jin, “Unsupervised deep learning for 3D reconstruction with dual-frequency fringe projection profilometry,” Opt. Express 29(20), 32547–32567 (2021). [CrossRef]  

31. H. Nguyen, K.L. Ly, T. Nguyen, Y. Wang, and Z. Wang, “MIMONet: Structured light 3D shape reconstruction by a multi-input multi-output network,” Appl. Opt. 60(17), 5134–5144 (2021). [CrossRef]  

32. H. Nguyen, Y. Wang, and Z. Wang, “Single-shot 3d shape reconstruction using structured light and deep convolutional neural networks,” Sensors 20(13), 3718 (2020). [CrossRef]  

33. J. Shi, X. Zhu, H. Wang, L. Song, and Q. Guo, “Label enhanced and patch based deep learning for phase retrieval from single frame fringe pattern in fringe projection 3D measurement,” Opt. Express 27(20), 28929–28943 (2019). [CrossRef]  

34. H. Yu, X. Chen, Z. Zhang, C. Zuo, Y. Zhang, D. Zheng, and J. Han, “Dynamic 3-D measurement based on fringe-to-fringe transformation using deep learning,” Opt. Express 28(7), 9405–9418 (2020). [CrossRef]  

35. H. Yu, D. Zheng, J. Fu, Y. Zhang, Z. Chao, and J. Han, “Deep learning-based fringe modulation-enhancing method for accurate fringe projection profilometry,” Opt. Express 28(15), 21692–21703 (2020). [CrossRef]  

36. V. Suresh, Y. Zheng, and B. Li, “PMENet: phase map enhancement for Fourier transform profilometry using deep learning,” Meas. Sci. Technol. 32(10), 105001 (2021). [CrossRef]  

37. L. Fan, Z. Wu, J. Wang, C. Wei, H. Yue, and Y. Liu, “Deep learning-based Phase Measuring Deflectometry for single-shot 3D shape measurement and defect detection of specular objects,” Opt. Express 30(15), 26504–26518 (2022). [CrossRef]  

38. J. Dou, D. Wang, Q. Yu, M. Kong, L. Liu, X. Xu, and R. Liang, “Deep-learning-based deflectometry for freeform surface measurement,” Opt. Lett. 47(1), 78–81 (2022). [CrossRef]  

39. C. Zuo, J. Qian, S. Feng, W. Yin, Y. Li, and P. Fan, “Deep learning in optical metrology: a review,” Light-Sci Appl. 11(1), 39 (2022). [CrossRef]  

40. Y. Li, J. Qian, S. Feng, Q. Chen, and C. Zuo, “Single-shot spatial frequency multiplex fringe pattern for phase unwrapping using deep learning,” Optics Frontier Online 2020: Optics Imaging and Display 11571, 314–319 (2020). [CrossRef]  

41. J. Qian, S. Feng, Y. Li, T. Tao, J. Han, Q. Chen, and C. Zuo, “Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry,” Opt. Lett. 45(7), 1842–1845 (2020). [CrossRef]  

42. D. C. Ghiglia and L. A. Romero, “Direct phase estimation from phase differences using fast elliptic partial differential equation solvers,” Opt. Lett. 14(20), 1107–1109 (1989). [CrossRef]  

43. Z. Zhang, “A Flexible New Technique for Camera Calibration, Pattern Analysis and Machine Intelligence,” IEEE Transactions on 22(11), 1330–1334 (2000). [CrossRef]  

44. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70(8), 998–1006 (1980). [CrossRef]  

45. K. Qian, “Windowed Fourier transform for fringe pattern analysis: addendum,” Appl. Opt. 43(17), 3472–3473 (2004). [CrossRef]  

46. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention (Springer), 234–241 (2015).

47. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

48. F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 1800-1807. [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (9)

Fig. 1. The schematic setup of the monoscopic phase measuring deflectometry system.

Fig. 2. Flow chart of the proposed method.

Fig. 3. Core architecture of the improved U-Net network. (a) Improved U-Net network architecture. (b) Depthwise separable residual convolution block.

Fig. 4. Monoscopic PMD system.

Fig. 5. A set of data. (a) The input of U-Net 1 and U-Net 2: a single-shot distorted orthogonal fringe pattern of the tested mirror. (b) and (c) The labels of U-Net 1, corresponding to M and D of the phase in the x direction in Eq. (2). (d) and (e) The labels of U-Net 2, corresponding to M and D in the y direction in Eq. (2).

Fig. 6. Single-shot phase-retrieval results. (a) The ground truth obtained by the 10-step phase-shifting method. (b) The unwrapped phase obtained by the 2D WFR method. (c) The unwrapped phase obtained by the deep-learning method. (d) The error distribution of the unwrapped phase obtained by the 2D WFR method. (e) The error distribution of the unwrapped phase obtained by the deep-learning method.

Fig. 7. 3D shape measurement results of specular surfaces. (a) Single-shot distorted orthogonal fringe pattern from a concave mirror. (b) The unwrapped phase obtained by the phase-shifting method. (c) The unwrapped phase obtained by the deep-learning method. (d) The 3D shape reconstructed from (b). (e) The 3D shape reconstructed from (c). (f) The error distribution of the 3D shape reconstructed by the proposed method.

Fig. 8. Dynamic 3D shape measurement results of a vibrating plane mirror. (a) 3D topography reconstruction results. (b) One-dimensional height profiles of the tested vibrating plane mirror at x = 15 mm.

Fig. 9. Dynamic 3D shape measurement results of a concave mirror. (a) 3D topography reconstruction results of frames 1, 3, 5, 7, 9, and 11. (b) Middle-row data of frames 1–12.

Tables (2)

Table 1. The differences between relevant specular surface measuring methods.

Table 2. Measurement results of the curvature radius of the tested concave mirror.

Equations (6)

$$I_n(x,y) = A(x,y) + B(x,y)\cos\!\left[\varphi(x,y) + \frac{2\pi n}{N}\right], \tag{1}$$

$$\varphi(x,y) = \arctan\frac{\sum_{n=0}^{N-1} I_n(x,y)\sin\!\left(\frac{2\pi n}{N}\right)}{\sum_{n=0}^{N-1} I_n(x,y)\cos\!\left(\frac{2\pi n}{N}\right)} = \arctan\frac{M(x,y)}{D(x,y)}, \tag{2}$$

$$M(x,y) = \sum_{n=0}^{N-1} I_n(x,y)\sin\!\left(\frac{2\pi n}{N}\right) = \frac{N}{2}B(x,y)\sin\varphi(x,y), \tag{3}$$

$$D(x,y) = \sum_{n=0}^{N-1} I_n(x,y)\cos\!\left(\frac{2\pi n}{N}\right) = \frac{N}{2}B(x,y)\cos\varphi(x,y). \tag{4}$$

$$J(x,y) = A(x,y) + B(x,y)\left\{\cos\!\left[\varphi_x(x,y) + \frac{2\pi x}{p_x}\right] + \cos\!\left[\varphi_y(x,y) + \frac{2\pi y}{p_y}\right]\right\}, \tag{5}$$

$$f(x,y) = 255\times\left[\frac{1}{2} + \frac{1}{4}\cos\!\left(\frac{2\pi x}{p_x}\right) + \frac{1}{4}\cos\!\left(\frac{2\pi y}{p_y}\right)\right]. \tag{6}$$
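The equations above lend themselves to a short numerical sketch. The following NumPy snippet, with image size, fringe periods, background A, and modulation B all chosen as illustrative values rather than the paper's parameters, generates an orthogonal fringe pattern as in Eq. (6) and recovers the wrapped phase from N phase-shifted patterns per Eqs. (1)-(4):

```python
import numpy as np

# Assumed demo parameters: image size, number of phase steps,
# background A, modulation B (not taken from the paper).
H, W, N = 64, 64, 10
A, B = 128.0, 100.0
y, x = np.mgrid[0:H, 0:W]          # pixel coordinates
phi_true = 2 * np.pi * x / 32.0    # a known phase ramp for the demo

# Eq. (1): simulate N phase-shifted fringe patterns I_n.
I = np.stack([A + B * np.cos(phi_true + 2 * np.pi * n / N)
              for n in range(N)])

# Eqs. (3) and (4): numerator M and denominator D.
n = np.arange(N).reshape(-1, 1, 1)
M = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
D = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)

# Eq. (2): wrapped phase via the four-quadrant arctangent. With the
# +2*pi*n/N shift used above, flipping the sign of M recovers phi
# directly (phase-shift sign conventions vary across texts).
phi_wrapped = np.arctan2(-M, D)

# Eq. (6): orthogonal fringe pattern, with assumed periods px, py.
px, py = 16.0, 20.0
f = 255.0 * (0.5 + 0.25 * np.cos(2 * np.pi * x / px)
                 + 0.25 * np.cos(2 * np.pi * y / py))
```

The modulation identity M^2 + D^2 = (N B / 2)^2 from Eqs. (3)-(4) provides a quick self-check, and f stays within the 8-bit display range [0, 255] by construction.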