Phase unwrapping in optical metrology via denoised and convolutional segmentation networks


Abstract

Interferometry is commonly used to obtain the phase information of an object in optical metrology. The measured phase, however, is wrapped and therefore subject to a 2π ambiguity; phase unwrapping is essential to remove the ambiguity and recover the correct phase. Conventional phase unwrapping approaches are time-consuming and noise sensitive. To address these issues, we propose a new approach that recasts phase unwrapping as a multi-class classification problem and introduces an efficient segmentation network to identify the classes. Moreover, a noise-to-noise denoising network is integrated to preprocess noisy wrapped phases. We demonstrate the proposed method on simulated data and in a real interferometric system.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Phase measurements are of considerable interest for many applications such as optical metrology, medical diagnostics, and 3D imaging. However, because of the inverse trigonometric computation involved, the measured phase is wrapped onto the range (−π, +π] and does not reflect the true phase values. To address this problem, many phase unwrapping approaches [1–11] have been proposed. Goldstein’s branch-cut algorithm [1] first identifies the discontinuities arising at residues, that is, locations where the closed-path integral of the phase gradient is nonzero. Quality-guided algorithms [2–4] rely on a quality map to guide the integration path rather than identifying residues. Other approaches [5, 6] minimize the discontinuities in the unwrapped surface. In methods [7, 8], phase unwrapping is formulated in a generalized minimum-norm sense that minimizes the difference of local derivatives between the true and measured phases. Unwrapping algorithms [9–11] based on the transport of intensity equation (TIE) obtain the absolute phase map directly from intensity information; the TIE unwrapping method is easy to implement and path independent.

Recently, deep learning has been successfully applied in fields such as image classification and restoration, and many models based on deep convolutional neural networks (CNNs) achieve promising results. This technology has also been adopted in optical metrology. A multilayer perceptron neural network [12] was proposed to identify phase discontinuities. A residual neural network was adopted in [13] to address phase unwrapping and was trained on simulated phases with steep spatial gradients; however, the unwrapped phases shown in [13] were incorrect in some regions. In 2018, we filed a provisional patent application on phase unwrapping using a convolutional segmentation network and a denoising network [14]. A similar method using a convolutional segmentation network was demonstrated, with simulated data only, in 2019 [15]. Our work differs from [15] as follows: (1) a smaller filter size, dynamic filter numbers, and deeper layers make a wider network and increase its non-linearity; (2) since the unwrapped phase is reconstructed by adding an integral multiple n of 2π to the wrapped phase, a denoising network for noisy wrapped phases is integrated into our phase unwrapping pipeline.

In this paper, we propose a new CNN-based approach to phase unwrapping, in which the task is recast as a multi-class classification problem. A noisy wrapped phase is first denoised by our proposed noise-to-noise denoising network. It is then fed into a segmentation network to generate the integral multiples, and a post-processing step corrects them. Finally, the denoised wrapped phase and the corrected integral multiples are combined to generate the unwrapped phase. Simulated data and experiments with a real interferometric system demonstrate the effectiveness of the proposed method.

2. Phase unwrapping via a convolutional segmentation network

The phase unwrapping problem is to add an integral multiple n of 2π at each pixel of the wrapped phase φw to obtain the unwrapped phase φunw:

$$\varphi_{unw}(x, y) = \varphi_w(x, y) + 2\pi \cdot n(x, y), \tag{1}$$
where $(x, y)$ is a pixel position, $\varphi_w, \varphi_{unw} \in \mathbb{R}^{H \times W}$, and $n \in \{0, \pm 1, \pm 2, \pm 3, \ldots\}$. As formulated in Eq. (1), phase unwrapping aims to determine the integral multiple n. From another perspective, phase unwrapping is a multi-class classification problem in which each integral multiple represents a class. Thus, we can use a classification network to solve it. To identify the classes accurately, we introduce an efficient segmentation network [16] originally designed for image semantic segmentation. For our task, the network is designed as shown in Fig. 1.
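As a concrete illustration of Eq. (1), the following minimal sketch (Python with NumPy assumed; the function names are ours, not from the paper) shows how a phase map is wrapped onto (−π, +π] and how the ground-truth class label n is recovered from a wrapped/unwrapped pair:

import numpy as np

def wrap(phi):
    """Wrap a phase map onto (-pi, +pi] via the complex exponential."""
    return np.angle(np.exp(1j * phi))

def integral_multiple(phi_unw, phi_w):
    """Recover n(x, y) of Eq. (1); each integer value is one class label."""
    return np.round((phi_unw - phi_w) / (2 * np.pi)).astype(np.int64)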

Fig. 1 The network architecture for phase unwrapping.

The goal of phase unwrapping is to determine the integral multiple n and then add 2πn at each pixel of the wrapped phase. Thus, our network takes the wrapped phase as input and outputs the integral multiple. The network includes an encoder path and a decoder path, followed by a final pixel-wise classification layer. A constant kernel size of 3-by-3 is used over all convolutional layers. Each encoder layer in the encoder path consists of a convolution (Conv), followed by batch normalization (BN) [17] and a rectified linear unit (ReLU, f(x) = max(0, x)) [18]. Max-pooling with a 2-by-2 window and stride 2 is then used to reduce the dimension of the feature maps and achieve translation invariance over small spatial shifts; the max-pooling indices are memorized for the subsequent up-sampling operations. The encoder path consists of 13 encoder layers, which correspond to the first 13 convolutional layers of the VGG16 network [19] designed for object classification. Each encoder layer has a corresponding decoder layer, so the decoder path has 13 decoder layers. Before feature maps go through a decoder layer, they are up-sampled using the memorized max-pooling indices from the corresponding encoder location. Finally, the high-dimensional feature representation is fed into a convolutional layer and a soft-max classifier to produce an N-channel probability image, where N is the number of classes. The output $\hat{n}$ at each pixel corresponds to the class with maximum probability.

Because of the soft-max operation, the output $\hat{n}$ is nonnegative, so there is a constant offset C between $\hat{n}$ and n. In practice, C can be chosen such that C ≥ −min(n). To train the network, the number of categories N must be fixed; it can be set as N = max(n) + C + 1. Too small an N limits the range of fringe patterns that can be handled, while too large an N requires more training data to obtain a well-trained network. After the integral multiple $\hat{n}$ is obtained, the unwrapped phase can be reconstructed as:

$$\hat{\varphi}_{unw}(x, y) = \varphi_w(x, y) + 2\pi \cdot \bigl(\hat{n}(x, y) - C\bigr). \tag{2}$$
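A minimal PyTorch sketch of the encoder-decoder network described above follows; PyTorch is an assumption (the paper does not name a framework), only two of the 13 encoder/decoder stages are shown, and the channel widths are illustrative:

import torch
import torch.nn as nn

def conv_bn_relu(cin, cout):
    # Conv -> BN -> ReLU, the basic unit of each encoder/decoder layer.
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class UnwrapSegNet(nn.Module):
    """Truncated SegNet-style sketch: two encoder and two decoder stages
    instead of the paper's 13, sharing max-pooling indices for up-sampling."""
    def __init__(self, num_classes=29):            # N = max(n) + C + 1
        super().__init__()
        self.enc1 = conv_bn_relu(1, 64)
        self.enc2 = conv_bn_relu(64, 128)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec2 = conv_bn_relu(128, 64)
        self.dec1 = conv_bn_relu(64, 64)
        self.classify = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):                          # x: (B, 1, H, W) wrapped phase
        x = self.enc1(x)
        x, idx1 = self.pool(x)                     # memorize pooling indices
        x = self.enc2(x)
        x, idx2 = self.pool(x)
        x = self.unpool(x, idx2)                   # up-sample with stored indices
        x = self.dec2(x)
        x = self.unpool(x, idx1)
        x = self.dec1(x)
        return self.classify(x)                    # (B, N, H, W) class scores

if __name__ == "__main__":
    logits = UnwrapSegNet()(torch.randn(1, 1, 400, 400))
    print(logits.shape)                            # torch.Size([1, 29, 400, 400])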

For training, given a dataset $\{\varphi_w^{(k)}, \varphi_{unw}^{(k)}\}_{k=1}^{M}$, where M is the number of training pairs, the network aims to learn the best end-to-end mapping F(·) that minimizes the difference between the prediction $\hat{n}_k$, obtained by applying the soft-max to $p_k = F(\varphi_w^{(k)})$, and the ground truth $n_k = (\varphi_{unw}^{(k)} - \varphi_w^{(k)})/(2\pi) + C$. The item $p_k \in \mathbb{R}^{H \times W \times N}$ represents the intermediate output before the soft-max operation, from which the class with maximum probability is taken at each pixel. To measure the difference, the cross-entropy loss [20] is used, since it is the common choice in image classification. Thus, the loss function can be formulated as:

$$\mathrm{loss} = -\frac{1}{M} \sum_{k=1}^{M} \sum_{x, y} \log\bigl(p_{k, t}(x, y)\bigr), \tag{3}$$
where $p_{k,t}$ is the predicted probability that pixel $(x, y)$ belongs to its ground-truth class $t = n_k(x, y)$.
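A minimal PyTorch sketch of Eq. (3), assuming PyTorch as the framework; the built-in cross entropy matches Eq. (3) up to the pixel-count normalization:

import torch
import torch.nn.functional as F

def unwrap_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Per-pixel cross entropy of Eq. (3). logits are p_k, shape (M, N, H, W);
    target holds the ground-truth classes n_k, shape (M, H, W).
    F.cross_entropy applies log-softmax internally and averages over all
    pixels of the mini-batch."""
    return F.cross_entropy(logits, target)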

To train the network, we built datasets (simulated and real) consisting of disjoint training and testing sets. We initialized the weights by the method described in [21] and used the Adam optimizer [22] with a mini-batch size of 3. The learning rate was initially set to 0.0001 and exponentially decayed at a rate of 0.99, and the maximum number of epochs was set to 1000. For quantitative evaluation, we calculated the pixel classification accuracy on the testing set. Denoting by $s_{ij}$ the number of pixels in class i predicted to belong to class j (including $s_{ii}$), and by $q_i = \sum_j s_{ij}$ the total number of pixels in class i, the pixel classification accuracy is $\sum_i s_{ii} / \sum_i q_i$. We also calculated the root mean square error (RMSE, $\sqrt{\|\hat{\varphi}_{unw} - \varphi_{unw}\|_2^2 / (H \times W)}$) between the reconstructed unwrapped phase $\hat{\varphi}_{unw}$ and the ground truth $\varphi_{unw}$.
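Both evaluation metrics are straightforward to compute; a minimal sketch (NumPy assumed):

import numpy as np

def pixel_accuracy(pred, truth):
    """sum_i s_ii / sum_i q_i: the fraction of correctly classified pixels."""
    return float(np.mean(pred == truth))

def rmse(phi_hat, phi):
    """sqrt(||phi_hat - phi||^2 / (H * W))."""
    return float(np.sqrt(np.mean((phi_hat - phi) ** 2)))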

To generate the simulated dataset, a list of Zernike coefficients of up to 35 terms was randomly generated to represent the surface φunw. The interferograms were generated by Eq. (4):

$$I_a = A + B \cos\bigl(\varphi_{unw} + a \cdot \pi/2\bigr) + \mathrm{noise}, \tag{4}$$
where A and B are constants, a ∈ {0, 1, 2, 3} indexes the phase-shifted fringe patterns, and the noise term is zero for clean data. A four-step phase-shifting algorithm was then used to calculate the wrapped phase:
$$\varphi_w = \arctan\left(\frac{I_3 - I_1}{I_0 - I_2}\right). \tag{5}$$
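A minimal sketch (NumPy assumed) of Eqs. (4) and (5): in the paper the surface comes from random Zernike coefficients, while a simple smooth polynomial stands in for it here, and the constants A and B are illustrative:

import numpy as np

H = W = 400
y, x = np.mgrid[-1:1:1j * H, -1:1:1j * W]
phi_unw = 12.0 * (x**2 + y**2) + 5.0 * x * y   # stand-in for a Zernike surface

A, B = 0.5, 0.5                                 # illustrative constants
# Eq. (4) with zero noise: four phase-shifted interferograms.
I = [A + B * np.cos(phi_unw + a * np.pi / 2) for a in range(4)]

# Eq. (5): four-step phase shifting; arctan2 wraps the result onto (-pi, +pi].
phi_w = np.arctan2(I[3] - I[1], I[0] - I[2])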

For the experiment on simulated clean data, 10000 pairs of data (wrapped and unwrapped phases) of size 400-by-400 were generated; 9500 pairs were used for training and the rest for testing. The class number N and the constant C were set to 29 and 7, respectively. The average pixel classification accuracy and average RMSE on the testing data are 94.62% and 0.0022, respectively. The experimental result on simulated clean data is shown in Fig. 2. The wrapped phase in Fig. 2(a) was fed into the network as input; the network output and the reconstructed unwrapped phase are shown in Figs. 2(b) and 2(d), respectively. Compared with the ground truth, the network produces accurate results. The difference between the reconstructed unwrapped phase and the corresponding ground truth is shown in Fig. 2(f); it is small and the majority of pixel values are zero, which demonstrates that our network works well on clean data for phase unwrapping.

Fig. 2 Phase unwrapping results on simulated clean data. (a) Wrapped phase (input), (b) output (integral multiple $\hat{n}$), (c) ground truth (integral multiple n), (d) reconstructed unwrapped phase, (e) ground truth (unwrapped phase), (f) difference.

However, as shown in Figs. 3(c1) and 3(c2), there are corrupted regions in the unwrapped phases due to misclassification. To improve the classification accuracy, a post-processing operation was applied to the network output (the integral multiple $\hat{n}$). We first identified phase discontinuity locations, since they play a key role in phase unwrapping [23]. Fortunately, this identification can also be done by our network, because the phase discontinuity task can be treated as a two-class classification problem: we simply changed the final classification layer to predict two classes and trained the network with binary images of phase discontinuities as ground truth. The phase discontinuity locations of the wrapped phases in Figs. 3(a1) and 3(a2) are shown in Figs. 4(a) and 4(c), respectively. Next, connected regions were labeled based on the extracted phase discontinuities, as shown in Figs. 4(b) and 4(d). Finally, based on the labeling results, we corrected the integral multiple according to the principle that the same label corresponds to the same integral multiple (sketched below). The post-processed unwrapped phases are shown in Figs. 3(d1) and 3(d2) and more closely resemble the ground truth (Figs. 3(b1) and 3(b2)). The last row of Fig. 3 shows the differences: Figs. 3(e1) and 3(e2) are the differences between the reconstructed unwrapped phases and the ground truth, and Figs. 3(f1) and 3(f2) are the differences between the post-processed unwrapped phases and the ground truth. The difference is much smaller after post-processing. Moreover, after post-processing, the average pixel classification accuracy and average RMSE on the testing data are 96.88% and 0.0014, respectively, which demonstrates that the post-processing benefits the unwrapping precision.
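A minimal sketch (SciPy assumed) of this correction step; the majority vote within each connected region is our plausible reading of "same label corresponds to same value", not a detail stated in the paper:

import numpy as np
from scipy import ndimage

def correct_multiples(n_hat, discontinuity):
    """n_hat: predicted integer multiples (H, W); discontinuity: binary map,
    1 at phase-jump pixels. Enforce one integer multiple per connected region."""
    labels, num = ndimage.label(discontinuity == 0)   # regions between jumps
    corrected = n_hat.copy()
    for lab in range(1, num + 1):
        mask = labels == lab
        vals, counts = np.unique(n_hat[mask], return_counts=True)
        corrected[mask] = vals[np.argmax(counts)]     # majority value wins
    return corrected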

Fig. 3 Phase unwrapping results with post-processing. From top to bottom: wrapped phases ((a1), (a2)), ground truth (unwrapped phases, (b1), (b2)), reconstructed unwrapped phases ((c1), (c2)), post-processed unwrapped phases ((d1), (d2)), and differences ((e) = (c) - (b), (f) = (d) - (b)).

Fig. 4 Phase discontinuity extraction ((a), (c)) and connected-region labeling ((b), (d)).

3. Phase unwrapping via convolutional segmentation and denoising networks

To validate the effectiveness of the proposed approach, an experiment was carried out on noisy wrapped phases. Since our method reconstructs the unwrapped phase by adding an integral multiple n of 2π to the wrapped phase, denoising the noisy wrapped phase is necessary. In practice, clean data are difficult to obtain.

To address this problem, a noise-to-noise denoising strategy [24] was integrated into a modified version of U-Net [25]. The network architecture for denoising the noisy wrapped phase is shown in Fig. 5. Since image details are very important for our task, we removed the pooling layers and up-sampling convolutions. In the original U-Net, the output image is smaller than the input, which is unsuitable for our task; thus, zero padding is used to keep the sizes of all feature maps (including the output image) the same. A constant kernel size of 3-by-3 and a constant filter number of 64 were chosen for all convolutional layers. Besides, we used two observations of each noisy wrapped phase (two images of the same fringe pattern with different random noise) to train the network: observation 1 was treated as the input and observation 2 was used to calculate the loss. We generated two observations of the noisy wrapped phase for each group, and 500 × 2 groups of data were cropped into small patches of size 40-by-40 to train the network.
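A minimal PyTorch sketch of the noise-to-noise training loop; DenoiseNet is a stand-in for the pooling-free U-Net variant described above, and the L2 loss is an assumption (the paper does not state its denoising loss explicitly):

import torch.nn as nn
import torch.nn.functional as F

class DenoiseNet(nn.Module):
    """Pooling-free stand-in for the modified U-Net of Fig. 5: 3x3 convolutions
    with zero padding keep every feature map (and the output) at input size."""
    def __init__(self, depth=6, width=64):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, 1, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, obs1, obs2):
    """One noise-to-noise step: obs1 and obs2 are two independently noisy
    observations of the same wrapped phase, e.g. (B, 1, 40, 40) patches."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(obs1), obs2)   # L2 loss assumed, not paper-stated
    loss.backward()
    optimizer.step()
    return loss.item()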

Fig. 5 The network architecture for denoising the noisy wrapped phase.

The denoising result on a wrapped phase is shown in Fig. 6. The noisy wrapped phase in Fig. 6(a) was generated as follows: a combination of Poisson and salt-and-pepper random noise was added to the interferograms Ia, and the noisy wrapped phase was then obtained according to Eq. (5). The signal-to-noise ratio (SNR) of the noisy wrapped phase is 4.0 dB. The denoised wrapped phase and the corresponding ground truth are shown in Figs. 6(b) and 6(c), respectively. The proposed denoising network reconstructs an almost clean result even for a badly corrupted wrapped phase; moreover, this result was obtained using only noisy data (no clean wrapped phases) to train the network. After we obtained the denoised wrapped phase, it was fed into our unwrapping network to produce the integral multiple $\hat{n}$. Then, because of the noisy contours of the wrapped phase, the post-processing and a smoothness constraint were used to reconstruct the unwrapped phase, which is shown in Fig. 6(d). The difference between the reconstructed unwrapped phase and the corresponding ground truth (Fig. 6(e)) is shown in Fig. 6(f) and is very small. More badly corrupted data (SNR = 0.6 dB) were also used to test our network, with the corresponding results shown in Fig. 7. By integrating the noise-to-noise denoising network, our unwrapping network still works well on noisy wrapped phases. The average pixel classification accuracy and average RMSE on the testing noisy data are 90.04% and 0.0034, respectively.
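A minimal sketch (NumPy assumed) of how such noisy test data can be produced: Poisson noise followed by salt-and-pepper corruption applied to each interferogram of Eq. (4). The noise levels here are illustrative, not the paper's exact settings:

import numpy as np

rng = np.random.default_rng(0)

def corrupt(I, peak=30.0, sp_ratio=0.02):
    """Apply Poisson and then salt-and-pepper noise to an interferogram I."""
    noisy = rng.poisson(np.clip(I, 0.0, None) * peak) / peak  # Poisson noise
    u = rng.random(I.shape)
    noisy[u < sp_ratio / 2] = I.min()                         # "pepper"
    noisy[u > 1 - sp_ratio / 2] = I.max()                     # "salt"
    return noisy

The noisy wrapped phase then follows from Eq. (5) applied to the four corrupted interferograms.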

Fig. 6 Unwrapping result on simulated noisy data (SNR = 4.0 dB). (a) Noisy wrapped phase, (b) denoised wrapped phase, (c) ground truth (wrapped phase), (d) unwrapped phase, (e) ground truth (unwrapped phase), (f) difference.

Fig. 7 Unwrapping result on more badly corrupted data (SNR = 0.6 dB). (a) Noisy wrapped phase, (b) denoised wrapped phase, (c) ground truth (wrapped phase), (d) unwrapped phase, (e) ground truth (unwrapped phase), (f) difference.

Moreover, we compared our proposed method with Goldstein's branch-cut algorithm [1] and the quality-guided path-following method [26] on noisy wrapped phases. For the noisy wrapped phases shown in Figs. 6(a) and 7(a), the unwrapping results of these two methods are shown in Fig. 8. The results in Figs. 8(a) and 8(e) are unwrapped by Goldstein's branch-cut algorithm, with the corresponding differences in Figs. 8(b) and 8(f); the unwrapped phases from the quality-guided path-following method are shown in Figs. 8(c) and 8(g), with the differences in Figs. 8(d) and 8(h). Compared with our results (Figs. 6(d) and 7(d)), our method produces the best results with the smallest differences. We also compared the running time and RMSE, as shown in Table 1. The running time was evaluated on a machine with a 3.4 GHz Intel(R) Core(TM) i3-2130 CPU (8 GB RAM); we also evaluated the running time of our network on a machine with a 3.5 GHz CPU and a Titan X GPU, with an image size of 400-by-400. In Table 1, the value to the left of "/" is the RMSE of the unwrapped phase for Fig. 6(a), and the value to the right is for Fig. 7(a). From Table 1, our network achieves the highest precision and the fastest running speed. Furthermore, the running time on the GPU makes our network suitable for real-time applications.

Fig. 8 Unwrapping results of other methods. Unwrapped phases (a) and (e) are produced by Goldstein's branch-cut algorithm, (c) and (g) by the quality-guided path-following method; (b), (d), (f), and (h) are the differences.

Table 1. RMSE results and running time of the different methods.

4. Experimental demonstration

Finally, we tested the network on real data. A HeNe laser with a wavelength of 632.8 nm was used as the light source. An Alpao deformable mirror (DM) with 97 actuators and a 13.5 mm aperture diameter was placed in the test arm to generate different fringe patterns, as shown in Fig. 9. To achieve snapshot measurement, a pixelated polarization camera (PolarCam, 4D Technology, Inc.) was used to capture the four interferograms simultaneously [27]. Up to 15 terms of Zernike coefficients were randomly generated and applied to the DM during the experiment. Because there was no ground truth for the unwrapped phase, the unwrapped phases reconstructed by a modified Goldstein's algorithm combined with denoising of the wrapped phase (MG) were used as the ground truth to train the network. In total, 1500 groups of data (wrapped and unwrapped phases) were obtained; 1300 groups were used for training and the rest for testing.

Fig. 9 Experimental setup to demonstrate the phase unwrapping method with denoised and convolutional segmentation networks. L1: collimating lens; P: polarizer; PBS: polarizing beam splitter; QWP1, QWP2, QWP3: quarter-wave plates; DM: deformable mirror; L2: imaging lens.

As shown in Fig. 10, the first column shows wrapped phases obtained with our setup. The second and third columns show the unwrapped phases reconstructed by our network and by MG, respectively, and the differences between the two are shown in the last column. From these results, we can see that our network also works well on real data.

Fig. 10 Unwrapping results on real data. From left to right: wrapped phases (input, (a), (e)), reconstructed unwrapped phases by our network ((b), (f)) and by MG ((c), (g)), and differences ((d), (h)).

5. Conclusion

In conclusion, we propose a new CNN-based approach for phase unwrapping. The phase unwrapping problem is recast as a multi-class classification problem, and an efficient segmentation network is introduced to identify the classes. The same network can also be used to identify phase discontinuity locations, and a post-processing operation is adopted to improve performance. Moreover, a noise-to-noise denoising network is integrated to preprocess noisy wrapped phases. Since our network is fully convolutional, it also works on image sizes different from the training image size. Simulated and experimental data demonstrate the effectiveness of our approach. Our current networks were trained with, and work well on, continuous wrapped phase maps, which are the typical case in interferometric optical metrology. We are working on more complex cases with discontinuous wrapped phases and will report the new approaches in the near future.

Funding

China Scholarship Council (CSC) (201704910730); National Science Foundation (NSF) (1455630).

References

1. R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: Two-dimensional phase unwrapping,” Radio Sci. 23, 713–720 (1988).

2. T. J. Flynn, “Consistent 2-D phase unwrapping guided by a quality map,” in Geoscience and Remote Sensing Symposium, vol. 4 (IEEE, 1996), pp. 2057–2059.

3. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50, 6214–6224 (2011).

4. H. Zhong, J. Tang, S. Zhang, and M. Chen, “An improved quality-guided phase-unwrapping algorithm based on priority queue,” IEEE Geosci. Remote Sens. Lett. 8, 364–368 (2011).

5. T. J. Flynn, “Two-dimensional phase unwrapping with minimum weighted discontinuity,” J. Opt. Soc. Am. A 14, 2692–2701 (1997).

6. J. Xu, D. An, X. Huang, and P. Yi, “An efficient minimum-discontinuity phase-unwrapping method,” IEEE Geosci. Remote Sens. Lett. 13, 666–670 (2016).

7. D. C. Ghiglia and L. A. Romero, “Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods,” J. Opt. Soc. Am. A 11, 107–117 (1994).

8. R. Juarez-Salazar, C. Robledo-Sanchez, and F. Guerrero-Sanchez, “Phase-unwrapping algorithm by a rounding-least-squares approach,” Opt. Eng. 53, 024102 (2014).

9. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Direct continuous phase demodulation in digital holography with use of the transport-of-intensity equation,” Opt. Commun. 309, 221–226 (2013).

10. N. Pandey, A. Ghosh, and K. Khare, “Two-dimensional phase unwrapping using the transport of intensity equation,” Appl. Opt. 55, 2418–2425 (2016).

11. J. Martinez-Carranza, K. Falaggis, and T. Kozacki, “Fast and accurate phase-unwrapping algorithm based on the transport of intensity equation,” Appl. Opt. 56, 7079–7088 (2017).

12. W. Schwartzkopf, T. E. Milner, J. Ghosh, B. L. Evans, and A. C. Bovik, “Two-dimensional phase unwrapping using neural networks,” in Proceedings of IEEE Conference on Image Analysis and Interpretation (IEEE, 2000), pp. 274–277.

13. G. Dardikman and N. T. Shaked, “Phase unwrapping using residual neural networks,” in Imaging and Applied Optics 2018 (Optical Society of America, 2018), paper CW3B.5.

14. R. Liang, J. Zhang, X. Tian, and J. Shao, “Phase unwrapping using segmentation,” U.S. Provisional Patent Application No. 62/768,624 (2018).

15. G. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Lett. 26, 54–58 (2019).

16. V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).

17. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the International Conference on Machine Learning (2015), pp. 448–456.

18. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (2010), pp. 807–814.

19. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).

20. K. Janocha and W. M. Czarnecki, “On loss functions for deep neural networks in classification,” arXiv preprint arXiv:1702.05659 (2017).

21. K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

22. D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in Proceedings of the International Conference on Learning Representations (2015).

23. F. Sawaf and R. M. Groves, “Phase discontinuity predictions using a machine-learning trained kernel,” Appl. Opt. 53, 5439–5447 (2014).

24. J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, “Noise2Noise: Learning image restoration without clean data,” arXiv preprint arXiv:1803.04189 (2018).

25. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

26. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (Wiley-Interscience, 1998).

27. X. Tian, X. Tu, J. Zhang, O. Spires, N. Brock, S. Pau, and R. Liang, “Snapshot multi-wavelength interference microscope,” Opt. Express 26, 18279–18291 (2018).
