
Reconstruction and analysis of wavefront with irregular-shaped aperture based on deep learning

Open Access

Abstract

Convolutional neural networks (CNNs) have been successfully applied to solve optical problems. In this paper, a method is proposed for the reconstruction and analysis of a wavefront with an irregular-shaped aperture based on deep learning, in which a U-type CNN (U-net) is used to reconstruct the wavefront image. Simulated data containing several types of wavefront images with irregularly shaped apertures were generated to train the U-net. The results indicate that deep-learning-based modal wavefront reconstruction over irregular-shaped apertures is feasible and robust, and the method will be very helpful for the reconstruction and analysis of wavefronts in real-time applications.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In modern optics, optical components with irregular-shaped apertures are involved in optical systems and play an important and irreplaceable role. Large telescope systems, such as the KECK Observatory [1], the James Webb Space Telescope [2], the California Extremely Large Telescope [3], and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope [4], are composed primarily of mirrors with hexagonal segments. The pupils of the optical systems of some astronomical telescopes are annular [5]. In off-axis imaging and wide-field-of-view optical systems, however, the pupil is elliptical; likewise, in the Ritchey-Common test, a plane mirror is tested by shining a circular beam onto it at non-normal incidence, so the illuminated spot is elliptical [6-9]. Cube-corner retroreflectors have circular- and annular-sector pupils [10]. High-power laser systems, for example, the National Ignition Facility [11,12] and the SG-III laser facility [13,14], as well as anamorphic optical systems, tend to comprise optical elements with square or rectangular apertures. Similarly, optical systems that contain cylindrical lenses often have rectangular apertures [15,16]. Freeform surfaces, which have irregular-shaped apertures, provide more freedom in the optical design process in space optics and head-mounted display systems [17,18]. Moreover, SiC reflectors with irregular boundaries and polygonal optical elements are irreplaceable in space camera systems [19].

Wavefront reconstruction yields the phase distribution of the measured beam, as well as the surface shape of the measured element and even the alignment of the system, so that the optical element or system can be analyzed and improved. Recently, detailed research has been conducted on wavefront reconstruction over circular apertures. The modal method and the zonal method [20,21] are two commonly used wavefront reconstruction methods. In the former, the phase of the measured wavefront is represented by a set of standard orthogonal polynomials, such as the Legendre polynomials, as basis functions. The latter estimates the phase at a point from the measurement data of adjacent points in the measured area. Optical surfaces with circular apertures are tested by phase-shifting interferometry, such as radial shearing interferometry [22], and the surface figure can be obtained from the measured wavefront data. Similarly, the surface figure of an optical surface with an irregular-shaped aperture can be obtained from wavefront data measured by phase-shifting interferometry. Zernike polynomials can represent the classical aberrations of optical systems because they are orthogonal over a unit circular aperture [23]; they comprise an orthonormal function set on the unit circle. Additionally, through an orthogonalization process, polynomials orthogonal over a noncircular aperture can be obtained [24]. With the development of optical engineering, expressions for orthogonal polynomials over certain irregular apertures, for instance, annular, hexagonal, elliptical, rectangular, square, and olivary apertures, have been obtained [9,25-33]. The above literature concerns modified Zernike polynomial sets that are orthogonal over those particular apertures. To the best of our knowledge, wavefronts with regular-shaped apertures can be reconstructed using full-aperture absolute surface measurements, whereas optical surfaces with irregular-shaped apertures cannot be measured with this approach. Although traditional techniques, such as singular value decomposition (SVD) [34], can reconstruct the wavefront over a particular aperture nearly error-free from noise-free data, they require that a new set of Zernike polynomials be derived that is orthogonal over that particular aperture. Whenever the aperture shape changes, this set must be recomputed, which is time-consuming; consequently, the wavefront cannot be reconstructed and analyzed in real time.
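To make the traditional modal approach concrete, the sketch below shows a least-squares Zernike fit over an arbitrary aperture mask, solved via SVD as done internally by np.linalg.lstsq. The basis array is a hypothetical precomputed stack of sampled Zernike polynomials, not code from the literature above. Such a fit recovers coefficients from noise-free data, but because the standard polynomials lose orthogonality over a non-circular mask, a freshly orthogonalized set must still be derived per aperture for the coefficients to retain their aberration meaning.

```python
import numpy as np

def fit_zernike_svd(wavefront, basis, mask):
    """Least-squares modal fit of Zernike coefficients over an
    arbitrary aperture, solved via SVD (np.linalg.lstsq uses SVD).

    wavefront: (H, W) phase map; basis: (K, H, W) sampled Zernike
    polynomials (hypothetical precomputed stack); mask: (H, W) bool.
    """
    A = basis[:, mask].T                 # (n_valid_pixels, K) design matrix
    b = wavefront[mask]                  # phase samples inside the aperture
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```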

In recent years, machine learning has emerged as a widely investigated and discussed topic. Deep learning, a branch of machine learning that uses multilayer artificial neural network algorithms, has been widely used in the field of optics, for example, in super-resolution reconstruction [35,36], for improving the spatial resolution of optical microscopy [37], and in lensless compressive imaging [38], among others. Convolutional neural networks (CNNs) are a type of deep neural network. CNNs learn features of interest directly from the data through a feature extractor composed of convolutional and pooling layers. In a convolutional layer, all neurons in a feature map share the same convolution filter; in other words, these neurons share weights. Sharing weights in convolutional layers dramatically reduces the number of parameters as well as the risk of overfitting. The pooling layer is a special convolution process that reduces the data dimension and the number of parameters of the next network layer. CNNs have shown superior performance on many challenging imaging problems, such as autofocusing [39], denoising [40,41], tomography [42], holography [43,44], and phase recovery [45]. CNNs have also been applied to phase retrieval in imaging [46] and optical communication [47,48]. There are many types of CNN architectures: the first was LeNet-5, introduced in 1998 [49], and AlexNet won the ImageNet image classification competition in 2012 [50]. In addition, U-net, a CNN developed for biomedical image segmentation, has been used, for example, to automatically segment intraretinal cystoid fluid in SD-OCT images [51]. It accepts an input image of any size and uses deconvolution layers to upsample the feature maps of the last convolutional layer back to the size of the input image.
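As a rough illustration of the parameter savings from weight sharing (the layer sizes here are arbitrary, not taken from this paper): a shared 3×3 convolution over a 256×256×3 image needs only a few thousand weights, whereas the same local connectivity without sharing needs over a hundred million.

```python
# Shared 3x3 convolution, 64 filters, on a 256x256x3 input:
# each filter has 3*3*3 weights plus one bias, reused at every position.
conv_params = 64 * (3 * 3 * 3 + 1)                  # = 1,792

# Same 3x3 receptive fields WITHOUT weight sharing (a locally
# connected layer): every one of the 254*254*64 output neurons
# keeps its own 3*3*3 weights and bias.
unshared_params = 254 * 254 * 64 * (3 * 3 * 3 + 1)  # ~ 1.16e8

print(conv_params, unshared_params)
```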

In this paper, we demonstrate the reconstruction and analysis of a wavefront with an irregular-shaped aperture based on deep learning. To achieve this, we employed a U-net for modal wavefront reconstruction from a wavefront image with an irregular-shaped aperture. Pairs of numerically simulated wavefront images, one with an irregular-shaped aperture and one with a circular aperture representing the same wavefront, served as the training samples. The wavefront images reconstructed by the U-net were then fitted with Zernike polynomials to obtain the Zernike coefficients and thereby verify the accuracy of the predicted results. In this study, the U-net reconstructed wavefront images faster than existing methods, taking less than 1 s per image, so it can be applied in real-time detection. Furthermore, the reconstruction process of the network was found to be robust. The results demonstrate the accuracy and efficiency of the method in the reconstruction and analysis of wavefronts.

2. Method

The CNN model used in this paper is the U-net model, whose name comes from the U-shaped geometry of its architecture. The architecture of our U-net is depicted in Fig. 1. U-net is a fully convolutional encoder-decoder network that contains two parts: a down-sampling path and an up-sampling path.


Fig. 1. U-net convolutional neural network model architecture. Each blue box corresponds to a multi-channel feature map. The number of channels is on top of the box. The x-y-size is provided at the edge of the box. Red boxes represent copied feature maps. The arrows denote the different operations.


The input layer is a wavefront image of an irregular-shaped aperture, with a volume of 256×256×3. First, the network encodes the input image into a multidimensional feature representation through 3×3 convolutional layers interleaved with 2×2 max-pooling layers. Max pooling is a form of non-linear down-sampling that discards non-maximal values and helps reduce the computational complexity of the upper layers by reducing the dimensionality of the intermediate layers. In the up-sampling path, the decoding process acts as a shape generator that produces an output of the same size as the input image through unpooling, with the extracted features obtained from convolution. Unpooling is a non-linear form of up-sampling of a previous layer that uses nearest-neighbor interpolation of the features obtained by max pooling, gradually restoring the shape of the input image. Figure 2 depicts some examples of wavefront images over irregular-shaped apertures.


Fig. 2. Examples of wavefront images over the irregular-shaped aperture: (a) cross wire; (b) random cutout; (c) hexagonal; (d) optical window; (e) square; (f) oval; (g) triangle; (h) annular.


For both the decoding and encoding processes, features are extracted by the convolutional layers. The ReLU activation function was applied to effectively capture nonlinearities in the data; this function also mitigates the vanishing-gradient problem and improves the computational speed of the training stage of the neural network. Meanwhile, a dropout layer, which randomly deactivates some neurons during training to reduce overfitting, was added to our network structure.
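The sketch below shows how such an encoder-decoder with copied feature maps, ReLU activations, and dropout might be assembled in Keras. It is a minimal sketch only: the number of levels and the filter counts are our assumptions for illustration, since the exact configuration is given in Fig. 1 rather than in the text.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # two 3x3 convolutions with ReLU activations
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    # encoder: conv blocks interleaved with 2x2 max pooling
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)          # 256 -> 128
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)          # 128 -> 64
    # bottleneck, with dropout to curb overfitting
    b = layers.Dropout(0.5)(conv_block(p2, 128))
    # decoder: nearest-neighbor up-sampling plus copied feature maps
    u2 = layers.UpSampling2D(2)(b)           # 64 -> 128
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.UpSampling2D(2)(c3)          # 128 -> 256
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)
    outputs = layers.Conv2D(3, 1, activation="linear")(c4)
    return Model(inputs, outputs)
```

Note that UpSampling2D defaults to nearest-neighbor interpolation, matching the unpooling described above, and the concatenations correspond to the copied feature maps (red boxes) in Fig. 1.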

To validate the feasibility of our CNN technique for wavefront reconstruction over irregular-shaped apertures, simulated data were generated as sums of Zernike polynomials Zi(x, y); 35 Zernike terms were used in our numerical simulation, and the absolute value of each Zernike coefficient was less than 0.5. In total, we generated 56,000 pairs of wavefront images with irregular-shaped apertures and their corresponding wavefront images with a circular aperture, which were used as the training set. The training set consists of eight kinds of wavefront images with irregular-shaped apertures (cross wire, random cutout, hexagonal, optical window, square, elliptical, triangle, and annular), each of size 256×256, with 7,000 samples per kind. We used the U-net to train on the eight types of aperture data separately. During training, 1,000 pairs of data were used as validation data to check the training effect; the validation data also served to test the generalization ability of the model and to detect overfitting. After training, the remaining 1,000 pairs of data were employed to assess the performance of the network. The U-net was trained with the pixel-wise root mean squared error (RMSE) between the exact (${y_n}$) and estimated ($y_n^{\prime}$) wavefront images as the loss function, given by:

$$RMSE = \sqrt{\frac{\sum_{u = 1}^{W} \sum_{v = 1}^{H} \left( y_n(u,v) - y_n^{\prime}(u,v) \right)^2}{WH}}$$
where $W$ and $H$ are the width and height of the image in pixels, respectively.
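A hedged sketch of how such training pairs might be simulated follows; basis is again the hypothetical (35, 256, 256) stack of sampled Zernike polynomials introduced earlier, and the exact rendering used for the paper's images may differ.

```python
import numpy as np

def make_training_pair(basis, mask, n_terms=35, coeff_max=0.5, rng=None):
    """Render one (input, target) pair: the same simulated wavefront
    over an irregular aperture (input) and a circular aperture (target).

    basis: hypothetical (35, 256, 256) stack of sampled Zernike
    polynomials; mask: (256, 256) bool irregular aperture.
    """
    rng = rng or np.random.default_rng()
    coeffs = rng.uniform(-coeff_max, coeff_max, n_terms)   # |a_i| < 0.5
    wavefront = np.tensordot(coeffs, basis, axes=1)        # sum_i a_i Z_i(x, y)

    yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
    circle = xx**2 + yy**2 <= 1.0                          # unit-circle pupil
    target = np.where(circle, wavefront, 0.0)              # circular-aperture image
    sample = np.where(mask & circle, wavefront, 0.0)       # irregular-aperture image
    return sample, target
```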

The backpropagation algorithm was used to propagate the error back through the U-net, and adaptive moment estimation (Adam) was used to optimize the weights. The learning rate of Adam was set according to the data used to train the network: when the aperture shape of the training data was circular, the learning rate was set to 1e-4; otherwise, it was set to 1e-6. The network was trained for 25 epochs in total, where one epoch means that every sample in the dataset has passed through the neural network once. A data generator was used to train the network in order to save memory; the network was fed two images at a time, with 2,000 training steps per epoch. Figure 3 shows the losses and accuracies of the training and validation data over the 25 epochs. Our U-net was implemented in Python using the Keras framework and was GPU-accelerated with an NVIDIA P4000. It took approximately 4 h to train the network. With this model, a single forward pass suffices to predict the wavefront image with a circular aperture, and each prediction takes less than 1 s. We believe that with a higher-performance computer and more training samples or epochs, the performance can be improved further.
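The corresponding training setup might look like the sketch below, where rmse_loss implements Eq. (1) and train_gen/val_gen are assumed generators yielding (irregular-aperture, circular-aperture) image pairs in batches of two; the validation step count is our assumption.

```python
import tensorflow as tf

def rmse_loss(y_true, y_pred):
    # pixel-wise RMSE over the image, as in Eq. (1)
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

model = build_unet()  # architecture sketch above
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # 1e-6 for non-circular data
    loss=rmse_loss,
    metrics=["accuracy"],
)

# Two images per batch, 2,000 steps per epoch, 25 epochs in total.
model.fit(train_gen, steps_per_epoch=2000, epochs=25,
          validation_data=val_gen,
          validation_steps=500)  # assumed: 1,000 validation pairs / batch of 2
```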


Fig. 3. (a) Training and validation accuracies, and (b) training and validation losses for the 25 epochs.


3. Results and discussion

We used the trained U-net to make predictions on the test set; with GPU acceleration, the time to predict an image was on the order of milliseconds. After training, the performance of the trained network was tested. Furthermore, to validate the reconstruction performance of the network, each image predicted by the U-net was fitted with Zernike polynomials to obtain the Zernike coefficients, and the results were compared with the Zernike coefficients used to generate the true images, because Zernike polynomials are, at present, only strictly meaningful over circular pupils. The wavefront images with irregular-shaped apertures were recovered effectively, demonstrating the performance of the U-net, as shown in Figs. 4(a), 4(b), and 4(h); as can be seen from the figure, the error is very small. In addition, the network showed excellent prediction accuracy when the irregular aperture was hexagonal or square, as seen in Figs. 4(c) and 4(d). Although the Zernike coefficients obtained by fitting the wavefront images reconstructed by the U-net do not correspond exactly to the true values when the irregular aperture is an optical window, ellipse, or triangle [Figs. 4(e), 4(f), and 4(g)], the error between the two is within 5%, which is acceptable.


Fig. 4. Some examples to verify the performance of U-net using (a) cross wire, (b) random cutout, (c) hexagonal, (d) square, (e) optical window, (f) oval, (g) triangle, and (h) annular apertures.


To further validate the performance of the network, training data with Gamma noise were generated through simulation, with Gamma parameters of 0.1 and 0.2, respectively. These data were then used to train the U-net, and the results show that the network also performs well when noise is present in the wavefront images. The result is depicted in Fig. 5.
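The paper does not spell out the noise model. A plausible reading, sketched below, treats the stated "Gamma parameter" as the shape parameter of a Gamma distribution with unit scale, adding the mean-subtracted draw to each image; this is an assumption, not the paper's confirmed procedure.

```python
import numpy as np

def add_gamma_noise(image, shape_param=0.1, rng=None):
    # Assumption: the stated Gamma parameter (0.1 or 0.2) is the shape
    # parameter of the distribution, with unit scale; the draw is
    # mean-subtracted before being added to the wavefront image.
    rng = rng or np.random.default_rng()
    noise = rng.gamma(shape_param, 1.0, size=image.shape)
    return image + (noise - noise.mean())
```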


Fig. 5. Example of reconstructing wavefront image with noise.


Although the results show that this scheme has high efficiency and good accuracy, we consider this to be preliminary work on the performance of deep learning for this task. We believe this work can be further optimized in several ways. On the one hand, the hyperparameters of the network, such as the learning rate and batch size, can be tuned for a lower loss and a shorter computation time, and the number of training samples can be increased for better performance. On the other hand, other CNN models (such as VGG-Net or AlexNet), taking the wavefront image with an irregular-shaped aperture as input and the Zernike coefficient vector as output, could exhibit different performance. In addition, the 256×256×3 RGB images can be replaced by 256×256×1 grayscale images as inputs to reduce the data volume. Through these approaches, we believe that the reconstruction and analysis of a wavefront with an irregular-shaped aperture can be made faster.

4. Conclusion

The reconstruction and analysis of a wavefront with an irregular-shaped aperture based on deep learning was proposed and demonstrated. Features were extracted from the wavefront image of the irregular-shaped aperture, and the reconstructed wavefront image was obtained by the U-net. Meanwhile, the Zernike coefficients of the reconstructed wavefront image were obtained by Zernike polynomial fitting. Furthermore, we demonstrated that a noise-added wavefront image with an irregular-shaped aperture can also be reconstructed well; that is, the network is robust. In conclusion, this paper verifies the feasibility of the U-net as an effective tool to reconstruct a wavefront image with an irregular-shaped aperture. A reduction in computational time was also demonstrated with the proposed approach. Although the accuracy is not as high as that of numerical reconstruction, the present results will be greatly helpful for the reconstruction and analysis of wavefronts in real-time applications.

Funding

Natural Science Foundation of Jiangsu Province (BK20190954); China Postdoctoral Science Foundation (2018M630773); National Natural Science Foundation of China (61905131); Natural Science Foundation of Shandong Province (ZR2019QF013); Chinese Academy of Sciences Key Project (CAS-KLAOT-KF201804); Jiangsu Key Laboratory of Spectral Imaging and Intelligence Sense (3091801410413).

Disclosures

The authors declare no conflicts of interest.

References

1. L. Feinberg, M. Clampin, R. Keski-Kuha, C. Atkinson, S. Texter, M. Bergeland, and B. Gallagher, "James Webb Space Telescope optical telescope element mirror development history and results," in SPIE Astronomical Telescopes + Instrumentation (SPIE, 2012), Vol. 8442.

2. M. Troy and G. Chanan, “Diffraction effects from giant segmented-mirror telescopes,” Appl. Opt. 42(19), 3745–3753 (2003). [CrossRef]  

3. J. Nelson, "Design concepts for the California Extremely Large Telescope (CELT)," in Astronomical Telescopes and Instrumentation (SPIE, 2000), Vol. 4004.

4. X.-Q. Cui, Y.-H. Zhao, Y.-Q. Chu, G.-P. Li, Q. Li, L.-P. Zhang, H.-J. Su, Z.-Q. Yao, Y.-N. Wang, X.-Z. Xing, X.-N. Li, Y.-T. Zhu, G. Wang, B.-Z. Gu, A. L. Luo, X.-Q. Xu, Z.-C. Zhang, G.-R. Liu, H.-T. Zhang, D.-H. Yang, S.-Y. Cao, H.-Y. Chen, J.-J. Chen, K.-X. Chen, Y. Chen, J.-R. Chu, L. Feng, X.-F. Gong, Y.-H. Hou, H.-Z. Hu, N.-S. Hu, Z.-W. Hu, L. Jia, F.-H. Jiang, X. Jiang, Z.-B. Jiang, G. Jin, A.-H. Li, Y. Li, Y.-P. Li, G.-Q. Liu, Z.-G. Liu, W.-Z. Lu, Y.-D. Mao, L. Men, Y.-J. Qi, Z.-X. Qi, H.-M. Shi, Z.-H. Tang, Q.-S. Tao, D.-Q. Wang, D. Wang, G.-M. Wang, H. Wang, J.-N. Wang, J. Wang, J.-L. Wang, J.-P. Wang, L. Wang, S.-Q. Wang, Y. Wang, Y.-F. Wang, L.-Z. Xu, Y. Xu, S.-H. Yang, Y. Yu, H. Yuan, X.-Y. Yuan, C. Zhai, J. Zhang, Y.-X. Zhang, Y. Zhang, M. Zhao, F. Zhou, G.-H. Zhou, J. Zhu, and S.-C. Zou, “The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST),” Res. Astron. Astrophys. 12(9), 1197–1242 (2012). [CrossRef]  

5. G.-M. Dai and V. N. Mahajan, “Zernike annular polynomials and atmospheric turbulence,” J. Opt. Soc. Am. A 24(1), 139–155 (2007). [CrossRef]  

6. S. Han, E. Novak, and M. Schurig, "Application of Ritchey-Common test in large flat measurements," in Lasers in Metrology and Art Conservation (SPIE, 2001), Vol. 4399.

7. L. Lundström and P. Unsbo, “Transformation of Zernike coefficients: scaled, translated, and rotated wavefronts with circular and elliptical pupils,” J. Opt. Soc. Am. A 24(3), 569–577 (2007). [CrossRef]  

8. S. Y. Hasan and A. S. Shaker, “Study of Zernike polynomials of an elliptical aperture obscured with an elliptical obscuration,” Appl. Opt. 51(35), 8490–8497 (2012). [CrossRef]  

9. J. A. Díaz and V. N. Mahajan, “Study of Zernike polynomials of an elliptical aperture obscured with an elliptical obscuration: comment,” Appl. Opt. 52(24), 5962–5964 (2013). [CrossRef]  

10. J. A. Díaz and V. N. Mahajan, “Orthonormal aberration polynomials for optical systems with circular and annular sector pupils,” Appl. Opt. 52(6), 1136–1147 (2013). [CrossRef]  

11. G. Miller, E. Moses, and C. Wuest, “The National Ignition Facility,” Opt. Eng. 43(12), 2841 (2004). [CrossRef]  

12. Z. Liao, M. Nostrand, W. Carr, J. Bude, and T. Suratwala, "Modeling of laser-induced damage and optic usage at the National Ignition Facility," in Pacific Rim Laser Damage 2016: Optical Materials for High Power Lasers (SPIE, 2016), Vol. 9983.

13. H. Peng, X. M. Zhang, X. Wei, W. Zheng, F. Jing, Z. Sui, D. Fan, and Z. Lin, "Status of the SG-III solid state laser project," in Third International Conference on Solid State Lasers for Application to Inertial Confinement Fusion (SPIE, 1999), Vol. 3492.

14. W. Zheng, X. Wei, Q. Zhu, F. Jing, D. Hu, J. Su, K. Zheng, X. Yuan, H. Zhou, W. Dai, W. Zhou, F. Wang, D. Xu, X. Xie, B. Feng, Z. Peng, L. Guo, Y. Chen, X. Zhang, L. Liu, D. Lin, Z. Dang, Y. Xiang, and X. Deng, “Laser performance of the SG-III laser facility,” High Power Laser Sci. Eng. 4, e21 (2016). [CrossRef]  

15. P. Reardon, F. Liu, and J. Geary, “Schmidt-like corrector plate for cylindrical optics,” Opt. Eng. 49(5), 053002 (2010). [CrossRef]  

16. J. Peng, Y. Yu, and H. Xu, “Compensation of high-order misalignment aberrations in cylindrical interferometry,” Appl. Opt. 53(22), 4947–4956 (2014). [CrossRef]  

17. X. Zhang, L. Zheng, X. He, L. Wang, F. Zhang, S. Yu, G. Shi, B. Zhang, Q. Liu, and T. Wang, "Design and fabrication of imaging optical systems with freeform surfaces," in SPIE Optical Engineering + Applications (SPIE, 2012), Vol. 8486.

18. Q. Wang, D. Cheng, Y. Wang, H. Hua, and G. Jin, “Design, tolerance, and fabrication of an optical see-through head-mounted display with free-form surface elements,” Appl. Opt. 52(7), C88–C99 (2013). [CrossRef]  

19. J. Ye, X. Li, Z. Gao, S. Wang, W. Sun, W. Wang, and Q. Yuan, “Modal wavefront reconstruction over general shaped aperture by numerical orthogonal polynomials,” Opt. Eng. 54(3), 034105 (2015). [CrossRef]  

20. W. H. Southwell, “Wave-front estimation from wave-front slope measurements,” J. Opt. Soc. Am. 70(8), 998–1006 (1980). [CrossRef]  

21. L. Huang and A. K. Asundi, “Framework for gradient integration by combining radial basis functions method and least-squares method,” Appl. Opt. 52(24), 6016–6021 (2013). [CrossRef]  

22. C. Tian, X. Chen, and S. Liu, “Modal wavefront reconstruction in radial shearing interferometry with general aperture shapes,” Opt. Express 24(4), 3572–3583 (2016). [CrossRef]  

23. D. Malacara, ed., Optical Shop Testing, 3rd ed. (John Wiley & Sons, 2007).

24. G.-M. Dai and V. N. Mahajan, “Orthonormal polynomials in wavefront analysis: error analysis,” Appl. Opt. 47(19), 3433–3445 (2008). [CrossRef]  

25. V. N. Mahajan, “Zernike annular polynomials for imaging systems with annular pupils,” J. Opt. Soc. Am. 71(1), 75–85 (1981). [CrossRef]  

26. V. N. Mahajan and J. A. Díaz, “Imaging characteristics of Zernike and annular polynomial aberrations,” Appl. Opt. 52(10), 2062–2074 (2013). [CrossRef]  

27. V. N. Mahajan and G.-M. Dai, “Orthonormal polynomials for hexagonal pupils,” Opt. Lett. 31(16), 2462–2464 (2006). [CrossRef]  

28. J. A. Díaz and V. N. Mahajan, “Imaging by a system with a hexagonal pupil,” Appl. Opt. 52(21), 5112–5122 (2013). [CrossRef]  

29. J. A. Díaz and R. Navarro, “Orthonormal polynomials for elliptical wavefronts with an arbitrary orientation,” Appl. Opt. 53(10), 2051–2057 (2014). [CrossRef]  

30. M. Azimipour, F. Atry, and R. Pashaie, “Calibration of digital optical phase conjugation setups based on orthonormal rectangular polynomials,” Appl. Opt. 55(11), 2873–2880 (2016). [CrossRef]  

31. J. Ye, Z. Gao, S. Wang, J. Cheng, W. Wang, and W. Sun, “Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture,” J. Opt. Soc. Am. A 31(10), 2304–2311 (2014). [CrossRef]  

32. Y. Zheng, S. Sun, and Y. Li, “Zernike olivary polynomials for applications with olivary pupils,” Appl. Opt. 55(12), 3116–3125 (2016). [CrossRef]  

33. J. Du, Z. Yang, Z. Liu, and G. Fan, “Three-step shift-rotation absolute measurement of optical surface figure with irregular shaped aperture,” Opt. Commun. 426, 589–597 (2018). [CrossRef]  

34. S. Kindermann, A. Neubauer, and R. Ramlau, “A singular value decomposition for the Shack–Hartmann based wavefront reconstruction,” J. Comput. Appl. Math. 236(8), 2186–2199 (2012). [CrossRef]  

35. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Adv. Neural Inf. Process. Syst. 3, 2672–2680 (2014).

36. Y. Yoon, H. Jeon, D. Yoo, J. Lee, and I. S. Kweon, “Light-Field Image Super-Resolution Using Convolutional Neural Network,” IEEE Signal Process. Lett. 24(6), 848–852 (2017). [CrossRef]  

37. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

38. X. Yuan and Y. Pu, “Parallel lensless compressive imaging via deep convolutional neural networks,” Opt. Express 26(2), 1962–1977 (2018). [CrossRef]  

39. T. Pitkäaho, A. Manninen, and T. J. Naughton, "Focus classification in digital holographic microscopy using deep convolutional neural networks," Proc. SPIE 10414, 104140K (2017). [CrossRef]

40. H. C. Burger, C. J. Schuler, and S. Harmeling, "Image denoising: Can plain neural networks compete with BM3D?" in 2012 IEEE Conference on Computer Vision and Pattern Recognition (2012), pp. 2392–2399.

41. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

42. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep Convolutional Neural Network for Inverse Problems in Imaging,” IEEE Trans. on Image Process. 26(9), 4509–4522 (2017). [CrossRef]  

43. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

44. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

45. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

46. Y. Zhang, C. Wu, Y. Song, K. Si, Y. Zheng, L. Hu, J. Chen, L. Tang, and W. Gong, “Machine learning based adaptive optics for doughnut-shaped beam,” Opt. Express 27(12), 16871–16881 (2019). [CrossRef]  

47. Q. Tian, C. Lu, B. Liu, L. Zhu, X. Pan, Q. Zhang, L. Yang, F. Tian, and X. Xin, “DNN-based aberration correction in a wavefront sensorless adaptive optics system,” Opt. Express 27(8), 10765–10776 (2019). [CrossRef]  

48. J. Liu, P. Wang, X. Zhang, Y. He, X. Zhou, H. Ye, Y. Li, S. Xu, S. Chen, and D. Fan, “Deep learning based atmospheric turbulence compensation for orbital angular momentum beam distortion and communication,” Opt. Express 27(12), 16671–16688 (2019). [CrossRef]  

49. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998). [CrossRef]  

50. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in International Conference on Neural Information Processing Systems (2012).

51. F. G. Venhuizen, B. van Ginneken, B. Liefers, F. van Asten, V. Schreur, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography,” Biomed. Opt. Express 9(4), 1545–1569 (2018). [CrossRef]  
