
Noise robust Zernike phase retrieval via learning based algorithm only with 2-step phase shift measurements

Open Access

Abstract

We present a noise robust deep learning based aberration analysis method using 2-step phase shift measurement data. We first propose a realistic aberration pattern generation method to synthesize a sufficient amount of real-world-like aberration patterns for training a deep neural network by exploiting the asymptotic statistical distribution parameters of the real-world Zernike coefficients extracted from a finite number of experimentally measured real-world aberration patterns. As a result, we generate a real-world-like synthetic dataset of 200,000 different aberrations from 15 sets of real-world aberration patterns obtained by a Michelson interferometer under a variety of measurement conditions using the 4-step derivative fitting method together with Gaussian density estimation. We then train the deep neural network with the real-world-like synthetic dataset, using two types of network architectures, GoogLeNet and ResNet101. By applying the proposed learning based 2-step aberration analysis method to the analysis of numerically generated aberrations formed under 100 different conditions, we verify that the proposed 2-step method can clearly outperform the existing iterative methods based on 4-step measurements, including the derivative fitting, transport of intensity equation (TIE), and robust TIE methods, in terms of noise robustness, root mean square error (RMSE), and inference time. By applying the proposed 2-step method to the analysis of the real-world aberrations experimentally obtained under a variety of measurement conditions, we also verify that the proposed 2-step method achieves comparable performance in terms of the RMSE between the reconstructed and measured aberration patterns, and also exhibits qualitative superiority in reconstructing more realistic fringe patterns and phase distributions compared to the existing 4-step iterative methods. Since the proposed 2-step method can be extended to an even more general analysis of aberrations of any higher order, we expect that it will provide a practical way for comprehensive aberration analysis and that further studies will extend its usefulness and improve its operational performance in terms of algorithm compactness, noise robustness, and computational speed.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

To describe quasi-single frequency light propagation through an optical system, the paraxial approximation is often assumed [1,2]. That is, all the rays are supposed to be nearly parallel to the optical axis, forming a well collimated beam. However, this assumption is rarely fulfilled in real situations, so that the light characteristics after the optical system deviate from the ideal case of the paraxial approximation [1,3,4]. These deviations, known as monochromatic aberrations (MAs), lead directly to a reduction in the performance of the optical system. Therefore, rigorous analysis of MAs is essential for the better design and implementation of optical systems [5-7].

It is well known that the wavefront can be expressed as a superposition of Zernike polynomials, and that its decomposition into Zernike polynomials and the identification of the coefficient of each Zernike polynomial can lead to a complete analysis of the MAs of an optical system [8-13]. Since the aberrations arise from phase distortion, the interferogram plays a crucial role in wavefront analysis [14]. The single-shot method, which uses the Fourier transform of the spatial frequency components, and the multi-step method, which measures fringe patterns with different optical path differences (OPDs), are well-known interferometric techniques for identifying the Zernike coefficients and thus the given aberrations [14-16]. In theory, it is quite straightforward to extract Zernike coefficients and to retrieve the phase distribution from the measured interferometric pattern; however, phase wrapping typically occurs in the decomposition process, so a suitable algorithm that can unwrap the phase with high speed and high accuracy is always required [11,15,17]. In addition, a large tilt angle required in the single-shot measurement, vibrations, and errors in determining the OPDs in the multi-step measurement can degrade the aberration reconstruction [14,15,18]. In this respect, improving the robustness of the measurement to noise without sacrificing the speed of measurement and analysis is a critical issue in phase retrieval problems.

Recently, several studies have shown that deep learning can significantly improve the performance of phase retrieval with a smaller number of measurements. For example, a method that trains on four interferometric patterns and the corresponding aberrations, without the use of a phase unwrapping algorithm, has been proposed; however, it is not entirely free from factors such as vibrations and OPD errors, which may ultimately reduce the aberration analysis performance [16]. While the use of deep learning has the advantage of reducing the computational time and the number of measurements required compared to the methods based on unwrapping algorithms, it has a clear limitation in that it requires a large number of training samples, i.e., multiple interferometric patterns and the corresponding Zernike coefficients [16,19,20]. Synthetic patterns can be generated numerically to overcome the lack of training samples [16]; however, such a method can suffer from performance degradation if there are significant discrepancies between the synthetic and real-world aberration patterns.

This study reports on a deep learning based aberration analysis method that exhibits excellent performance in terms of root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and noise robustness. This algorithm requires only two measurements for phase retrieval by training a deep learning network to provide a direct prediction of the Zernike coefficients. To train a high-performance deep learning network, a novel synthetic data generation method is also proposed, which can probabilistically synthesize near-realistic aberration patterns by exploiting the distribution statistics of the real-world Zernike coefficients initially obtained from the conventional 4-step measurements under a variety of realistic conditions. Based on the proposed 2-step method, a synthetic dataset of 200,000 near-realistic aberrations is statistically generated from only 15 sets of real-world aberration patterns under the so-called asymptotic condition, with which the deep learning network is trained to cope with a variety of different aberrations occurring in real-world optical systems. Through a combinatorial investigation applying the proposed 2-step method to both numerically generated and experimentally measured aberration patterns, it is verified that the proposed 2-step method exhibits excellent operational performance over other existing 4-step iterative methods for Zernike decomposition and aberration reconstruction.

2. Background theory and ideas

2.1 MAs and Zernike polynomial representation

Figure 1 below illustrates the conceptual description of MAs. When the input light is well collimated and aligned, it should be focused at a single point after passing through the lens [see Fig. 1(a)]. However, in real situations the input light is often not perfectly collimated and aligned due to various reasons such as optical misalignments, collimation errors, imperfections in optical components, and material impurities. These non-parallel light rays cause unwanted distortions such as defocus, astigmatism, coma, field curvature, etc., as shown in Fig. 1(b).

Fig. 1. Conceptual description of the MAs: (a) Paraxial input light and MA-free focusing. (b) Non-paraxial input light induced MAs. Examples of Zernike polynomials and their corresponding aberrations: (c) Z2: x-tilt, (d) Z3: astigmatism, (e) Z4: defocus, and (f) Z20: pentafoil.

In principle, the MAs can be represented by Zernike polynomials as given below:

$$Z_n^m({\rho ,\phi } )= \left\{ {\begin{array}{c} {R_n^m(\rho )\cos ({m\phi } )}\\ {R_n^m(\rho )\sin ({m\phi } )} \end{array}} \right., $$
$$R_n^m(\rho )= \mathop \sum \nolimits_{k = 0}^{({n - m} )/2} \frac{{{{({ - 1} )}^k}({n - k} )!}}{{k!\left( {\frac{{n + m}}{2} - k} \right)!\left( {\frac{{n - m}}{2} - k} \right)!}}{\rho ^{n - 2k}}, $$
where the terms factored by the cosine and sine functions denote even and odd Zernike polynomials, respectively.

For simplicity, the above double-index Zernike expression using n and m can be mapped to a single-index expression based on some specific indexing rules such as Noll’s indices, Wyant indices, Arizona indices, and OSA indices [18]. In this paper, we follow the convention of OSA indices. Since the Zernike polynomials are a complete set of orthogonal functions, any complex aberration Φ can be represented by their superposition, as given below:

$$\mathrm{\Phi } = \mathop \sum \nolimits_{i = 0}^d {a_i}{Z_i},$$
where ${a_i}$ are the coefficients of the corresponding Zernike polynomials and d is the upper limit of the single index. Figures 1(c)-(f) show some examples of Zernike polynomials and their corresponding aberrations. Conversely, decomposing an arbitrary wavefront generated by a given optical system into Zernike polynomials allows us to identify the complete information about the aberration of the given optical system. Furthermore, the aberration can readily be reconstructed from the Zernike coefficients obtained from the decomposition.
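As a simple illustration of this superposition, the sketch below evaluates the first six OSA-indexed Zernike terms (in the unnormalized form given by the equations above, with the OSA single index obtained from (n, m) as j = (n(n + 2) + m)/2) on a unit-disk grid and sums them with a given coefficient vector. The helper names, the grid size, and the zero padding outside the pupil are our own choices, not code from the paper.

```python
import numpy as np

def zernike_osa(j, rho, phi):
    """Unnormalized Zernike polynomial Z_j (OSA single index, j = 0..5)."""
    basis = {
        0: np.ones_like(rho),              # Z0: piston      (n=0, m=0)
        1: rho * np.sin(phi),              # Z1: y-tilt      (n=1, m=-1)
        2: rho * np.cos(phi),              # Z2: x-tilt      (n=1, m=+1)
        3: rho**2 * np.sin(2 * phi),       # Z3: astigmatism (n=2, m=-2)
        4: 2 * rho**2 - 1,                 # Z4: defocus     (n=2, m=0)
        5: rho**2 * np.cos(2 * phi),       # Z5: astigmatism (n=2, m=+2)
    }
    return basis[j]

def aberration_phase(coeffs, size=256):
    """Superpose Zernike polynomials, Phi = sum_i a_i Z_i, over the unit disk."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    phase = sum(a * zernike_osa(j, rho, phi) for j, a in enumerate(coeffs))
    return np.where(rho <= 1.0, phase, 0.0)   # zero outside the pupil

# Example: a defocus-dominated aberration with a small x-tilt.
phi_map = aberration_phase([0.0, 0.0, 0.3, 0.0, 1.2, 0.1])
```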

2.2 Interferogram based Zernike decomposition and aberration reconstruction

As shown in Fig. 1, MAs arise from wavefront distortion, i.e., phase distortion. Therefore, interferometric measurement plays an important role in investigating and decomposing the wavefront into Zernike polynomials. Figure 2 below shows the experimental configuration based on a Michelson interferometer that was built for the wavefront measurement in this study. When an input collimated 1064-nm laser beam (QDBRLD-1064-150, Qphotonics) enters the 50:50 beam splitter (CCM1-BS014/M, Thorlabs), it is split into transmitted and reflected beams of equal power. The transmitted beam then passes through a test lens (LA1145-B, Thorlabs), while the reflected beam, directed onto a mirror mounted on a piezo-transducer stage, is used as the reference beam. After the lens, the transmitted beam is reflected by the concave spherical mirror (43-469, Edmund Optics) and combined with the reference beam at the beam splitter. The combined beam is focused after passing through the imaging lens (ACL3026-B, Thorlabs), and a pinhole located at the focal plane of the imaging lens is used for spatial filtering. A charge-coupled device (CCD) camera (CS135MUN, Thorlabs) placed after the pinhole records the interferometric fringe patterns. The measurement is repeated for different positions of the movable mirror, which is varied by the piezo-transducer stage with a step of λ/4. As a result, the intensity recorded by the CCD camera can be expressed by

$${I_{det}} = {I_{ref}} + {I_{aber}} + 2\sqrt {{I_{ref}}{I_{aber}}} \cos \left( {\phi ({x,y} )+ \frac{{2\pi d}}{\lambda }} \right),$$
where Iref, Iaber, ϕ, d, and λ are the reference arm intensity, aberration arm intensity, aberration, path length difference, and wavelength, respectively. At least three interference intensities with different OPDs should be measured to eliminate the DC term and to extract the aberration term in a sinusoidal form. In general, the 3-step, 4-step, and 5-step measurement methods are widely used. The relationship between the aberration and the measured interference intensities for each method is given as follows [14]:
$$3 - \textrm{step measurement}:{\; }\phi ({x,y} )= {\tan ^{ - 1}}\left( {\frac{{{I_{\lambda /2}} - {I_{\lambda /4}}}}{{{I_0} - {I_{\lambda /4}}}}} \right),$$
$$4 - \textrm{step measurement}:{\; }\phi ({x,y} )= {\tan ^{ - 1}}\left( {\frac{{{I_{3\lambda /4}} - {I_{\lambda /4}}}}{{{I_0} - {I_{\lambda /2}}}}} \right),$$
$$5 - \textrm{step measurement}:{\; }\phi ({x,y} )= {\tan ^{ - 1}}\left( {\frac{{2({I_{\lambda /4}} - {I_{3\lambda /4)}}}}{{2{I_{\lambda /2}} - {I_\lambda } - {I_0}}}} \right).$$
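As a minimal illustration of the 4-step expression above, the following NumPy sketch (our own, not the authors' code) computes the wrapped phase from four fringe images recorded at OPDs of 0, λ/4, λ/2, and 3λ/4. The two-argument arctangent is used so that the sign information of numerator and denominator is preserved over the full (−π, π] range.

```python
import numpy as np

def wrapped_phase_4step(I_0, I_quarter, I_half, I_three_quarter):
    """4-step wrapped phase from fringes at OPDs 0, lambda/4, lambda/2, 3*lambda/4.

    The result is still wrapped into (-pi, pi] and must be unwrapped or
    fitted to Zernike polynomials before the aberration can be analyzed.
    """
    return np.arctan2(I_three_quarter - I_quarter, I_0 - I_half)
```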

Fig. 2. Experimental configuration based on a Michelson interferometer for the wavefront measurement.

The use of fewer steps may reduce the time required for the measurement; however, since the path-length changes required to induce the phase shifts are typically a few hundred nanometers, it is quite difficult to control the path difference so that it corresponds exactly to phase shifts of integer multiples of π/2. By obtaining more fringes at different path differences, the error from this imprecise phase control is reduced, resulting in a calibration effect. In particular, for the 5-step measurement, Iλ gives the same fringe pattern as I0 as long as the path difference λ is perfectly matched to the wavelength of the input light, in which case the aberration expression for the 5-step measurement takes exactly the same form as that for the 4-step measurement. In general, more measurements can be helpful for fine calibration; however, they are also subject to auxiliary errors due to small misalignments or vibrations. This aspect can undermine the advantages of multi-step methods, which may end up giving a completely wrong result [14]. Therefore, an algorithm that offers both measurement speed and accuracy is preferable to other aberration analysis algorithms.

2.3. Deep-learning-based Zernike decomposition with 2-step measurements

The concept of inferring the output via the trained relationship between the input and output, without performing any explicit computation, makes deep learning a favorable candidate for avoiding phase unwrapping during Zernike decomposition. In practice, studies using a convolutional neural network (CNN) trained on the interferometric fringe patterns obtained from the 4-step measurement together with the wrapped phases have shown acceptable operational performance in terms of computational speed and accuracy [16,21,22]. However, considering that at least thousands of aberration samples should be used for the learning process, the fact that such an approach still requires four sequential measurements for each aberration sample can be a critical issue in terms of time consumption and errors accumulated over the large number of measurements.

In this study, we use a CNN that can perform Zernike decomposition based on only two measurements, thus significantly reducing the number of measurements required. In particular, the CNN is made to learn an underlying nonlinear mapping between the two aberration patterns and the corresponding Zernike coefficients from the training dataset. Existing studies that have used a CNN for Zernike decomposition have some limitations in that a large amount of training data is essential to achieve reasonable performance in real-world systems [21,22]. Since measuring aberration patterns from real-world systems is obviously a demanding and time-consuming task, synthetic aberration patterns can be exploited [16]; however, they must somehow be generated massively and automatically while remaining “near-realistic” compared to the real-world aberration patterns. Nevertheless, the use of synthetic data for training still has a problem: a CNN trained on it generally shows poor performance on real aberration patterns if there are significant discrepancies between the synthetic and real-world aberration patterns, as shown in Figs. 3(a) and 3(b), respectively. This is mainly because the synthetic dataset is usually generated by selecting Zernike polynomial orders too arbitrarily, i.e., on the basis of a uniform probability distribution, without taking into account the distribution statistics of the Zernike coefficients of the real-world aberration patterns [16]. In realistic experimental situations, however, MAs usually arise from a few dominant sources among the various phase-distorting factors that can occur when light passes through an optical system, and thus exhibit specific distribution statistics in the corresponding Zernike coefficient strengths.

Fig. 3. Measured and generated interferometric aberration patterns: (a) Measured real-world aberration pattern, (b) synthetic aberration pattern randomly sampled from a uniform probability distribution, and (c) synthetic aberration pattern statistically generated by the near-realistic synthetic data generation under the asymptotic condition.

Therefore, we propose a novel synthetic data generation method, the so-called near-realistic synthetic data generation under the asymptotic condition, to generate “real-world-like” aberration patterns based on the statistics of a finite number of real-world fringe patterns, which are obtained by direct 4-step measurements under a variety of experimental conditions. In fact, the aberration pattern shown in Fig. 3(c) is an example out of the 200,000 synthetic aberrations statistically generated for the proposed 2-step method. Although it is not identical to the real-world aberration pattern shown in Fig. 3(a), it has a very similar character, indicating that it was generated under much more realistic conditions than simple random selection that ignores the statistics of the real-world aberration patterns. This is indeed the main significance of the proposed 2-step method, which, to the best of our knowledge, has not been demonstrated before. The details of how the near-realistic synthetic data generation method is implemented, together with the CNN to be used, are discussed in the following sections.

2.3.1. Near-realistic synthetic data generation under the asymptotic condition

To generate the near-realistic synthetic data, the first step is to acquire a finite number of real-world aberration patterns under a wide range of measurement conditions and extract their relevant statistical distribution parameters, i.e., the mean and variance parameters of the corresponding Zernike coefficients. The whole process of collecting the Zernike coefficients of the real-world aberration patterns is summarized in Fig. 4.

Fig. 4. Collection of the Zernike coefficients from a finite number of real-world aberration patterns.

In order to obtain a finite number of real-world aberration patterns, repeated 4-step aberration measurements are performed using the aforementioned Michelson interferometer. For each measurement, the corresponding Zernike coefficients are extracted using the 4-step method to form ${{\boldsymbol a}_j}$, a $({d + 1} )$-dimensional vector representing the Zernike coefficients collected from the $j$-th 4-step measurement trial. N measurements result in a real-world dataset $\{{{{\boldsymbol a}_j}} \}_{j = 1}^N$. Hereafter, a group of these Zernike coefficient vectors representing the corresponding aberration patterns is called a dataset.

Specifically, the Zernike coefficients are collected using the derivative fitting method, which takes the difference between the first-order derivatives of the Zernike-represented phase and of the wrapped phase as the cost function [15]:

$$\textrm{Cost function} = \mathop \sum \nolimits_{x,y} {\left( {\mathop \sum \nolimits_{i = 1}^n {a_i}\frac{{\partial {Z_i}}}{{\partial x}} - {{\tan }^{ - 1}}\left( {\frac{{\sin \left( {\frac{{\partial \varphi }}{{\partial x}}} \right)}}{{\cos \left( {\frac{{\partial \varphi }}{{\partial x}}} \right)}}} \right)} \right)^2},$$
where φ is the wrapped phase.
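The sketch below is our own simplified rendering of this derivative-fitting idea: the gradients of the wrapped phase are re-wrapped via arctan2(sin, cos) to remove most 2π jumps, and the Zernike coefficients are then obtained by linear least squares against the gradients of the basis functions. Both x and y gradients are stacked here for robustness; this is not the exact procedure of Ref. [15], and all function and argument names are ours.

```python
import numpy as np

def derivative_fit(wrapped_phase, zernike_basis):
    """Least-squares Zernike fit to the gradients of a wrapped phase map.

    wrapped_phase: (H, W) array of wrapped phase values.
    zernike_basis: (K, H, W) array holding Z_1..Z_K on the same grid
                   (piston is excluded since gradients cannot recover it).
    """
    Z = np.asarray(zernike_basis)

    # Re-wrapped finite differences of the measured (wrapped) phase.
    dpx = np.diff(wrapped_phase, axis=1)
    dpy = np.diff(wrapped_phase, axis=0)
    dpx = np.arctan2(np.sin(dpx), np.cos(dpx))
    dpy = np.arctan2(np.sin(dpy), np.cos(dpy))

    # Matching finite differences of each Zernike basis function.
    dzx = np.diff(Z, axis=2)
    dzy = np.diff(Z, axis=1)

    # Stack both gradient directions into one linear system A a = b.
    A = np.concatenate([dzx.reshape(len(Z), -1),
                        dzy.reshape(len(Z), -1)], axis=1).T
    b = np.concatenate([dpx.ravel(), dpy.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```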

The statistical distribution parameters of the measured real-world dataset are then estimated, assuming that it is sampled from the population as follows:

$${\mu _{RW}}: = \mathop \sum \nolimits_{j = 1}^N {{\boldsymbol a}_j}/N,$$
$$\sigma _{RW}^2 = \mathop \sum \nolimits_{j = 1}^N {({{{\boldsymbol a}_j} - {\mu_{RW}}} )^2}/({N - 1} ),$$
where all the arithmetic operations involving vectors are executed in an elementwise manner, ${\mu _{RW}}$ and ${\mathrm{\sigma }_{RW}}$ are the sample mean and sample standard deviation of the corresponding Zernike coefficient vectors, and N is the size of the real-world dataset. In fact, based on these statistical distribution parameters, together with a relevant probability density estimation, a synthetic dataset of arbitrary size may be generated, even though the measured real-world dataset is of finite size. (A further discussion of how to convert these statistical distribution parameters to those obtained under the asymptotic condition follows in the next paragraph.) The Gaussian distribution can be used for the probability density estimation, as it is the maximum-entropy distribution for given first and second moments and is therefore the most robust choice in that sense [30]. In this study, this simplest but effective estimation method is chosen and shows reasonable performance, although various other statistical techniques [31] could be used to estimate distributions from real-world datasets. Since the synthetic dataset is not generated randomly based on a uniform probability distribution, but statistically based on the statistical distribution parameters extracted from the measured real-world dataset, it can reflect a much more realistic nature than the one simply formulated numerically without reference to a real-world dataset, as already verified in Fig. 3. The whole process of generating the near-realistic synthetic dataset is summarized in Fig. 5.
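A minimal sketch of this step is given below: the elementwise sample mean and standard deviation are estimated from the measured coefficient vectors, and synthetic coefficient vectors are then drawn from the corresponding Gaussian distributions. The function name and the random-generator handling are our own; the asymptotic-condition projection discussed below would be applied to the estimated parameters before sampling.

```python
import numpy as np

def estimate_and_sample(real_coeffs, n_synth, rng=None):
    """Estimate elementwise Gaussian parameters and draw synthetic coefficients.

    real_coeffs: array of shape (N, d + 1), one Zernike coefficient vector per
    4-step measurement trial. Returns an (n_synth, d + 1) array of samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu_rw = real_coeffs.mean(axis=0)                 # sample mean
    sigma_rw = real_coeffs.std(axis=0, ddof=1)       # sample std, (N - 1) denominator
    return rng.normal(mu_rw, sigma_rw, size=(n_synth, real_coeffs.shape[1]))
```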

Fig. 5. Near-realistic synthetic data generation under the asymptotic condition: The black cross and shaded area indicate the mean and standard deviation range by the Gaussian density estimation, respectively.

The real-world dataset should be obtained under a variety of different conditions so that the measured aberration patterns represent a wide range of realistic aberrations, and it should also be large enough to yield reliable statistical distribution parameters. To obtain the “true” distributions of the real-world Zernike coefficients, it may be necessary to take an infinite number of measurements, which is practically impossible. Nevertheless, the asymptotic behaviors of the statistical distribution parameters of the measured real-world dataset can be readily derived as its size increases to infinity. Since the mean and variance parameters of the measured real-world dataset can be viewed as those of a dataset sampled from the population, as N goes to infinity, they will eventually converge to those of the population. It is therefore necessary to determine this asymptotic behavior of the statistical distribution parameters of the measured real-world dataset as its size increases. The key to answering this question lies in the rotational symmetry of the Zernike polynomials and the rotational invariance of the statistical distribution parameters of the asymptotic real-world aberration patterns. Note that the asymptotic real-world dataset hereafter means a virtual dataset of real-world aberration patterns obtained with an infinite number of trials under a variety of realistic conditions.

It is worth noting that Zernike polynomials are periodic in the azimuthal direction and therefore have rotational symmetry. This implies that the statistics of any Zernike coefficient should eventually exhibit a circularly symmetric or rotation-invariant distribution as the number of real-world aberration patterns increases to infinity. That is, no particular azimuthal orientation is preferred in the asymptotic real-world dataset, since it must contain infinitely many aberration patterns with infinitely many different azimuthal orientations. Conversely, under the asymptotic condition, a single measurement has the same effect as measuring all the cases with the same aberration pattern but with different orientations. This means that the asymptotic real-world dataset can be obtained even with a single aberration pattern by simply varying its azimuthal orientation, although it may represent too specific a situation. The statistical distribution parameters of the asymptotic real-world dataset can therefore be derived from those of the real-world dataset obtained with a finite number of measurements.

The procedure for implementing this asymptotic condition is slightly different depending on whether the second index for the Zernike coefficients, denoted by m, is 0 or not. When m = 0, the mean and variance of the Zernike coefficients of the measured real-world dataset can be taken as those of the asymptotic real-world dataset, since the Zernike polynomials with m = 0 are obviously independent of azimuthal rotation. When m ≠ 0, the mean of the Zernike coefficients of an individual order of the asymptotic dataset should vanish, while their variance should be identical for the even and odd cases. This follows because the Zernike polynomials with m ≠ 0 are periodic in the azimuthal direction, and the asymptotic real-world dataset must contain an infinite number of different azimuthal orientations. Thus, when m ≠ 0, zero is simply taken as the mean of the Zernike coefficients of the specific order of the asymptotic real-world dataset, while the mean of the variances of the even and odd coefficients of the measured real-world dataset is taken as the variance of the corresponding Zernike coefficients of the asymptotic dataset. Using these “updated” statistical distribution parameters of the asymptotic real-world dataset projected from the measured real-world dataset, a synthetic dataset of any size can be statistically generated together with the aforementioned Gaussian distribution estimation. Since this synthetic dataset contains a wide range of different aberrations spanned by the updated statistical distribution parameters, the aberration analysis capability of the network to be trained with the synthetic dataset is not limited only to the cases of the measured real-world aberrations. In fact, this broad coverage contributes to the general applicability of the proposed deep learning based aberration analysis method, making it a valuable tool for analyzing a wide range of optical systems beyond the ideally corrected ones.
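The following sketch makes this projection concrete for the first six OSA terms. The index-to-m mapping follows the standard OSA convention; the pairing of the even and odd partners and the function name are our own assumptions, not code from the paper.

```python
import numpy as np

# OSA single index -> azimuthal order m for j = 0..5 (see Section 2.1).
OSA_M = {0: 0, 1: -1, 2: 1, 3: -2, 4: 0, 5: 2}

def asymptotic_parameters(mu_rw, sigma_rw):
    """Project finite-sample statistics onto the asymptotic condition.

    m == 0 terms keep their measured mean and variance; m != 0 terms get a
    zero mean, and each even/odd pair (+|m|, -|m|) of the same radial order
    shares the average of the two measured variances.
    """
    mu_arw = mu_rw.copy()
    var_arw = sigma_rw.copy() ** 2
    for j, m in OSA_M.items():
        if m != 0:
            mu_arw[j] = 0.0
    # Average the variances of the even/odd partners (tilt pair, astigmatism pair).
    for j_sin, j_cos in [(1, 2), (3, 5)]:
        shared = 0.5 * (var_arw[j_sin] + var_arw[j_cos])
        var_arw[j_sin] = var_arw[j_cos] = shared
    return mu_arw, np.sqrt(var_arw)
```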

2.3.2. Training method and inference

In deep learning, a CNN is one of the most popular networks and has consistently shown strong performance on image datasets [23]. This capability is therefore applied here to the Zernike decomposition problem. The input of the CNN is a concatenation of fringe patterns obtained from the Michelson interferometer measurements at different OPDs, and the output of the CNN is a vector representing the Zernike coefficients of the corresponding MAs. The number of aberration patterns required for inference is set to two, which is, in fact, the minimum number required to obtain a proper aberration analysis without the conjugate phase retrieval problem [26]. The size of the output vector, i.e., the number of Zernike coefficients considered for MA analysis, denoted by d + 1, is determined by inspecting the overall MA characteristics of the measured real-world aberration patterns since, as mentioned above, in realistic experimental situations MAs usually arise from a few dominant sources among the various phase-distorting factors. The CNN architecture can be separated into two subgroups, as shown in Fig. 6. The first group is a feature extractor that plays an important role in extracting useful visual information from the input fringe patterns. The feature extractor consists of a combination of convolution layers and pooling layers. The second group consists of fully connected layers that predict the Zernike coefficients from the visual features extracted by the feature extractor. In fact, the detailed CNN architecture can have different configurations of the feature extractor and the fully connected layers; for instance, the size of the filters, the number of filters, and the stride of the convolution operations can differ depending on the type of CNN architecture. For each set of Zernike coefficients, two near-realistic synthetic fringe patterns are generated at two different OPDs. The CNN is then trained by minimizing the mean square error between its predictions and the Zernike coefficients of the generated synthetic dataset. In the test phase, the resulting CNN infers the Zernike coefficients from just two aberration patterns obtained from Michelson interferometer measurements at two different OPDs.
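A minimal PyTorch sketch of such a network is given below, using a ResNet101 backbone as the feature extractor and a fully connected regression head. The paper only states that GoogLeNet and ResNet101 were used as feature extractors; the 2-channel input stem, the single-layer head, and the class name are our assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class ZernikeRegressor(nn.Module):
    """Feature extractor (ResNet101 backbone) plus a fully connected head.

    Input:  (B, 2, 256, 256) -- two fringe patterns at different OPDs,
            stacked along the channel axis.
    Output: (B, d + 1) Zernike coefficients (d = 5 in this study).
    """
    def __init__(self, n_coeffs=6):
        super().__init__()
        backbone = resnet101(weights=None)
        # Replace the 3-channel RGB stem with a 2-channel one.
        backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        # Replace the 1000-class classifier with a regression head.
        backbone.fc = nn.Linear(backbone.fc.in_features, n_coeffs)
        self.net = backbone

    def forward(self, fringe_pair):
        return self.net(fringe_pair)

model = ZernikeRegressor()
dummy = torch.randn(1, 2, 256, 256)
print(model(dummy).shape)   # torch.Size([1, 6])
```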

Fig. 6. Architecture of the neural network for the proposed 2-step aberration analysis method.

3. Results and discussion

It is worth noting that the entire computational work of this study, including data collection, training, and network testing, was performed on a computer system with an Intel Core i7-13700KF CPU (3.4 GHz), an NVIDIA RTX 3090 GPU, and 16 GB of memory.

3.1. Collection of real-world aberration patterns, generation of the near-realistic synthetic aberration patterns, and training

As a rule of thumb, 15 different aberration patterns were experimentally obtained under a variety of measurement conditions, i.e., N = 15, using the Michelson interferometer shown in Fig. 2. The asymptotic condition was then applied to the measured real-world aberration patterns to obtain the distribution statistics of their corresponding Zernike coefficients. As discussed in Section 2.3.1, the estimated results for ${\mu _{RW}}$ and ${\mathrm{\sigma }_{RW}}$ are shown in the inset of Fig. 7. In the case of m ≠ 0, the respective ${\mathrm{\sigma }_{RW}}$ values were first estimated for the even and odd cases, and the average of the variances given by these two values was then taken as the asymptotic variance for both cases, as shown in the main illustration of Fig. 7. It was observed that the Zernike coefficients of orders 0 to 5 dominated, while those higher than order 5 usually remained close to zero. This observation clearly indicates that the lens used in our experiment mainly produced x-tilt, y-tilt, defocus, and astigmatism, so a pattern generated by a simple uniform sampling strategy [16] can be very different from a real-world one, as already shown in Fig. 3(b). On the contrary, our method has a strong advantage in that the synthetic patterns are generated from the distribution statistics quantified from the measured real-world aberration patterns, so they are inherently near-realistic, as already shown in Fig. 3(c). In the final step, 200,000 different sets of Zernike coefficients were sampled from the Gaussian distributions defined by ${\mu _{ARW}}$ and ${\mathrm{\sigma }_{ARW}}$, the corresponding asymptotic distribution parameters, and the corresponding aberration patterns were numerically generated at two different OPDs corresponding to phase shifts of 0 rad and π/4 rad, matched to the 2-step measurement. This synthetic dataset was used to train the CNNs in the deep learning process.
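As a sketch of this final step, the snippet below renders one synthetic fringe pair from a sampled coefficient set using the two-beam interference intensity expression of Section 2.2; the phase map can be produced, for example, by the aberration_phase sketch in Section 2.1. Equal arm intensities and the function name are assumptions of ours.

```python
import numpy as np

def fringe_pair(phase_map, shifts=(0.0, np.pi / 4), i_ref=0.5, i_aber=0.5):
    """Two synthetic fringe patterns for one aberration phase map.

    shifts are the two phase offsets used for the 2-step data (0 and pi/4
    here); the two-beam interference intensity I = I_ref + I_aber +
    2*sqrt(I_ref*I_aber)*cos(phase + shift) is evaluated for each shift.
    Returns an array of shape (2, H, W) ready to be fed to the CNN.
    """
    patterns = [i_ref + i_aber + 2 * np.sqrt(i_ref * i_aber)
                * np.cos(phase_map + s) for s in shifts]
    return np.stack(patterns)
```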

Fig. 7. Zernike coefficients estimated by the 4-step derivative fitting method: Dots indicate estimated coefficients for each aberration pattern. Estimated asymptotic mean and standard deviation are marked with a green bold line and a shaded area, respectively. Inset shows the statistical information of the measured real-world aberration patterns.

The input to the CNN was set to a concatenation of two 256 × 256 aberration patterns, since our method only requires two aberration patterns. The output of the CNN was 6 real values corresponding to the 6 Zernike coefficients of the given MAs, i.e., d = 5. In this study, two CNN architectures were tested as feature extractors: GoogLeNet and ResNet101. The CNN was trained by minimizing the mean square errors on the 200,000 synthetic aberrations and their corresponding Zernike coefficients, of which 180,000 and 20,000 aberrations were used for training and validation, respectively. The network was implemented using PyTorch. The Adam optimizer was used to optimize the network with a learning rate of 1e-3. Both networks were trained for 50 epochs. For instance, the training curve of ResNet101 is shown in Fig. 8. For each epoch, the RMSE loss was measured on both the training and validation datasets. The orange dashed and blue dash-dotted lines in Fig. 8 indicate the training and validation losses, respectively. It is worth noting that both training and validation curves tended to decrease with epochs and converged after ∼ 20 epochs. In the test phase, the resulting CNN inferred the Zernike coefficients with two aberration patterns with different OPDs.
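The stated training settings can be condensed into the short PyTorch loop below; the batch size and the data-loading details are our assumptions (the paper does not specify them), and `ZernikeRegressor` refers to the hypothetical model sketched in Section 2.3.2.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def train(model, X, Y, epochs=50, batch_size=64, lr=1e-3):
    """Training-loop sketch: Adam (lr = 1e-3), MSE loss, 90/10 split, 50 epochs.

    X has shape (N, 2, 256, 256) (synthetic fringe pairs) and Y has shape
    (N, 6) (corresponding Zernike coefficients).
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dataset = TensorDataset(X, Y)
    n_val = len(dataset) // 10
    train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size)

    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()

    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)
        print(f"epoch {epoch + 1:2d}: validation MSE = {val_loss:.4e}")
```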

Fig. 8. Training curve of ResNet101 in comparison with the validation counterpart.

3.2. Zernike decomposition of numerically generated aberration patterns

We first investigated the operational performance of the proposed 2-step aberration analysis method on the numerically generated aberrations. 100 sets of aberration patterns and corresponding Zernike coefficients for the first 6 orders were arbitrarily generated based on the given Gaussian density function, and their Zernike decomposition performances in terms of computational speed and accuracy were compared between the proposed 2-step method and the existing 4-step iterative methods, including the derivative fitting [15], the transport of intensity equation (TIE) [17], and the robust TIE [24] methods. Note that for fair comparisons, these numerical test samples were newly generated and had nothing to do with the synthetic aberration patterns used to train the two CNNs. We added a certain amount of Gaussian noise to the generated aberration patterns in order to see the noise robustness of each method. It is worth noting that although the deep learning based methods, such as the proposed 2-step method, could achieve an improvement in noise resilience if the network is trained with additive random noise [16,19], we did not consider including additional random noise in the synthetic dataset for training for a fair comparison of the proposed 2-step method with the other non-learning based iterative methods. In addition, in this situation the ground truth phase information was fully available, so we could readily estimate the RMSEs of the reconstructed phase distributions and their corresponding peak signal-to-noise ratios (PSNRs) between the reconstructed and original aberrations. The results obtained by the proposed 2-step and existing 4-step iterative methods are plotted with respect to the added noise level in Fig. 9.
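For reference, the sketch below shows how the noise injection and the two accuracy metrics can be computed; defining the PSNR peak as the maximum absolute value of the ground-truth phase is our assumption, since the paper does not state its exact definition.

```python
import numpy as np

def add_noise(pattern, sigma, rng=None):
    """Additive Gaussian noise with standard deviation sigma (the noise level)."""
    rng = np.random.default_rng() if rng is None else rng
    return pattern + rng.normal(0.0, sigma, pattern.shape)

def rmse(phase_true, phase_recon):
    """Root mean square error between ground-truth and reconstructed phase."""
    return np.sqrt(np.mean((phase_true - phase_recon) ** 2))

def psnr(phase_true, phase_recon):
    """PSNR in dB, taking the peak as the maximum absolute ground-truth phase."""
    peak = np.max(np.abs(phase_true))
    return 20 * np.log10(peak / rmse(phase_true, phase_recon))
```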

Fig. 9. Phase accuracy of the aberration analysis for 100 sets of numerically generated aberration patterns with respect to the added noise level: (a) RMSE and (b) PSNR.

It is worth noting that the proposed 2-step method together with the use of ResNet101 showed a superior performance in terms of RMSE to all other decomposition methods when the noise level was increased above 0.01, as shown in Fig. 9(a). In particular, at the noise level of 0.05, the ResNet101 model outperformed the derivative fitting, TIE, and robust TIE methods by more than 79%, 35%, and 29%, respectively, in terms of RMSE. Unlike the ResNet101 model, the GoogLeNet model showed relatively poor performance even when the noise level was increased above 0.01. We believe that this result was caused by the difference in model capacity between GoogLeNet and ResNet101, as the latter has more parameters than the former, and therefore has the capacity to learn and remember a larger amount of data than the former [25]. Thus, the simulation result implies that the ResNet101 model equipped with the proposed method showed a robust performance against the different noise levels due to the large learning capacity of the neural network. A similar tendency can also be observed for the PSNR, as shown in Fig. 9(b). It is worth noting that the higher the PSNR, the better the reconstruction, as the PSNR is generally inversely proportional to the RMSE. Therefore, it is reasonable to conclude that ResNet101 performed best when the noise level was greater than 0.01. We also compared the inference times of all the methods by measuring the average inference time during the aberration analysis of the 100 sets of the numerically generated aberration patterns. As summarized in Table 1, the learning-based methods clearly outperformed the existing 4-step iterative methods. In particular, the ResNet101 model showed the fastest inference time. As the noise level was increased, the TIE and robust TIE methods generally required a much larger number of iterations to complete the phase reconstruction. In contrast, the derivative fitting method showed the second best performance in terms of computation time. This is because the derivative fitting method only uses matrix inversion. However, this advantage cannot always be maintained, as the computational cost tends to increase with the number of pixels in the pattern being tested. It is worth noting that, unlike the conventional aberration analysis methods, the learning-based models do not require additional iterations to reconstruct the phases. In this respect, it is reasonable to conclude that the learning-based methods can consistently outperform the existing 4-step iterative methods in terms of robustness to noise and inference time. However, it should be noted that although the simulation results showed reasonable performance, the deep learning based approaches do have some limitations in that if the aberration patterns are significantly different from the training dataset, the trained models tend to perform poorly and to show increased prediction errors. In fact, this problem is known as the distribution shift problem in machine learning [27]. However, this can be mitigated by using domain adaptation or few-shot learning methods [28,29], which can readily adapt the learned network to new situations with only a small amount of additional data.

Table 1. Comparison of the inference times among methods

3.3. Zernike decomposition of experimentally measured aberration patterns

We also investigated the operational performance of the proposed 2-step aberration analysis method on the 15 sets of real-world aberration patterns that had been experimentally obtained by the Michelson interferometer measurements under a variety of conditions and used to extract the asymptotic statistical distribution parameters of the real-world aberration patterns. Note that although the statistical distribution parameters of these measured real-world aberration patterns were used to generate the synthetic aberration patterns, the measured fringe patterns themselves were in no way used to train the network. In addition, in this situation the ground truth phase information was fully unknown, so we could only estimate the normalized RMSEs (NRMSEs) of the reconstructed aberration patterns relative to the experimentally measured aberration patterns. The results obtained by the proposed and existing 4-step iterative methods are summarized in Table 2, and three typical cases are graphically illustrated in Figs. 10, 11, and 12. Although the overall performance of these methods was quantitatively similar, the differences among them were found to be quite significant qualitatively.
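The NRMSE used here can be sketched as follows; normalizing the intensity RMSE by the range of the measured pattern is our assumption, since the paper does not state which normalization it uses.

```python
import numpy as np

def nrmse(measured, reconstructed):
    """Pixel-wise intensity RMSE normalized by the measured intensity range."""
    err = np.sqrt(np.mean((measured - reconstructed) ** 2))
    return err / (measured.max() - measured.min())
```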

Fig. 10. Results of the aberration analysis: (a) Real-world aberration pattern, reconstructed aberration patterns (b) with GoogLeNet, (c) with ResNet101, (d) with derivative fitting, (e) with TIE, and (f) with robust TIE methods, (g) obtained Zernike coefficients, reconstructed aberration phase distributions (h) with GoogLeNet, (i) with ResNet101, (j) with derivative fitting, (k) with TIE, and (l) with robust TIE, respectively.

Fig. 11. Results of the aberration analysis: (a) Real-world aberration pattern, reconstructed aberration patterns (b) with GoogLeNet, (c) with ResNet101, (d) with derivative fitting, (e) with TIE, and (f) with robust TIE methods, (g) obtained Zernike coefficients, reconstructed aberration phase distributions (h) with GoogLeNet, (i) with ResNet101, (j) with derivative fitting, (k) with TIE, and (l) with robust TIE, respectively.

Table 2. Reconstruction errors for measured real-world aberration patterns.

In this situation, as summarized in Table 2, the existing 4-step iterative method based on robust TIE and the proposed 2-step method based on ResNet101 seemed to perform best and second best, respectively. However, an important observation is that the reconstruction errors, i.e., the NRMSEs, were relatively high for all methods compared to those of the numerical simulation results (see Fig. 9). This is due to the fact that the NRMSEs were estimated in terms of the intensity distributions rather than the phase distributions and that there were significant intensity variations in the measured real-world aberration patterns. The latter is well illustrated in Fig. 10(a). That is, since the NRMSEs were simply quantified on the basis of pixel-wise intensity differences, the tendency for the intensity to decrease at the edges of the measured pattern could not be fully accounted for, resulting in a significant increase in the NRMSEs, which ranged from 0.31 to 0.47 for all the methods, including the proposed 2-step methods. Therefore, in the case of the measured real-world aberration patterns, where the ground truth information was not known, a simple quantitative evaluation, such as the NRMSE between the reconstructed and measured aberration patterns, was not sufficient to assess the effectiveness of each method in detail. A qualitative evaluation is therefore also necessary, as described below.

Figures 10, 11, and 12 show three typical results obtained by the proposed and existing methods. For each set of figures, the sub-figure (a) illustrates the measured real-world aberration pattern, the sub-figures (b-f) illustrate the reconstructed aberration patterns by the methods based on GoogLeNet, ResNet101, the derivative fitting, TIE, and robust TIE, respectively, the sub-figure (g) illustrates the magnitudes of the corresponding Zernike coefficients of the reconstructed aberration patterns, and the sub-figures (h-l) illustrate the corresponding phase distributions, respectively.

The results shown in Fig. 10 represent the case where the measured real-world aberration pattern contained a relatively low level of noise, while only showing a tendency for the intensity to decrease at the edges of the measured pattern. In this case, all methods were able to achieve reasonable reconstruction performance, with no significant differences in reconstructed aberration patterns and phase distributions, although the existing 4-step iterative method based on TIE performed slightly worse than the others. In fact, the two proposed 2-step methods were able to demonstrate fully comparable performance to the existing iterative methods, even though the former required only two measurements of aberration patterns instead of four. The effectiveness of the proposed 2-step methods was further highlighted in Figs. 11 and 12, which represent the cases where the measured real-world aberration patterns contained relatively high levels of noise. The existing 4-step iterative methods produced rather peculiar or unrealistic results, causing discontinuities and drastic changes in the reconstructed aberration patterns and phase distributions. However, no such anomalies were observed in the results of the two proposed 2-step methods. In particular, the proposed 2-step method based on ResNet101 produced the reconstructed aberration patterns most similar to the original measured patterns, demonstrating its superiority over the other methods. It should also be noted that even though the existing 4-step iterative method based on the derivative fitting used to extract the statistical distribution parameters of the measured real-world aberration patterns could not satisfactorily reconstruct the aberration pattern and phase distribution, especially for the case of Fig. 12, this does not mean that its validity is questionable. This is because the original measured aberration pattern considered in Fig. 12 had a highly skewed beam intensity distribution together with a high level of noise, so that the derivative fitting method could not perform the Zernike decomposition satisfactorily. The consequences can be readily seen in the large discrepancy between the reconstructed and measured aberration patterns. However, this rather inaccurate prediction could not completely disrupt the distribution statistics of the real-world aberration patterns, as it was only an outlier from a statistical point of view. In fact, such statistical consequences were fully taken into account when generating the synthetic fringe patterns under the asymptotic condition. This is the reason why the two proposed 2-step methods trained by the corresponding deep learning networks were able to produce reasonably well matched reconstructed patterns even when a highly noisy and skewed fringe pattern was tested. This is a clear advantage of the proposed statistically grounded 2-step methodology, in addition to its fast inference time.

Fig. 12. Results of the aberration analysis: (a) Real-world aberration pattern, reconstructed aberration patterns (b) with GoogLeNet, (c) with ResNet101, (d) with derivative fitting, (e) with TIE, and (f) with robust TIE methods, (g) obtained Zernike coefficients, reconstructed aberration phase distributions (h) with GoogLeNet, (i) with ResNet101, (j) with derivative fitting, (k) with TIE, and (l) with robust TIE, respectively.

4. Conclusion

We have proposed a noise robust deep learning based 2-step aberration analysis method that requires only two aberration interference measurements. By statistically analyzing the experimentally measured fringe patterns and their Zernike coefficients using Gaussian density estimation, a real-world-like synthetic dataset of 200,000 different aberrations was numerically generated under the asymptotic condition from 15 sets of real-world aberration patterns obtained by Michelson interferometer measurements under a variety of measurement conditions using the 4-step derivative fitting method [15]. We used two types of CNN architectures, GoogLeNet and ResNet101, for the data training and inference, and verified that only two fringe measurements were sufficient for complete aberration analysis. Through the combinatorial analysis of the aberration decomposition and reconstruction using the proposed method in comparison with the existing 4-step iterative methods based on the derivative fitting, TIE, and robust TIE, we verified that the proposed deep learning based 2-step aberration analysis method was able to outperform the other algorithms in terms of computational speed and noise robustness in both numerical and experimental cases. To the best of our knowledge, there have been no noise robust algorithms that can reconstruct the aberration from only two interference patterns. Although in the present study we have only considered the aberrations represented by the first six Zernike polynomial terms, it is noteworthy that in principle there is no limit to the number of aberration orders that can be considered simultaneously. Of course, other algorithms, such as derivative fitting, TIE, and robust TIE, can be used to decompose an aberration with many higher-order components. However, TIE and robust TIE can have difficulty in computing the matrix inversion because the higher-order aberrations contribute to the formation of a more complicated fringe pattern, resulting in a more complex interference matrix. In addition, higher-order aberrations can cause drastic intensity fluctuations in the fringe pattern, which make the derivative fitting computation time-consuming. In contrast, the proposed deep learning based 2-step aberration analysis method can achieve excellent decomposition and reconstruction performance that surpasses the existing 4-step iterative methods even when higher-order aberrations are extensively involved. We hope that our method will be used as an effective tool for comprehensive aberration analysis and expect that further studies will extend its usefulness and improve its operational performance in terms of algorithm compactness, noise robustness, and computational speed.

Funding

National Research Foundation of Korea (2021R1A5A1032937); Brain Korea 21 Four Program; Institute for Information and Communications Technology Promotion (2021-0-01341); AI Graduate School Program of CAU.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. E. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons, 2019).

2. E. Hecht, Optics (Pearson Education India, 2012).

3. J. C. Wyant, “Optical imaging and aberrations, part i, ray geometrical optics, by Virendra N. Mahajan,” Opt. Photonics News 10, 54–58 (1999). [CrossRef]  

4. H. Gross, Handbook of Optical Systems, Volume 3: Aberration Theory and Correction of Optical Systems (Wiley-VCH, 2005).

5. P. Dey, A. Neumann, and S. Brueck, “Image quality improvement for optical imaging interferometric microscopy,” Opt. Express 29(23), 38415–38428 (2021). [CrossRef]  

6. C. Sheppard and T. Wilson, “Effect of spherical aberration on the imaging properties of scanning optical microscopes,” Appl. Opt. 18(7), 1058–1063 (1979). [CrossRef]  

7. H. Qin, “Aberration correction of a single aspheric lens with particle swarm algorithm,” Opt. Commun. 285(13-14), 2996–3000 (2012). [CrossRef]  

8. J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” Appl. Opt. and Opt. Eng. 11, 28–39 (1992).

9. V. N. Mahajan, “Zernike circle polynomials and optical aberrations of systems with circular pupils,” Appl. Opt. 33(34), 8121–8124 (1994). [CrossRef]  

10. V. Lakshminarayanan and A. Fleck, “Zernike polynomials: a guide,” J. Mod. Opt. 58(7), 545–561 (2011). [CrossRef]  

11. G.-M. Dai, “Zernike aberration coefficients transformed to and from Fourier series coefficients for wavefront representation,” Opt. Lett. 31(4), 501–503 (2006). [CrossRef]  

12. J. C. Zingarelli and S. C. Cain, “Phase retrieval and Zernike decomposition using measured intensity data and the estimated electric field,” Appl. Opt. 52(31), 7435–7444 (2013). [CrossRef]  

13. R. Yazdani, S. Petsch, H. R. Fallah, and M. Hajimahmoodzadeh, “Use of Zernike polynomials and SPGD algorithm for measuring the reflected wavefronts from the lens surfaces,” Int. J. Opt. Photonics 10, 47–53 (2016).

14. E. P. Goodwin and J. C. Wyant, Field Guide to Interferometric Optical Testing (SPIE, 2006).

15. Z. Zhao, H. Zhao, L. Zhang, F. Gao, Y. Qin, and H. Du, “2D phase unwrapping algorithm for interferometric applications based on derivative Zernike polynomial fitting technique,” Meas. Sci. Technol. 26(1), 017001 (2015). [CrossRef]  

16. A. J.-W. Whang, Y.-Y. Chen, C.-M. Chang, Y.-C. Liang, T.-H. Yang, C.-T. Lin, and C.-H. Chou, “Prediction technique of aberration coefficients of interference fringes and phase diagrams based on convolutional neural network,” Opt. Express 28(25), 37601–37611 (2020). [CrossRef]  

17. J. Martinez-Carranza, K. Falaggis, and T. Kozacki, “Fast and accurate phase-unwrapping algorithm based on the transport of intensity equation,” Appl. Opt. 56(25), 7079–7088 (2017). [CrossRef]  

18. J. Schwiegerling, “Review of Zernike polynomials and their use in describing the impact of misalignment in optical systems,” Proc. SPIE 10377, 103770D (2017). [CrossRef]  

19. G. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, “PhaseNet 2.0: Phase unwrapping of noisy data based on deep learning approach,” IEEE Trans. on Image Process. 29, 4862–4872 (2020). [CrossRef]  

20. S. Ma, R. Fang, Y. Luo, Q. Liu, S. Wang, and X. Zhou, “Phase-aberration compensation via deep learning in digital holographic microscopy,” Meas. Sci. Technol. 32(10), 105203 (2021). [CrossRef]  

21. G. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, “PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping,” IEEE Signal Process. Lett. 26(1), 54–58 (2019). [CrossRef]  

22. Z. Zhao, M. Zhou, Y. Du, J. Li, C. Fan, X. Zhang, X. Wei, and H. Zhao, “Robust phase unwrapping algorithm based on Zernike polynomial fitting and Swin-Transformer network,” Meas. Sci. Technol. 33(5), 055002 (2022). [CrossRef]  

23. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017). [CrossRef]  

24. Z. Zhao, H. Zhang, Z. Xiao, H. Du, Y. Zhuang, C. Fan, and H. Zhao, “Robust 2D phase unwrapping algorithm based on the transport of intensity equation,” Meas. Sci. Technol. 30(1), 015201 (2019). [CrossRef]  

25. M. Mohri, A. Rostamizadeh, and A. Talwalkar, Foundations of Machine Learning, Adaptive Computation and Machine Learning Series (MIT Press, 2012).

26. B. Kim, J. Na, J. Kim, H. Kim, and Y. Jeong, “Modal decomposition of fiber modes based on direct far-field measurements at two different distances with a multi-variable optimization algorithm,” Opt. Express 29(14), 21502–21520 (2021). [CrossRef]  

27. J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning (The MIT Press, 2009).

28. J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems 30 (2017).

29. J. Cha, K. Lee, S. Park, and S. Chun, “Domain generalization by mutual-information regularization with pre-trained models,” in European Conference on Computer Vision (Springer, 2022), pp. 440–457.

30. E. T. Jaynes, “On the rationale of maximum-entropy methods,” Proc. IEEE 70(9), 939–952 (1982). [CrossRef]  

31. B. W. Silverman, Density Estimation for Statistics and Data Analysis (CRC Press, 1986).
