
Fast dynamic correction algorithm for model-based wavefront sensorless adaptive optics in extended objects imaging

Open Access

Abstract

A major concern for wavefront sensorless adaptive optics (WFSless AO) is how to improve the algorithm’s efficiency, which is critical for dynamic aberration correction. For extended objects and dynamic aberration, a typical model-based WFSless AO algorithm is called “3N”: it uses three image measurements to estimate each aberration mode and then corrects it immediately. The three images comprise an initial aberrated image and two biased images with deliberately introduced, predetermined positive or negative modal aberrations. In this paper, an improved algorithm called “2N”, which requires only one biased image, is proposed. The elimination of one biased image is achieved by estimating a parameter that is treated as unknown in the 3N algorithm. It is demonstrated that the 2N algorithm converges with fewer image measurements and performs better in dynamic correction.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A conventional adaptive optics (AO) system uses a wavefront sensor (WFS), such as a Shack-Hartmann sensor, to measure the aberration and a wavefront corrector, such as a deformable mirror (DM), to compensate for it. In wavefront sensorless (WFSless) AO, however, the dedicated WFS is discarded and the wavefront corrector is driven to iteratively optimize an image quality metric. The optimization algorithm is expected to find the metric’s global extremum efficiently, i.e., with a minimum number of image measurements and wavefront corrector deformations. For dynamic aberration correction, an efficient optimization algorithm leads to a high control bandwidth [1]. For fluorescence biomicroscopy, a small number of exposures is preferred to reduce the risk of phototoxicity and photobleaching [2,3].

Various optimization algorithms for WFSless AO have been proposed, either model-free [4–6] or model-based [7–14]. The common disadvantage of model-free algorithms is that a large number of iterations is usually needed to converge. Model-based algorithms are generally more efficient because they exploit a deterministic relationship between a set of modes and a well-chosen metric function. The aberration modes are estimated from a sequence of images captured in turn after introducing predetermined modal biases into the system using a wavefront corrector. For point source imaging, several model-based algorithms have been proposed in Refs. [7–10]. For extended objects imaging, Débarre et al. proposed an algorithm that uses the low spatial frequency content of the image as the metric function and performs 2N+1 image measurements to simultaneously correct N Lukosz modes [11]. Recently, we proposed the “N+2” algorithm, which uses the same metric and N+2 images to simultaneously correct N aberration modes [12], almost two times faster than the 2N+1 algorithm. Moreover, the N+2 algorithm is insensitive to the type of bias modes, providing more flexibility in choosing the set of modes for correction. Using the linear relationship between the second moment of the image and the averaged square of the phase gradient, Yang et al. proposed a method using N+1 image measurements to simultaneously correct N modes [13]. However, simultaneous correction algorithms, including 2N+1, N+2 and N+1, are only suitable for static or slowly changing aberrations since they rely on the assumption that the aberration is quasi-static during the whole process of image measurement. For dynamic aberration, it is better to use a sequential (mode-by-mode) correction scheme [1,10]. A typical sequential correction algorithm is called “3N”: it uses three image measurements to estimate each aberration mode and then corrects it immediately [14]. The three images comprise an initial aberrated image and two biased images with deliberately introduced positive or negative bias modes. With the 3N algorithm, the aberration is only required to be quasi-static during the three image measurements.

In this paper, an improved sequential correction algorithm called “2N” is proposed for extended objects imaging. Unlike the 3N algorithm, which uses two biased images, the 2N algorithm needs only one biased image. The time delay between corrections is thus reduced from 3T to 2T, where T is roughly the time of a single image measurement. The elimination of one biased image is achieved by estimating a parameter that is treated as unknown in the 3N algorithm.

2. Principle of 2N algorithm

To better understand the proposed 2N algorithm, we first briefly review the 2N+1 and 3N algorithms. Following the notation in Ref. [11], the integral of the low spatial frequency content of the image is given by

$$f\textrm{ = }\int_0^{2\pi } {\int_{{M_1}}^{{M_2}} {{S_J}({m,\xi } )} } \;m{\kern 1pt} {\kern 1pt} \textrm{d}m{\kern 1pt} {\kern 1pt} \textrm{d}\xi$$
where SJ is the image spectral density; m is the spatial frequency; ξ is the polar coordinate angle. The region of integration is an annulus with an inner radius of M1 and an outer radius of M2. When M2 is small, the metric function, defined as the reciprocal of Eq. (1), can be related to the pupil aberration Φ as
$$g = {f^{ - 1}} \approx {q_2} + \frac{{{q_3}}}{\pi }\int\!\!\!\int_P {{{|{\nabla {\mathbf \Phi }} |}^2}\textrm{d}A}$$
where $\nabla $ is the gradient operator; P denotes the pupil area. ${q_2}$ and ${q_3}$ are parameters related to the imaging system and objects as
$${q_2}\textrm{ = }{1 / {{q_0}}}\; \quad {q_3} = {{{q_1}} / {{q_0}^2}}$$
where
$${q_0} = \int_0^{2\pi } {\int_{{M_1}}^{{M_2}} {{S_{{J_0}}}({\mathbf m} )\;m{\kern 1pt} {\kern 1pt} \textrm{d}m{\kern 1pt} {\kern 1pt} \textrm{d}\xi } }$$
$${q_1} = \frac{1}{2}\int_0^{2\pi } {\int_{{M_1}}^{{M_2}} {\frac{{{S_{{J_0}}}({\mathbf m} )}}{{{H_0}({\mathbf m})}}{m^3}{\kern 1pt} \textrm{d}m{\kern 1pt} {\kern 1pt} \textrm{d}\xi } }$$
where ${\mathbf m} = (m\cos \xi ,m\sin \xi )$; ${H_0}({\mathbf m} )\in [{0,1} ]$ is the normalized diffraction-limited optical transfer function (OTF); ${S_{{J_0}}}$ is the diffraction-limited image’s spectral density. The parameter ${q_2}$ is simply the metric function in the absence of aberration.
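As a concrete illustration, the metric of Eqs. (1)–(2) can be evaluated numerically from a captured image. The sketch below (Python/NumPy) is ours, not from the paper; the frequency normalization and the default annulus limits are assumptions chosen to match the simulation section:

```python
import numpy as np

def low_freq_metric(image, m1=1/128, m2=5/128):
    """Metric g = 1/f of Eqs. (1)-(2): f integrates the image spectral
    density S_J over the annulus m1 <= m <= m2 of normalized spatial
    frequency. On a Cartesian FFT grid, summing the pixels inside the
    annulus approximates the integral of S_J * m dm dxi (dA = m dm dxi)."""
    n = image.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))**2   # S_J
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    m = 2.0 * np.hypot(fx, fy)       # normalize so Nyquist -> 1 (assumed)
    mask = (m >= m1) & (m <= m2)     # annular integration region
    f = spec[mask].sum()
    return 1.0 / f
```

Scaling the image intensity by a constant c scales the spectral density by c² and hence g by 1/c², so in practice the metric is compared between images taken under identical illumination.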

The aberration can be expanded by a set of modes {Xi} as

$${\mathbf \Phi }({\mathbf r} )= \sum\limits_{i = 1}^N {{a_i}{{\mathbf X}_i}({\mathbf r} )}$$
where ai denotes the modal coefficient.

Substituting Eq. (6) into Eq. (2), we have

$$g \approx {q_2} + \frac{{{q_3}}}{\pi }\int\!\!\!\int_P {{{\left|{\sum\limits_{i = 1}^N {{a_i}\nabla {{\mathbf X}_i}} } \right|}^2}\textrm{d}A}$$
If the gradients of modes {Xi} are orthogonal, i.e.
$$\frac{1}{\pi }\int\!\!\!\int\limits_P {\nabla {{\mathbf X}_i} \cdot \nabla {{\mathbf X}_j}dA} = {\alpha _{ij}}{\delta _{ij}}$$
where ${\delta _{ij}}$ is the Kronecker delta, then Eq. (7) can be simplified as
$$g \approx {q_2} + {q_3}\sum\limits_{i = 1}^N {a_i^2{\alpha _{ii}}}$$
In Ref. [11], {Xi} are chosen as Lukosz modes, whose gradients are orthonormal (i.e., ${\alpha _{ii}} = 1$); then ${a_i}$ can be estimated independently from three image measurements as
$${a_i} = \frac{{b({g_ + } - {g_ - })}}{{2{g_ + } - 4{g_0} + 2{g_ - }}}$$
where ${g_0}$ is the unbiased metric function; ${g_ + }$ and ${g_ - }$ are the metric functions with positive bias and negative bias respectively; b is the bias amplitude.
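In code, the estimate of Eq. (10) is a one-line quadratic (parabolic) fit through the three metric values; the sketch below is ours, valid under the quadratic metric model of Eq. (9):

```python
def estimate_coeff_3n(g0, g_plus, g_minus, b):
    """3N modal estimate (Eq. 10): quadratic interpolation through the
    unbiased metric g0 and the metrics g+/- measured with bias +b / -b.
    Returns the estimated modal coefficient a_i."""
    return b * (g_plus - g_minus) / (2.0 * g_plus - 4.0 * g0 + 2.0 * g_minus)
```

For an exactly quadratic metric g(a) = q2 + q3 a², the estimate recovers the coefficient exactly regardless of q2, q3 and b.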

To correct N aberration modes, one can apply correction simultaneously after all modes are estimated, requiring 2N+1 images in one correction cycle. Alternatively, sequential correction can be implemented, requiring 3N images in one correction cycle since the unbiased metric must be updated after each mode’s correction.

Here we provide an alternative way to solve for the modal coefficients from Eq. (9). By introducing a positive modal bias Xj with known amplitude b, the parameter ${q_2}$ can be eliminated:

$${g_{j + }} - {g_0} \approx {q_3}({2b{a_j}{\alpha_{jj}} + {b^2}{\alpha_{jj}}} ).$$
In the 2N+1, 3N and N+2 algorithms, ${q_3}$ is treated as an unknown parameter, so an additional negative bias is required to cancel it. However, if ${q_3}$ is known or can be well estimated, the negative bias is no longer necessary. According to Eq. (4) and Eq. (5), ${q_3}$ is determined by H0 and ${S_{{J_0}}}$. H0 is known since it depends solely on the pupil shape. ${S_{{J_0}}}$ is related to the object’s diffraction-limited image and is usually unknown. What we can measure is the aberrated image’s spectral density ${S_J}$. If we use ${S_J}$ to replace ${S_{{J_0}}}$, ${q_3}$ can be estimated by
$${\hat{q}_3}\textrm{ = }\frac{{{{\hat{q}}_1}}}{{\hat{q}_0^2}}\textrm{ = }\frac{{\frac{1}{2}\int_0^{2\pi } {\int_{{M_1}}^{{M_2}} {{{{S_J}({\mathbf m} )} / {{H_0}({\mathbf m} )}}\;{m^3}{\kern 1pt} \textrm{d}m{\kern 1pt} {\kern 1pt} \textrm{d}\xi } } }}{{{{\left( {\int_0^{2\pi } {\int_{{M_1}}^{{M_2}} {{S_J}({\mathbf m} )\;m{\kern 1pt} {\kern 1pt} \textrm{d}m{\kern 1pt} {\kern 1pt} \textrm{d}\xi } } } \right)}^2}}}.$$
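Numerically, this estimate amounts to two annular sums over the measured image spectrum. The sketch below is our illustration, sharing the assumed frequency normalization; by default it takes H0 ≈ 1, which holds at the low frequencies used here:

```python
import numpy as np

def estimate_q3(image, m1=1/128, m2=5/128, H0=None):
    """Estimate q3 per Eq. (12) from the aberrated image itself,
    substituting the measured S_J for the unknown S_J0 in Eqs. (4)-(5).
    On a Cartesian grid the measure m dm dxi becomes dA, so the m^3
    integrand of q1 keeps a residual factor m^2 and the m integrand
    of q0 becomes a plain sum."""
    n = image.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))**2    # S_J
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    m = 2.0 * np.hypot(fx, fy)        # normalized frequency (assumed)
    if H0 is None:
        H0 = np.ones_like(m)          # OTF ~ 1 at low spatial frequencies
    mask = (m >= m1) & (m <= m2)
    q1_hat = 0.5 * np.sum(spec[mask] / H0[mask] * m[mask]**2)
    q0_hat = np.sum(spec[mask])
    return q1_hat / q0_hat**2
```

Note that q̂3 scales inversely with the overall image intensity (spec ∝ c² gives q̂3 ∝ 1/c²), consistent with q3 = q1/q0² in Eq. (3).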
We now show that Eq. (12) is a reasonable estimate under certain conditions. From Ref. [11], the aberrated image’s spectral density at low spatial frequency is given by
$${S_J}({\mathbf m} )= \left[ {{H_0}{{({\mathbf m})}^2} - \frac{{{H_0}({\mathbf m})}}{\pi }\int\!\!\!\int_P {{{({{\mathbf m} \cdot \nabla {\mathbf \Phi }} )}^2}\textrm{d}A} } \right]{S_T}({\mathbf m} )$$
where ${S_T}({\mathbf m} )$ is the object’s spectral density and is related to ${S_{{J_0}}}$ as
$${S_{{J_0}}}({\mathbf m} )\textrm{ = }{H_0}{({\mathbf m})^2} \cdot {S_T}({\mathbf m} ).$$
Substituting Eq. (14) into Eq. (13), we obtain
$$\begin{aligned} \frac{{{S_J}({\mathbf m} )}}{{{S_{{J_0}}}({\mathbf m} )}} & = 1 - \frac{{\int\!\!\!\int_P {{{({{\mathbf m} \cdot \nabla {\mathbf \Phi }} )}^2}\textrm{d}A} }}{{\pi {H_0}({\mathbf m})}}\\ & = 1 - \frac{{{m^2}\int\!\!\!\int_P {{{|{\nabla {\mathbf \Phi }} |}^2}[{1 + \cos ({2\xi - 2\chi } )} ]\textrm{d}A} }}{{2\pi {H_0}({\mathbf m})}}\\ & \ge 1 - \frac{{{m^2}\sum\nolimits_i {a_i^2} }}{{{H_0}({\mathbf m})}} \end{aligned}$$
where $\chi$ is the polar angle of $\nabla {\mathbf \Phi }({\mathbf r} )$.

On the condition that the spatial frequency m and the aberration magnitude are small, ${H_0}({\mathbf m})$ will be close to 1 and ${S_J}$ close to ${S_{{J_0}}}$. In this case, Eq. (12) should be a reasonable estimate of ${q_3}$. Fortunately, the above constraints on spatial frequency and aberration are not additional requirements; they are the basic assumptions made by model-based algorithms for extended objects in order to derive Eq. (2) [11,12]. In practice, ${\hat{q}_3}$ can be updated after each correction and converges to the ground truth very quickly. Using Eq. (11), the modal coefficient can be estimated by

$${a_j} = \frac{{{{({{g_{j + }} - {g_0}} )} / {{{\hat{q}}_3}}} - {b^2}{\alpha _{jj}}}}{{2b{\alpha _{jj}}}}.$$
From Eq. (16), each aberration mode can be estimated from two image measurements: one with a positive bias and the other without bias. Under the sequential correction scheme, the number of images required in one correction cycle is thus reduced from 3N to 2N.
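The 2N estimate of Eq. (16) can be sketched as follows (our illustration; `alpha` is the gradient norm α_jj, equal to 1 for gradient-orthonormal modes):

```python
def estimate_coeff_2n(g0, g_plus, b, q3_hat, alpha=1.0):
    """2N modal estimate (Eq. 16): one biased metric g_plus plus the
    unbiased metric g0 suffice once q3 is replaced by its estimate
    q3_hat, eliminating the negative-bias measurement of 3N."""
    return ((g_plus - g0) / q3_hat - b**2 * alpha) / (2.0 * b * alpha)
```

With an exact q3 and a quadratic metric g(a) = q2 + q3 a², Eq. (16) recovers the coefficient exactly; any error in q̂3 propagates into the estimate, which is why q̂3 is refined after each correction cycle.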

3. Simulation

As mentioned before, simultaneous correction algorithms such as 2N+1 assume that the aberration does not change during the whole process of image measurement, making them unsuitable for dynamic correction. Sequential correction algorithms such as 3N and 2N greatly reduce the time delay between corrections. A schematic representation of the correction timelines of the 2N+1, 3N and 2N algorithms is shown in Fig. 1. The time of a single image measurement (including image capture and metric evaluation) is T1. The time of applying bias mode voltages or correction voltages is T2. The 2N algorithm should have the best dynamic correction performance as it has the shortest time delay.


Fig. 1. The correction timelines of 2N+1, 3N and 2N algorithms.


The foundation of the 2N algorithm is using the aberrated image to estimate the parameter ${q_3}$. We simulated the influence of the integration region and aberration magnitude on the estimation. The simulation system is the same as that used in Ref. [12], where a 37-channel DM is used as the wavefront corrector. Although Lukosz modes were originally employed in the 2N+1 and 3N algorithms to represent the aberration, gradient orthogonal mirror modes (GOMM) are a better choice in practice as the DM fitting error can be entirely avoided [12,15,16]. In the following simulation and experiment, the GOMM are used to represent the aberration and are introduced to generate biased images. We have normalized the GOMM so that the RMS (root-mean-square) of each term has a value of 1 rad over the pupil.

For one hundred aberration samples whose RMS values are normalized to 1 rad, 2 rad or 3 rad, the ratio of ${\hat{q}_3}$ to ${q_3}$ as a function of the integral’s upper limit M2 is plotted in Fig. 2. The extended object is a USAF resolution test chart sampled at 256×256 pixels. The cut-off spatial frequency of the imaging system is normalized to 1. The lower limit M1 is set to 1/128 to exclude the central peak of the image spectrum. From Fig. 2, ${{{{\hat{q}}_3}} / {{q_3}}}$ can significantly deviate from 1 when M2 and the aberration are too large. For small aberrations, ${\hat{q}_3}$ is not sensitive to M2. For large aberrations, a small M2 can be chosen empirically to obtain a better estimate of ${q_3}$. We set M2 to 5/128 hereafter.


Fig. 2. For aberrations with different RMS, ${{{{\hat{q}}_3}} / {{q_3}}}$ varies with the integral’s upper limit M2. The asterisk, triangle and circle denote the mean value. The vertical bar contains 90% of the data points.


The static aberration correction accuracy of the 2N algorithm is compared with that of the 3N algorithm by simulation. For 100 random aberration samples with RMS normalized to 3 rad, the RMS of the residual aberrations after applying the two algorithms is compared in Fig. 3. It is shown that the aberrations can be almost fully corrected after two correction cycles using either algorithm. However, the 2N algorithm requires a third fewer image measurements and thus converges faster than the 3N algorithm.


Fig. 3. The RMS of the residual aberration varies with the number of image measurements using the 2N or 3N algorithm. The solid curves represent the mean value. The shaded area covers all 100 samples. N=15.


To investigate the dynamic correction performance of the 2N and 3N algorithms, a sequence of time-varying periodic aberration samples is produced by

$${\mathbf \Phi }(t )= {\mathbf XA}\sin ({\omega \;t + {\mathbf \varphi }} )\textrm{ + }{\mathbf X}{{\mathbf A}_{\mathbf 0}}$$
where A and A0 are random coefficients of the dynamic and constant components of the aberration, respectively; φ is the initial phase; ω is the angular frequency, which can be adjusted to simulate aberrations with different variation periods; X denotes the GOMM aberration modes.
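The aberration model of Eq. (17) can be sketched directly (our illustration; the coefficient vectors and frequency are free parameters, not values fixed by the paper):

```python
import numpy as np

def dynamic_coeffs(t, A, A0, phi, omega):
    """Time-varying modal coefficients per Eq. (17):
    a(t) = A * sin(omega * t + phi) + A0.
    The pupil phase is then Phi(t) = X @ a(t) for the GOMM basis X."""
    return A * np.sin(omega * t + phi) + A0
```

Sampling `dynamic_coeffs` at the instants of each image measurement (spaced by T1 + T2) reproduces the aberration sequence seen by the sequential correction loop.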

In the dynamic simulation, the time of image measurement T1 is set to 3 ms and the time of applying bias or correction T2 to 1 ms. The first 10 GOMMs are used to generate the aberration and |A0| is fixed at 3 rad. For ω = 1.1π rad/s, |A| = 2.8 rad and bias amplitude b = 0.3 rad, the variation of the aberration’s RMS before and after correction using the 3N or 2N algorithm is depicted in Fig. 4(a). As the simulated dynamic aberration is periodic, the residual aberration after correction is also periodic. The 2N algorithm has a smaller residual aberration than the 3N algorithm throughout the correction as it has a shorter time delay.


Fig. 4. The simulation results of dynamic aberration correction using 3N and 2N algorithms. (a) The variation of aberration’s RMS before and after correction. Corrected aberration’s RMS varies with (b) the angular frequency, (c) aberration amplitude and (d) bias amplitude.


The algorithms’ dynamic correction performance depends on the frequency and amplitude characteristics of the time-varying aberrations. The aberration’s mean RMS after correction as a function of the angular frequency and the aberration amplitude is plotted in Fig. 4(b) and Fig. 4(c), respectively. As the frequency and amplitude of the aberration increase, the performance of both methods degrades, but the 2N algorithm is always better than the 3N algorithm, indicating that the correction bandwidth and working range of the 2N algorithm are larger than those of the 3N algorithm.

Furthermore, we found that the bias amplitude also affects the dynamic correction performance, as shown in Fig. 4(d). In principle, the variation of the metric function caused by the introduced bias mode should be significantly larger than that caused by the dynamic variation of the aberration between corrections. As the time delay is longer in the 3N algorithm, a larger bias amplitude is required. From Fig. 4(d), the bias amplitude should be larger than 0.1 rad for the 2N algorithm and larger than 0.5 rad for the 3N algorithm. In practice, a smaller bias is preferred to obtain a smoother image variation and to avoid exceeding the limited stroke of the DM.

4. Experimental demonstration

The algorithms are further tested by experiments. The experimental system layout is shown in Fig. 5. The extended object, illuminated by an LED (λ0 = 625 nm), is a resolution target. DM1 (Thorlabs, DMP40) is a 40-actuator piezoelectric DM used to generate static or dynamic aberrations. DM2, which is conjugated to DM1, is a 37-channel membrane DM (OKO Tech) used for correction. The focal plane camera is a 12-bit CMOS sensor with 5.86 µm pixel size (Point Grey, GS3-U3-23S6M-C). The influence functions of the two DMs were measured with a commercial Shack-Hartmann WFS (Imagine Optic, HASO3-76GE). The system can also be calibrated without a WFS if one is unavailable.


Fig. 5. Experimental system layout. LED: Light Emitting Diode. DM: Deformable Mirror. S-H WFS: Shack-Hartmann wavefront sensor. BS: Beam splitter. L1∼L6: achromatic doublet lens.


We first compare the performance of the 2N and 3N algorithms on a static aberration. The blurred image with a metric function value of 3.895×10⁵ in Fig. 6(a1) is caused by the aberration consisting of the Zernike modes depicted in Fig. 6(a2). Figure 6(a3) is the corrected image based on the WFS measurement, which can be viewed as the ground truth. The corrected images and their metric function values after each WFSless correction cycle are shown in Fig. 6(b1∼b3) for the 3N algorithm and in Fig. 6(c1∼c3) for the 2N algorithm. As N is 10, the required number of image measurements in one correction cycle is 30 for 3N and 20 for 2N. Both algorithms converge to a similar metric function value after two correction cycles. However, the total number of image measurements is a third less with the 2N algorithm.


Fig. 6. Static aberration correction results. Blurred image (a1), corresponding Zernike coefficients (a2) and corrected image based on WFS measurement (a3). Corrected images after three correction cycles using the 3N (b1∼b3) and 2N (c1∼c3) algorithms. N=10.


Next, we compare the dynamic correction performance of the 2N and 3N algorithms. DM1 is used to generate periodic aberrations as described by Eq. (17). The aberrations consist of the Zernike modes Z4 to Z8, and the bias modes are the first 10 GOMMs. The dynamic correction results are plotted in Fig. 7, and the collected images before and after correction are displayed in Visualization 1 as a supplement. The image quality of the 2N algorithm is better than that of the 3N algorithm throughout the correction process, which is consistent with the simulation results.


Fig. 7. Dynamic correction results of 3N and 2N algorithms. N=10.


5. Conclusion

In conclusion, we proposed a fast WFSless dynamic correction algorithm called 2N for extended objects imaging and compared it with the traditional 3N algorithm. The basic idea is to use the aberrated images to estimate the unknown parameter in the metric function. It is demonstrated that the 2N algorithm converges with fewer image measurements and performs better in dynamic correction. To further accelerate the correction, one may correct only the dominant aberration modes if prior knowledge of the aberration is available, or develop a control method able to predict the varying aberration [17]. In our experiment, a WFS was needed to calibrate the DM, which might be a limitation in practice as a WFS may be unavailable. In this case, data-driven calibration methods that do not require a WFS can be used [18–20].

Funding

National Natural Science Foundation of China (11874087).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Dong, Y. Li, X. Han, and B. Hu, “Dynamic aberration correction for conformal window of high-speed aircraft using optimized model-based wavefront sensorless adaptive optics,” Sensors 16(9), 1414 (2016). [CrossRef]  

2. R. A. Hoebe, C. H. Van Oven, T. W. J. Gadella, P. B. Dhonukshe, C. J. F. Van Noorden, and E. M. M. Manders, “Controlled light-exposure microscopy reduces photobleaching and phototoxicity in fluorescence live-cell imaging,” Nat. Biotechnol. 25(2), 249–253 (2007). [CrossRef]  

3. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014). [CrossRef]  

4. M. A. Vorontsov, G. W. Carhart, and J. C. Ricklin, “Adaptive phase-distortion correction based on parallel gradient-descent optimization,” Opt. Lett. 22(12), 907–909 (1997). [CrossRef]  

5. P. Yang, M. Ao, Y. Liu, B. Xu, and W. Jiang, “Intracavity transverse modes controlled by a genetic algorithm based on Zernike mode coefficients,” Opt. Express 15(25), 17051–17062 (2007). [CrossRef]  

6. Q. Yang, J. Zhao, M. Wang, and J. Jia, “Wavefront sensorless adaptive optics based on the trust region method,” Opt. Lett. 40(7), 1235–1237 (2015). [CrossRef]  

7. M. J. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14(4), 1339–1352 (2006). [CrossRef]  

8. M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett. 32(1), 5–7 (2007). [CrossRef]  

9. L. Huang and C. Rao, “Wavefront sensorless adaptive optics: a general model-based approach,” Opt. Express 19(1), 371–379 (2011). [CrossRef]  

10. W. Lianghua, P. Yang, Y. Kangjian, C. Shanqiu, W. Shuai, L. Wenjing, and B. Xu, “Synchronous model-based approach for wavefront sensorless adaptive optics system,” Opt. Express 25(17), 20584–20597 (2017). [CrossRef]  

11. D. Débarre, M. J. Booth, and T. Wilson, “Image based adaptive optics through optimisation of low spatial frequencies,” Opt. Express 15(13), 8176–8190 (2007). [CrossRef]  

12. H. Ren and B. Dong, “Improved model-based wavefront sensorless adaptive optics for extended objects using N + 2 images,” Opt. Express 28(10), 14414–14427 (2020). [CrossRef]  

13. H. Yang, O. Soloviev, and M. Verhaegen, “Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects,” Opt. Express 23(19), 24587–24601 (2015). [CrossRef]  

14. A. Facomprez, E. Beaurepaire, and D. Débarre, “Accuracy of correction in modal sensorless adaptive optics,” Opt. Express 20(3), 2598–2612 (2012). [CrossRef]  

15. B. Wang and M. J. Booth, “Optimum deformable mirror modes for sensorless adaptive optics,” Opt. Commun. 282(23), 4467–4474 (2009). [CrossRef]  

16. H. Ren, B. Dong, and Y. Li, “Alignment of the active secondary mirror of a space telescope using model-based wavefront sensorless adaptive optics,” Appl. Opt. 60(8), 2228–2234 (2021). [CrossRef]  

17. P. Piscaer, O. Soloviev, and M. Verhaegen, “Predictive wavefront sensorless adaptive optics for time-varying aberrations,” J. Opt. Soc. Am. A 36(11), 1810–1819 (2019). [CrossRef]  

18. J. Antonello, M. Verhaegen, R. Fraanje, T. van Werkhoven, H. C. Gerritsen, and C. U. Keller, “Semidefinite programming for model-based sensorless adaptive optics,” J. Opt. Soc. Am. A 29(11), 2428–2438 (2012). [CrossRef]  

19. A. Thayil and M. J. Booth, “Self calibration of sensorless adaptive optical microscopes,” J. Eur. Opt. Soc. Rapid Publ. 6, 11045 (2011). [CrossRef]  

20. D. Débarre, A. Facomprez, and E. Beaurepaire, “Assessing correction accuracy in image-based adaptive optics,” Proc. SPIE 8253, 82530F (2012). [CrossRef]  

Supplementary Material (1)

Visualization 1: The left video shows the blurred images induced by periodic dynamic aberrations generated by a deformable mirror. The middle video shows the images corrected by the proposed 2N algorithm. The right video shows the images corrected by the original 3N algorithm.
