Optica Publishing Group

Digital adaptive optics confocal microscopy based on iterative retrieval of optical aberration from a guidestar hologram

Open Access

Abstract

Guidestar-hologram-based digital adaptive optics (DAO) is a recently emerging active imaging modality. It records each complex distorted line field reflected or scattered from the sample in an off-axis digital hologram, measures the optical aberration from a separate off-axis digital guidestar hologram, and removes the optical aberration from the distorted line fields by numerical processing. In previously demonstrated DAO systems, the optical aberration was directly retrieved from the guidestar hologram by taking its Fourier transform and extracting the phase term. With this direct retrieval method (DRM), when the sample does not coincide with the guidestar focal plane, the accuracy of the retrieved optical aberration decays rapidly, leading to quality deterioration of the corrected images. To tackle this problem, we explore here an image-metric-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. Using an aberrated objective lens and scattering samples, we demonstrate that MIM can improve the accuracy of the aberrations retrieved from both focused and defocused guidestar holograms, compared to DRM, thereby improving the robustness of DAO.

© 2017 Optical Society of America

1. Introduction

Optical imaging of human retinal photoreceptors is prevented from achieving diffraction-limited resolution by ocular aberrations. Hardware-based adaptive optics (HAO), which originated to compensate for atmospheric turbulence in telescope systems [1,2], was successfully adapted to measure and correct the ocular aberrations of human eyes [3]. With the aid of HAO, individual retinal photoreceptors could be resolved by a wide-field ophthalmoscope [3]. HAO was later introduced to other imaging modalities, such as the scanning laser ophthalmoscope (SLO) [4,5] and optical coherence tomography (OCT) [6]. Compared to the wide-field ophthalmoscope, HAO SLO has achieved improved retinal image contrast and enabled optical sectioning, owing to its confocal configuration, which rejects light scattered from out-of-focus regions of the sample. Despite these successes, HAO imaging systems require complicated opto-mechanical hardware, such as a Shack-Hartmann wavefront sensor and a deformable mirror. Practical operation of such sophisticated imaging systems is difficult, which limits their wide deployment in laboratory and clinical settings.

To mitigate the complexity of the opto-mechanical system and the related electronic control required by HAO, digital adaptive optics (DAO) was recently demonstrated in wide-field imaging systems [7–9]. In a DAO imaging system, the complex field of the image is recorded in an off-axis digital hologram or a full-field hologram and numerically reconstructed by a typical digital holographic reconstruction method [10,11]. A narrow beam is then sent into the objective lens to form a focused beacon on the sample, generating a guide star; the light scattered back from the guide star travels through the objective lens and carries the optical aberration that occurs at the pupil plane. The optical field arriving at the image plane from this guide star is recorded in an off-axis hologram, and the optical aberration can be reconstructed from this guidestar hologram through numerical processing. Finally, the optical aberration measured from the guidestar hologram is numerically removed from the distorted complex image. By using the off-axis digital holographic method [11], hardware such as the wavefront sensor and deformable mirror, and the coordination between them, can thus be eliminated from a DAO imaging system. In the original DAO imaging system [7], the CCD is placed at the image plane of the aberration plane, which needs to be accurately adjusted to achieve accurate measurement and correction of the optical aberration. To overcome this limitation, a Fourier-transform DAO system and DAO for arbitrary optical systems were demonstrated in [8,9]. Due to the use of coherent light sources, DAO wide-field imaging suffers from strong coherent noise, and it also lacks optical sectioning capability. To address these issues, a DAO confocal microscope was proposed [12], which applies the confocal configuration to reject out-of-focus light. DAO adopts a line-scanning confocal configuration [13], instead of a point-scanning confocal configuration [14,15], to improve the imaging speed.
In line-scanning DAO, each line scan is recorded in an off-axis digital line hologram, and the optical aberration is obtained from a separate off-axis digital guidestar hologram. The distorted line fields are restored in a line-by-line manner, and the final confocal image is obtained by assembling these recovered line fields in sequence. Because there is no closed-loop feedback as in an HAO imaging system, the optical aberration retrieved from the guidestar hologram solely determines the quality of the final corrected confocal images. In previous embodiments of DAO, the optical aberration was directly retrieved by taking the Fourier transform of the guidestar hologram and then extracting the phase information. In early studies [12], we found that this direct retrieval method (DRM) encounters a practical limitation: when the sample does not coincide with the guidestar focal plane (the beam waist plane), the accuracy of the optical aberration retrieved by DRM decays, leading to rapid deterioration of the corrected images.
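The direct retrieval just described — take the Fourier transform of the off-axis hologram, isolate the image-order sideband around its carrier frequency, and read off the phase — can be sketched in a few lines. This is a minimal illustration under assumed array conventions (the carrier offset and crop radius are hypothetical inputs), not the authors' code:

```python
import numpy as np

def retrieve_pupil_field(hologram, carrier, radius):
    """Extract the complex pupil field from one off-axis hologram.

    hologram : real N x N interference pattern recorded at the camera
    carrier  : (row, col) pixel offset of the image-order sideband from DC
    radius   : crop radius (pixels) around the sideband
    """
    N = hologram.shape[0]
    F = np.fft.fftshift(np.fft.fft2(hologram))   # spectrum, DC at (N//2, N//2)
    cy, cx = N // 2 + carrier[0], N // 2 + carrier[1]
    yy, xx = np.mgrid[0:N, 0:N]
    sideband = np.where((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2, F, 0)
    # shift the sideband to the center, removing the off-axis tilt fringes
    pupil = np.roll(np.roll(sideband, N // 2 - cy, axis=0), N // 2 - cx, axis=1)
    return pupil           # np.angle(pupil) is the retrieved phase map (DRM)
```

Applied to a guidestar hologram, DRM amounts to calling this routine and taking `np.angle` of the returned pupil field.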

To address this issue, we adapt an image-metric-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. MIM was originally proposed to achieve autofocus in synthetic aperture radar [16], and the detailed characteristics of different image metrics have been discussed in [17]. Later, MIM was applied to correct optical aberrations and improve image quality in digital holography [18,19] and optical coherence tomography [20,21]. Our experimental results demonstrate that MIM can improve the accuracy of the optical aberrations retrieved from both focused and defocused guidestar holograms, compared to DRM.

2. Methods and materials

2.1. Experimental setup of DAO

A schematic diagram of the DAO system is shown in Fig. 1(a); it upgrades the previous system [12] by replacing stage scanning with beam scanning and using a data acquisition device to synchronize the line scan and image acquisition. A laser diode (LD) with a center wavelength of 830 nm and a bandwidth of ~0.001 nm is used as the light source. The laser beam, collimated by a collimator (CO), is condensed by a pair of lenses L1 (150 mm focal length) and L2 (60 mm focal length) and then sent to the cylindrical lens CL (75 mm focal length) to generate a line focus (along the x direction) on the sample S at the back focal plane of the objective lens (OL). The coordinate system in the sample plane is shown in Fig. 1(a). A piece of broken glass is placed at the pupil plane of the objective lens to introduce the optical aberration, as shown in Fig. 1(b). The pupil size is set to ~5.4 mm in diameter, so that the returning beam scattered from the sample can pass through the scanning mirror GSM, which has an effective aperture of ~6 mm in diameter given the angle of ~45 degrees between the mirror face and the returning laser beam. The objective lens OL has an effective focal length of 18 mm. Therefore, the imaging numerical aperture (NA) can be estimated as the pupil radius divided by the focal length, 2.7 mm/18 mm ≈ 0.15, which corresponds to an ideal imaging resolution of ~3.4 μm in the sample space. The image of the line field from the sample is formed by a lens L3 (300 mm focal length) onto the camera sensor with a magnification of ~16.7. The distances between the pupil plane and the lens L3, and between the lens L3 and the camera, are both 300 mm, so that the light field at the camera plane is the inverse Fourier transform of that at the pupil plane in the reflection coordinate system [22]. The camera has 640 by 480 pixels with a pixel size of 7.4 μm by 7.4 μm, which corresponds to 0.45 μm by 0.45 μm in the sample plane.
In the experiment, 384 by 384 pixels of the camera sensor are employed to acquire the line holograms at a speed of 50 frames/second. To perform off-axis holography, a beam expander formed by lenses L6 and L7, with a pinhole at the focal plane of L6 as a spatial filter, is used to generate a collimated plane wave that illuminates the camera at a few degrees with respect to the line field. A total of 480 line holograms are acquired as the line focus is swept along the y direction at a scanning step of ~0.45 μm, so that the final en face confocal image has the same pixel resolution along both dimensions. The first 384 line holograms acquired during the forward scan are used to reconstruct the en face confocal image. To measure the aberration introduced by the aberrator A at the pupil plane of the objective lens OL, a narrow beam with a diameter of ~2 mm, obtained by a pair of lenses L4 (120 mm focal length) and L5 (60 mm focal length), is sent to the objective lens OL to generate a guide star on the sample at a fixed scanning mirror position. The light scattered or reflected from the guide star passes through the aberrator, and the corresponding aberration is recorded in an off-axis guidestar hologram at the camera sensor. Narrow beams are used to form both the line fields and the guide star to remove the effect of aberrations in the illumination path, as confirmed by imaging and comparing the line illumination patterns on the sample through an imaging system behind the objective lens OL in our previous publication [12].
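The imaging parameters quoted above follow directly from the stated focal lengths and pixel size. A quick sanity check using standard Fourier-optics estimates (these are textbook relations, not taken from the authors' processing code):

```python
import math

wavelength = 830e-9      # center wavelength of the laser diode (m)
f_obj = 18e-3            # effective focal length of the objective OL (m)
f_tube = 300e-3          # focal length of the imaging lens L3 (m)
pupil_diam = 5.4e-3      # pupil diameter (m)
pixel = 7.4e-6           # camera pixel size (m)

na = (pupil_diam / 2) / f_obj            # imaging numerical aperture, ~0.15
resolution = 0.61 * wavelength / na      # Rayleigh criterion, sample space (m)
magnification = f_tube / f_obj           # image magnification, ~16.7
pixel_sample = pixel / magnification     # pixel size referred to the sample (m)
```

These reproduce the NA of ~0.15, the ~3.4 μm ideal resolution, the ~16.7 magnification, and the ~0.45 μm sample-plane pixel size stated in the text.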


Fig. 1 Experimental setup of DAO. LD: laser diode. CO: collimator. BS1-BS6: beamsplitters. L1-L7: regular lenses with focal lengths of 150 mm, 60 mm, 300 mm, 120 mm, 60 mm, 20 mm, and 100 mm, respectively. CL: cylindrical lens with a focal length of 75 mm. GSM: galvanometer scanning mirror. A: aberrator. OL: objective lens. S: sample. (A) Schematic drawing of the optical apparatus. (B) Photo of the aberrated objective lens, which consists of the aberrator A and the objective lens OL.


2.2. Principle of DAO

The basic principle of DAO was explained in our recent publication [12]. Two adaptations are made to meet the requirements of the work reported in this article. First, we adopt the reflection coordinate system to accommodate existing theories on general image-metric-based methods [16,17]. Second, the description of the optical aberration obtained by DRM is modified to account for the measurement error, which was not considered in the previous study [12]. According to Fourier optics [22], in the reflection coordinate system, the distorted nth line field O(xc, yc, n) at the camera plane (i.e., the image plane) is related to the optical field O(xp, yp, n) at the pupil plane by

$$O(x_c,y_c,n)=\mathrm{IFT}\{O(x_p,y_p,n)\}(f_x,f_y),\tag{1}$$
where IFT is inverse Fourier transform, (xc, yc) and (xp, yp) are the physical coordinates at the image plane and the pupil plane, respectively, and
$$f_x=\frac{x_c}{\lambda d_1},\qquad f_y=\frac{y_c}{\lambda d_1},\tag{2}$$
where λ is the wavelength of the light source and d1 is the focal length of the lens L3. O(xp, yp, n) can be expressed as
$$O(x_p,y_p,n)=O_U(x_p,y_p,n)\,P(x_p,y_p)\exp[j\Phi(x_p,y_p)],\tag{3}$$
where OU(xp, yp, n) is the undistorted field at the pupil plane from the nth scan, P(xp, yp) denotes the ideal pupil function, Φ(xp, yp) represents the optical aberration introduced by the aberrator A in the imaging path, and j is the imaginary unit. The distorted confocal intensity from the nth scan can be expressed as
$$I_{\mathrm{Conf}}(x_c,n)=\sum_{y_c\in\mathrm{slit}}|O(x_c,y_c,n)|^2,\tag{4}$$
where "slit" denotes the numerical horizontal slit aperture applied at the image plane. To measure the optical aberration Φ(xp, yp) and restore the image, a guidestar hologram is taken and the complex guidestar field g(fx, fy) at the image plane is reconstructed, which can be expressed as
$$g(f_x,f_y)=\mathrm{IFT}\{A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\},\tag{5}$$
where the coordinates (fx, fy) at the image plane and the coordinates (xp, yp) at the pupil plane are related by Eq. (2), and A(xp, yp) represents the amplitude distribution of the light field at the pupil plane from the guide star. Note that Eq. (5) adopts (fx, fy) as the coordinates to facilitate the derivation of the MIM in the next section. Due to the finite size of the guide star and light scattering from the region outside the guidestar focal plane, the phase term ΦG(xp, yp) will contain a deviation/error term ΦE(xp, yp) from the actual optical aberration Φ(xp, yp), and can be decomposed as
$$\Phi_G(x_p,y_p)=\Phi(x_p,y_p)+\Phi_E(x_p,y_p).\tag{6}$$
The optical aberration ΦR(xp, yp) retrieved from the guidestar field by DRM is simply ΦG(xp, yp): taking the Fourier transform of Eq. (5) yields A(xp, yp)exp[jΦG(xp, yp)], whose phase term is ΦG(xp, yp). The aim of this work is to find a method that retrieves from the guidestar field an optical aberration ΦR(xp, yp) that is closer to Φ(xp, yp) than that retrieved by DRM. After we retrieve an aberration ΦR(xp, yp), either by DRM or by another method, we remove it from O(xp, yp, n) and take the IFT to obtain the corrected line field, as
$$O^C(x_c,y_c,n)=\mathrm{IFT}\{O(x_p,y_p,n)\exp[-j\Phi_R(x_p,y_p)]\}(f_x,f_y).\tag{7}$$
The final corrected confocal intensity is then given by

$$I^C_{\mathrm{Conf}}(x_c,n)=\sum_{y_c\in\mathrm{slit}}|O^C(x_c,y_c,n)|^2.\tag{8}$$
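The correction step of Eqs. (7) and (8) — conjugate the retrieved aberration at the pupil, inverse-transform, and apply the numerical slit — can be sketched in a few lines. The array conventions (centered fields, slit width) are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def corrected_confocal_line(O_pupil, phi_R, slit_half_width=7):
    """Remove phi_R from the distorted pupil field O_pupil (complex N x N,
    centered) and return the corrected confocal line intensity."""
    # Eq. (7): multiply by exp(-j*phi_R) at the pupil, then take the IFT
    field = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(O_pupil * np.exp(-1j * phi_R))))
    # Eq. (8): numerical horizontal slit around the line focus, sum |.|^2 over y
    N = field.shape[0]
    rows = slice(N // 2 - slit_half_width, N // 2 + slit_half_width + 1)
    return np.sum(np.abs(field[rows, :]) ** 2, axis=0)
```

With phi_R equal to the actual aberration, the line focus is restored and the confocal line intensity sharpens; with phi_R = 0 the routine reproduces the distorted confocal line of Eq. (4).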

2.3. Metrics-based iterative method

In this section, we present the MIM, which retrieves the optical aberration ΦR(xp, yp) from the guidestar field described by Eq. (5) using a masked image metric. General metric-based methods have been well documented [16,17]. Following the descriptions outlined in these two references, we describe the MIM specific to our work as follows.

MIM iteratively finds the aberration by minimizing a masked image metric, which is given by

$$T=\sum_{f_x,f_y}M(f_x,f_y)\,S[I(f_x,f_y)].\tag{9}$$
The mask function M(fx, fy) is given by
$$M(f_x,f_y)=\begin{cases}1, & \sqrt{f_x^2+f_y^2}\le R\\[2pt] 0, & \sqrt{f_x^2+f_y^2}>R.\end{cases}\tag{10}$$
Different from the metrics discussed in [16,17], the proposed metric incorporates the mask function of Eq. (10). We will discuss in section 3.2 how to set the radius R of the mask function, which plays a key role in obtaining global minimization. In this paper, the function S[I(fx, fy)] is given by [16,17]
$$S[I(f_x,f_y)]=I(f_x,f_y)^{1.5},\tag{11}$$
where I(fx, fy) is given by
$$I(f_x,f_y)=g^C(f_x,f_y)\,g^{C*}(f_x,f_y),\tag{12}$$
where '*' denotes the complex conjugate. gC(fx, fy) can be obtained by performing an inverse discrete Fourier transform (IDFT):
$$g^C(f_x,f_y)=\frac{1}{N^2}\sum_{x_p,y_p}A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\exp\!\left[j2\pi\!\left(\frac{x_p f_x}{N}+\frac{y_p f_y}{N}\right)\right],\tag{13}$$
where N by N is the pixel number of the discrete guidestar field (384 by 384 in this work), and the retrieved optical aberration ΦR(xp, yp) can be decomposed into Zernike polynomial terms in the pupil plane as
$$\Phi_R(x_p,y_p)=\sum_{k=1}^{J}a_k Z_k(x_p,y_p),\tag{14}$$
where J is a positive integer determining the number of Zernike modes used. The proposed MIM minimizes the image metric T of Eq. (9) by iteratively adjusting the Zernike coefficient vector a = {ak}, k = 1, …, J, alternately using a conjugate gradient method and a steepest descent method [23]: after every five conjugate gradient steps, one steepest descent step is applied to avoid stagnation. To perform the conjugate gradient and steepest descent steps, the partial derivatives of T with respect to the components of the Zernike coefficient vector a = {ak} need to be computed, which can be calculated by
$$\frac{\partial T}{\partial a_k}=\frac{2}{N^2}\sum_{x_p,y_p}Z_k(x_p,y_p)\,\mathrm{Im}\Big\{A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\times\big\{\mathrm{DFT}\big[1.5\,I^{0.5}(f_x,f_y)\,g^C(f_x,f_y)\,M(f_x,f_y)\big]\big\}^*\Big\},\tag{15}$$
where Im{ . } is the operator that takes the imaginary part of the operand, { . }* means the complex conjugate of the operand, and DFT is the discrete Fourier transform, defined as,
$$\mathrm{DFT}\big[1.5\,I^{0.5}(f_x,f_y)\,g^C(f_x,f_y)\,M(f_x,f_y)\big]=\sum_{f_x,f_y}\big[1.5\,I^{0.5}(f_x,f_y)\,g^C(f_x,f_y)\,M(f_x,f_y)\big]\exp\!\left[-j2\pi\!\left(\frac{x_p f_x}{N}+\frac{y_p f_y}{N}\right)\right].\tag{16}$$
The DFT and IDFT can be efficiently computed by the fast Fourier transform (FFT) and inverse FFT routines in commercial software such as Matlab. The derivation of Eq. (15) is detailed in the Appendix. The algorithm adopted here is similar to the Fletcher-Reeves algorithm [23], which is briefly summarized as follows:
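Equations (9)-(14) define the quantity being optimized. A compact evaluation of the masked metric T for a given Zernike coefficient vector might look as follows; the array conventions and the centering of the mask on the corrected guidestar spot are assumptions for illustration, not the authors' code:

```python
import numpy as np

def masked_metric(a, A, phi_G, zernike_modes, R):
    """Masked image metric T of Eq. (9).

    a             : Zernike coefficient vector, length J
    A, phi_G      : guidestar amplitude and phase at the pupil (N x N, centered)
    zernike_modes : (J, N, N) array of basis phase modes Z_k on the pupil
    R             : mask radius in pixels, Eq. (10)
    """
    N = A.shape[0]
    phi_R = np.tensordot(a, zernike_modes, axes=1)        # Eq. (14)
    # Eq. (13): corrected guidestar field via the IDFT (centered conventions)
    g_C = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(A * np.exp(1j * (phi_G - phi_R)))))
    I = np.abs(g_C) ** 2                                   # Eq. (12)
    yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    M = (xx**2 + yy**2) <= R**2                            # Eq. (10), centered mask
    return float(np.sum(I[M] ** 1.5))                      # Eqs. (9) and (11)
```

When phi_R matches phi_G, the guidestar energy collapses into a tight spot inside the mask, which changes T sharply; the iterative search described below adjusts a to drive the metric to its optimum.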

  • Step 1. Set the starting Zernike coefficient vector a0 = {ak} = {0, …, 0} (the zero vector) and a radius R for the mask function M(fx, fy).
  • Step 2. Set the iterate m = 0. Compute the partial derivative vector {∂T/∂ak} using Eq. (15). Set the directional vector h0 = g0 = -{∂T/∂ak}.
  • Step 3. Perform a line search along the directional vector to find a positive number βm that minimizes T(am + βmhm). We apply a golden section method for the line search [21]. Note that the superscript m denotes the mth iteration, not a power.
  • Step 4. Set am+1 = am + βmhm.
  • Step 5. Compute {∂T/∂ak} at am+1. If its modulus is smaller than a preset tolerance, stop. If not, go to step 6.
  • Step 6. Set gm+1 = -{∂T/∂ak} at am+1, and update the directional vector by hm+1 = gm+1 + (⟨gm+1, gm+1⟩/⟨gm, gm⟩)hm, where ⟨·, ·⟩ denotes the inner product of the two operand vectors. Go to step 3.

At step 6, every 5 loops, the directional vector is reset to the gradient gm+1, a typical method for escaping stagnation in conjugate gradient algorithms. Also, in our case, if we use only one mask size, the algorithm gets stuck at a local minimum. To escape it, the mask size needs to be enlarged, and the algorithm restarts using the final Zernike coefficient vector from the first-round optimization as the new initial Zernike coefficient vector. Our experimental results show that changing the mask once is enough to achieve global minimization, as will be demonstrated in section 3.
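The optimization loop of Steps 1-6, including the periodic steepest-descent restart, can be sketched generically for any metric T(a) with gradient ∂T/∂a. The function names, the search interval for the golden-section method, and the stopping constants below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def golden_section(f, lo=0.0, hi=1.0, tol=1e-4):
    """1-D golden-section line search for the minimizer of f on [lo, hi]."""
    gr = (np.sqrt(5) - 1) / 2
    c, d = hi - gr * (hi - lo), lo + gr * (hi - lo)
    while hi - lo > tol:
        if f(c) < f(d):
            hi, d = d, c
            c = hi - gr * (hi - lo)
        else:
            lo, c = c, d
            d = lo + gr * (hi - lo)
    return 0.5 * (lo + hi)

def fletcher_reeves(f, grad, a0, max_iter=200, tol=1e-6, restart_every=5):
    """Conjugate-gradient minimization with a steepest-descent restart every
    few steps (Steps 1-6 of the text). A generic sketch, not the authors' code."""
    a = np.asarray(a0, dtype=float)
    g = -grad(a)                       # Step 2: negative gradient
    h = g.copy()                       # initial search direction
    for m in range(max_iter):
        beta = golden_section(lambda b: f(a + b * h))   # Step 3: line search
        a = a + beta * h                                # Step 4
        g_new = -grad(a)
        if np.linalg.norm(g_new) < tol:                 # Step 5: convergence test
            break
        if (m + 1) % restart_every == 0:
            h = g_new.copy()           # steepest-descent restart to escape stagnation
        else:                          # Step 6: Fletcher-Reeves update
            h = g_new + (g_new @ g_new) / (g @ g) * h
        g = g_new
    return a
```

In the two-stage scheme of the text, this routine would be called once with the small mask radius and then again with the larger radius, using the first call's result as the new starting vector.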

3. Results

3.1. Distorted and baseline digital confocal images

As a baseline, a digital confocal image is first taken without the aberrator in place. 384 off-axis digital line holograms are acquired and reconstructed line by line. Figure 2(a) shows one representative off-axis line hologram. Taking its Fourier transform, the phase map at the pupil can be obtained, as shown in Fig. 2(b). The boundary circle has a diameter of 62 pixels, corresponding to the 5.4 mm pupil size set in the experiment. Throughout this paper, phase maps are displayed in a blue-white-red colormap corresponding to phase values from -π to +π. The sampling spacing at the pupil plane is computed as λd1/(NΔxc) ≈ 88 μm along both directions, where Δxc = 7.4 μm is the camera pixel size. Taking the inverse FT of the complex field at the pupil plane, the complex line field can be reconstructed; the resulting line intensity is shown in Fig. 2(c). A numerical horizontal slit narrower than one Airy disk width (~15 pixels) is applied to the line field intensity, and the pixel values are averaged along the vertical direction (y direction) to obtain one confocal line intensity. We then stitch together these 384 confocal line intensities into one en face confocal image, shown in Fig. 2(d), which has a field of view of 170 μm × 170 μm with a pixel resolution of 0.45 μm in both dimensions. The sample in use is a positive resolution test chart with a piece of Teflon film tightly pressed behind it on the side with chrome bars. In the experiment, the sample is slightly tilted to remove the specular reflection from the glass substrate and the chrome bars. For clarity, a white-light full-field transmission image of the resolution test chart is shown in Fig. 2(e). Due to the directional tilt, the horizontal borders between the Teflon film and the chrome bars undergo strong scattering and appear as bright horizontal lines in the confocal image, as highlighted by the red arrows in Fig. 2(d).
Because the distance between the centers of two neighboring bright lines is ~8.7 μm, we can estimate that the imaging resolution is close to the diffraction limit of ~3.4 μm. To quantify the resolution further, we measure the line spread function using the 10%-90% rule on the vertical edge profile through the middle point of the bright line indicated by the upward red arrow in Fig. 2(d) [13]. This way, the resolution is estimated as ~3.0 μm, better than the diffraction-limited value due to the confocal configuration along the scanning direction. The contrast, obtained by averaging the contrast values of four areas with bar features, is 0.93. We then place the aberrator A, a piece of broken glass as shown in Fig. 1(b), at the pupil plane of the objective lens. One representative distorted line hologram is shown in Fig. 2(f). The corresponding distorted phase map at the pupil plane is shown in Fig. 2(g). The reconstructed distorted line intensity is shown in Fig. 2(h), and the distorted confocal image is shown in Fig. 2(i). The resolution is measured as ~7.0 μm, and the contrast decreases to 0.85. As shown in Figs. 2(d) and 2(i), due to the strongly scattering nature of the sample, coherent noise exists, which could be suppressed using speckle averaging or light sources of short temporal coherence [5,24].
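The 10%-90% edge-response rule used above can be implemented in a few lines: normalize the measured edge profile and report the distance over which it rises from 10% to 90% of the step. A sketch under the assumption of a monotonic edge profile, not the authors' code:

```python
import numpy as np

def edge_resolution_10_90(profile, pixel_um):
    """Estimate resolution from a monotonic edge profile using the 10%-90%
    rule: the distance over which the normalized response rises from 0.1
    to 0.9, with linear interpolation between samples."""
    p = np.asarray(profile, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())   # normalize to [0, 1]
    if p[0] > p[-1]:                          # make the edge rising
        p = p[::-1]
    x = np.arange(p.size, dtype=float)
    x10 = np.interp(0.1, p, x)                # assumes p is increasing
    x90 = np.interp(0.9, p, x)
    return (x90 - x10) * pixel_um
```

For a logistic edge of scale s, the 10%-90% width is 2·ln(9)·s, which provides a simple check of the routine.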


Fig. 2 Digital confocal imaging without and with the aberrator in place. (a) One representative baseline off-axis digital line hologram, acquired without the aberrator in place. (b) Phase distribution at the pupil obtained by taking the FT of (a) and extracting the image order [11], displayed in a blue-white-red colormap corresponding to phase values in (-π, π]. (c) Reconstructed line field intensity obtained by taking the inverse FT of the complex amplitude represented by (b). (d) Reconstructed baseline en face confocal image. (e) White-light transmission image of the target with the Teflon film removed. (f) One representative off-axis digital line hologram distorted by the aberrator. (g) Distorted phase map at the pupil plane from (f). (h) Distorted line intensity. (i) Distorted en face confocal image. Scale bars: 30 μm.


3.2. Optical aberrations from the focused guidestar hologram

To restore the distorted confocal image shown in Fig. 2(i), a separate collimated narrow beam with a diameter of ~2 mm is sent into the aberrated objective lens to generate a guide star on the sample. An off-axis digital guidestar hologram is obtained, and the optical aberration is retrieved by DRM. The complex guidestar field shown in Fig. 3(a) is obtained when the sample is placed at the focal plane of the guide star beam. Figure 3(b) shows the optical aberration ΦG(xp, yp) retrieved by DRM, and the corrected en face confocal image is shown in Fig. 3(c). In the experiment, we carefully adjust the axial location of the sample so that the corrected image is best by visual evaluation. The axial location corresponding to the best guidestar hologram is used as the reference plane, or focal plane, because only at this plane is the guidestar field most concentrated. We then test the proposed image-metric-based iterative method, MIM, to extract the optical aberration from the guidestar field shown in Fig. 3(a). At the end of the iterations, the guidestar field corrected by the retrieved aberration ΦR(xp, yp) is shown in Fig. 3(d), which appears as a dominant bright speckle spot surrounded by many much weaker speckles. This appearance indicates that MIM extracts the most informative phase component from Fig. 3(b) while rejecting the coherent artifacts and noise evident there. The resulting aberration ΦR(xp, yp) retrieved by MIM is shown in Fig. 3(e); it indeed shows a strong resemblance to the original phase map ΦG(xp, yp), but without the coherent noise. The image corrected using ΦR(xp, yp) is shown in Fig. 3(f), which shows a clear improvement over Fig. 3(c). Specifically, the resolution measured from Fig. 3(f) is ~3.2 μm, while that of Fig. 3(c) is ~4.0 μm, and the contrast of Fig. 3(f) is 0.91, compared to 0.87 for Fig. 3(c). In particular, the contrast of the vertical bars in Fig. 3(f) is as high as 0.95, while it is only 0.84 in Fig. 3(c). That is why the vertical bars are well resolved in the image corrected by MIM but not clearly resolved in the image corrected by DRM. This means that MIM improves the accuracy of the optical aberration compared to DRM by iteratively extracting the aberration and removing the coherent artifacts; equivalently, ΦR(xp, yp) is closer to the actual optical aberration Φ(xp, yp) than ΦG(xp, yp). The root mean square of the retrieved optical aberration ΦR(xp, yp) is computed as ~0.43 λ, slightly stronger than the average ocular aberration of human eyes [25].


Fig. 3 Image corrections by the focused guidestar hologram. (a) Intensity of the guidestar field. (b) Phase aberration from (a) through DRM. (c) Corrected image by (b). (d) Corrected guidestar field. (e) Phase distribution from (a) through MIM. (f) Corrected image by (e). (g) First mask. (h) Second mask. (i) Convergence curves. Scale bars: 30 μm.


To achieve global minimization during the optimization, MIM first uses a small mask, as defined by Eq. (10), with a radius R of 7 pixels, as shown in Fig. 3(g), and then updates the mask to a larger one with a radius of 21 pixels, as shown in Fig. 3(h). The specific sizes of the small and large masks are not critical; changing them by several pixels does not alter the final result. However, the order is important: if we first apply a large mask and then a small mask, the algorithm does not converge to a global solution. The convergence process is shown in Fig. 3(i). The black solid curve represents the convergence of the first-stage optimization using the small mask, where the algorithm hits a plateau, indicating stagnation. Setting the final Zernike coefficient vector from the first-stage optimization as the starting point of the second-stage optimization with the large mask, the algorithm finds the global solution shown in Fig. 3(e). The red dashed curve in Fig. 3(i) represents the convergence of the second-stage optimization. Note that the metric values of the red dashed curve have been divided by a factor of three to better display the two convergence curves in one figure. In this algorithm, we employ Zernike polynomials up to 14 terms, which account for the first 4 orders of optical aberrations; further increasing the number of terms shows no improvement for this aberrator. Our current version of the algorithm takes ~10 minutes to finish the iterations on a regular desktop computer, with the majority of the computational time consumed by FFTs, computed using the Cooley-Tukey algorithm [26]. If the input array of an FFT has N by N pixels, the number of multiplications involved can be estimated as 2N²log2N. In the present version of MIM, one iteration requires ~20 FFTs, i.e., ~4000 FFTs for 200 iterations.
Since the FFT can be calculated efficiently in parallel, GPU modules can be used to significantly improve the computational speed. DRM, by contrast, takes one FFT of the guidestar hologram, or ~0.14 s.
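The operation count quoted above is easy to reproduce; a back-of-the-envelope check of the 2N²log₂N multiplication estimate and the total FFT count, using the constants stated in the text:

```python
import math

N = 384                                    # guidestar hologram size (N x N)
mults_per_fft = 2 * N**2 * math.log2(N)    # Cooley-Tukey multiplication estimate
ffts_per_iteration = 20
iterations = 200
total_ffts = ffts_per_iteration * iterations    # ~4000 FFTs
total_mults = total_ffts * mults_per_fft        # ~1e10 multiplications overall
```

At roughly 2.5 million multiplications per 384 × 384 FFT, the 4000 FFTs amount to about 10¹⁰ multiplications, consistent with FFTs dominating the ~10-minute runtime on a CPU.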

3.3. Optical aberrations from the defocused guidestar holograms

We test MIM on guidestar holograms that are defocused, i.e., acquired when the sample is shifted away from the focal plane. We first show the case where the sample is 50 μm from the focal plane towards the objective lens, as shown in Fig. 1(b). Figure 4(a) shows the guidestar field, which exhibits a certain spread in the intensity distribution due to the defocus. Figure 4(b) shows the phase aberration retrieved by DRM, where stronger coherent noise is evident. The image corrected by this phase aberration is shown in Fig. 4(c); it fails to improve the distorted image of Fig. 2(i). Figure 4(d) shows the guidestar field after correction, and the phase aberration retrieved by MIM is shown in Fig. 4(e). The resulting corrected image is shown in Fig. 4(f), which shows a significant improvement over Fig. 4(c). The resolution computed from Fig. 4(f) is ~3.4 μm and the contrast is 0.93, approaching the quality of the baseline image in Fig. 2(d). Figures 3(e) and 4(e) have a similar appearance, indicating that both can correct the image well; the root mean square of the difference between them is ~0.13 λ.


Fig. 4 Corrections by defocused guidestar when the sample is 50 μm away from the focal plane towards the objective lens side. (a) Intensity of the guidestar field. (b) Phase aberration from (a) through DRM. (c) Corrected image by (b). (d) Corrected guide star. (e) Phase distribution from (a) through MIM. (f) Corrected image by (e). Scale bars: 30 μm.


Considering the axial asymmetry of defocus [27], we present the case where the sample is moved 50 μm from the focal plane on the side opposite the objective lens, as shown in Fig. 1(b). The corresponding guidestar field is shown in Fig. 5(a). Compared to Fig. 4(a), it shows more spread along the direction perpendicular to the line spread direction, as indicated by the red arrow in Fig. 3(a), while being somewhat compressed along the line spread direction, which can be accounted for by the difference in the aberration-balancing effects of the opposite defocuses [27]. The optical aberration retrieved by DRM is shown in Fig. 5(b), and the corresponding corrected image in Fig. 5(c). We then apply MIM to retrieve the optical aberration from this guidestar field. The corrected guidestar field is shown in Fig. 5(d), the retrieved optical aberration in Fig. 5(e), and the corrected image in Fig. 5(f), which shows an evident improvement over the one corrected by the DRM aberration. The resolution computed from Fig. 5(f) is 0.45 μm, and the contrast is 0.86. The image quality of Fig. 5(f) appears worse than that of Fig. 3(f), indicating that a certain accuracy is lost in this case due to the stronger coherent artifacts in the original phase map of Fig. 5(b). Nevertheless, the retrieved optical aberration still shows similarities with the one in Fig. 3(e); the root mean square of the phase difference between these two optical aberrations is computed as ~0.20 λ, larger than the difference between Figs. 4(e) and 3(e).


Fig. 5 Corrections by the defocused guidestar hologram generated when the sample is placed 50 μm from the focal plane on the side opposite the objective lens. (a) Intensity of the guidestar field. (b) Phase aberration from (a) through DRM. (c) Corrected image by (b). (d) Corrected guide star. (e) Phase distribution from (a) through MIM. (f) Corrected image by (e). Scale bars: 30 μm.


4. Discussions

As demonstrated in section 3, the proposed MIM improves the accuracy of the optical aberrations retrieved from both focused and defocused guidestar holograms, compared to DRM. However, there is a certain defocus tolerance within which the optical aberration retrieved by MIM can improve the distorted confocal image. When we further increase the distance from the focal plane on either side, the optical aberrations retrieved by MIM begin to fail to correct the image, indicating that the defocus tolerance of the proposed MIM can be estimated as ~50 μm. Given the input beam for the guide star generation, approximately a Gaussian beam with a diameter D of ~2 mm, the Rayleigh length L can be calculated as 85 μm by L = πω²/λ [28], where the beam waist radius ω is computed as ~4.7 μm by ω = 2λd/(πD), with d = 18 mm the focal length of the objective lens. It is reasonable that the guidestar holograms need to be generated with the sample well within the Rayleigh range so that MIM can retrieve an optical aberration close to the actual one. If the sample is too far from the focal plane, the guide star is significantly enlarged and weakened, and the optical aberration underlying the strong coherent noise will be quite different from the actual one. Even if MIM can faithfully retrieve the informative phase component from the guidestar field, the retrieved aberration may still differ from the actual one.
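The Gaussian-beam numbers used in this estimate can be checked directly from the quoted formulas ω = 2λd/(πD) and L = πω²/λ:

```python
import math

wavelength = 830e-9     # center wavelength (m)
d = 18e-3               # objective focal length (m)
D = 2e-3                # guide star input beam diameter (m)

waist = 2 * wavelength * d / (math.pi * D)   # focused beam waist radius, ~4.7 um
rayleigh = math.pi * waist**2 / wavelength   # Rayleigh length L, ~85 um
```

These reproduce the ~4.7 μm waist and ~85 μm Rayleigh length quoted above, placing the observed ~50 μm defocus tolerance well inside the Rayleigh range.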

To apply DAO to in vivo eye imaging and achieve a 2D imaging speed of ~20 Hz, as acquired by a typical scanning laser ophthalmoscope, we plan to use a high-speed 2D camera (e.g., Photron FastCam Mini 50) with a frame rate of up to 10,000 frames per second at the frame size adopted in this work. Compared to an HAO system, the total cost of DAO is projected to be much lower. DAO adopts a strategy of image acquisition first and post-processing afterwards, so the interaction time with the human subject will be much shorter than in an HAO system, where closed-loop feedback with the subject takes considerably longer. Although the post-processing step is comparatively long in DAO, it can be significantly shortened by using GPU modules, because most of the computation is spent on FFTs, which can be computed efficiently in parallel. DAO employs the guidestar hologram to measure the aberration, which places no limit on the dynamic range, whereas the strength of the aberration measurable in HAO is limited by the aperture and focal length of the lenslets of the wavefront sensor. Also, DAO records the complex amplitude of the light field and allows further improvement of the corrected image after the initial correction, which is not possible in HAO. We expect that DAO imaging will achieve at least the same imaging resolution and field of view as HAO, with significantly reduced opto-mechanical complexity and cost, and with additional imaging features such as phase contrast and digital refocusing.

5. Conclusions

In summary, we applied an image metrics-based iterative method (MIM) to retrieve the optical aberration from the guidestar holograms in DAO. We have experimentally demonstrated that MIM can improve the accuracy of the retrieved optical aberration compared to DRM. While DRM is only applicable to a well-focused guidestar hologram, MIM works well for both focused and defocused guidestar holograms. The defocus tolerance of MIM is found to be ~half the Rayleigh length of the guidestar beam.

Appendix

In this appendix, we present the detailed derivation of Eq. (15). From Eq. (13), the partial derivative of gC(fx, fy) with respect to ΦR(xp, yp) at a point (xp, yp) can be expressed as

$$\frac{\partial g_C(f_x,f_y)}{\partial \Phi_R(x_p,y_p)} = -\frac{j}{N^2}\,A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\exp\!\left[j2\pi\!\left(\frac{x_p f_x}{N}+\frac{y_p f_y}{N}\right)\right].$$
From Eq. (12), we can obtain the partial derivative of I(fx, fy) with respect to ΦR(xp, yp) as,
$$\begin{aligned}
\frac{\partial I(f_x,f_y)}{\partial \Phi_R(x_p,y_p)} &= \frac{\partial g_C(f_x,f_y)}{\partial \Phi_R(x_p,y_p)}\,g_C^*(f_x,f_y) + \frac{\partial g_C^*(f_x,f_y)}{\partial \Phi_R(x_p,y_p)}\,g_C(f_x,f_y)\\
&= \frac{\partial g_C(f_x,f_y)}{\partial \Phi_R(x_p,y_p)}\,g_C^*(f_x,f_y) + \mathrm{C.C.}\\
&= -\frac{j}{N^2}\,g_C^*(f_x,f_y)\,A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\exp\!\left[j2\pi\!\left(\frac{x_p f_x}{N}+\frac{y_p f_y}{N}\right)\right] + \mathrm{C.C.}\\
&= \frac{2}{N^2}\,\mathrm{Im}\!\left\{g_C^*(f_x,f_y)\,A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\exp\!\left[j2\pi\!\left(\frac{x_p f_x}{N}+\frac{y_p f_y}{N}\right)\right]\right\},
\end{aligned}$$
where C.C. denotes the complex conjugate. Then, from Eqs. (9), (10), and (11), the partial derivative of T with respect to ΦR(xp, yp) can be expressed as
$$\begin{aligned}
\frac{\partial T}{\partial \Phi_R(x_p,y_p)} &= \sum_{f_x,f_y} M(f_x,f_y)\,\frac{dS[I(f_x,f_y)]}{dI(f_x,f_y)}\,\frac{\partial I(f_x,f_y)}{\partial \Phi_R(x_p,y_p)}\\
&= \frac{2}{N^2}\sum_{f_x,f_y}\mathrm{Im}\!\left\{A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\; M(f_x,f_y)\,[1.5\,I^{0.5}(f_x,f_y)]\,g_C^*(f_x,f_y)\exp\!\left[j2\pi\!\left(\frac{x_p f_x}{N}+\frac{y_p f_y}{N}\right)\right]\right\}\\
&= \frac{2}{N^2}\,\mathrm{Im}\!\left\{A(x_p,y_p)\exp[j\Phi_G(x_p,y_p)]\exp[-j\Phi_R(x_p,y_p)]\,\{\mathrm{DFT}[1.5\,I^{0.5}(f_x,f_y)\,g_C(f_x,f_y)\,M(f_x,f_y)]\}^*\right\}.
\end{aligned}$$
Finally, following Eq. (14), we can obtain the partial derivative of T with respect to the components of the Zernike coefficient vector a = {ak} as,
$$\frac{\partial T}{\partial a_k} = \sum_{x_p,y_p}\frac{\partial T}{\partial \Phi_R(x_p,y_p)}\,\frac{\partial \Phi_R(x_p,y_p)}{\partial a_k} = \sum_{x_p,y_p}\frac{\partial T}{\partial \Phi_R(x_p,y_p)}\,Z_k(x_p,y_p).$$
Plugging Eq. (19) into Eq. (20), we can obtain Eq. (15).
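As a sanity check on this derivation, the analytic gradient of the sharpness metric can be compared against finite differences on a toy pupil. The sketch below is an illustrative implementation under stated assumptions, not the authors' code: it uses NumPy's FFT conventions (ifft2 carries the 1/N² factor and the +j kernel, matching gC here), a simple low-order polynomial basis standing in for true Zernike modes, and an all-pass mask M; all sizes and names are hypothetical.

```python
import numpy as np

# Numerical check of the metric gradient (Eq. (15)) on a toy pupil field.
N = 32
y, x = np.mgrid[0:N, 0:N]
xc, yc = (x - N/2) / (N/2), (y - N/2) / (N/2)
A = (xc**2 + yc**2 <= 1.0).astype(float)                  # circular pupil amplitude
phiG = 0.5*np.sin(2*np.pi*x/N) + 0.3*np.cos(2*np.pi*y/N)  # toy distorted phase
Z = np.stack([xc, yc, xc*yc])              # simplified basis (NOT true Zernikes)
a = np.random.default_rng(0).normal(size=3) * 0.1          # coefficient vector
M = np.ones((N, N))                                        # pass-all mask

def metric_and_grad(a):
    phiR = np.tensordot(a, Z, axes=1)          # Phi_R = sum_k a_k Z_k (Eq. (14))
    u = A * np.exp(1j*phiG) * np.exp(-1j*phiR) # corrected pupil field
    gC = np.fft.ifft2(u)                       # (1/N^2) sum ... e^{+j2pi(.)/N}
    I = np.abs(gC)**2                          # corrected guidestar intensity
    T = np.sum(M * I**1.5)                     # sharpness metric, S[I] = I^1.5
    # Gradient over the pupil phase, then projected onto each basis mode:
    gphi = (2/N**2) * np.imag(u * np.conj(np.fft.fft2(1.5*M*np.sqrt(I)*gC)))
    return T, np.array([np.sum(gphi * Zk) for Zk in Z])

T0, grad = metric_and_grad(a)
# Central finite differences on each coefficient
eps = 1e-6
fd = np.zeros(3)
for k in range(3):
    ap, am = a.copy(), a.copy()
    ap[k] += eps; am[k] -= eps
    fd[k] = (metric_and_grad(ap)[0] - metric_and_grad(am)[0]) / (2*eps)
# grad and fd should agree closely if the derivation is correct
```

Agreement between the analytic and finite-difference gradients to near machine precision confirms the chain-rule derivation above under these FFT sign conventions.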

Funding

National Institutes of Health (NIH) (R01 EY023522, R01 EY024628, P30 EY001792); National Science Foundation (NSF) (CBET-1055889).

Acknowledgments

The authors would like to thank Yanan Zhi for his help on experimental setup, and Yiming Lu for his help on sample preparation.

References and links

1. H. W. Babcock, “The possibility of compensating astronomical seeing,” Publ. Astron. Soc. Pac. 65, 229–236 (1953). [CrossRef]  

2. J. W. Hardy, J. E. Lefebvre, and C. L. Koliopoulos, “Real-time atmospheric compensation,” J. Opt. Soc. Am. 67(3), 360–369 (1977). [CrossRef]  

3. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef]   [PubMed]  

4. A. Roorda, F. Romero-Borja, W. Donnelly III, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002). [CrossRef]   [PubMed]  

5. M. Mujat, R. D. Ferguson, N. Iftimia, and D. X. Hammer, “Compact adaptive optics line scanning ophthalmoscope,” Opt. Express 17(12), 10242–10258 (2009). [CrossRef]   [PubMed]  

6. R. J. Zawadzki, S. S. Choi, A. R. Fuller, J. W. Evans, B. Hamann, and J. S. Werner, “Cellular resolution volumetric in vivo retinal imaging with adaptive optics-optical coherence tomography,” Opt. Express 17(5), 4084–4094 (2009). [CrossRef]   [PubMed]  

7. C. Liu and M. K. Kim, “Digital holographic adaptive optics for ocular imaging: proof of principle,” Opt. Lett. 36(14), 2710–2712 (2011). [CrossRef]   [PubMed]  

8. C. Liu, X. Yu, and M. K. Kim, “Fourier transform digital holographic adaptive optics imaging system,” Appl. Opt. 51(35), 8449–8454 (2012). [CrossRef]   [PubMed]  

9. C. Liu, X. Yu, and M. K. Kim, “Phase aberration correction by correlation in digital holographic adaptive optics,” Appl. Opt. 52(12), 2940–2949 (2013). [CrossRef]   [PubMed]  

10. U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33(2), 179–181 (1994). [CrossRef]   [PubMed]  

11. M. K. Kim, Digital Holographic Microscopy: Principles, Techniques, and Applications (Springer, 2011), pp. 55–93.

12. C. Liu and M. K. Kim, “Digital adaptive optics line-scanning confocal imaging system,” J. Biomed. Opt. 20(11), 111203 (2015). [CrossRef]   [PubMed]  

13. C. Liu, S. Marchesini, and M. K. Kim, “Quantitative phase-contrast confocal microscope,” Opt. Express 22(15), 17830–17839 (2014). [CrossRef]   [PubMed]  

14. A. S. Goy and D. Psaltis, “Digital confocal microscope,” Opt. Express 20(20), 22720–22727 (2012). [CrossRef]   [PubMed]  

15. A. S. Goy, M. Unser, and D. Psaltis, “Multiple contrast metrics from the measurements of a digital confocal microscope,” Biomed. Opt. Express 4(7), 1091–1103 (2013). [CrossRef]   [PubMed]  

16. J. R. Fienup, “Synthetic-aperture radar autofocus by maximizing sharpness,” Opt. Lett. 25(4), 221–223 (2000). [CrossRef]   [PubMed]  

17. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20(4), 609–620 (2003). [CrossRef]   [PubMed]  

18. J. C. Marron, R. L. Kendrick, N. Seldomridge, T. D. Grow, and T. A. Höft, “Atmospheric turbulence correction using digital holographic detection: experimental results,” Opt. Express 17(14), 11638–11651 (2009). [CrossRef]   [PubMed]  

19. S. T. Thurman and J. R. Fienup, “Correction of anisoplanatic phase errors in digital holography,” J. Opt. Soc. Am. A 25(4), 995–999 (2008). [CrossRef]   [PubMed]  

20. D. Hillmann, H. Spahr, C. Hain, H. Sudkamp, G. Franke, C. Pfäffle, C. Winter, and G. Hüttmann, “Aberration-free volumetric high-speed imaging of in vivo retina,” Sci. Rep. 6(1), 35209 (2016). [CrossRef]   [PubMed]  

21. P. Pande, Y. Z. Liu, F. A. South, and S. A. Boppart, “Automated computational aberration correction method for broadband interferometric imaging techniques,” Opt. Lett. 41(14), 3324–3327 (2016). [CrossRef]   [PubMed]  

22. J. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005), pp. 105–107.

23. E. Polak, Computational Methods in Optimization (Academic, 1971), pp. 28–58.

24. J. Goodman, Speckle Phenomena in Optics (Roberts & Company, 2005), Chap. 3.

25. K. M. Hampson, “Adaptive optics and vision,” J. Mod. Opt. 55(21), 3425–3467 (2008). [CrossRef]  

26. J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Math. Comput. 19(90), 297–301 (1965). [CrossRef]  

27. V. N. Mahajan, Aberration Theory Made Simple (SPIE, 1991), pp. 69–109.

28. R. Paschotta, Field Guide to Lasers (SPIE, 2008), pp. 16–21.



