## Abstract

This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. This method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLM) or additional cameras. We show that this method does not require knowledge of any system parameters. In the simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample as proof of principle.

©2013 Optical Society of America

## 1. Introduction

The use of Shack-Hartmann sensor based adaptive optics for wavefront aberration correction is well established in astronomy and microscopy for point-like objects to achieve diffraction limited imaging [1–3]. It is currently an active field of research in optical coherence tomography/microscopy (OCT/OCM) [4, 5]. Recently, adaptive optics via pupil segmentation using an SLM was demonstrated in two photon microscopy [7]. These results showed that the optical aberrations introduced by the sample, caused by refractive index changes with depth in the sample, can be reduced to recover diffraction-limited resolution. This can improve the imaging depth in tissues. Such a segmented pupil approach has also been shown with scene-based adaptive optics [8]. Lately, Tippie and Fienup demonstrated subaperture correlation based phase correction as a post-processing technique in the case of synthetic aperture digital off-axis holographic imaging of extended objects [9].

Especially for coherent imaging techniques, the advantage lies in the availability of phase information. This information has been successfully exploited to implement digital refocusing techniques in OCT by measuring the full complex field backscattered from the sample. Current methods, however, rely on two assumptions: first, that the sample exhibits an isotropic and homogeneous structure with respect to its optical properties, and second, that the aberrations, if present, are well defined or accessible. Whereas the first limitation has not been addressed so far, the second issue can be solved either by assuming simple defocus and applying spherical wavefront corrections, or by iteratively optimizing the complex wavefront with a merit function that uses image sharpness as a metric [10,11].

In this paper, we investigate the method of subaperture correlation based wavefront error detection and correction as a post-processing technique to achieve near diffraction limited resolution in the presence of aberrations. This method has the advantage of directly providing the local wavefront gradient for each subaperture in a single step. As such it operates as a digital equivalent to a Shack-Hartmann sensor. In section 2 we present the theory behind the algorithm, which is applicable to any coherent interferometric imaging system. We show that this algorithm does not require knowledge of any system parameters and, furthermore, does not rely on the assumption of sample homogeneity with respect to its refractive index. In section 3 we present simulation results demonstrating the practical implementation and performance of the algorithm. We have considered a full field swept source OCT system for our simulation, which is a type of interference microscopy system. We also show how this method allows for a fast and simple way to correct for defocus aberration. Finally, in section 4 we present experimental proof of principle.

## 2. Theory

For the theoretical analysis we consider a system based on a Michelson interferometer as shown in Fig. 1, but the theory presented here is valid for any interferometric setup. The interference of the light reflected from the object and reference mirror is adequately sampled using a 2-D camera placed at the image plane of the object. For simplicity, we assume that we have a 4-f telecentric imaging system, and the object field is bandlimited by a square aperture at the pupil plane. In the 2-D interferometric imaging setup, the recorded signal ${I}_{d}$ at point $\xi $ of the detection plane and at the wavenumber $k=2\pi /\lambda $ of the laser illumination is given by
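In sketch form (with $E_R$ and $E_S$ denoting the reference and object fields at the detector; this notation is assumed here rather than taken from the text), the recorded signal has the familiar two-beam interference structure:

$$
I_d(\xi, k) = \left| E_R(\xi, k) + E_S(\xi, k) \right|^2 = |E_R|^2 + |E_S|^2 + 2\,\mathrm{Re}\!\left\{ E_R^{*}(\xi, k)\, E_S(\xi, k) \right\}
$$

The last cross term carries the complex object field information that the subsequent processing extracts.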

In case of FF SSOCT, the complex signal ${I}_{s}\left(\xi ,z\right)$, obtained after $k\to z$ Fourier transformation, containing field information about the object’s ${z}^{th}$ layer can be written in the discrete form as
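As a sketch of this step (the function name and array layout are my own), the $k\to z$ transformation is a 1-D FFT along the spectral axis for every lateral pixel:

```python
import numpy as np

def reconstruct_depth(stack):
    """k -> z Fourier transform of a spectral interferogram stack.

    stack: (Nk, Ny, Nx) real-valued frames, one per wavenumber sample
           (the sweep is assumed to be linear in k).
    Returns the complex depth-resolved signal of shape (Nk, Ny, Nx).
    """
    stack = stack - stack.mean(axis=0)   # suppress the DC term per lateral pixel
    return np.fft.fft(stack, axis=0)     # one depth profile (A-scan) per pixel
```

Selecting one index along the first axis of the result then gives the complex en face field of the corresponding depth layer.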

We assume that in each subaperture the local phase error can be approximated by the first order Taylor expansion. We measure the translation between the intensity of the images formed by the different subapertures with respect to the intensity of the image formed by the central subaperture, after normalizing them to the same level [16]. We use this information to calculate the relative local wavefront slope in each subaperture given by
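A minimal sketch of the shift measurement (integer-pixel only; Ref. [16] describes the subpixel refinement used in practice, and the function name here is mine):

```python
import numpy as np

def image_shift(ref, img):
    """Shift of img relative to ref, from the peak of the FFT-based
    cross-correlation of the two (already normalized) intensity images."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    # wrap the peak position into the signed range [-N/2, N/2)
    return tuple(-((p + s // 2) % s - s // 2) for p, s in zip(peak, xc.shape))
```

The local wavefront slope in each subaperture is then proportional to this measured translation; the scale factor follows from the sampling geometry of the pupil and image planes.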

After we have estimated the phase error ${\varphi}_{e}$, we apply a phase correction $\mathrm{exp}\left(-i{\varphi}_{e}\right)$ to the Fourier data $\tilde{D}$ and then perform the 2-D IDFT to get the phase corrected image ${\tilde{I}}_{m,n,z}$ given by
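In code, this correction is a single pointwise multiplication in the Fourier plane followed by the inverse transform (a sketch; the names are mine):

```python
import numpy as np

def correct_phase(D_tilde, phi_e):
    """Multiply the pupil-plane data by exp(-i*phi_e) and inverse-transform
    to obtain the phase-corrected image."""
    return np.fft.ifft2(D_tilde * np.exp(-1j * phi_e))
```

If `phi_e` equals the true aberration phase, this exactly undoes the pupil-plane multiplication by $\mathrm{exp}\left(i{\varphi}_{e}\right)$ that degraded the image.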

## 3. Computer simulation

For the computer simulation we consider the same optical setup shown in Fig. 1 with the focal lengths of lenses L1 and L2 as $f=50$ mm. We consider a sweeping laser source with center wavelength ${\lambda}_{0}=850\ \text{nm}$ and bandwidth $\Delta \lambda =80\ \text{nm}$. Figure 4(a) shows the normalized power spectrum with respect to wavenumber $k$. We assume that the sweeping is linear in $k$. For simplicity, just for the demonstration of the working principle of the method, we consider only a 2-D object which gives diffuse reflection on illumination. Our object is a USAF bar target of size $512\times 512$ pixels, multiplied with circular complex Gaussian random numbers to simulate the speckle noise. We zero-pad the object to the size of $1024\times 1024$ pixels and apply the fast Fourier transform (FFT) to get to the pupil plane. We multiply the result with a square aperture, which is an array of unity values of size $512\times 512$ pixels zero padded to size $1024\times 1024$ pixels. To apply a phase error we multiply the result with a factor $\mathrm{exp}\left(i{\varphi}_{e}\right)$ using Eq. (16) and then compute the IFFT to get to the image plane. In the last step, we compute the IFFT instead of the FFT simply to avoid inverted images without actually affecting the phase correction method. We multiply the resulting field at the image plane with a phase factor of $\mathrm{exp}\left(i4kf\right)$ to take into account the propagation distance, then add the delayed reference on-axis plane wave with phase factor $\mathrm{exp}\left[ik\left(4f+\Delta z\right)\right]$ with $\Delta z=120\ \mu \text{m}$, and compute the squared modulus to obtain the simulated interference signal. This is done for each $k$ in the sweep to create a stack of 2-D interference signals with $k$ varying from $7.0598\times {10}^{6}\ {\text{m}}^{-1}$ to $7.757\times {10}^{6}\ {\text{m}}^{-1}$ in 256 equal steps. The reference wave amplitude was $100$ times the object field amplitude.
Figure 4(b) shows the spectral interferogram at one lateral pixel.
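The simulation pipeline above can be condensed into a short sketch (scaled-down sizes; all names are mine, and the aberration is left as a placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # object size (the paper uses 512)
obj = np.zeros((N, N))
obj[N // 4: 3 * N // 4, N // 4: 3 * N // 4] = 1.0    # stand-in for the USAF target
obj = obj * (rng.standard_normal((N, N)) +
             1j * rng.standard_normal((N, N)))       # circular Gaussian speckle

pad = np.pad(obj, N // 2)                 # zero-pad to 2N x 2N
pupil = np.pad(np.ones((N, N)), N // 2)   # centred square aperture, zero padded
phi_e = np.zeros((2 * N, 2 * N))          # insert the Eq. (16) phase error here

spec = np.fft.fftshift(np.fft.fft2(pad))              # to the pupil plane
field = np.fft.ifft2(np.fft.ifftshift(spec * pupil * np.exp(1j * phi_e)))

k, f, dz = 7.0598e6, 50e-3, 120e-6        # wavenumber [1/m], focal length, delay
ref = 100 * np.abs(obj).mean() * np.exp(1j * k * (4 * f + dz))  # on-axis reference
frame = np.abs(field * np.exp(1j * 4 * k * f) + ref) ** 2       # one spectral frame
```

Repeating the last three lines for each $k$ in the sweep yields the simulated 256-frame interference stack.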

For the reconstruction, we first compute the FFT along $k$ (256 spectral pixels, zero padded to 512) for each lateral pixel of the detection plane ($1024\times 1024$ pixels). The resulting sample depth profile (A-scan) is plotted in Fig. 4(c). It allows separating the DC, autocorrelation and complex conjugate terms from the desired object term (red dashed rectangle). Figure 4(d) shows the image slice from the reconstructed image volume at the position of the peak in Fig. 4(c) without any phase error. The simulated phase error consisted of Taylor monomials up to 6th order, with coefficients drawn from a random Gaussian distribution such that the peak to valley phase error across the aperture is 58.03 radians. Figure 4(e) shows the simulated phase error in radians and Fig. 4(f) the corresponding aberrated image obtained after applying it. We have assumed that the phase error is independent of the light frequency and that the system is compensated for any dispersion effects.

Figure 5 shows the results for various cases of aperture segmentation. In Fig. 5(a) with $K=3$ we have 9 sampling points, and hence we could fit only the first 9 Taylor monomials according to Eq. (16). Comparing the degraded image in Fig. 4(f) to the corrected image in Fig. 5(a) we immediately see the improvement in visual image quality. Except for the image obtained in the case of $K=3$ with 50 percent overlap, shown in Fig. 5(h), all the images obtained for the different cases are of similar visual quality. We found in our simulation that in order to see any significant smearing or blurring in the images, the peak to valley (P-V) aberration should be more than 15 radians; otherwise the image is close to the diffraction limited image in visual quality. After phase corrections are applied under different conditions of pupil splitting, we find that the P-V residual phase errors are below 10 radians in all cases under consideration except for $K=3$ with 50 percent overlap, shown in Fig. 5(k). Hence, we judge the performance of the different pupil splitting conditions based on the residual phase error maps. In Fig. 5(b) with $K=5$ we have 25 sampling points and hence could fit Taylor monomials up to 6th order with 25 coefficients. We see that the residual phase error is further reduced compared to the $K=3$ case in Fig. 5(d). In Figs. 5(c) and 5(g) we have $K=7$ and $K=9$ respectively, but we fit Taylor monomials only up to 6th order to see the effect of increasing the number of sampling points. We see that the residual error decreases for $K=7$ but increases for $K=9$. This is because with increasing $K$ the size of the subapertures becomes smaller, and hence the resolution of the images corresponding to these subapertures is also reduced, leading to registration errors in the shift calculation. This results in false slope calculation and phase error estimation.
Obviously, increasing the number of apertures allows in principle for higher order phase correction, but the smaller number of pixels per subaperture leads to increasing phase errors. A possible solution is to use overlapping apertures as shown in Fig. 2.
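One way to turn the per-subaperture slopes into a phase estimate is a linear least-squares fit of the gradients of the Taylor monomials to the measured slopes (a sketch in assumed normalised pupil coordinates; the names are mine):

```python
import numpy as np

def fit_monomials(cx, cy, sx, sy, order=6):
    """Least-squares fit of Taylor-monomial coefficients to measured
    wavefront slopes.  cx, cy: subaperture centre coordinates
    (normalised); sx, sy: measured x- and y-slopes at those centres."""
    exps = [(m, n) for m in range(order + 1) for n in range(order + 1 - m)
            if m + n >= 1]                              # skip the piston term
    # gradient of x^m * y^n evaluated at the subaperture centres
    Ax = np.stack([m * cx**max(m - 1, 0) * cy**n for m, n in exps], axis=1)
    Ay = np.stack([n * cx**m * cy**max(n - 1, 0) for m, n in exps], axis=1)
    A = np.vstack([Ax, Ay])
    b = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return exps, coeffs
```

With 9 slope samples ($K=3$) only the first 9 monomials are determined, while 25 samples ($K=5$) support a full 6th-order fit, matching the counting argument above.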

We used overlapping subapertures with 50 percent overlap in order to increase the number of sampling points with uniform spacing while maintaining the number of pixels per subaperture. A 50 percent overlap ensures the maximum number of sampling points without over redundancy of the subaperture data due to overlap. In the case of overlapping apertures, *K* no longer stands for the number of subapertures but defines the size of each subaperture as $\lfloor N/K\rfloor \times \lfloor N/K\rfloor $ pixels. Nevertheless, for $K=3$ we have a higher residual phase error compared to the non-overlapping case, also visible in the image in Fig. 5(h) as compared to Fig. 5(a). This is because we have assumed in our algorithm that the phase error in each subaperture can be described by a 1st order Taylor series. In the presence of higher order aberrations this assumption no longer holds if the subaperture is big enough to capture a large variation of the phase error. This can result in an error in the phase estimation, and overlapping large subapertures of the same size can further compound the errors. Betzig et al. refer to errors induced by overlapping subapertures as “residual coupling between the subregions” [7]. This is what we see in the case of $K=3$ with 50 percent overlap: for $K=3$ the subaperture size is not optimal for the 1st order Taylor approximation of the local phase error. The residual error, however, decreases rapidly for the overlapping case when $K=5$. For $K=5$ the subaperture size is small enough for the approximation to be valid, and we see a significant reduction in the residual phase error in Fig. 5(l).
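The pupil segmentation with optional overlap can be sketched as a simple tiling generator (the names and the step rounding are my own):

```python
import numpy as np

def subapertures(D, K, overlap=0.5):
    """Yield (centre, block) pairs tiling the N x N pupil data D with
    blocks of side floor(N/K), stepped by (1 - overlap) of the block size."""
    N = D.shape[0]
    w = N // K                               # subaperture side in pixels
    step = max(1, int(w * (1 - overlap)))    # 50% overlap halves the step
    for y in range(0, N - w + 1, step):
        for x in range(0, N - w + 1, step):
            yield (y + w // 2, x + w // 2), D[y:y + w, x:x + w]
```

Setting `overlap=0.0` reproduces the non-overlapping segmentation, while `overlap=0.5` roughly quadruples the number of slope sampling points at constant subaperture size.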

Figure 6 shows the plot of the rms residual phase error for different values of $K$ for both non-overlapping and overlapping subapertures with different percentages of overlap. We see that with overlapping subapertures of appropriate size with respect to the overall aberrations we obtain in general a lower residual phase error. Only if the subaperture size is chosen too large, covering a significant amount of aberration that cannot be approximated by a 1st order Taylor series, does the overlapping aperture approach perform worse. A smaller subaperture size ultimately leads to loss of resolution and increased registration error, which in consequence leads to an increase in the residual phase error. For bigger subapertures the wavefront is sampled at fewer points, and hence one can find coefficients only for the lower order Taylor monomials. The optimal number, size and percentage of overlap of the subapertures depend of course on the amount of aberration present. However, from the plots in Fig. 6 we can see that the overlapping subapertures do not yield significantly different results from the non-overlapping case for *K* = 5, 7, 9 and 11. Also, increasing the overlap between subapertures beyond 50 percent does not significantly reduce the residual phase error, due to the redundancy of the data caused by the overlap. We can see that for *K* = 3 the residual rms errors are lower for overlaps of more than 50 percent as compared to the 50 percent overlap case, but still higher than in the non-overlapping case. On the other hand, for *K* = 13, the residual rms errors for overlaps of more than 50 percent are higher than for both the 50 percent overlap and the non-overlapping case. Since both *K* = 3 and *K* = 13 fall in regimes where the approximations and assumptions of the presented algorithm are no longer valid, it is difficult to explain the exact reasons behind the observed differences. Nevertheless, in Fig. 6 we can see that the condition $K=5$ with 50 percent overlap performed best, with a residual rms error of 0.3412 radians, which is well within the Maréchal criterion of 0.449 radians rms for diffraction limited performance.

In the case of a symmetric quadratic phase error across the aperture, which results in defocus aberration, we can estimate the phase error with just two subapertures. The aperture can be split into two equal halves, vertically or horizontally. The phase error term in each half has a linear slope of equal magnitude but opposite direction. The opposite slopes cause the images formed by the two halves to shift in opposite directions relative to each other. By cross correlating the images formed by the two half apertures to obtain the relative shift, one can easily derive an estimate of the coefficient of the quadratic phase as

where $\Delta m$ is the shift in pixels in the $x$ direction, $2M$ is the total size of the data array in pixels and $N$ is the size of the aperture in pixels. From Eq. (9) we can estimate the quadratic phase error. This method is simple and fast compared to defocus correction based on sharpness optimization, which requires many iterations.
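A sketch of the two-subaperture measurement (integer-pixel shift only; the conversion to the quadratic coefficient via Eq. (9) is omitted, and the function name is mine):

```python
import numpy as np

def defocus_shift(D):
    """Relative image shift between the two half-aperture images.
    D: complex pupil-plane (frequency-centred) data, square array.
    The measured x-shift corresponds to the Delta-m that enters Eq. (9)."""
    half = D.shape[1] // 2
    # keep each half in its original columns so no artificial shift is added
    left = np.fft.ifft2(np.pad(D[:, :half], ((0, 0), (0, half))))
    right = np.fft.ifft2(np.pad(D[:, half:], ((0, 0), (half, 0))))
    xc = np.fft.ifft2(np.fft.fft2(np.abs(left) ** 2) *
                      np.conj(np.fft.fft2(np.abs(right) ** 2)))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    return tuple((p + s // 2) % s - s // 2 for p, s in zip(peak, xc.shape))
```

Without a quadratic phase error the two half-aperture images coincide and the measured shift is (near) zero; a symmetric quadratic phase tilts the halves in opposite directions and produces a nonzero $\Delta m$.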

## 4. Experimental results

The schematic of the FF SS OCT setup for the proof of principle study is shown in Fig. 7(a). The system is a simple Michelson interferometer based on the Linnik configuration with the same type of microscope objective (5× Mitutoyo Plan Apo NIR infinity-corrected objective, NA = 0.14, focal length *f* = 40 mm) in both the sample and the reference arm. The sample consisted of a layer of plastic, a film of dried milk and a USAF resolution test target (RTT). The thin film of dried milk was used to produce scattering and diffuse reflection. The plastic layer, of non-uniform thickness and structure to create random aberrations, was made by heating plastic used for compact disc (CD) cases. The output beam from the sweeping laser source (Superlum BroadSweeper 840-M) incident on the lens L1 has a diameter of 10 mm, and the field of view on the sample is ~2 mm. The power incident on the sample is 5 mW. The RTT surface was in focus while imaging. The image of the sample formed by L2 is transferred to the camera plane using a telescope formed by lenses L3 and L4 with an effective magnification of 2.5$\times $. A circular pupil P of diameter 4.8 mm is placed at the common focal plane of lenses L3 and L4 to spatially band limit the signal. Figures 7(c) and 7(e) show that the image obtained is speckled due to scattering at the milk film and aberrated by the plastic layer.

For imaging, the laser is swept from $\lambda =831.4\ \text{nm}$ to $\lambda =873.6\ \text{nm}$ with a step width of $\Delta \lambda =0.0824\ \text{nm}$, and the frames at each wavelength are acquired with a CMOS camera (Photon Focus MV1-D1312I-160-CL-12) at a frame rate of 108 fps synchronized with the laser. A total of 512 frames are acquired. After zero padding and $\lambda$ to $k$ mapping of the spectral pixels, a 1-D FFT is performed along the $k$ dimension for each lateral pixel, the standard FDOCT procedure, which provides the depth location of the different layers of the sample. The layer corresponding to the RTT is picked for aberration correction, and the corresponding enface intensity image is displayed in Fig. 7(c). Figure 7(e) shows the zoomed-in image of Fig. 7(c) consisting of $390\times 390$ pixels. The numbers 4 and 5 indicate the location of the 4th and 5th group elements. We can barely resolve RTT element (4, 3). The horizontal bars are visually more affected by the aberration due to the anisotropic nature of the distorted plastic layer. The FFT of the image after zero padding shows the spatial frequency information within a circular area. For further processing we filtered out a square area ($600\times 600$ pixels) from the spatial frequency distribution, with side length approximately equal to the diameter of the circular pupil (dotted square in Fig. 7(d)).
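The $\lambda$ to $k$ remapping before the depth FFT can be sketched as a per-pixel linear interpolation onto a uniform wavenumber grid (a simple stand-in for whatever interpolation scheme the authors used; the names are mine):

```python
import numpy as np

def lambda_to_k(frames, lambdas):
    """Resample spectral frames acquired at wavelengths `lambdas` onto a
    uniform wavenumber grid k = 2*pi/lambda (linear interpolation)."""
    k_meas = 2 * np.pi / np.asarray(lambdas)
    order = np.argsort(k_meas)                  # np.interp needs ascending x
    k_grid = np.linspace(k_meas.min(), k_meas.max(), len(lambdas))
    flat = frames.reshape(len(lambdas), -1)
    out = np.empty_like(flat, dtype=float)
    for j in range(flat.shape[1]):
        out[:, j] = np.interp(k_grid, k_meas[order], flat[order, j])
    return out.reshape(frames.shape)
```

After this step the stack is uniformly sampled in $k$, so the 1-D FFT along the spectral axis maps each lateral pixel to a depth profile.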

Figure 8 shows the phase correction results after subaperture processing. Taylor monomials up to 5th order are fitted to determine the phase error. Figure 8(a) shows the result obtained using 9 non-overlapping subapertures with $K=3$. In this case we could fit only the first 9 monomial terms given by Eq. (16), so the monomial fit was limited to 4th order. We also tried non-overlapping subapertures with $K=5$ and overlapping subapertures with 50% overlap for $K=5$. Note that Figs. 8(d)-8(f) show only the estimate of the phase error that may be present. Unlike the simulation, where we calculated the residual phase error knowing the reference phase error, here we look for the improvement in image quality to judge the best pupil splitting condition. We can clearly appreciate the improvement in resolution as the bars are more evident after phase correction. We can resolve horizontal bars up to element (5, 1) in Fig. 8(a) and up to (5, 4) in Figs. 8(b) and 8(c). This corresponds to an improvement in resolution by a factor of 1.6 for Fig. 8(a), and a factor of 2 for Figs. 8(b) and 8(c), over the aberrated image whose highest resolvable element is (4, 3). The improvement in resolution is calculated using the relation $2^{\Delta m/6}$, where $\Delta m$ is the improvement in elements. Vertical bars in the corrected images appear much sharper after phase correction. The theoretically calculated diffraction limited resolution of our experimental setup is 6.5 $\mu \text{m}$; note that in our experiment the resolution is limited by the size of the pupil P. The best resolution obtained in the case of Figs. 8(b) and 8(c), corresponding to element (5, 4), is about 11 $\mu \text{m}$, which is still far from the theoretical limit.
This is primarily because of the strong local variations of the wavefront across the pupil plane caused by the highly distorted plastic layer, leading to anisotropic imaging conditions. This may also be the reason why the phase maps shown in Figs. 8(d)-8(f) appear different, although they share the same general profile.

We also investigated the effect of a refractive index change within the sample. We replaced the non-uniform plastic layer of the sample with a uniform plastic layer of thickness ~1 mm. Without the plastic layer, the RTT was placed at the focal plane of the microscope objective while imaging. The presence of the plastic layer causes the image to be defocused, as seen in Fig. 9(a). Figure 9(b) shows the result using non-overlapping subapertures with $K=3$. The corresponding estimated phase error in Fig. 9(d) shows the presence of quadratic and 4th order terms, which is expected due to the defocus and spherical aberration caused by the change in refractive index within the sample. Since spherical aberration can be balanced with defocus, we applied the simple half aperture method to find the effective defocus error according to Eq. (9), as shown in Fig. 9(e). Successful focusing is clearly evident in these images, as elements up to (6, 1) in Fig. 9(b) and (5, 6) in Fig. 9(c) can be resolved, corresponding to resolutions of 7.8 $\mu \text{m}$ and 8.7 $\mu \text{m}$ respectively. These values are close to the calculated theoretical diffraction limited resolution of 6.5 $\mu \text{m}$. In the subaperture processing we filtered out a square aperture, whereas the actual pupil was circular, so the subapertures at the edges were not completely filled with spatial frequency data. This may have contributed to the residual phase error.

To demonstrate the method on a biological sample we applied the defocus correction using two non-overlapping subapertures to the 3-D volume image of a grape. The digital refocusing method effectively extends the depth of focus. The first layer of the grape was at the focal plane of the microscope objective. Since the theoretical depth of field is only 130 µm in the sample (assuming a refractive index of 1.5), the deeper, out-of-focus layers appear blurred. Figure 10(a) shows a tomogram of the grape sample with an arrow indicating a layer at a depth of 424.8 $\mu \text{m}$ from the focal plane. Figure 10(c) shows an enface view of that layer and Fig. 10(d) shows the defocus corrected image. We can appreciate the improvement in lateral resolution as the cell boundaries are now clearly visible.

In Fig. 11, the blue dotted plot shows the coefficients of the defocus error calculated using the two subaperture method, given by Eq. (9), for each layer of the grape sample, whereas the solid red line shows the theoretically calculated coefficients using knowledge of the optical distance of each layer from the focal plane, given by

where $\Delta z{N}_{z}$ denotes the optical distance of the ${z}^{th}$ layer from the focal plane, with $\Delta z$ being the pixel pitch in depth and ${N}_{z}$ the location of the layer in pixels. The green dotted line indicates the theoretically calculated depth of focus at $z=130\ \mu \text{m}$. Here the number of pixels in the image is $M=500$ and the pixel pitch of the camera is $\Delta \xi =8\ \mu \text{m}$. The pixel pitch in depth is calculated assuming a uniform refractive index of $n=1.5$ in the sample, with ${\lambda}_{0}=852.5\ \text{nm}$ and $\Delta \lambda =42.2\ \text{nm}$. The effect of applying the theoretically calculated defocus correction is essentially the same as Fresnel propagating the field of the out-of-focus layers to the focal plane. We can see from the plot that beyond a depth of about 380 $\mu \text{m}$ the result obtained using the two subaperture method increasingly deviates from the theoretical calculation using Eq. (10). This may give the impression that the subaperture method is inaccurate in estimating the defocus error at greater depths. We therefore investigated the effect of the theoretically calculated defocus correction on the visual quality of the images and compared it with the defocus corrected images obtained using the two subaperture method. We found that the images obtained using both methods appear similar in visual quality down to a depth of about 380 $\mu \text{m}$. At greater depths, however, the images obtained using the theoretically calculated defocus corrections appear blurry, whereas with the two subaperture method the images remain in sharp focus. This shows that for deeper layers the theoretically calculated defocus correction requires further optimization or processing to achieve the level of performance demonstrated by the single step subaperture method. Media 1 shows a fly-through of the original and digitally refocused 3-D volume images of the grape sample.
Here we can clearly see that the same lateral resolution is maintained throughout the depth after defocus correction.

Fig. 12: (a) Original grape sample layer at $z=285\ \mu\text{m}$, (b) image obtained by applying the theoretically calculated phase error correction to (a), (c) theoretically estimated phase error in radians for (b), (d) defocus corrected image of (a) obtained using the two subaperture method, and (e) estimated phase error in radians for case (d). (f) Original grape sample layer at $z=511\ \mu\text{m}$, (g) image obtained by applying the theoretically calculated phase error correction to (f), (h) theoretically estimated phase error in radians for (g), (i) defocus corrected image of (f) obtained using the two subaperture method, and (j) estimated phase error in radians for case (i) (see Media 1).

The result shown in Fig. 12(b) shows that at the lower depth of z = 285 $\mu \text{m}$ the image obtained by applying the theoretically calculated defocus correction is visually sharp. However, at z = 511 $\mu \text{m}$ the image obtained appears blurry, as shown in Fig. 12(g). On the other hand, the images obtained with the subaperture method appear in sharp focus at both z = 285 $\mu \text{m}$ and z = 511 $\mu \text{m}$, as shown in Figs. 12(d) and 12(i) respectively. This shows that the higher P-V defocus errors estimated by the two subaperture method at greater depths do in fact yield better focusing performance. The difference between the theoretically calculated defocus error and the one estimated using the two subaperture method may arise because the theoretical calculation assumed a sample of uniform refractive index. The sample may, however, have random local refractive index fluctuations which cause spherical aberration, and with increasing depth this effect can accumulate. In our experimental result in Fig. 9, we demonstrated that the two subaperture method is capable of balancing spherical aberration to achieve a focused image. The better results obtained for the deeper layers of the grape sample using the subaperture method further support this. The quantification of the induced spherical aberration requires accurate knowledge of the refractive index fluctuations within the sample; this would also require theoretical modelling of the sample and the system, which is outside the scope of the present paper and will be investigated in the future. It is also difficult to pinpoint the reason for the abrupt, strong defocus induced in the images beyond a certain depth, which is noticeable in Media 1. This is currently under investigation and will be addressed by future research.
Nevertheless, we have clearly demonstrated that the two subaperture method can correct for the defocus and spherical aberrations which are commonly encountered while imaging biological samples.

## 5. Discussion and conclusion

In this paper we investigated subaperture correlation based phase correction as a post-processing technique for interferometric imaging where the phase information is accessible. In the theoretical analysis we showed that this method is independent of any system parameters such as propagation distance or physical detector size. We can correct for the aberrations present in the images without knowing the system details, provided the image is band limited and not severely aliased. The method works very well when the spatial frequency content is spread uniformly across the pupil, which is the case when imaging a diffusely scattering object with laser light. We further divided the pupil plane into overlapping subapertures to obtain more sampling points and thereby reduce the error in the estimation of the phase error. We used Taylor monomials, which are simple and easy to use, for fitting the phase estimate. In our simulation we showed that the least squares method for finding the coefficients of the monomials is reliable. We also conducted simple experiments to show the proof of principle using a FF SS OCT system. We were able to reduce the aberration introduced in the sample using the theory of subaperture processing. The images obtained after processing clearly show an improvement in resolution. This method does not require long iterations, as in the case of the optimization methods used for phase correction [10,11], and provides the phase error estimate in a single step. It can thus be seen as a digital equivalent of a Shack-Hartmann wavefront sensor. We have demonstrated the method using Taylor monomials as basis functions, which are not orthogonal over the square aperture. Hence, in the presence of higher order aberrations there might be coupling of errors between the coefficients of the Taylor monomials, which could make an accurate quantification of the contributions of the various aberration components, such as defocus and spherical aberration, difficult.
Nevertheless, we have demonstrated that by using Taylor monomials we can reduce the overall aberration to achieve performance close to the diffraction limit. In the future it would be interesting to use Zernike-like polynomials that are orthogonal over the sampled square aperture [19]. This would allow an accurate quantification of the various higher order aberration contributions. A potential application would be the characterization and quantification of the aberrations present in the human eye while imaging the retina. For the results presented in this paper the cross correlation of the subapertures was done sequentially in MATLAB. The processing time for each frame is around 2 to 4 seconds; however, the speed can be greatly improved with parallel processing on a graphics processing unit (GPU). In the case of 3-D imaging, the phase error correction can be applied throughout the isoplanatic volume, as done for example in [18], where the aberration is uniform and the sample has a uniform refractive index, using the phase error estimate from a single layer. Furthermore, subaperture processing opens the possibility of region of interest based phase correction if the aberrations and the sample are not uniform across the volume.

## 6. Appendix A

Let ${\tilde{D}}_{p}$ and ${\tilde{D}}_{q}$ represent any two square subapertures of the discretized, filtered and segmented pupil data $\tilde{D}$ given by

We model the phase error ${\varphi}_{e}$ as a linear combination of Taylor monomials ${T}_{J}$ given by

## Acknowledgments

This research was supported by Carl Zeiss Meditec Inc. Dublin, USA, Medical University Vienna, European project FAMOS (FP7 ICT, contract no. 317744), FUN OCT (FP7 HEALTH, contract no. 201880) and the Christian Doppler Society (Christian Doppler Laboratory “Laser development and their application in medicine”).

## References and links

**1. **B. C. Platt and R. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. **17**(5), S573–S577 (2001). [PubMed]

**2. **J. L. Beverage, R. V. Shack, and M. R. Descour, “Measurement of the three-dimensional microscope point spread function using a Shack-Hartmann wavefront sensor,” J. Microsc. **205**(1), 61–75 (2002). [CrossRef] [PubMed]

**3. **M. Rueckel, J. A. Mack-Bucher, and W. Denk, “Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing,” Proc. Natl. Acad. Sci. U.S.A. **103**(46), 17137–17142 (2006). [CrossRef] [PubMed]

**4. **M. Pircher and R. J. Zawadzki, “Combining adaptive optics with optical coherence tomography: Unveiling the cellular structure of the human retina in vivo,” Expert Rev. Ophthalmol. **2**(6), 1019–1035 (2007). [CrossRef]

**5. **K. Sasaki, K. Kurokawa, S. Makita, and Y. Yasuno, “Extended depth of focus adaptive optics spectral domain optical coherence tomography,” Biomed. Opt. Express **3**(10), 2353–2370 (2012). [CrossRef] [PubMed]

**6. **L. A. Poyneer, “Scene-based Shack-Hartmann wave-front sensing: analysis and simulation,” Appl. Opt. **42**(29), 5807–5815 (2003). [CrossRef] [PubMed]

**7. **N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods **7**(2), 141–147 (2010). [CrossRef] [PubMed]

**8. **T. Haist, J. Hafner, M. Warber, and W. Osten, “Scene-based wavefront correction with spatial light modulators,” Proc. SPIE **7064**, 70640M, 70640M-11 (2008). [CrossRef]

**9. **A. E. Tippie and J. R. Fienup, “Sub-Aperture Techniques Applied to Phase-Error Correction in Digital Holography,” in *Digital Holography and Three-Dimensional Imaging*, OSA Technical Digest (CD) (Optical Society of America, 2011), paper DMA4. http://www.opticsinfobase.org/abstract.cfm?URI=DH-2011-DMA4 [CrossRef]

**10. **S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A **25**(4), 983–994 (2008). [CrossRef] [PubMed]

**11. **S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. U.S.A. **109**(19), 7175–7180 (2012). [CrossRef] [PubMed]

**12. **P. Hariharan, *Optical Interferometry* (Academic, 2003).

**13. **D. Malacara, *Optical Shop Testing* (Wiley, 1992).

**14. **M. Rueckel and W. Denk, “Properties of coherence-gated wavefront sensing,” J. Opt. Soc. Am. A **24**(11), 3517–3529 (2007). [CrossRef] [PubMed]

**15. **W. Drexler and J. G. Fujimoto, *Optical Coherence Tomography: Technology and Applications* (Springer, 2008).

**16. **M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. **33**(2), 156–158 (2008). [CrossRef] [PubMed]

**17. **A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express **19**(13), 12027–12038 (2011). [CrossRef] [PubMed]

**18. **D. Hillmann, G. Franke, C. Lührs, P. Koch, and G. Hüttmann, “Efficient holoscopy image reconstruction,” Opt. Express **20**(19), 21247–21263 (2012). [CrossRef] [PubMed]

**19. **V. N. Mahajan and G. M. Dai, “Orthonormal polynomials in wavefront analysis: analytical solution,” J. Opt. Soc. Am. A **24**(9), 2994–3016 (2007). [CrossRef] [PubMed]