
Hybrid technique for high resolution imaging of the eye fundus


Abstract

We study the performance of a hybrid technique for high resolution imaging of the eye fundus, in which the main part of the ocular aberration is compensated by an optical device (phase plate, deformable mirror, etc.) and the remaining aberration is compensated by deconvolution from wavefront sensing. A comparison among imaging with partial compensation, deconvolution from wavefront sensing and the presented hybrid technique is made using numerical simulations based on actual aberration data.

©2003 Optical Society of America

1. Introduction

Improvements in high resolution imaging of the eye fundus are of great interest for detecting retinal diseases in their early stages of development. Efforts have been made for years to overcome the image degradation produced by the optical aberrations of the eye. Adaptive Optics [1] (AO), compensation with static elements [2,3] and pure deconvolution from wavefront sensing [4,5] (DWFS) have been proposed and demonstrated to partially achieve this goal. Any actual device (static or dynamic) designed to compensate eye aberrations will leave some residual aberration. On the other hand, the wavefront distortion introduced by an uncompensated human eye is in some cases strong enough to require several tens of image and aberration measurement pairs to get an acceptable signal-to-noise ratio using standard DWFS techniques. Here we study the performance of a hybrid procedure [6] (referred to as PCDWFS) which combines both approaches: in PCDWFS the main part of the ocular wavefront aberration is compensated by either an active or a static optical device, and the remaining aberration is compensated via DWFS. In doing so, the deconvolution algorithms have to cope with reasonably small amounts of aberration, which improves the signal-to-noise ratio (SNR) of the optical transfer function (OTF) and of the image by reducing the spread caused by the eye optics.

2. Simulation description

In the simulation presented in this paper, the optical compensation was performed, without loss of generality, by a phase plate (PP) that partially conjugates the ocular wavefront aberration of the subject. The phase plate is conjugated to the ocular pupil plane and to the Shack-Hartmann lenslet plane. The filter we used in the simulations to recover the image spectrum from the degradation introduced by the ocular aberration (or by the residual aberration in the partially compensated case) is the vector Wiener filter [7,8]:

$$\hat{O} = \frac{\sum_{j=1}^{k} I_j H_j^{*}}{\sum_{j=1}^{k} \left| H_j \right|^{2} + \gamma}$$

where Ô is the estimated object spectrum, Ij is the jth degraded image spectrum, Hj is the computed OTF obtained from the jth wavefront estimation (normalized to one at the origin), γ is the regularization parameter, and k is the number of frames used to estimate Ô. The optimum value of γ for each frequency can be calculated if the object and noise power spectra are known. However, in many practical situations this a priori information is not available and a reasonable guess for γ has to be made. Before performing the inverse Fourier transform, the estimated object spectrum Ô was multiplied by the diffraction limited OTF in order to avoid amplification of high-frequency noise.
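As an illustration, the following Python sketch implements the vector Wiener filter written above under simple assumptions; the array names, the FFT layout of the OTFs and the way the diffraction-limited OTF is applied are our own choices, not details taken from the paper.

```python
import numpy as np

def vector_wiener(images, otfs, gamma, otf_dl):
    """Estimate the object from k degraded-image / OTF pairs.

    images : list of k degraded images (real 2-D arrays)
    otfs   : list of k OTFs in FFT layout, normalized to one at zero frequency
    gamma  : scalar regularization parameter (e.g. 3e-5 as used in the paper)
    otf_dl : diffraction-limited OTF, used to avoid amplifying high-frequency noise
    """
    num = np.zeros(images[0].shape, dtype=complex)
    den = np.zeros(images[0].shape, dtype=float)
    for img, otf in zip(images, otfs):
        spec = np.fft.fft2(img)        # degraded image spectrum I_j
        num += spec * np.conj(otf)     # accumulate I_j * H_j^*
        den += np.abs(otf) ** 2        # accumulate |H_j|^2
    obj_spec = num / (den + gamma)     # regularized object-spectrum estimate
    obj_spec *= otf_dl                 # multiply by the diffraction-limited OTF
    return np.real(np.fft.ifft2(obj_spec))
```

With k=1 this reduces to the traditional Wiener filter used for the single-frame results discussed below.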

Several works using DWFS to recover degraded eye fundus images have been presented [4,5]. In those cases the dynamic behavior of the ocular wavefront aberration is used to improve the SNR of the deconvolution, increasing the effective cutoff frequency of the method. In the technique analyzed here (which we applied to static eye aberrations) the main part of the improvement in the SNR and cutoff frequency is achieved by the partial optical compensation performed by the phase plate.

All the results we present were obtained using simulated objects and degraded images, together with actual data sets of the ocular aberrations and manufactured phase plates of two subjects. We used a simplified model of the eye in which the ocular aberration, either uncompensated or partially compensated by a phase plate, is assumed to behave as a static phase screen over the eye pupil. This determines the pupil function, from which we computed the OTF and the point spread function (PSF) of the system using standard methods of Fourier optics.
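A minimal sketch of that Fourier-optics step, assuming the residual wavefront is given in radians on a grid that samples the pupil (padding, pixel scales and normalization conventions are illustrative assumptions):

```python
import numpy as np

def psf_otf_from_phase(phase_rad, pupil_mask):
    """Compute the PSF and OTF of an eye modeled as a static phase screen.

    phase_rad  : wavefront aberration over the pupil grid, in radians
    pupil_mask : 1 inside the pupil, 0 outside (same shape as phase_rad)
    """
    pupil = pupil_mask * np.exp(1j * phase_rad)    # generalized pupil function
    field = np.fft.fftshift(np.fft.fft2(pupil))    # focal-plane field amplitude
    psf = np.abs(field) ** 2
    psf /= psf.sum()                               # unit-energy PSF
    otf = np.fft.fft2(np.fft.ifftshift(psf))       # OTF as the Fourier transform of the PSF
    return psf, otf / otf[0, 0]                    # OTF normalized to one at the origin
```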

The aberration-degraded uncompensated or partially compensated images were generated by convolving the geometrical image of the object with the corresponding PSFs. Gaussian noise of different variances was then added in order to simulate different signal-to-noise ratios in the image channel (SNRI={∞, 50, 25, 12, 6}). As wavefront sensor we assumed a Shack-Hartmann [7] (SH) with 37 subpupils. The SH measurements were simulated by computing the spatial average of the phase gradient over each subpupil and then adding different amounts of Gaussian noise (see below) to the centroid positions. The wavefront was estimated using a modal LSQ [10] algorithm. Deconvolution of the uncompensated and partially compensated images was performed with the vector Wiener filter [7,8] using either 30 or 1 image and aberration measurement pairs. In the latter case the vector Wiener filter reduces to the traditional Wiener filter.
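The image-channel part of that pipeline could look as follows; the definition of SNRI as the ratio of the mean signal to the noise standard deviation is our assumption, since the paper does not spell it out:

```python
import numpy as np

def degrade_image(obj, psf, snr_i, rng=None):
    """Convolve the geometrical image with the PSF and add Gaussian noise.

    obj   : geometrical (undegraded) image, real 2-D array
    psf   : centered PSF of the (partially compensated) eye, same shape as obj
    snr_i : target image-channel SNR (np.inf for the noiseless case)
    """
    rng = rng or np.random.default_rng()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    if np.isfinite(snr_i):
        sigma = blurred.mean() / snr_i            # noise std fixed by the target SNR_I
        blurred = blurred + rng.normal(0.0, sigma, blurred.shape)
    return blurred
```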

The angular size of the object is 0.18°: it is a 128×128-pixel region of an AO-corrected image of the eye fundus [1]. The pupil diameter is 6.6 mm. The focal length of the 37 microlenses of the Shack-Hartmann wavefront sensor is 50 mm and their diameter is 600 µm. The wavelength of the image channel is λI=0.633 µm and that of the sensor channel is λSH=0.549 µm. The sensor camera pixel size was set to 9 µm. Random Gaussian noise of zero mean and standard deviation σSH={0, 0.03, 0.1, 0.3, 0.5} pixels was added to the centroid positions. These error levels result in wavefront slope estimation errors of standard deviation {0, 0.005, 0.018, 0.054, 0.090} mrad, respectively.
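Those slope-error figures follow directly from the sensor geometry: a centroid error of σ pixels displaces the focal spot by σ times the pixel size, and dividing by the microlens focal length gives the wavefront-slope error. A quick check of the quoted numbers (the small-angle relation is our own restatement of this geometry):

```python
pixel_size_m = 9e-6    # sensor pixel size
focal_m = 50e-3        # microlens focal length

for sigma_px in (0.0, 0.03, 0.1, 0.3, 0.5):
    slope_mrad = sigma_px * pixel_size_m / focal_m * 1e3   # rad -> mrad
    print(f"{sigma_px:4.2f} px -> {slope_mrad:.3f} mrad")
# 0.00 px -> 0.000 mrad, 0.03 px -> 0.005 mrad, 0.10 px -> 0.018 mrad,
# 0.30 px -> 0.054 mrad, 0.50 px -> 0.090 mrad
```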

3. Simulation results

First we verified the improvement in the OTF of the eye optics achieved by the phase plate compensation. We analyzed the cases of two subjects, SB and SM, for whom we have information about their eye aberrations and the performance of several manufactured phase plates. These measurements were obtained by laser ray tracing [9]. The ocular wavefront aberration before partial correction was 0.80 µm rms (root mean square) for SB and 1.30 µm rms for SM. After phase plate compensation the residual aberrations were 0.19 µm and 0.36 µm rms, respectively.

Fig. 1. Modulus of the aberrated OTF (SB: blue; SM: red) and of the optically partially corrected OTF (SB: blue squares; SM: red circles). f is the spatial frequency normalized to the cutoff frequency of the unaberrated eye.

In the presence of noise in the image channel, it is possible to define an effective cutoff frequency of the imaging system as the spatial frequency up to which the OTF of the aberrated eye is higher than the noise spectrum level. Beyond that frequency the noise spectrum of the image will be considerably amplified after deconvolution if the regularization parameter is small, causing the appearance of uncontrolled artifacts in the restored image. Those artifacts can be reduced by increasing the magnitude of the regularization parameter, but this smooths the image and loses high frequency information: there is a trade-off between artifacts and smoothing in the restored image. By trial and error we found that γ=3×10⁻⁵ allows for a reasonable compromise.
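In a simulation this effective cutoff can be estimated directly from the radially averaged OTF and the noise spectrum level, e.g. as in the following sketch (the radial averaging itself is assumed to be done elsewhere):

```python
import numpy as np

def effective_cutoff(otf_radial, freqs, noise_level):
    """Largest normalized frequency at which |OTF| still exceeds the noise level.

    otf_radial  : radially averaged |OTF| values
    freqs       : corresponding normalized spatial frequencies (0..1)
    noise_level : image-channel noise spectrum level (scalar)
    """
    above = np.nonzero(otf_radial > noise_level)[0]
    return freqs[above[-1]] if above.size else 0.0
```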

In Fig. 1 we plot the moduli of the OTF of the aberrated and partially compensated eyes for subjects SB and SM; the SNRI levels corresponding to the noise variances of the image channel (solid for one frame and dashed for the average of 30 frames); and the normalized frequency of the cone mosaic used in this study (vertical line). We can see the improvement in the partially compensated OTF for both subjects in comparison to the corresponding aberrated OTF. The use of partial correction increases the effective cutoff frequency (for subject SB) in the single frame case from 0.55 to 0.78 for SNRI=50 and from 0.17 to 0.67 for SNRI=25. The improvement obtained for subject SM is less significant. By increasing the effective cutoff frequency we reduce the amplification of the noise spectrum, and so improve the quality of the restoration. A complementary way of increasing the effective cutoff frequency is to reduce the noise level by averaging several image frames (dashed lines).

Fig. 2. Simulated images degraded by the eye optics for (a) SB, (b) SM. Images after phase plate partial compensation: (c) SB, (d) SM.

Figure 2 shows the simulated images of the retinal cone mosaic (before deconvolution) as would be obtained by imaging it through the aberrated eye (in Figs. 2(a) and (b)) and through the eye partially corrected by a phase plate (in Figs. 2(c) and (d)). In agreement with the smaller residual aberration for SB after phase plate compensation, the quality of the partially compensated retinal image for this subject is significantly superior to the one obtained for SM.

Fig. 3. Simulated images restored by DWFS and by the hybrid technique PCDWFS for subjects SB and SM, using the vector Wiener filter with 30 samples.

Figure 3 presents a set of images restored by DWFS and by the presented hybrid technique, PCDWFS, showing the benefits of using partial correction before deconvolution. We show the images restored by both methods for subjects SB and SM, obtained at about the most adverse situation (regarding noise in both channels) that each technique can withstand while still giving visually acceptable images. By comparing Figs. 3(a) and (b) with Figs. 3(c) and (d), respectively, we can see that the amount of noise that PCDWFS can handle is larger than that of DWFS. Moreover, the noise accepted in the image and sensor channels by PCDWFS for subject SB is higher than for SM; this behavior can be attributed to the greater improvement in the OTF achieved by partial compensation in the case of SB (see Fig. 1). In order to appreciate the improvement obtained by using partial correction, we show in Figs. 3(e) and (f) the images restored by DWFS under the most adverse conditions supported by PCDWFS.

Fig. 4. Simulated images restored by DWFS and by the hybrid technique PCDWFS for subjects SB and SM, using the vector Wiener filter with only one image and aberration measurement pair.

In Fig. 4 we explore the possibility of applying the hybrid technique using only one frame to deconvolve the degraded image. We chose the same amounts of noise as in Figs. 3(a) and (b), which are close to the worst noise conditions tolerated by DWFS working with 30 frames. In Figs. 4(a) and (b) we show the restoration of one degraded image with the Wiener filter (k=1) without partial correction for both subjects. In Figs. 4(c) and (d) we show the restoration achieved by using the hybrid technique. The results obtained for both subjects using only one image and aberration measurement pair open the possibility of using PCDWFS in real time, provided the computational time required to perform the single-frame deconvolution is of the order of the image acquisition time.

In order to compare quantitatively, in the image domain, the improvement obtained by the use of PCDWFS versus DWFS, we evaluated a gain function, G, defined as:

$$G = 10 \log \left\{ \frac{\sum_{n,m} (o - i)^2}{\sum_{n,m} (o - \hat{o})^2} \right\}$$

where o is the diffraction limited image, i is the degraded image, ô is the estimated image, and the summation is made over the n, m image pixel indices. Greater values of G are indicative of reconstructed images closer to the desired diffraction limited one.
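A direct transcription of this figure of merit (using a base-10 logarithm, which is our assumption for the "log" in the expression above):

```python
import numpy as np

def gain_db(o, i, o_hat):
    """Gain G: o = diffraction-limited image, i = degraded image, o_hat = restored image."""
    return 10.0 * np.log10(np.sum((o - i) ** 2) / np.sum((o - o_hat) ** 2))
```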

Figure 5 shows the gains of the three methods (direct imaging of the eye fundus through the eye partially compensated (PC) by the phase plate, DWFS, and PCDWFS) for subject SB and different amounts of noise in the sensor and image channels, using the vector Wiener filter with both k=30 [Fig. 5(a)] and k=1 [Fig. 5(b)]. In the single frame case (k=1) the gain curve may vary somewhat from frame to frame, owing to the individual realizations of the noise in the image and sensor channels; the plots in this case represent expected values of G. The gain is related to the amount of residual aberration left after partial compensation, and decreases as the noise increases for both techniques. In the case shown here the gain with PCDWFS is always superior to the one obtained with DWFS, for all noise levels considered in both channels. For subject SM (gain curves not shown) the residual phase left after optical compensation is still relatively high, so that the gains achieved by DWFS and PCDWFS are not so different. Note that when the residual aberration is small (SB) and SNRI>25, the gain obtained by PCDWFS with just one sample [Fig. 5(b)] is greater than the one obtained with DWFS and 30 samples [Fig. 5(a)]. When the residual aberration is not so small (SM), the gain obtained by DWFS in the presence of noise can be lower than the one obtained by imaging with optical partial correction alone. This indicates that the attenuation of the spatial frequencies due to the uncompensated eye aberration is strong enough to push the object spectrum below the noise spectrum level, causing the appearance of artifacts in the restored image.

Fig. 5. Plots of the gain G (DWFS: blue; PC: red; PCDWFS: black) as a function of the noise in the sensor (σSH) and image (SNRI) channels, for subject SB. Vector Wiener filters with (a) k=30 and (b) k=1 (expected values).

This analysis of the single-frame estimation in the image domain can be complemented by the evaluation of the behavior of DWFS and PCDWFS in the spatial frequency domain. The overall SNR of the estimated object spectrum Ô (which is a complex valued random quantity) can be defined conventionally [11] as:

$$\mathrm{SNR}_{\hat{O}} = \frac{\left| \left\langle \hat{O} \right\rangle \right|}{\sqrt{\left\langle \left| \hat{O} \right|^{2} \right\rangle - \left| \left\langle \hat{O} \right\rangle \right|^{2}}}$$

where the brackets <> indicate an average over estimates of Ô obtained after particular realizations of the deconvolution procedure. High values of SNRÔ are indicative of a small variability of the individual object spectrum estimates with respect to their average value. In Fig. 6 we plot SNRÔ for subject SB, both for DWFS and for PCDWFS with a single image and aberration measurement pair, for the noise values shown in Fig. 4(c). The SNRÔ for PCDWFS is always higher than for DWFS, and the range of spatial frequencies for which SNRÔ>1 is broader using the PCDWFS approach. The fundamental frequency of the cone mosaic (f≈0.4) is well within the PCDWFS bandwidth.
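In a Monte Carlo simulation this quantity can be estimated by repeating the degradation, sensing and deconvolution with independent noise draws and comparing the mean estimated spectrum with its fluctuations, frequency by frequency; a sketch (the stack of estimates is assumed to be produced elsewhere):

```python
import numpy as np

def snr_spectrum(estimates):
    """SNR of the estimated object spectrum, per spatial frequency.

    estimates : complex array of shape (n_trials, N, N) holding O_hat from
                independent realizations of the image and sensor noise
    """
    mean_spec = estimates.mean(axis=0)                              # <O_hat>
    var_spec = np.mean(np.abs(estimates - mean_spec) ** 2, axis=0)  # <|O_hat|^2> - |<O_hat>|^2
    return np.abs(mean_spec) / np.sqrt(var_spec)
```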

Fig. 6. SNRÔ for DWFS (blue) and PCDWFS (black).

Note that a higher SNRÔ does not necessarily mean that the expected value of Ô is closer to the spectrum of the object. The deconvolution filter used in this work is generally biased, due to several factors (e.g., errors in the phase estimation caused by the limited spatial sampling and noise of the wavefront sensor, the use of a truncated expansion for the wave aberration function, the use of a constant regularization parameter, and so on). Thus the deconvolution process introduces systematic errors in Ô, with the consequence that some spatial frequencies of the object are on average under- or overestimated. Alternative filters, making use of additional information obtained from aberrated images of a pointlike light source, have been demonstrated [11,12] for use in astronomical imaging through the turbulent atmosphere. This approach is effective in overcoming the bias introduced by the Wiener filter; however, the technical difficulty of creating a suitable pointlike source at the retina makes its direct application to the human eye problematic. Despite this bias, our results for G in the spatial domain and the simulated images obtained in this study support the better performance of PCDWFS versus DWFS also in terms of the quantitative resemblance between the restored image and the diffraction-limited image of the original object.

4. Conclusions

It seems feasible to use hybrid techniques based on the combination of optical partial compensation and deconvolution from wavefront sensing to obtain high resolution images of the eye fundus. Simulations with different amounts of noise in the image and sensor channels show the better performance of the PCDWFS hybrid technique in comparison with pure deconvolution. These results suggest the possibility of using this technique to reduce the number of frames needed to perform the deconvolution to just one, enabling its use in near real-time applications.

This work has been supported by the Spanish MCyT, grant DPI2002-04370-C02-01 and FEDER. We thank D.R. Williams for providing the original cone mosaic image, and Susana Marcos for aberration data.

References and links

1. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14, 2884–2892 (1997). [CrossRef]  

2. R. Navarro, E. Moreno-Barriuso, S. Bará, and T. Mancebo, “Phase plates for wave aberration compensation in the human eye,” Opt. Lett. 25, 236–238 (2000). [CrossRef]  

3. S.A. Burns, S. Marcos, A.E. Elsner, and S. Bará, “Contrast improvement of confocal retinal imaging by use of phase-correcting plates,” Opt. Lett. 27, 400–402 (2002). [CrossRef]  

4. I. Iglesias and P. Artal, “High resolution images obtained by deconvolution from wave-front sensing,” Opt. Lett. 25, 1804–1806 (2000). [CrossRef]  

5. D. Catlin and C. Dainty, “High resolution imaging of the human retina with a Fourier deconvolution technique,” J. Opt. Soc. Am. A 19, 1515–1523 (2002). [CrossRef]  

6. J.C. Fontanella, “Analyse de surface d’onde, déconvolution et optique active,” J. Opt. (Paris) 16, 257–268 (1985). [CrossRef]  

7. J. Primot et al., “Deconvolution from wavefront sensing: a new technique for compensating turbulence-degraded images,” J. Opt. Soc. Am. A 7, 1598–1608 (1990). [CrossRef]

8. J. Arines and S. Bará, “Significance of the recovery filter in deconvolution from wavefront sensing,” Opt. Eng. 39, 2789–2796 (2000). [CrossRef]  

9. R. Navarro and M.A. Losada, “Aberrations and relative efficiency of light pencils in the living human eye,” Optom. Vis. Sci. 74, 540 (1997). [CrossRef]

10. R. Cubalchini, “Modal wave-front estimation from phase derivative measurements,” J. Opt. Soc. Am. 69, 972–977 (1979). [CrossRef]

11. M.C. Roggemann and B.M. Welsh, “Signal-to-noise ratio for astronomical imaging by deconvolution from wave-front sensing,” Appl. Opt. 33, 5400–5414 (1994). [CrossRef]

12. M.C. Roggemann et al., “Biased estimators and object-spectrum estimation in the method of deconvolution from wave-front sensing,” Appl. Opt. 33, 5754–5763 (1994). [CrossRef]
