Optica Publishing Group

Achieving cellular resolution for in vivo retinal images of transgenic GFAP–GFP mice via image processing

Open Access

Abstract

In vivo retinal images of transgenic mice, expressing GFP under the control of the GFAP (glial fibrillary acidic protein) promoter, have very poor signal-to-noise ratio (SNR) and cellular resolution such that the analysis of GFAP–GFP expressing retinal cells from these images can be a very challenging task. We report an image averaging method based on a pixel rank matching criterion which significantly enhances both these image attributes. We also show that it compares favorably against direct image averaging and a commercial averaging routine available from the Heidelberg Retinal Angiograph 2 software.

©2008 Optical Society of America

1. Introduction

The advent of the green fluorescent protein (GFP) transgenic mouse model with the glial fibrillary acidic protein (GFAP) promoter has significantly advanced in vivo retinal imaging of GFP-labeled astrocytes [1]. It provides valuable insight into the dynamic changes in astrocyte morphology during development and in response to physiological and pathological conditions.

The images of these fluorescently labeled cells in the optic disc of the mouse are captured via a scanning laser ophthalmoscope (SLO) where the mouse retina is scanned with a laser beam from a point source and the fluorescence emitted is then collected by a photodetector. However, the in vivo imaging of the mouse retina does not achieve diffraction limited resolution due to the wavefront aberration induced by the eye.

Despite the higher system complexity and cost incurred, adaptive optics has been used in mouse retinal imaging to reduce higher-order aberrations [2, 3]. However, the poor SNR of these images, caused by thermal and shot noise from the detector and its amplifier, also adversely affects image resolution: when image averaging is applied to a sequence of successively acquired retinal images, the noise level is reduced but at the expense of image resolution. The respiratory motion of the mouse causes significant changes in the morphology and location of the astrocytes between successive retinal images, so the averaged composite image appears diffuse and occasionally contains undesirable image artifacts.

Various denoising methods are available [4, 5, 6, 7] but, for lack of a better method and because of its effective noise-reducing capability [8], image averaging has become the de facto technique for filtering noise from image sequences. Several modifications to the basic method have been suggested, such as aligning the images prior to averaging [9, 10, 11, 12, 13, 14, 15, 16, 17] or, alternatively, rejecting images which differ significantly from a predefined reference image [18, 19, 20, 21, 22].

Image alignment requires the identification of landmark points, such as the bifurcations of vessel branches, and the establishment of point-to-point correspondence between landmark points in successive images via automated techniques [15, 17, 23]. However, unlike fluorescence angiography images, the high noise levels and poor resolution of our endogenous GFAP–GFP fluorescence images make it challenging to define these landmark points even manually, let alone automatically. Alignment itself is also error prone, since the astrocytes in our images exhibit far greater degrees of freedom than the vessels in fluorescence angiography images, which can be aligned with a simple rigid-body transformation. More powerful alignment methods are therefore required, but these ironically demand a larger number of landmark points than we can possibly identify from our images [12]. Conversely, alignment methods which do not rely on landmark points have not been convincing [19]. Here, we propose a novel frame averaging method based on a pixel rank matching criterion, which we will henceforth refer to as “averaging via rank matching” (ARM). Its two main advantages are a significantly higher SNR and cellular resolution compared to conventional frame averaging methods. This is attributed to the noise robustness of the rank matching criterion and to the fact that it leverages a-priori estimates of the underlying cellular profile, obtained by deconvolving the individual retinal images in the sequence [24]; Section 2.4.4 describes this in detail. This is a significant result as it enables, for the first time, in vivo analysis of the role of astrocytes in the pathogenesis of diseases such as diabetic retinopathy, Alzheimer’s disease and Parkinson’s disease. In addition, ARM can easily be applied to the averaging of images acquired from other modalities such as MRI and confocal microscopy.

2. Materials and methods

2.1. Transgenic GFAP–GFP mice

The generation and genotyping of the transgenic GFAP–GFP mice are as previously described by Zhuo et al. [1]. We used adult mice (8–10 weeks old) in the FVB/N background. Animal husbandry was provided by the Biological Resource Centre in Biopolis, Singapore whereas the experimental protocol covering the current study was approved by the Institutional Animal Care and Use Committee (IACUC).

2.2. Preparation of transgenic mice for retinal imaging

Mice were anaesthetized by intra-peritoneal (i.p.) injections with 0.15ml/10g body weight of Avertin (1.5% 2,2,2-tribromoethanol; T48402) purchased from Sigma-Aldrich (St. Louis, MO, USA), and their pupils dilated with a drop of 0.5% Cyclogyl® sterile ophthalmic solution (cyclopentolate hydrochloride, Alcon®, Puurs, Belgium). Custom-made PMMA hard contact lenses (from Cantor & Nissel, Northamptonshire, UK) were used to avoid dehydration of the cornea. Careful eye examination prior to scanning laser ophthalmoscope imaging ruled out the presence of any corneal or lens opacities.

2.3. Scanning laser ophthalmoscope (SLO) imaging

We employed a commercially available scanning laser ophthalmoscope, the Heidelberg Retina Angiograph 2, HRA 2 (Heidelberg Engineering, Dossenheim, Germany), for retinal imaging of the mice. The imaging system was adapted to the optics of the mouse eye by replacing the 30° focal lens with a 55° wide-angle objective. This reduces the laser beam diameter to 1.7 mm and in turn allows more light to be coupled into the small mouse pupil. Maximum retinal irradiance is approximately 2.0 mW cm−2 and therefore lies below the limits established by the American National Standards Institute. Since our primary interest is the endogenous GFP signal in the mouse retina, we operate the HRA 2 in fluorescence mode, where the 488 nm argon laser provides the excitation light. The emission light is collected by a photodetector through a 500 nm barrier filter. The collected signal is then fed to a frame grabber interfaced to a computer at a frame rate of 5 Hz, and each image sequence comprises 45 single image frames. Each image is defined by 1536×1536 pixels with an intensity resolution of 8 bits/pixel. The estimated size of a focused spot on the retina, i.e. the resolution limit, is approximately 10 µm. By the Nyquist sampling criterion, the sampling density must be at least 5.0 µm/pixel, i.e. the sampling frequency must be twice the bandwidth frequency. Since we assume that 1° of visual angle subtended at the mouse eye corresponds to about 31 µm of retina [25] and our field of view is about 55.9°, the actual sampling density works out to approximately (31 × 55.9)/1536 ≈ 1.13 µm/pixel.
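
The sampling-density arithmetic above can be checked in a few lines; the numerical values are taken directly from the text, while the variable names are our own:

```python
# Sketch of the Section 2.3 sampling-density check.
resolution_limit_um = 10.0                     # estimated focused-spot size on the retina
nyquist_density = resolution_limit_um / 2.0    # required sampling density, um/pixel
um_per_degree = 31.0                           # ~31 um of retina per degree of visual angle
field_of_view_deg = 55.9
pixels_across = 1536
actual_density = um_per_degree * field_of_view_deg / pixels_across

print(f"Nyquist requirement: {nyquist_density:.2f} um/pixel")   # 5.00 um/pixel
print(f"Actual density:      {actual_density:.2f} um/pixel")    # ~1.13 um/pixel
```

The actual density is well below the Nyquist requirement, i.e. the system oversamples the retina.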

2.4. Proposed frame averaging algorithm

Here, we aim to obtain a composite fluorescence image of the mouse retina, with improved SNR and cellular resolution, from a sequence of relatively noisy and poorly resolved retinal fluorescence images. This is achieved by first aligning the images such that the between-frame motion artifacts are reduced and then averaging the aligned images. To this end, the proposed ARM algorithm comprises four major sections: (i) Detecting landmark points, (ii) Establishing point-to-point correspondence, (iii) Aligning images via affine transformation and (iv) Aligning images via the novel rank matching technique.

2.4.1. Detecting landmark points

The high noise levels in the image sequence prevent us from accurately detecting the landmark points. The image sequence can be formally defined as I_i, i=1,2,…,N, where the subscript i identifies a particular image in the sequence of N images and N=45. In ARM, we propose an automated method of selecting landmark points from the a-priori estimates J_i, i=1,2,…,N of the underlying fluorescence signal in each retinal image of the original sequence. The a-priori estimates are obtained by deconvolving each image I_i in the sequence (Huygens Essential software, Scientific Volume Imaging BV) [24]. The deconvolution software computes a theoretical point spread function (PSF), resembling a 2-D Gaussian profile in the x-y plane, based on the ophthalmoscope device settings. It also automatically estimates the image SNR and uses this as a regularization parameter to control the sharpness of the deconvolved image, effectively removing photon noise. Next, we apply a gray-scale morphological operation [26] to locate the intensity maxima (peak intensities) in each estimate J_i. The locations of the intensity maxima in the I_i images will be identical to those in the corresponding J_i images. These intensity maxima serve as landmark points whose displacement over successive images in the sequence can be used to align those images. Their localized “spot-like” neighborhoods in J_i make them effective landmark points, since displacements along any direction in the x-y plane can be accurately detected.
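
The maxima-detection step can be sketched as follows, with `scipy.ndimage.maximum_filter` standing in for the gray-scale morphological operation of [26]; the neighborhood size and intensity floor are illustrative choices, not the authors' settings:

```python
import numpy as np
from scipy import ndimage

def detect_landmarks(J, size=11, min_intensity=10):
    """Locate intensity maxima in a deconvolved estimate J (2-D array).

    A pixel is a landmark if it equals the maximum of its size x size
    neighborhood and exceeds a floor that suppresses background pixels.
    """
    local_max = ndimage.maximum_filter(J, size=size)
    peaks = (J == local_max) & (J > min_intensity)
    return np.argwhere(peaks)          # array of (row, col) landmark points

# toy example: two Gaussian "spots" on a dark background
yy, xx = np.mgrid[0:64, 0:64]
J = 100 * np.exp(-((yy - 20)**2 + (xx - 20)**2) / 20.0) \
  + 100 * np.exp(-((yy - 45)**2 + (xx - 50)**2) / 20.0)
landmarks = detect_landmarks(J)
print(landmarks)   # peaks at (20, 20) and (45, 50)
```

The intensity floor matters: without it, every flat background pixel trivially equals its own neighborhood maximum.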

2.4.2. Establishing point-to-point correspondence between adjacent images

Classical frame averaging methods establish correspondence between the landmark points in each image I_i of a sequence and those in the reference image I_1 from the same sequence. However, establishing this correspondence can be impossible in some cases, since the fluorescence signal in image I_i may have changed significantly in intensity and location from the corresponding signal in the reference image I_1. ARM instead establishes point-to-point correspondence between successive images I_{i-1} and I_i, so that each pair of successive images in the sequence has its own set of corresponding landmark points. To this end, the normalized cross-correlation (NCC) measure [23]

$$\mathrm{NCC}\left(J_{i,u,v},\,J_{i-1,u',v'}\right)=\frac{\left\langle J_{i,u,v}-\bar{J}_{i,u,v},\; J_{i-1,u',v'}-\bar{J}_{i-1,u',v'}\right\rangle}{\left|J_{i,u,v}-\bar{J}_{i,u,v}\right|\,\left|J_{i-1,u',v'}-\bar{J}_{i-1,u',v'}\right|}\tag{1}$$

is applied as shown in Eq. (1), where J_{i,u,v} is the (2Wc+1)×(2Wc+1) neighborhood of the ith deconvolved image centered on one of its landmark points (u,v), Wc is the half-width of the neighborhood, J̄_{i,u,v} is the mean value of this neighborhood, J_{i-1,u′,v′} is the (2Wc+1)×(2Wc+1) neighborhood of the (i−1)th deconvolved image centered on one of its landmark points (u′,v′), and J̄_{i-1,u′,v′} is the mean value of that neighborhood. The notation ⟨·,·⟩ denotes the dot product whereas |·| denotes the magnitude. We again use the a-priori estimates J_i since they are cleaner and better resolved than the I_i images.

Point-to-point correspondence between landmark points (u,v) and (u′,v′) is established if (u,v) lies within the (2Wc+1)×(2Wc+1) neighborhood of (u′,v′) and their NCC measure, NCC(J_{i,u,v}, J_{i-1,u′,v′}), is the largest and exceeds an empirically predefined threshold NCC_th. NCC_th may vary from one sequence to another due to factors such as SNR, but its value typically ranges from 0.7 to 0.9. NCC_th should be large enough to avoid spurious landmark points, yet low enough not to exclude valid ones. Wc is empirically set to 11 since the maximum displacement in the field of view was observed to remain within this vicinity for all images in the sequence. The point-to-point correspondence in the I_i images follows exactly that determined from the J_i images.
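
This matching step can be sketched as below, assuming landmark lists and deconvolved estimates as NumPy arrays; the helper names and the default threshold of 0.8 (within the quoted 0.7–0.9 range) are illustrative:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation (Eq. 1) between two equal-size patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a.ravel(), b.ravel()) / denom) if denom else 0.0

def match_landmarks(J_prev, J_curr, pts_prev, pts_curr, Wc=11, ncc_th=0.8):
    """Pair each landmark in J_curr with the best landmark in J_prev.

    A pair is accepted only if the points lie within a (2Wc+1)x(2Wc+1)
    neighborhood of each other and their patch NCC is the largest score
    above ncc_th.
    """
    matches = []
    for (u, v) in pts_curr:
        best, best_pt = ncc_th, None
        pa = J_curr[u - Wc:u + Wc + 1, v - Wc:v + Wc + 1]
        for (up, vp) in pts_prev:
            if abs(u - up) > Wc or abs(v - vp) > Wc:
                continue
            pb = J_prev[up - Wc:up + Wc + 1, vp - Wc:vp + Wc + 1]
            if pb.shape != pa.shape:
                continue               # skip landmarks too close to the border
            score = ncc(pa, pb)
            if score > best:
                best, best_pt = score, (up, vp)
        if best_pt is not None:
            matches.append(((u, v), best_pt, best))
    return matches

# toy check: one Gaussian spot shifted by (2, 1) between adjacent frames
yy, xx = np.mgrid[0:64, 0:64]
spot = lambda y0, x0: 100 * np.exp(-((yy - y0)**2 + (xx - x0)**2) / 20.0)
matches = match_landmarks(spot(20, 20), spot(22, 21), [(20, 20)], [(22, 21)])
print(matches)   # the shifted spot matches its predecessor with NCC near 1
```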

2.4.3. Linear alignment of images via affine transformation

ARM implements the linear alignment of images in a sequential manner, taking into consideration the translation, rotation, scaling and shearing effects between successive images in the sequence. We begin with the alignment of I_2 to I_1, resulting in the aligned image I_2^a, followed by the alignment of I_3 to I_1, and the sequential alignment process continues in this way until the last image I_N in the sequence is aligned. The alignment process at every step can be formally expressed as follows, where

$$I_i^{a}=f_i\circ f_{i-1}\circ\cdots\circ f_2\,(I_i)\tag{2}$$

the superscript a denotes that I_i has been aligned to I_1 and f_2 represents the affine transformation function determined from the point-to-point correspondence of the landmark points between I_1 and I_2. Similarly, f_i is determined from the point-to-point correspondence of the landmark points between adjacent images I_{i-1} and I_i. The motivation here is that the transformation functions f_i, i=2,…,N between adjacent images can be determined more accurately than those between a given image and a fixed reference image, because the motion artifacts between adjacent images tend to be smaller than between images that are further apart.
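
As an illustration of how each f_i might be obtained and chained, the sketch below fits an affine transform to point correspondences by least squares and composes adjacent-frame transforms; the function names and the matrix convention (each f_k maps frame-k coordinates to frame-(k−1) coordinates) are our own assumptions, not the authors' implementation:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts onto dst_pts.

    Returns a 3x3 homogeneous matrix; requires >= 3 non-collinear pairs.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])        # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) solution
    M = np.eye(3)
    M[:2, :] = params.T
    return M

def compose_to_first(transforms):
    """Compose adjacent-frame transforms [f_2, ..., f_i] into one matrix
    mapping frame-i points to frame-1 coordinates (cf. Eq. 2)."""
    M = np.eye(3)
    for f in transforms:       # p_1 = f_2 @ f_3 @ ... @ f_i @ p_i
        M = M @ f
    return M
```

With pure translations of (2, 3) and (1, 1) between the first three frames, the composite maps a frame-3 point by a total shift of (3, 4).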

But the retina is a biological tissue and as such affine transformation alone may not accurately compensate the complex respiratory motion between successive images in the sequence. In the next section, we propose a non-linear alignment approach which overcomes this limitation.

2.4.4. Non-linear alignment of images via rank matching

At the heart of ARM lies a pixel rank matching criterion which defines our non-linear alignment process. The rank of a pixel denotes the order of that pixel in an image neighborhood, assuming all pixels in that neighborhood have been sorted in ascending order of intensity. If image A is to be aligned to image B, the criterion requires the pixel values in image A to be updated such that the rank of the center pixel in every local neighborhood of image A is identical to that of the center pixel in the corresponding neighborhood of image B. This is a reasonable requirement, since the rank of both pixels should not differ despite variations in the average intensity level of their neighborhoods. However, this assumption is only valid in the absence of noise, which can significantly alter the pixel ranks.
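
The rank of a pixel can be made concrete in a few lines; the 0-based convention and tie handling here are illustrative choices:

```python
import numpy as np

def center_rank(patch):
    """Rank of the center pixel within a (2W+1) x (2W+1) neighborhood:
    its position when all neighborhood pixels are sorted ascending."""
    flat = patch.ravel()
    center = flat[flat.size // 2]       # center of an odd-sized square patch
    return int(np.sum(flat < center))   # 0-based rank; ties rank below

patch = np.array([[3, 7, 1],
                  [9, 5, 2],
                  [8, 4, 6]])
print(center_rank(patch))   # center value 5 exceeds 4 neighbors -> rank 4
```

Note that adding a constant offset to the whole patch leaves the rank unchanged, which is exactly the invariance the criterion exploits.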

To ensure the noise robustness of our rank matching criterion, we utilize the rank information obtained from the aligned a-priori estimates J_i^a, i=1,2,…,N, computed by subjecting J_i to the same transformation functions as in Eq. (2). As in Section 2.4.3, the non-linear alignment is implemented sequentially as shown in Eq. (3),

$$I_i^{a,r}(x,y)=I_i^{a}(x_o,y_o)\quad\text{such that}\quad\mathrm{RANK}\left(I_i^{a}(x_o,y_o)\right)=\mathrm{RANK}\left(J_{i-1}^{\mathrm{ref}}(x,y)\right)\tag{3}$$

where the RANK operation is performed on a local (2Wr+1)×(2Wr+1) neighborhood centered at (x,y) and Wr is the half-width of the neighborhood. The pixel (x_o,y_o) is located within this neighborhood, and the superscript r in I_i^{a,r} denotes that I_i^a has been non-linearly aligned such that the ranks of its pixels match those of J_{i-1}^{ref}, for i ≥ 2. J_i^{ref} is defined as

$$J_i^{\mathrm{ref}}=\frac{(i-1)\,J_{i-1}^{\mathrm{ref}}+J_i^{a,r}}{i}\tag{4}$$

where J_1^{ref} = J_1 and J_i^{a,r} is computed as follows:

$$J_i^{a,r}(x,y)=J_i^{a}(x_o,y_o)\quad\text{such that}\quad\mathrm{RANK}\left(J_i^{a}(x_o,y_o)\right)=\mathrm{RANK}\left(J_{i-1}^{\mathrm{ref}}(x,y)\right)\tag{5}$$

Finally, the result of averaging the first i aligned images in the sequence is given by

$$I_i^{\mathrm{ref}}=\frac{(i-1)\,I_{i-1}^{\mathrm{ref}}+I_i^{a,r}}{i}\tag{6}$$

where I_1^{ref} = I_1. Eqs. (3)–(6) are implemented in a cyclic fashion: in every cycle, i is incremented and Eqs. (3)–(6) are recomputed. I_N^{ref} denotes the final averaged image, where N is the total number of images in the sequence.
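
Putting Eqs. (3)–(6) together, a brute-force per-pixel sketch of one possible implementation is shown below; the affine step of Section 2.4.3 is omitted, border pixels are left untouched, and ties in rank are broken arbitrarily — these simplifications are ours:

```python
import numpy as np

def rank_match(I_a, J_a, J_ref, Wr):
    """One non-linear alignment step (Eqs. 3 and 5), per pixel.

    For every pixel (x, y), find the neighborhood position (xo, yo) whose
    rank in J_a's window equals the rank of J_ref's center pixel in its own
    window, then copy the values at (xo, yo) into the aligned outputs.
    """
    H, W = J_ref.shape
    I_out, J_out = I_a.copy(), J_a.copy()
    for x in range(Wr, H - Wr):
        for y in range(Wr, W - Wr):
            ref_win = J_ref[x - Wr:x + Wr + 1, y - Wr:y + Wr + 1].ravel()
            target_rank = int(np.sum(ref_win < J_ref[x, y]))
            cur_win = J_a[x - Wr:x + Wr + 1, y - Wr:y + Wr + 1]
            order = np.argsort(cur_win, axis=None)       # ascending ranks
            xo, yo = np.unravel_index(order[target_rank], cur_win.shape)
            I_out[x, y] = I_a[x - Wr + xo, y - Wr + yo]
            J_out[x, y] = J_a[x - Wr + xo, y - Wr + yo]
    return I_out, J_out

def arm_average(I_frames, J_frames, Wr):
    """Cyclic running average of Eqs. (3)-(6)."""
    I_ref = I_frames[0].astype(float)
    J_ref = J_frames[0].astype(float)
    for i in range(1, len(I_frames)):
        I_ar, J_ar = rank_match(I_frames[i], J_frames[i], J_ref, Wr)
        J_ref = (i * J_ref + J_ar) / (i + 1)    # Eq. 4, with 1-based frame index i+1
        I_ref = (i * I_ref + I_ar) / (i + 1)    # Eq. 6
    return I_ref
```

As a sanity check, a sequence of identical noiseless frames should pass through unchanged, since the target rank always resolves to the center pixel itself.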

3. Results

3.1. Estimating the underlying fluorescence signal

Figure 1(a) is a typical image of the optic disc in the retina of a transgenic mouse, where the fluorescence signal is from the GFP which, in this case, labels the glial cells. As observed, the image has a very low SNR; this makes it extremely difficult to detect any suitable landmark points, and such images are clearly inappropriate as reference images for rank matching. An a-priori estimate is therefore required to accurately recover the underlying fluorescence signal. We have explored several methods of determining this estimate and found that the Huygens deconvolution method (Huygens Essential software, Scientific Volume Imaging BV) [24] gives the best results. The deconvolved result of Fig. 1(a) is shown in Fig. 1(b): the speckle noise is removed and the fluorescence intensity profile is characterized by various intensity maxima. Conversely, estimating the a-priori estimate via 9×9 median filtering fails to remove the speckle noise and instead distorts the image, as observed in Fig. 1(c). Figure 1(d) shows the result of applying anisotropic diffusion [5], where the noise appears to be removed but at the expense of diffusing the underlying fluorescence signal, leaving the image blurred overall.


Fig. 1. (a) Input image, I_i, of the GFP signal from the optic disc of the transgenic mouse; (b)–(d) J_i obtained after subjecting I_i to (b) Huygens deconvolution, (c) 9×9 median filtering and (d) speckle reducing anisotropic diffusion.


3.1.1. Tuning of the deconvolution parameters

The Huygens deconvolution method requires certain device information, such as the pinhole diameter, numerical aperture and excitation wavelength, but assumes that the device in question is a wide-field or confocal microscope and not a scanning laser ophthalmoscope. We circumvent this problem by defining parameters for a hypothetical confocal microscope system such that the ratio of its Nyquist critical sampling density to the actual sampling density is comparable to that of the HRA 2 system specified in Section 2.3 (5.00 µm/pixel and 1.13 µm/pixel respectively). To this end, we define the confocal microscope parameters as shown in Table 1 so that the Nyquist critical and actual sampling densities are 70 nm and 15.8 nm respectively. This ensures that both systems have the same degree of oversampling, i.e. 5.00/1.13 ≈ 70.0/15.8. The theoretical point spread function for deconvolution was subsequently computed from the parameters specified in Table 1.


Table 1. Parameters of imaginary confocal microscope for the Huygens deconvolution method

3.2. Fast and accurate tracking of landmark points

We automatically select intensity maxima as landmark points for every a-priori deconvolved image J_i in a sequence (Section 2.4.1). Although a large number of landmark points may be detected in an image J_i, only a subset of these points correspond to those in the adjacent image J_{i-1}, since we require that NCC(J_{i,u,v}, J_{i-1,u′,v′}) ≥ 0.9 and that corresponding points lie within a 23×23 square neighborhood of each other (Wc=11, as in Section 2.4.2).


Fig. 2. Point-to-point correspondence between: first row, (a) J_1 and (b) J_2; second row, (c) J_2 and (d) J_3. Corresponding points in adjacent images carry the same number.


Figure 2 illustrates this: Figs. 2(a) and (b) are a-priori estimates of the first and second images in a sequence, whereas Figs. 2(c) and (d) are a-priori estimates of the second and third images in the same sequence. As observed, each pair has its own unique set of corresponding landmark points, indicated by the labeled green and red dots.

3.3. Quantitative analysis of ARM

Here, we quantify the influence of the window size Wr and image SNR on the alignment accuracy of the rank-matching criterion. This is achieved by omitting the affine transformation step of Section 2.4.3 and directly applying the rank-matching criterion of Section 2.4.4 on a sequence of 45 noisy synthetic images where each 256×256 image comprises a 2-D “ball” shaped intensity profile with a diameter of approximately 36 pixels. The “ball” is vertically displaced by two pixels between adjacent images. The Huygens deconvolution method was applied to obtain the a-priori deconvolved images Ji from the sequence.

3.3.1. Influence of Wr on alignment accuracy

In each case, the final averaged image I_45^{ref} is compared against the ideal (noiseless) synthetic image I′ and the mean absolute error is computed as MAE = (1/M) Σ_{i=1}^{M} |I_{45,i}^{ref} − I′_i|, where i and M denote the pixel index and the total number of image pixels respectively; the results are shown in Fig. 3(g). It is evident that the distortion is smaller for larger window sizes and minimal for Wr=19. Figure 3(a) shows a sample image from a noisy sequence with an SNR of 0 dB, whereas the final averaged results in Figs. 3(b)–(e) illustrate the influence of Wr on the alignment accuracy. We observe that the averaged results in Figs. 3(b)–(d) appear truncated. This is due to an insufficient window size of (2Wr+1)×(2Wr+1), which limits the validity of the rank matching operation to the peak intensity region of the “ball” profile, although the correctly aligned region grows with the window size. Accurate alignment of the entire “ball” profile can only be obtained if the window is large enough to cover the entire “ball” and its displaced counterpart in the adjacent image, i.e. 2Wr+1 ≃ ball diameter + between-frame pixel displacement. This window size requirement is met for Fig. 3(e), where the distortion is minimal. In contrast, the 2-D “ball” profile appears totally distorted in the direct frame averaged result of Fig. 3(f) due to the vertical displacements between adjacent images in the sequence.
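
The error measure itself reduces to a one-liner in, e.g., NumPy; the toy arrays below are ours:

```python
import numpy as np

def mae(avg_img, ideal_img):
    """Mean absolute error between a final averaged image and the
    noiseless synthetic reference, as defined in Section 3.3.1."""
    diff = np.asarray(avg_img, float) - np.asarray(ideal_img, float)
    return float(np.mean(np.abs(diff)))

# toy example: a uniform offset of 0.5 per pixel gives an MAE of 0.5
ideal = np.zeros((4, 4))
print(mae(ideal + 0.5, ideal))   # 0.5
```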

3.3.2. Influence of SNR on alignment accuracy

In Figs. 4(d)–(f), we observe the efficacy of the ARM method in reducing noise while minimizing distortion for image sequences with different SNR levels, shown in Figs. 4(a)–(c). A window size of Wr=19 is used in all three cases. Low MAE values of about 1.9% and 0.5% were achieved for SNR levels of −10 dB and 0 dB respectively. Even for a poor SNR level of −20 dB, a reasonably low MAE of about 6.5% was achieved. The efficacy of ARM is attributed to the noise robustness of the deconvolution method and the proposed rank matching criterion.

3.4. Qualitative analysis of ARM on noisy optic disc images of transgenic mice

We have also evaluated the performance of ARM on noisy optic disc images of transgenic GFAP–GFP mice and found that the results corroborate the findings made in Section 3.3 on the synthetic images.


Fig. 3. Influence of Wr on the alignment accuracy: (a) Reference image, (b)–(f) Final averaged images corresponding to (b) Wr=4, (c) Wr=9, (d) Wr=14, (e) Wr=19, (f) Direct averaging and (g) Corresponding drop in mean absolute error (MAE) values from (b) to (e).


3.4.1. Influence of Wr on alignment accuracy

As Wr increases, the resultant improvement in alignment accuracy can be observed in the final averaged images I_N^{ref} of Fig. 5, where N=45. Image artifacts are visible in Fig. 5(a) due to the small window size of Wr=2. Although the underlying fluorescence profile is taking shape in Fig. 5(b), it appears that only the maximum and minimum intensity regions of the underlying fluorescence have been aligned accurately, resulting in the appearance of tiny fluorescence dots and narrow branches of blood vessels in the optic disc. This is consistent with the observation made in Section 3.3.1. Accurate results can only be achieved if Wr ≃ 17, since the sum of the “spot” diameter in the J_i images and the displacement between adjacent retinal images is approximately 35 pixels (2Wr+1=35). This window size corresponds to the result shown in Fig. 5(d). A slight deterioration in signal resolution is observed for the larger window size of Wr=22 in Fig. 5(e), whereas minor image artifacts remain observable for the smaller window size of Wr=12.


Fig. 4. Influence of SNR on the alignment accuracy: (a)–(c) Sample images from sequences with different SNR levels and (d)–(f) the corresponding final averaged images (Wr=19).



Fig. 5. Final averaged image I_N^{ref} of a transgenic mouse retina from the ARM method: (a) Wr=2, (b) Wr=7, (c) Wr=12, (d) Wr=17 and (e) Wr=22.


3.4.2. Comparison against conventional averaging methods

We also find that the final averaged image I_N^{ref} of Fig. 6(a), processed using ARM, has a significantly better resolution than those obtained using direct frame averaging in Fig. 6(b) and the averaging module of the commercial Heidelberg Retinal Angiograph 2 (HRA 2) software in Fig. 6(c). In addition, the noise level in Fig. 6(a) appears notably lower than those in Figs. 6(b) and (c). The poor resolution of the results in Figs. 6(b) and (c) makes it extremely difficult, if not impossible, to perform morphometric and intensity analysis of the fluorescence spots in the optic disc region.

4. Conclusions

We have presented a fully automated method for frame averaging of in vivo GFAP–GFP fluorescence images of the transgenic mouse retina. It is based on a non-linear alignment method we call averaging via rank matching (ARM), which yields a significantly higher resolution and SNR in the final averaged image than conventional and motion-compensated frame averaging methods. Rank matching significantly improves upon the alignment accuracy of standard motion compensation operations such as affine transformation, as it is robust to noise and to variations in the fluorescence intensity of astrocytes between successive images. The method also enables automated, accurate and robust detection of landmark points from the peak intensity points of the fluorescence profiles. These landmark points are used to establish point-to-point correspondence between successive images, rather than between a fixed reference frame and all other images in a sequence, since successive images are more closely aligned. Other potential applications of ARM include the averaging of images acquired from other modalities such as confocal microscopy, MRI and PET.


Fig. 6. Qualitative comparison of the composite GFAP–GFP transgenic mouse retina images, from the same sequence of noisy images, processed using (a) the proposed ARM method, (b) Direct frame averaging and (c) HRA 2’s averaging module.


Acknowledgements

We would like to thank Dr. Xiao-Shan Min for the confirmation of a proper pupil dilation and lack of the corneal or lens opacities in the anesthetized mice prior to retinal imaging. The research was supported and funded by the Institute of Bioengineering and Nanotechnology (IBN), the Agency of Science, Technology and Research (A*STAR), the Republic of Singapore.

References and links

1. L. Zhuo, B. Sun, C. L. Zhang, A. Fine, S. Y. Chiu, and A. Messing, “Live astrocytes visualized by green fluorescent protein in transgenic mice.” Dev. Biol. 187(1), 36–42 (1997). URL http://dx.doi.org/10.1006/dbio.1997.8601. [CrossRef]   [PubMed]  

2. D. P. Biss, D. Sumorok, S. A. Burns, R. H. Webb, Y. Zhou, T. G. Bifano, D. Côté, I. Veilleux, P. Zamiri, and C. P. Lin, “In vivo fluorescent imaging of the mouse retina using adaptive optics.” Opt. Lett. 32(6), 659–661 (2007). URL http://dx.doi.org/10.1364/OL.32.000659. [CrossRef]   [PubMed]  

3. V. Nourrit, B. Vohnsen, and P. Artal, “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy,” J. Opt. A 7, 585–592 (2005). [CrossRef]  

4. K. W. Leszczynski, S. Shalev, and N. S. Cosby, “An adaptive technique for digital noise suppression in on-line portal imaging.” Phys. Med. Biol. 35(3), 429–439 (1990). URL http://dx.doi.org/10.1088/0031-9155/35/3/011. [CrossRef]   [PubMed]  

5. Y. Yu and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Trans. Image Process. 11(11), 1260–1270 (2002). URL http://dx.doi.org/10.1109/TIP.2002.804276. [CrossRef]  

6. L. Yin, R. Yang, M. Gabbouj, and Y. Neuvo, “Weighted median filters: a tutorial,” IEEE Trans. Circuits Syst. 43(3), 157–192 (1996). URL http://dx.doi.org/10.1109/82.486465. [CrossRef]  

7. Y. Xu, J. B Weaver, D. M. Healy, and J. Lu, “Wavelet transform domain filters: a spatially selective noise filtration technique,” IEEE Trans. Image Process. 3(6), 747–758 (1994). URL http://dx.doi.org/10.1109/83.336245. [CrossRef]   [PubMed]  

8. W. Swindell and M. A. Mosleh-Shirazi, “Noise reduction by frame averaging: a numerical simulation for portal imaging systems.” Med. Phys. 22(9), 1405–1411 (1995). URL http://dx.doi.org/10.1118/1.597618. [CrossRef]   [PubMed]  

9. D. C. Ercole, C. Giuseppe, M. Alessandro, M. Carlo, and V. Marco, “Compensation of random eye motion in television ophthalmoscopy: preliminary results,” IEEE Trans. Med. Img. 6(1), 74–81 (1987). URL http://dx.doi.org/10.1109/TMI.1987.4307800. [CrossRef]  

10. D. U. Bartsch, M. H. El-Bradey, A. El-Musharaf, and W. R. Freeman, “Improved visualisation of choroidal neovascularisation by scanning laser ophthalmoscope using image averaging.” Br. J. Ophthalmol. 89(8), 1026–1030 (2005). URL http://dx.doi.org/10.1136/bjo.2004.057364. [CrossRef]   [PubMed]  

11. K. A. Goatman, A. Manivannan, J. H. Hipwell, P. F. Sharp, N. Lois, and J. V. Forrester, “Automatic registration and averaging of ophthalmic autofluorescence images,” in Medical Image Understanding and Analysis (2001).

12. B. M. Ege, T. Dahl, T. Søndergaard, and O. V. Larsen, “Automatic registration of ocular fundus images,” in 1st International Workshop on Computer Assisted Fundus Image Analysis (2000).

13. S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy,” Proc. SPIE 5688A, 145–151 (2005). URL http://dx.doi.org/10.1117/12.591190. [CrossRef]  

14. J. B. Mulligan, “Recovery of motion parameters from distortions in scanned images,” in Proceedings of the NASA Image Registration Workshop (1997).

15. A. V. Cideciyan, “Registration of ocular fundus images: an algorithm using cross-correlation of triple invariant image descriptors,” IEEE Eng. Med. Biol. Mag. 14(1), 52–58 (1995). URL http://dx.doi.org/10.1109/51.340749. [CrossRef]  

16. C. R. Vogel, D. W. Arathorn, A. Roorda, and A. Parker, “Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy,” Opt. Express 14, 487–497 (2006). URL http://dx.doi.org/10.1364/OPEX.14.000487. [CrossRef]   [PubMed]  

17. A. Wade and F. Fitzke, “A fast, robust pattern recognition system for low light level image registration and its application to retinal imaging,” Opt. Express 3, 190–197 (1998). URL http://www.opticsexpress.org/abstract.cfm?URI=oe-3-5-190. [CrossRef]   [PubMed]  

18. I. Graesslin, M. Mittlebach, H. Eggers, T. Schaeffter, P. Bornert, and O. Lange, “MR real-time coronary imaging using the normalized cross-correlation,” in Proceedings of the International Society for Magnetic Resonance in Medicine (2002).

19. I. Graesslin, M. Mittelbach, T. Schaeffter, P. Bornert, H. Eggers, and O. Lange, “Adaptive weighted averaging for real-time MR imaging,” in Proceedings of the International Society for Magnetic Resonance in Medicine (2002).

20. C. J. Hardy, M. Saranathan, Y. Zhu, and R. D. Darrow, “Coronary angiography by real-time MRI with adaptive averaging.” Magn. Reson. Med. 44(6), 940–946 (2000). URL http://dx.doi.org/10.1002/1522-2594(200012)44:6<940::AID-MRM16>3.0.CO;2-F. [CrossRef]   [PubMed]  

21. M. S. Sussman, N. Robert, and G. A. Wright, “Adaptive averaging for improved SNR in real-time coronary artery MRI.” IEEE Trans. Med. Img. 23(8), 1034–1045 (2004). URL http://dx.doi.org/10.1109/TMI.2004.828677. [CrossRef]  

22. M. S. Sussman, A. B. Kerr, J. M. Pauly, N. Merchant, and G. A. Wright, “Tracking the motion of the coronary arteries with the correlation coefficient,” in Proceedings of the International Society for Magnetic Resonance in Medicine (1999).

23. J. P. Lewis, “Fast normalized cross-correlation,” in Vision Interface (1995). URL http://citeseer.ist.psu.edu/lewis95fast.html.

24. H.-U. Dodt, U. Leischner, A. Schierloh, N. Jährling, C. P. Mauch, K. Deininger, J. M. Deussing, M. Eder, W. Zieglgänsberger, and K. Becker, “Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain.” Nat. Methods 4(4), 331–336 (2007). URL http://dx.doi.org/10.1038/nmeth1036. [CrossRef]   [PubMed]  

25. A. Maass, P. L. von Leithner, V. Luong, L. Guo, T. E. Salt, F. W. Fitzke, and M. F. Cordeiro, “Assessment of rat and mouse RGC apoptosis imaging in vivo with different scanning laser ophthalmoscopes.” Curr. Eye Res. 32(10), 851–861 (2007). URL http://dx.doi.org/10.1080/02713680701585872. [CrossRef]   [PubMed]  

26. P. Soille, Morphological Image Analysis (Springer-Verlag, 1999). URL http://web.ukonline.co.uk/soille/book1st.
