Optica Publishing Group

Retinal imaging using adaptive optics optical coherence tomography with fast and accurate real-time tracking

Open Access

Abstract

One of the main obstacles in high-resolution 3-D retinal imaging is eye motion, which causes blur and distortion artifacts that require extensive post-processing to be corrected. Here, an adaptive optics optical coherence tomography (AOOCT) system with real-time active eye motion correction is presented. Correction of ocular aberrations and of retinal motion is provided by an adaptive optics scanning laser ophthalmoscope (AOSLO) that is optically and electronically combined with the AOOCT system. We describe the system design and quantify its performance. The AOOCT system features an independent focus adjustment that allows focusing on different retinal layers while maintaining the AOSLO focus on the photoreceptor mosaic for high-fidelity active motion correction. The use of a high-quality reference frame for eye tracking increases revisitation accuracy between successive imaging sessions, allowing several volumes to be collected from the same area. This system enables spatially targeted retinal imaging as well as volume averaging over multiple imaging sessions with minimal correction of motion in post processing.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

15 November 2022: Typographical corrections were made to Section 1 and Section 2.6.

1. Introduction

Optical coherence tomography (OCT) is a non-invasive interferometric imaging technique that can record three-dimensional images of biological tissue. Due to OCT’s high volumetric resolution and rapid acquisition speed, it has become a standard ophthalmic instrument for diagnostics, and it is becoming standard of care in other disciplines such as dermatology, cardiology and pulmonology [1,2]. Technological development of OCT has been directed towards improving its sensitivity, in turn allowing high-speed imaging [3,4], improving its axial resolution [5,6], and incorporating adaptive optics (AO) for lateral resolution improvement [7–10]. OCT has been valuable clinically for its functional imaging techniques to evaluate the vasculature via angiography [11–13] and functional cellular changes in the retina [14–21].

However, even during fixation, OCT images are subject to motion artifacts caused by ever-present ocular movements. Most modern commercial systems now employ some form of active tracking to mitigate these artifacts, but these trackers are relatively slow and have limited accuracy, so that most clinical data, upon close inspection, still contain artifacts caused by motion [22,23]. For commercial OCT systems and many clinical applications, this is generally acceptable; however, for high-resolution and/or research-application OCT systems, these errors remain problematic. Acquiring a sequence of B-scans (single retinal cross-section images) from a fixed location over any length of time in a human eye, for example, is extremely challenging due to the lack of spatial references in the dimension perpendicular to the scanning direction, which impedes accurate registration of the B-scans. Most research systems, with a few exceptions [24,25], direct their efforts to track a structure over time by imaging an entire volume sequence, ensuring that the structures of interest are captured frequently enough that their position can be registered onto a reference volume. Careful post-processing is then required to accurately register each volume prior to making any quantitative measures of that structure [21]. This has resulted in a new sub-field of research that develops algorithms and methods dedicated to motion correction [26–28] and volumetric motion correction [29]. This kind of approach benefits from frequent revisitation of the structures, which minimizes intraframe distortions, in turn improving registration accuracy. Therefore, efforts have been directed towards increasing temporal acquisition rates by using MHz-rate swept sources [12], implementing cylindrical lenses for line-field illumination [21], multiplexing cameras to quadruple the acquisition rate [30], full-field implementations [31,32] and bidirectional scanning [33,34]. Although the (equivalent) point acquisition rate for these imaging systems can be in the MHz range, the temporal resolution during functional recording is limited by the volume acquisition rate, while the SNR is limited by the energy deposited on the retina per pixel and the integration time of the detector.

An alternate method to mitigate eye movements is to measure the eye’s movement and actively correct for it, yielding a retina-stable OCT. The most common methods for measuring eye movements include magnetic search coils [35–37], dual-Purkinje tracking [38], and systems that estimate motion based on pupil position and/or lens reflections relative to reflections from the cornea [39–42]. Although these methods are widely used, they lack the spatial precision necessary to accurately mitigate eye motion in retinal images because the eye is not a rigid body, resulting in a disparity between the motion of elements in the anterior portions of the eye and retinal motion. The tracking methods with the best accuracy for estimating retinal motion are those that image the retina directly. Retinal-image-based methods were first demonstrated using cameras [43,44] and eventually using lasers via a scanning laser ophthalmoscope (SLO) [45,46]. An SLO utilizes the ocular optics as an objective to image a single point source on the retina and records the intensity of the light scattered back from this single point; the point source is scanned in a raster pattern to image a patch of the retina and produce an en face image. The further development of the confocal SLO [47], which uses a pinhole in the detection plane to reject multiply scattered light from different focal depths, improved lateral resolution, image contrast, and depth sectioning, in turn improving retrieval of retinal motion.

Shortly after its invention, SLO was recognized as a candidate technology for tracking eye motion because of its high frame rate, contrast, and spatial resolution [46]. It was initially used for vision tests by recovering motion traces and correcting retinal videos in post processing [48]. Even real-time, active retinal tracking was suggested, although not implemented at that time, as a useful method for psychophysical tasks and for guiding laser photocoagulation [49]. Further developments in SLO-based eye tracking leveraged the rolling-shutter feature of SLO for higher-frequency eye motion analysis to track saccades [50] and enable overall improvements in eye tracking accuracy and speed [51]. Improvements in SLO resolution using AO [52] and the development of specialized FPGA-based hardware and software have produced SLO-based systems that track the eye in real time at rates approaching the kHz range and accuracies better than 1 arcmin [53–58]. The active tracking and targeted stimulation platform developed for AOSLO [57,59] has enabled the study of retinal function at an unprecedented level of detail in humans. Experiments include probing the photoreceptors to characterize spatial, temporal and color vision pathways [60–66], investigating the effect of motion on visual acuity [67,68], and microperimetry [58,59,69–71].

Active retinal tracking was also implemented using a tracking SLO (AO) [54] to actively guide an OCT beam and compensate for eye motion in both structural and angiographic imaging [24,25], and to stabilize an AOSLO raster scan on the retina [72]. In this paper, we further advance this technology by developing an adaptive optics platform that uses AOSLO-based tracking to guide the eye motion correction of an adaptive optics OCT (AOOCT) beam on the retina. We describe the system design, characterize its tracking performance using a model eye, and present AOOCT images of several retinal layers to demonstrate the ability of the system to compensate for retinal motion in the living human eye.

2. Materials and methods

We designed and built an AOSLO to track eye motion and drive a scanner that actively stabilizes an AOOCT beam on a fixed location in the human retina. The AOOCT beam is integrated into the AOSLO beam using a dichroic mirror just before the deformable mirror, such that the AOOCT beam benefits from the AO correction of the aberrations of the eye as determined by the AOSLO system. The AOSLO retrieves eye motion traces from images of the retina at a rate of 960 Hz and an accuracy better than 0.1 arcmin [56,73]. The motion traces are combined with the AOOCT scanning waveforms to steer the AOOCT scanner counter to the retinal motion and thereby stabilize the beam on the retina. AOSLO and AOOCT videos are recorded synchronously, such that for each frame of the AOSLO video, four AOOCT B-scans are recorded. The motion correction accuracy was experimentally quantified with a moving model eye that included a galvo scanner. To show the stabilization capabilities of the system, retinal images of a single human subject were taken with the AOSLO-AOOCT system.

2.1 AOSLO system design

This multi-spectral AOSLO system is described in more detail in previous publications [74,75], and so only the details that are most pertinent to this paper are described here. The optical schematic is shown below in Fig. 1. Red rays indicate the AOSLO path, blue rays indicate the AOOCT path, and purple rays indicate where the beams from the two systems become superimposed using a dichroic mirror.


Fig. 1. The system schematic is depicted here with the red path corresponding to the AOSLO, the blue path showing the AOOCT, and purple depicting where the two beams are combined with a dichroic mirror. The AOSLO path contains the three imaging channels and the WFS channel with independent delivery and collection. The main components of the AOOCT path are the achromatizing lens, the focus adjustment, and the active scanning mirror.


The AOSLO uses a supercontinuum light source from which 4 wavelengths are separated out and coupled into single-mode fibers. The wavelengths are 543 $\pm$ 11 nm, 680 $\pm$ 11 nm, and 840 $\pm$ 6 nm for imaging and/or projecting a stimulus, and 940 $\pm$ 5 nm for wavefront sensing. The beams are collimated and coaligned as they enter the system at the initial beam splitter, then scanned into a 512-line raster pattern with a 16 kHz resonant fast scanner (SC-30, EOPC, Ridgewood, USA) to scan lines and a galvanometer slow scanner (6210h, Cambridge Technology, Bedford, USA) to scan each frame, covering a field size of about 1 degree at a 30 Hz frame rate. The light reflected by the retina is descanned and redirected according to wavelength. The light collected in the 940 nm channel is guided into a wavefront sensor to measure the ocular wavefront and drive the deformable mirror (DM97-08, ALPAO, Montbonnot-Saint-Martin, France) to correct for the measured aberrations. The imaging light is collected by individual photomultiplier tubes for each imaging channel and digitized with a field programmable gate array (FPGA). Only the 840 nm imaging light and 940 nm wavefront sensing light are used in this report. Power at the cornea was 45 µW for 840 nm radiation and 38 µW for 940 nm light. Each video frame is digitized into 512 × 512 pixels. Because the eye motion measurements are based on strip-wise cross-correlation, with each frame broken into 32 strips, the FPGA outputs the real-time eye motion trace at 960 Hz (32 strips × 30 frames/s) with a measured 2.9 ms latency [24,25].

2.2 AOOCT system design

The AOOCT uses a 100 kHz swept-source laser (Axsun Inc., Billerica, MA, USA) with a center wavelength of 1040 nm and a bandwidth of 90 nm, resulting in an axial resolution of 10.5 µm in air after applying a cosine window to the spectra. Power at the cornea was 440 µW for 1040 nm radiation. A k-clock signal produced by the laser is used to acquire linear-in-wavenumber interferograms, avoiding the need for post-processing linearization. The beam diameter at the cornea is adjustable via the light delivery telescope; for this study the beam diameter was set to 7.2 mm in the pupil plane, resulting in a diffraction-limited lateral point spread function of 0.61 arcmin (distance from the peak of the Airy pattern to the first dark ring), which corresponds to about 3 µm assuming a retinal magnification factor of 300 µm per degree of visual angle.
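The stated lateral figures follow directly from the Airy-disk radius; a quick check with the paper's numbers (a sketch for the reader, not part of the system software) reproduces them:

```python
import math

# Diffraction-limited lateral resolution of the AOOCT beam at the retina.
wavelength_m = 1040e-9   # center wavelength
pupil_m = 7.2e-3         # beam diameter in the pupil plane
theta_rad = 1.22 * wavelength_m / pupil_m     # Airy radius: peak to first dark ring
theta_arcmin = math.degrees(theta_rad) * 60   # ~0.61 arcmin
spot_um = (theta_arcmin / 60) * 300           # ~3 um at 300 um per degree
```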

The optical schematic of the AOOCT is shown in Fig. 1 as blue rays. The light source is coupled into the system via a single-mode fiber to a 4-mm reflective collimator based on an off-axis parabolic mirror (RC04APC-P01, Thorlabs Inc., Newton, NJ, USA). Upon entering the system, the beam passes through the light delivery telescope - Lens 5 and Lens 6 - which controls the AOOCT beam size on the achromatizing lens. The achromatizing lens is a custom optic that corrects for the longitudinal chromatic aberrations of the human eye across the wavelength range of the source to obtain a sharper point spread function on the retina. The next telescope pair - Lens 7 and Lens 8 - controls the AOOCT focus by changing the beam vergence through adjustment of the distance between the two lenses with a motorized stage. The beam is then relayed onto a 7.5 mm diameter, dual-axis MEMS bonded mirror with a resonant frequency of about 500 Hz (Mirrorcle, Richmond, CA, USA), which is controlled with two superimposed driving signals: the beam-scanning waveforms and the eye-motion stabilization signals. Two curved mirrors relay the pupil of the AOOCT beam onto the deformable mirror, where it is superimposed onto the AOSLO optical path via a dichroic mirror. The OCT and SLO are both adaptive optics corrected and coaligned in both the image and pupil planes to acquire from the same spatial location in the retina. In its standard setting, the AOOCT beam has a fixed offset in vergence with respect to the AOSLO wavelengths to account for the longitudinal chromatic aberration (LCA) of the eye, such that all the wavelengths focus at the same axial location in the retina. However, the AOOCT beam vergence can be changed to focus onto different layers.

The reference arm length matches the sample arm, with a total length of 11.7 meters in free space, 104.8 mm of optical glass, and 5.4 m of optical fiber. The reference arm consists of a series of 4-f telescopes that improve the collection efficiency by relaying the exit pupil of the fiber collimator and maintaining high beam quality. The delay line is adjusted to avoid coherence revival fringes and ensure that the correct set of interference fringes is acquired, by stepping through each set of fringes and measuring the phase stability of each coherence revival interference signal [76,77]. The reference arm is aligned for maximum power throughput, and an iris is used as a limiting aperture to reduce the reference arm power and avoid saturating the balanced photoreceiver while still operating at the shot-noise limit. The reference arm has a fiber-based polarization controller that is used to optimally distribute the power between the two leads going to the balanced photoreceiver (BPD430c, Thorlabs Inc.).

To enable phase-sensitive imaging, an optical trigger is implemented to synchronize with the Mach-Zehnder interferometer (MZI). A 99:1 fiber beam splitter is used to send 1% of the light into a fiber Bragg grating (center wavelength 992 nm, bandwidth 0.1 nm; O/E Land, LaSalle, Canada) and send the reflection back into the opposing fiber beam splitter lead to be coupled into a photodetector. The signal from the photodetector is amplified and used as an A-line trigger optically synced to the MZI signal. The temporal delay between the interference signal and the MZI signal is matched by accounting for the propagation time of the light through the fiber and free-space paths, the electronic signal propagation time through the coaxial cables, and the detector-to-digitizer delay (ATS9350, AlazarTech, Pointe Claire, Canada).

2.3 Synchronization of AOSLO and AOOCT

The AOSLO has two main synchronizing signals, the horizontal sync (H-sync) and the vertical sync (V-sync), as shown in Fig. 2. The H-sync is a TTL signal at 16 kHz generated by the resonant scanner driver and operates as the master clock of the AOSLO. The V-sync is derived from the H-sync and starts each new frame at 1/512th of the line scan rate, equivalent to 30.25 Hz.


Fig. 2. The schematic above displays how the sync signals are integrated between the AOSLO and AOOCT. The main clocks are originating from the resonance frequency of the AOSLO resonant scanner (H-sync) and the start of the AOOCT laser sweep as detected by the fiber Bragg grating (AOOCT FBG trigger). BPR balanced photoreceiver, PMT photomultiplier tube, DAQ data acquisition board, H Horizontal, V Vertical


The AOOCT digitizer uses three electronic synchronization signals: the k-clock, the optical sweep trigger, and the frame trigger. Since the laser does not sweep linearly in wavenumber, an MZI samples the sweep, producing an interferometric signal whose zero-crossings are evenly spaced in wavenumber. This signal (k-clock) is fed to the digitizer to sample the AOOCT signal linearly in k-space. The k-clock frequency is set to provide an imaging depth of 5 mm in air. The optical sweep trigger is produced by an FBG as described in Section 2.2. The frame trigger is a 4-fold multiple of the AOSLO V-sync, equivalent to a B-scan rate of 121 Hz, and each AOOCT frame is composed of 450 A-lines. When the operator starts an OCT recording, a signal triggers an AOSLO recording. The waveforms generated by the AOOCT system to drive the tip-tilt scanner are based on the AOSLO H-sync and the AOOCT frame trigger. This configuration effectively synchronizes the images acquired with the two systems. The two main AOOCT imaging protocols consist of (a) a volume acquisition, which distributes the scan over a field size of 0.75 degrees for an 8-second acquisition, oversampling in both lateral directions, and (b) continuous B-scan acquisition, which spatially locks onto a line of cells to maximize the temporal sampling rate.
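The trigger arithmetic above can be sketched as follows; the exact resonant line rate is an assumption here (15.488 kHz, the value consistent with the stated 30.25 Hz frame rate for 512 lines, with the nominal rate quoted as 16 kHz):

```python
# Timing relationships between AOSLO and AOOCT triggers (illustrative values).
h_sync_hz = 15488                        # resonant line rate (assumed exact value)
v_sync_hz = h_sync_hz / 512              # AOSLO frame rate: 30.25 Hz
bscan_hz = 4 * v_sync_hz                 # AOOCT frame trigger: 121 B-scans/s
alines_per_bscan = 450
sweeps_per_bscan_period = 100_000 / bscan_hz  # laser sweeps available per trigger
# 450 of the ~826 sweeps per B-scan period are digitized; the remainder fall
# outside the active portion of the scan (a duty-cycle interpretation we assume).
```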

2.4 Active motion correction

The image stabilization software, Image Capture and Delivery Interface (ICANDI), is a custom program that extracts eye motion from AOSLO images for real-time correction, as previously described [53,57]. Recent modifications of this software enable loading a previously acquired, high-quality AOSLO image as a reference frame to be used in separate imaging sessions. The process begins by acquiring an AOSLO video of a particular retinal location, correcting each frame of the video for eye motion, and summing the frames into a high-signal-to-noise, distortion-reduced AOSLO image; a 512 × 512 region of interest is then selected and loaded into ICANDI as a reference frame. The incoming frames are broken into strips that are cross-correlated with the reference frame. The horizontal and vertical displacements of each strip are then used as the correction signal. The eye motion is reported at 960 Hz, converted to appropriately scaled analog signals (see next paragraph), and added to the AOOCT scanner signals before they are sent to the MEMS mirror. A similar methodology has been previously shown and characterized by Vienola et al. and Sheehy et al. [24,72].
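The strip-wise registration can be sketched as follows. This is a minimal NumPy illustration of FFT-based cross-correlation of horizontal strips against a reference frame, not the FPGA-based ICANDI implementation (the function name and the unnormalized correlation are illustrative simplifications):

```python
import numpy as np

def strip_offsets(frame, reference, n_strips=32):
    """Estimate the (dx, dy) displacement of each horizontal strip of `frame`
    relative to `reference` via circular cross-correlation in the Fourier
    domain. Returns one (dx, dy) pair per strip."""
    h, w = frame.shape
    ref_f = np.conj(np.fft.fft2(reference))
    sh = h // n_strips                      # strip height in rows
    offsets = []
    for i in range(n_strips):
        # Place the strip in an otherwise-empty frame-sized canvas so the
        # correlation peak directly encodes its shift from the reference.
        strip = np.zeros_like(reference, dtype=float)
        strip[i * sh:(i + 1) * sh, :] = frame[i * sh:(i + 1) * sh, :]
        xc = np.fft.ifft2(np.fft.fft2(strip) * ref_f)
        peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
        dy = peak[0] if peak[0] <= h // 2 else peak[0] - h   # wrap to signed shift
        dx = peak[1] if peak[1] <= w // 2 else peak[1] - w
        offsets.append((dx, dy))
    return offsets
```

With 32 strips per 30 Hz frame, one displacement sample per strip yields the 960 Hz motion trace quoted above.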

The eye motion trace output can be modified in gain and angle in the ICANDI software to account for differences in scanner driver voltage and in scan orientation between the two imaging systems. These gain and angle corrections are experimentally determined using a moving model eye, which consists of an LCA-corrected lens focusing onto a piece of white paper and a single-axis galvo scanner positioned in the optical path between the lens and the paper to simulate eye motion. To calibrate, the motion correction is activated and the gain and angle are systematically adjusted to find the minimal residual motion in the AOOCT B-scan of the sample in the moving model eye.
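Conceptually, the calibrated gain and angle amount to a scaled rotation applied to each motion sample before it is added to the scanner waveforms. A minimal sketch, with an illustrative function name and placeholder values (the real gain and angle come from the model-eye calibration):

```python
import numpy as np

def correct_trace(dx, dy, gain=1.0, angle_deg=0.0):
    """Map an AOSLO eye-motion sample onto the AOOCT scanner axes by
    applying an empirically calibrated gain and rotation angle."""
    th = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return gain * (rot @ np.array([dx, dy]))
```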

2.5 AOOCT data processing

The standard OCT data processing pipeline consists of removing the background, removing the system interference artifacts, and shaping the spectrum. Additional ophthalmic OCT data processing includes applying a chromatic dispersion correction, axial alignment, flattening the image in depth, and linearizing the image along the fast axis. The dispersion correction is found empirically for each subject by applying quadratic dispersion curves with different curvatures and optimizing for image contrast. The eye motion correction begins with bulk axial motion correction: a centroiding algorithm determines the centroid with respect to depth for the first B-scan image, and consecutive B-scans are shifted to align the images axially, both for B-scan time-trace acquisitions and volumetric acquisitions. This assumes that the retina is at a fixed depth (B-scan time traces) or that the retina is flat (volumetric acquisitions). The images are then flattened to account for the curvature of the retina along the B-scan direction; the B-scan is segmented to determine the curvature at the layer of interest, mainly the inner and outer segment junction (IS/OS) or the retinal nerve fiber layer (RNFL), and the image is warped to place this line on a single plane in depth. The last processing step is linearizing the B-scan to account for the non-linearity of the MEMS fast-axis sweep. A linearization curve is obtained by imaging a calibration grid and is used to resample the B-scans along the fast axis accordingly.
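The spectral processing and the bulk axial alignment described above can be sketched in NumPy as follows; the window shape, the dispersion parameterization, and the centroid details are simplified illustrations of the pipeline, not the authors' code:

```python
import numpy as np

def process_bscan(frames, disp_c2=0.0):
    """Sketch of the spectral processing chain for one B-scan.
    frames: (n_alines, n_samples) raw interferograms, linear in wavenumber."""
    bg = frames.mean(axis=0)                 # background = mean spectrum over A-lines
    s = frames - bg                          # background removal
    n = s.shape[1]
    window = np.cos(np.linspace(-np.pi / 2, np.pi / 2, n))  # cosine spectral shaping
    k = np.linspace(-1, 1, n)
    disp = np.exp(-1j * disp_c2 * k ** 2)    # quadratic dispersion correction
    alines = np.fft.fft(s * window * disp, axis=1)
    return np.abs(alines[:, : n // 2])       # keep positive depths only

def axial_align(bscans):
    """Bulk axial motion correction: shift each B-scan so that its mean
    depth centroid matches that of the first B-scan in the list."""
    depth = np.arange(bscans[0].shape[1])
    def centroid(b):
        prof = b.mean(axis=0)
        return (prof * depth).sum() / prof.sum()
    ref = centroid(bscans[0])
    return [np.roll(b, int(round(ref - centroid(b))), axis=1) for b in bscans]
```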

2.6 Human subject imaging

The University of California, Berkeley Institutional Review Board approved this research, and the subject signed an informed consent before participation. All experimental procedures adhered to the tenets of the Declaration of Helsinki. Mydriasis and cycloplegia were achieved with 1% tropicamide and 2.5% phenylephrine ophthalmic solutions before each experimental session. The subject bit into a dental impression mount affixed to an XYZ stage to hold the eye and head still. The subject was a healthy young adult volunteer (20112L).

3. Results

3.1 Quantifying residual motion of stabilization with a moving model eye

We quantified the active stabilization performance with different amplitudes and frequencies of motion. It is important to introduce motion only along the B-scan direction, since orthogonal motion produces images that bear no spatial relation to a reference B-scan. The moving model eye was driven to replicate the eye motion spectrum [24] by stepping through frequencies while adjusting the amplitude to match the corresponding velocity. The residual motion was calculated by comparing the lateral pixel shifts in the AOOCT data with and without motion correction. Results are shown in Fig. 3.


Fig. 3. Demonstration of motion compensation. a) Average of 720 B-scans acquired while the moving model eye was actuated with a 1-Hz sinusoidal waveform. The top image is without motion compensation; the bottom image is with active motion compensation. Images are in linear greyscale. b) Time traces of motion in the moving model eye, obtained from the depth location marked by the dashed line in a). The top image is without motion compensation; the bottom image is with active compensation. c) The amplitude of sinusoidal motion in the moving model eye, as extracted from the images without and with active compensation, is shown in the orange and blue traces, respectively. d) Quantification of motion compensation across a frequency spectrum, from 1 Hz to 64 Hz. The 3-dB bandwidth of motion compensation was estimated to be 10 Hz.


To give a visual sense of the effect of active eye motion correction, Fig. 3(a) compares the average of a series of 720 B-scans acquired without motion compensation, which blurs the frame, with the average of a series of motion-compensated B-scans below it, which is sharp. Figure 3(b) shows a time trace of the intensity at a given depth of the B-scan, indicated by the dashed line in Fig. 3(a). This visualization clearly shows the effect of the applied sinusoidal motion in the top image and the effect of stabilization in the bottom image. Figure 3(c) quantifies the motion in the image, extracted by registering every B-scan in the dataset to an average B-scan of the stationary sample. Figure 3(d) shows the efficacy of active eye motion correction for frequencies in the range from 1 Hz to 64 Hz. The efficacy was determined as one minus the residual motion in the image at the actuation frequency (relative to the input amplitude), expressed as a percentage.

As expected, motion compensation decreases with increasing frequency of motion due to the system latency. We estimated a 3-dB motion compensation bandwidth of 10 Hz. We also estimated the expected motion compensation based on the total latency of the system (7 ms, originating as follows: 2.9 ms motion-trace output latency, 1.1 ms external low-pass filter, 3.0 ms MEMS driver latency) using an analytical model for motion correction in the presence of latency. As seen in Fig. 3(d), the measured and predicted motion compensation show good agreement. Note that, owing to the latency, the effort to compensate for motion actually increases the motion in the image at high frequencies.
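One such analytical model treats the correction as a copy of the motion delayed by the total latency: cancelling a sinusoid at frequency f with its copy delayed by τ leaves a residual of amplitude 2|sin(πfτ)|. The paper does not specify its exact model, so the following is a sketch under that assumption:

```python
import numpy as np

def residual_fraction(f_hz, latency_s=0.007):
    """Residual motion amplitude, as a fraction of the input amplitude, when a
    sinusoid at f_hz is cancelled by its own copy delayed by latency_s:
    sin(2*pi*f*t) - sin(2*pi*f*(t - tau)) has amplitude 2*|sin(pi*f*tau)|."""
    return 2.0 * np.abs(np.sin(np.pi * f_hz * latency_s))

def compensation_percent(f_hz, latency_s=0.007):
    """Compensation efficacy as defined in the text: (1 - residual) * 100%."""
    return 100.0 * (1.0 - residual_fraction(f_hz, latency_s))
```

Under this model, with τ = 7 ms the efficacy turns negative once 2|sin(πfτ)| exceeds 1 (roughly above 24 Hz), consistent with the observation that the correction amplifies motion at high frequencies.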

3.2 Human imaging with actively stabilized AOOCT

To provide a spatial reference for the location of interest, both a clinical fundus image and an AOSLO montage were acquired for the left eye of subject 20112, as shown in Fig. 4. The individual AOSLO images were stitched together from the fovea to 12 degrees in the temporal direction. When the fundus image and the montage are overlaid, the difference in scale is apparent and the two complement each other, especially where the vasculature of the fundus image lines up with the AOSLO montage. The AOOCT photoreceptor images were collected at about 5 degrees, at the location marked by the AOSLO frame with the magenta line, while the superficial nerve fiber layer images were taken at 12 degrees, at the location marked by the light blue line.


Fig. 4. An AOSLO montage from the fovea to 12 degrees eccentricity was made as a reference to indicate the locations for the AOOCT results.


Figure 5 shows en face images of a human photoreceptor mosaic (en face view of the IS/OS junction) acquired over 8 seconds without and with active stabilization. Without motion correction, AOOCT en face images show severe artifacts, whereas raw actively stabilized AOOCT volumes can be directly projected to reveal a high-fidelity photoreceptor mosaic. Although the actively motion-corrected volume shows a few line artifacts from high-velocity microsaccades, the majority of the eye motion is adequately compensated, and the photoreceptor mosaic can be easily distinguished. Note that the AOOCT and AOSLO en face frames are slightly rotated with respect to each other (by about 1 degree). This is due to small differences in the X-Y scan orientations between the two systems. Although these differences could be corrected in hardware, the relative rotation does not pose a problem for tracking, since both the gain and the angle of the motion correction are accounted for in the motion control algorithm (see Section 2.4). Our system, therefore, enables the raw acquisition of a photoreceptor mosaic within a single volume with only bulk axial registration and no need to register the frames onto a distortion-free reference AOOCT volume. To further increase the signal-to-noise ratio, repeated acquisitions can easily be averaged, since no lateral registration is required.
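Projecting a stabilized volume directly into an en face image, as in Fig. 5, can be sketched as a slab average around the segmented layer; the array layout, function name, and slab width below are illustrative:

```python
import numpy as np

def en_face(volume, layer_depth, half_width=2):
    """Average a thin slab centered on a segmented layer (e.g. the IS/OS)
    to produce an en face image.
    volume:      (n_bscans, n_alines, n_depth) intensity volume
    layer_depth: (n_bscans, n_alines) depth index of the layer per A-line"""
    n_b, n_a, n_z = volume.shape
    out = np.empty((n_b, n_a))
    for b in range(n_b):
        for a in range(n_a):
            z = int(layer_depth[b, a])
            lo, hi = max(z - half_width, 0), min(z + half_width + 1, n_z)
            out[b, a] = volume[b, a, lo:hi].mean()   # slab average at this A-line
    return out
```

Indexing the slab per A-line implicitly flattens the layer, which is why the B-scans are flattened at the IS/OS before projection.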


Fig. 5. Comparison of AOOCT images with and without active eye tracking. a) shows an AOSLO image of the area being imaged. b) shows an overlay of the same AOSLO image with the corresponding en face AOOCT. An animation of the overlay can be found in Supplementary Material (see Visualization 1). c) shows an 8-second en face AOOCT view of the IS/OS junction without tracking, while d) presents one with active tracking. The B-scans were flattened at the IS/OS layer to produce the en face images. e,g) show time traces of B-scans of the IS/OS junction, with and without tracking, respectively. An average of the registered B-scans is shown below each time trace (f,h). Without tracking, the en face image is distorted as the B-scan continuously shifts to different locations. Active stabilization yields high-fidelity en face views with no need for post-processing, and consistent B-scans from a single location which, after averaging, reveal a cross-section with resolved cones. Scale bar: 20 µm.


With the actively stabilized AOOCT, motion correction no longer requires a volume for registration, and a B-scan can be spatially locked on a series of photoreceptors. Figure 5 illustrates the challenge that AOOCT faces in maintaining a fixed B-scan on the retina in the presence of eye motion. The top panel shows an averaged B-scan in which the details wash out due to motion; the time trace of the IS/OS junction shows sections of minimal eye motion between lateral drifts and saccades, but due to the eye motion the retinal location of the B-scan is not maintained. In the lower panel of Fig. 5 the active stabilization is activated, and the individual photoreceptors are clearly visible in the B-scan averaged over 1,000 frames. The time trace of the IS/OS is shown in the bottom right image of Fig. 5, where the majority of the motion is corrected. Although two high-velocity microsaccades are evident, the tracking recovers and returns the B-scan to the original retinal location.

Figure 6 shows that post hoc lateral registration is not necessary to localize photoreceptors: the B-scan average is only marginally improved by lateral registration in post processing.


Fig. 6. This figure compares a series of B-scans recorded with active stabilization (left panel) to the same series with additional post-processing lateral registration (right panel). The B-scans were flattened at the IS/OS junction to produce the time traces. a) and c) show the time trace of the IS/OS junction in a fixed B-scan protocol over an 8-second acquisition; b) and d) show an average of the B-scan series over the 8-second acquisition. Scale bar: 20 µm.


The use of a single, high-quality AOSLO retinal reference allows us to revisit and probe the same retinal location over multiple imaging sessions. Taking advantage of these developments, we imaged different retinal planes by independently adjusting the AOOCT focus while maintaining a lock on the same mosaic of photoreceptors, as shown in Fig. 7. In this series of images, the AOSLO image plane is fixed on the photoreceptor layer while the AOOCT focus changes from the photoreceptors to the nerve fiber layer. We show images in linear scale to highlight the photoreceptor signal and in logarithmic scale to highlight the weaker signals. Although the change in focus alters the image structure considerably, the blood vessel in the top right casts a clear shadow on the underlying layers and is evident in all of these images.


Fig. 7. Actively stabilized B-scan images of the same retinal location. a,c) images are in linear scale and b,d) images are in logarithmic scale. In these images the retinal curvature at the IS/OS junction was not corrected. a,b) show an average B-scan with the AOOCT focus on the IS/OS junction of the photoreceptors while c,d) show an average B-scan with the focus on the nerve fiber layer. Arrows point to a capillary vessel in the superficial retina and its shadow on the photoreceptors. Scale bar: 20 µm.

When all the features of this system are combined, the actively stabilized AOOCT can collect volumes of targeted structures in weakly reflective retinal regions at intervals of several minutes over a time frame of 1 hour or more. This is useful for revealing transparent neurons in the living human eye: averaging images with different speckle patterns, created by the slow motion of organelles in the somas, highlights the underlying structure, as demonstrated by Liu et al. [29]. The system is also able to quickly recover the wavefront correction and the retinal position following an eye blink. These volumes can be averaged in time without lateral registration. When the averaged volumes are sliced at different depths, the en face images reveal distinct features, as shown in Fig. 8. The different layers of vasculature can be appreciated in the depth sections, along with the strong signal from the individual nerve fibers. Although an entire mosaic of ganglion cells is not distinctly apparent, the images present features that can be associated with retinal ganglion cells: their size, thickness, and location match those shown by other state-of-the-art AOOCT systems [29,78].
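As a sanity check on why averaging volumes with decorrelated speckle reveals weakly scattering structure, the speckle contrast (standard deviation over mean) of fully developed speckle falls roughly as 1/√N when N independent realizations are averaged. A minimal simulation, assuming exponentially distributed speckle intensities (an idealization, not a model of this system's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_contrast(n_volumes, n_pixels=100_000):
    """Average n_volumes independent, fully developed speckle realizations
    (exponentially distributed intensity) and return the residual speckle
    contrast (std / mean) of the averaged image."""
    intensity = rng.exponential(size=(n_volumes, n_pixels)).mean(axis=0)
    return intensity.std() / intensity.mean()

for n in (1, 4, 16):
    # Contrast falls roughly as 1/sqrt(n): about 1.0, 0.5, 0.25
    print(n, round(speckle_contrast(n), 2))
```

Averaging ~20 volumes therefore suppresses speckle contrast by a factor of about 4-5, which is what lets the faint soma boundaries emerge from the speckle background.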

Fig. 8. En face images obtained in the inner layers of the retina. The top series (b-e) comprises 18 volumes and the bottom series (g-j) 23 volumes, acquired at two spatially adjacent locations, both at 12 degrees of eccentricity. The B-scans were flattened at the RNFL to produce the en face images. For each series, en face images at four different depths are presented. The layers are separated by 12 µm steps and each en face image is obtained by integrating the volume over a depth of 8 µm. a,f) B-scan that shows the depth locations from which the en face images originate. b,g) depth location that clearly shows the RNFL. c,h) deeper location where some fibers are still visible. d,i) depth location with a clear presence of microvasculature and round structures that hint at the presence of retinal ganglion cells. e,j) deepest location of the series, with presence of round structures and microvasculature. Scale bar: 20 µm.

4. Discussion

In this paper we demonstrate AOSLO-based active eye motion correction for an AOOCT system that can reveal the photoreceptor mosaic without additional lateral motion registration, enabling fixed-line imaging at a chosen retinal location. These characteristics facilitate a high throughput of high-quality data that need minimal postprocessing. The ability to change the AOOCT focus independently allows acquisition of high contrast AOOCT images from different retinal layers, while maintaining high-fidelity AOSLO-based eye-motion correction from a feature-rich structure.

We quantified the motion-correction accuracy with a moving model eye, in which a galvo scanner induced motion, by comparing the residual motion after stabilization with the non-stabilized condition. Considering the frequency spectrum and amplitude of eye motion, we can confidently correct for most of it, with limitations at the high frequencies and fast velocities corresponding to large microsaccades.
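One convenient way to express such a comparison is the ratio of residual to input motion amplitude in decibels; a minimal sketch with purely illustrative amplitudes (assumed for the example, not values measured here):

```python
import math

# Illustrative, hypothetical amplitudes — not measured values from this system.
amp_unstabilized_um = 50.0   # assumed model-eye motion amplitude
amp_stabilized_um = 2.5      # assumed residual amplitude after correction

# Attenuation in dB; more negative means better suppression.
attenuation_db = 20 * math.log10(amp_stabilized_um / amp_unstabilized_um)
print(f"Motion attenuation: {attenuation_db:.0f} dB")  # → -26 dB
```

Evaluating this ratio at each actuation frequency yields a compensation curve like the one in Fig. 3(d), from which a 3-dB bandwidth can be read off.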

Active tracking enables AOOCT to employ a fixed-line acquisition protocol, as it reliably targets a particular spatial location and does not require a volume for spatial registration. Additionally, using the same AOSLO reference frame to spatially target retinal locations over the entire imaging session enables the series of 20-30 volumes acquired within a session to be averaged with no additional lateral registration.

The fixed line-scan protocol with active tracking removes the need for large volume acquisitions and greatly reduces the complexity of post processing. The spatially bound acquisition facilitates probing of a particular series of cones, raising the effective temporal sampling rate of a small region, potentially up to the A-line acquisition rate, without a volume acquisition. In other words, the features of this system create a platform that would enable measurements of an individual cell with AOOCT at an extremely high sampling rate, on the order of hundreds of kHz.
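The sampling-rate argument can be made concrete with a back-of-envelope calculation; all numbers below are assumed for illustration, not this system's specifications:

```python
# Illustrative, assumed parameters — not specifications of the system described.
a_line_rate_hz = 100_000      # assumed swept-source A-line rate (100 kHz)
a_lines_per_bscan = 500       # assumed A-lines across the fixed line

# With a fixed B-scan protocol, every cone on the line is revisited
# once per B-scan:
b_scan_rate_hz = a_line_rate_hz / a_lines_per_bscan
print(f"Per-cone revisit rate: {b_scan_rate_hz:.0f} Hz")  # → 200 Hz

# Parking the stabilized beam on a single cell instead samples that cell
# at the full A-line rate, i.e. hundreds of kHz for fast swept sources.
print(f"Parked-beam sampling rate: {a_line_rate_hz / 1e3:.0f} kHz")  # → 100 kHz
```

A volume protocol would divide the revisit rate further by the number of B-scans per volume, which is why the fixed-line mode gains orders of magnitude in temporal sampling at one location.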

The benefits of AO correction in the AOOCT images are apparent in the sharpness and resolution of the nerve fiber layer and photoreceptors. However, we were not able to visualize retinal ganglion cell mosaics as distinctly as in other reports [29]. Nevertheless, we are confident that any limitations in the visibility of certain structures in AOOCT are not due to optical errors caused by non-common-path aberrations between the AOSLO and AOOCT systems. We characterized the AOSLO system performance in our previous publication, with a flat mirror in place of the deformable mirror, by measuring the PSF of a focused spot, and showed near-diffraction-limited performance for all the wavelength channels [75]. We implemented similar procedures for the AOOCT channel and obtained the same near-diffraction-limited performance. Given the excellent optical performance of both channels, we are confident that operating the AO control in either channel does not significantly compromise the optical performance of the other. Therefore, we hypothesize that our challenges in imaging ganglion cells are due to a combination of reduced scattering and reduced intrinsic contrast of these cells at the 1040 nm wavelength range compared to the lower near-infrared wavelengths that other groups have used.

The main limitation to the system’s current frame rate is the tip-tilt mirror, which we chose to drive with two types of motion signal: the eye-motion compensation and the volume or line scan. Moreover, to protect the mirror from damage due to oscillations near its resonance frequency, all actuation signals must be low-pass filtered at a cut-off frequency of about 200 Hz. On top of limiting the scanning speed, this filtering introduces additional latency (3.0 ms in our case, bringing the total latency to 7 ms), which severely limits the tracking bandwidth. To improve performance, a one-axis galvanometer mirror could be installed in place of the MEMS mirror, and a second one could be added with an afocal telescope to relay the beam pupil. This would have two immediate advantages: increasing the frame rate by allowing faster scanning (up to 1 kHz) and reducing system latency, since galvanometric scanners typically have response times much shorter than 1 ms. The main downside is the need to modify the sample arm with an extra telescope and to lengthen the reference arm accordingly.
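To see why latency, rather than actuator speed alone, caps the tracking bandwidth, one can model the whole correction loop as a pure delay τ: for sinusoidal motion at frequency f, subtracting a copy of the motion delayed by τ leaves a residual fraction |1 − e^(−2πifτ)| = 2|sin(πfτ)|. A sketch under that simplification (the measured ~10 Hz bandwidth is lower still, since this ignores the controller and filter dynamics):

```python
import numpy as np

# Pure-delay model of the tracking loop; tau is the total latency
# reported in the text (7 ms).
tau = 7e-3  # seconds

def residual_fraction(f_hz, tau_s=tau):
    """Residual motion fraction when a sinusoid at f_hz is corrected
    by subtracting a copy delayed by tau_s."""
    return 2 * abs(np.sin(np.pi * f_hz * tau_s))

# Frequency where the residual reaches 1/sqrt(2) of the input
# (a 3-dB criterion): solve 2*sin(pi*f*tau) = 1/sqrt(2).
f_3db = np.arcsin(1 / (2 * np.sqrt(2))) / (np.pi * tau)
print(f"Delay-limited 3-dB frequency: {f_3db:.1f} Hz")  # → 16.4 Hz
```

At 1 Hz the modeled residual is only a few percent, while above ~24 Hz the delayed correction starts to amplify motion rather than suppress it, consistent with the roll-off measured between 1 and 64 Hz in Fig. 3(d).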

Overall, we successfully implemented active eye-motion correction for an AOOCT with independent focus adjustment and spatial targeting, as a platform for a more systematic approach to applications ranging from targeting a particular ensemble of cones in color vision experiments to clinical imaging for diagnostic purposes. This technology pushes the envelope of high-resolution retinal tracking using AO for real-time stabilization of a separate beam; we successfully compensated for the eye motion and confidently probed targeted photoreceptors with a fixed B-scan acquisition protocol.

5. Conclusions

We successfully use an AOSLO for real-time correction of high-amplitude, low-frequency drifts of the eye in an AOOCT imaging system. This is demonstrated by the acquisition of high-resolution, high-sampling-density, distortion-free AOOCT volume images and motion-free B-scan sequences. Although latency in this specific system caused the motion compensation to decrease steadily at higher movement frequencies, most of the eye motion was compensated in real time, enabling high-fidelity and stable AOOCT acquisition from specific spatial locations in a human eye. Overall, this technology facilitates AOOCT imaging of targeted locations on a cellular scale and at high frequencies.

Funding

National Eye Institute (P30-EY003176, R01EY023591, T32EY007043, U01EY025501); Alcon Foundation (Alcon Research Investigator Award); Minnie Flaura Turner Memorial Fund for Impaired Vision Research; Soroptimist International Founders Region Fellowship.

Acknowledgments

The authors thank Ramkumar Sabesan and Vimal Prabhu Pandiyan from the University of Washington for helpful discussions.

Disclosures

AR:USPTO #7,118,216, "Method and apparatus for using AO in a scanning laser ophthalmoscope" and USPTO #6,890,076, "Method and apparatus for using AO in a scanning laser ophthalmoscope". These patents are assigned to both the University of Rochester and the University of Houston and are currently licensed to Boston Micromachines Corporation in Cambridge, Massachusetts. Both AR and the company may benefit financially from the publication of this research.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Wang, S. Liu, S. Lou, W. Zhang, H. Cai, and X. Chen, “Application of optical coherence tomography in clinical diagnosis,” J. X-Ray Sci. Technol. 27, 995–1006 (2020). [CrossRef]  

2. E. A. Swanson and J. G. Fujimoto, “The ecosystem that powered the translation of OCT from fundamental research to clinical and commercial impact [Invited],” Biomed. Opt. Express 8(3), 1638–1664 (2017). [CrossRef]  

3. M. A. Choma, M. V. Sarunic, C. Yang, and J. A. Izatt, “Sensitivity advantage of swept source and Fourier domain optical coherence tomography,” Opt. Express 11(18), 2183 (2003). [CrossRef]  

4. J. F. De Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Opt. Lett. 28(21), 2067 (2003). [CrossRef]  

5. W. Drexler, U. Morgner, R. K. Ghanta, F. X. Kärtner, J. S. Schuman, and J. G. Fujimoto, “Ultrahigh-resolution ophthalmic optical coherence tomography,” Nat. Med. 7(4), 502–507 (2001). [CrossRef]  

6. M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12(11), 2404 (2004). [CrossRef]  

7. B. Hermann, E. J. Fernández, A. Unterhuber, H. Sattmann, A. F. Fercher, W. Drexler, P. M. Prieto, and P. Artal, “Adaptive-optics ultrahigh-resolution optical coherence tomography,” Opt. Lett. 29(18), 2142 (2004). [CrossRef]  

8. D. Miller, J. Qu, R. S. Jonnal, and K. E. Thorn, “Coherence gating and adaptive optics in the eye,” Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine VII, vol. 4956, V. V. Tuchin, J. A. Izatt, and J. G. Fujimoto, eds. (2003), p. 65. [CrossRef]  

9. R. J. Zawadzki, S. M. Jones, S. S. Olivier, M. Zhao, B. A. Bower, J. A. Izatt, S. Choi, S. Laut, and J. S. Werner, “Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging,” Opt. Express 13(21), 8532 (2005). [CrossRef]  

10. Y. Zhang, J. Rha, R. S. Jonnal, and D. T. Miller, “Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina,” Opt. Express 13(12), 4792 (2005). [CrossRef]  

11. Y. Jia, J. C. Morrison, J. Tokayer, O. Tan, L. Lombardi, B. Baumann, C. D. Lu, W. Choi, J. G. Fujimoto, and D. Huang, “Quantitative OCT angiography of optic nerve head blood flow,” Biomed. Opt. Express 3(12), 3127 (2012). [CrossRef]  

12. W. J. Choi, K. J. Mohler, B. Potsaid, C. D. Lu, J. J. Liu, V. Jayaraman, A. E. Cable, J. S. Duker, R. Huber, and J. G. Fujimoto, “Choriocapillaris and choroidal microvasculature imaging with ultrahigh speed OCT angiography,” PLoS One 8(12), e81499 (2013). [CrossRef]  

13. J. V. Migacz, I. Gorczynska, M. Azimipour, R. Jonnal, R. J. Zawadzki, and J. S. Werner, “Megahertz-rate optical coherence tomography angiography improves the contrast of the choriocapillaris and choroid in human retinal imaging,” Biomed. Opt. Express 10(1), 50–65 (2019). [CrossRef]  

14. M. Azimipour, J. V. Migacz, R. J. Zawadzki, J. S. Werner, and R. S. Jonnal, “Functional retinal imaging using adaptive optics swept-source OCT at 1.6 MHz,” Optica 6(3), 300–303 (2019). [CrossRef]  

15. F. Zhang, K. Kurokawa, A. Lassoued, J. A. Crowell, and D. T. Miller, “Cone photoreceptor classification in the living human eye from photostimulation-induced phase dynamics,” Proc. Natl. Acad. Sci. U. S. A. 116(16), 7951–7956 (2019). [CrossRef]  

16. V. J. Srinivasan, Y. Chen, J. S. Duker, and J. G. Fujimoto, “In vivo functional imaging of intrinsic scattering changes in the human retina with high-speed ultrahigh resolution OCT,” Opt. Express 17(5), 3861 (2009). [CrossRef]  

17. K. Bizheva, R. Pflug, B. Hermann, B. Považay, H. Sattmann, P. Qiu, E. Anger, H. Reitsamer, S. Popov, J. R. Taylor, A. Unterhuber, P. Ahnelt, and W. Drexler, “Optophysiology: Depth-resolved probing of retinal physiology with functional ultrahigh-resolution optical tomography,” Proc. Natl. Acad. Sci. U. S. A. 103(13), 5066–5071 (2006). [CrossRef]  

18. R. S. Jonnal, J. Rha, Y. Zhang, B. Cense, W. Gao, and D. T. Miller, “In vivo functional imaging of human cone photoreceptors,” Opt. Express 15(24), 16141–16160 (2007). [CrossRef]  

19. O. P. Kocaoglu, Z. Liu, F. Zhang, K. Kurokawa, R. S. Jonnal, and D. T. Miller, “Photoreceptor disc shedding in the living human eye,” Biomed. Opt. Express 7(11), 4554–4568 (2016). [CrossRef]  

20. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012). [CrossRef]  

21. V. P. Pandiyan, X. Jiang, A. M. Bertelli, J. A. Kuchenbecker, and R. Sabesan, “High-speed adaptive optics line-scan OCT for cellular-resolution optoretinography,” Biomed. Opt. Express 11(9), 5274–5296 (2020). [CrossRef]  

22. R. J. Zawadzki, A. G. Capps, D. Y. Kim, A. Panorgias, S. B. Stevenson, B. Hamann, and J. S. Werner, “Progress on developing adaptive optics-optical coherence tomography for in vivo retinal imaging: Monitoring and correction of eye motion artifacts,” IEEE J. Sel. Top. Quantum Electron. 20(2), 322–333 (2014). [CrossRef]  

23. L. S. Brea, D. A. De Jesus, M. F. Shirazi, M. Pircher, T. van Walsum, and S. Klein, “Review on retrospective procedures to correct retinal motion artefacts in OCT imaging,” Appl. Sci. 9(13), 2700 (2019). [CrossRef]  

24. K. V. Vienola, B. Braaf, C. K. Sheehy, Q. Yang, P. Tiruveedhula, D. W. Arathorn, J. F. de Boer, and A. Roorda, “Real-time eye motion compensation for OCT imaging with tracking SLO,” Biomed. Opt. Express 3(11), 2950 (2012). [CrossRef]  

25. B. Braaf, K. V. Vienola, C. K. Sheehy, Q. Yang, K. A. Vermeer, P. Tiruveedhula, D. W. Arathorn, A. Roorda, and J. F. de Boer, “Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO,” Biomed. Opt. Express 4(1), 51–65 (2013). [CrossRef]  

26. S. Ricco, M. Chen, H. Ishikawa, G. Wollstein, and J. Schuman, “Correcting motion artifacts in retinal spectral domain optical coherence tomography via image registration,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2009), pp. 100–107.

27. M. F. Kraus, B. Potsaid, M. A. Mayer, R. Bock, B. Baumann, J. J. Liu, J. Hornegger, and J. G. Fujimoto, “Motion correction in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns,” Biomed. Opt. Express 3(6), 1182 (2012). [CrossRef]  

28. M. Azimipour, J. V. Migacz, R. J. Zawadzki, J. S. Werner, and R. S. Jonnal, “Functional retinal imaging using adaptive optics swept-source OCT at 1.6 MHz,” Optica 6(3), 300–303 (2019). [CrossRef]  

29. Z. Liu, K. Kurokawa, F. Zhang, J. J. Lee, and D. T. Miller, “Imaging and quantifying ganglion cells and other transparent neurons in the living human retina,” Proc. Natl. Acad. Sci. U. S. A. 114(48), 12803–12808 (2017). [CrossRef]  

30. O. P. Kocaoglu, T. L. Turner, Z. Liu, and D. T. Miller, “Adaptive optics optical coherence tomography at 1 MHz,” Biomed. Opt. Express 5(12), 4186 (2014). [CrossRef]  

31. D. Hillmann, H. Spahr, C. Pfäffle, H. Sudkamp, G. Franke, and G. Hüttmann, “In vivo optical imaging of physiological responses to photostimulation in human photoreceptors,” Proc. Natl. Acad. Sci. U. S. A. 113(46), 13138–13143 (2016). [CrossRef]  

32. H. Spahr, D. Hillmann, C. Hain, C. Pfäffle, H. Sudkamp, G. Franke, and G. Hüttmann, “Imaging pulse wave propagation in human retinal vessels using full-field swept-source optical coherence tomography,” Opt. Lett. 40(20), 4771 (2015). [CrossRef]  

33. B. Potsaid, V. Jayaraman, J. G. Fujimoto, J. Jiang, P. J. S. Heim, and A. E. Cable, “MEMS tunable VCSEL light source for ultrahigh speed 60kHz - 1MHz axial scan rate and long range centimeter class OCT imaging,” Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XVI, vol. 8213 (SPIE, 2012), 70-77. [CrossRef]  

34. C. Blatter, B. Grajciar, L. Schmetterer, and R. A. Leitgeb, “Angle independent flow assessment with bidirectional Doppler optical coherence tomography,” Opt. Lett. 38(21), 4433 (2013). [CrossRef]  

35. D. A. Robinson, “A method of measuring eye movement using a scleral search coil in a magnetic field,” IEEE Trans. Biomed. Eng. BME-10(4), 137–145 (1963). [CrossRef]  

36. K. N. Hageman, M. R. Chow, D. C. Roberts, and C. C. Santina, “Low-Noise Magnetic Coil System for Recording 3-D Eye Movements,” IEEE Trans. Instrum. Meas. 70, 1–9 (2021). [CrossRef]  

37. Y. Li, H. Cheng, Z. Alhalili, G. Xu, and G. Gao, “The progress of magnetic sensor applied in biomedicine: A review of non-invasive techniques and sensors,” J. Chin. Chem. Soc. 68(2), 216–227 (2021). [CrossRef]  

38. T. N. Cornsweet and H. D. Crane, “Accurate two-dimensional eye tracker using first and fourth Purkinje images,” J. Opt. Soc. Am. 63(8), 921–928 (1973). [CrossRef]  

39. S. Martinez-Conde, “Chapter 8 Fixational eye movements in normal and pathological vision,” Prog. Brain Res. 154, 151–176 (2006). [CrossRef]  

40. J. Zhu and J. Yang, “Subpixel eye gaze tracking,” Proceedings - 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002, pp. 131–136 (2002).

41. L. Świrski, A. Bulling, and N. Dodgson, “Robust real-time pupil tracking in highly off-axis images,” in Proceedings of the 2012 Symposium on Eye Tracking Research and Applications - ETRA (ACM, New York, NY, USA, 2012), pp. 173–176.

42. C. H. Morimoto and M. R. Mimica, “Eye gaze tracking techniques for interactive applications,” Comput. Vis. Image Underst. 98(1), 4–24 (2005). [CrossRef]  

43. O. Pomerantzeff, R. H. Webb, and F. C. Delori, “Image formation in fundus cameras,” Investigative Ophthalmology and Visual Science 18, 630–637 (1979).

44. S. Fonda, M. Melli, M. Neroni, A. Sargentini, F. Torlai, and M. Peduzzi, “Recent developments in eye fundus imaging for clinical application: Television fluoroangiography and new technologies,” Graefe’s Arch. Clin. Exp. Ophthalmol. 220(2), 66–70 (1983). [CrossRef]  

45. R. H. Webb and G. W. Hughes, “Scanning Laser Ophthalmoscope,” IEEE Trans. Biomed. Eng. BME-28(7), 488–492 (1981). [CrossRef]  

46. M. A. Mainster, G. T. Timberlake, R. H. Webb, and G. W. Hughes, “Scanning Laser Ophthalmoscopy: Clinical Applications,” Ophthalmology 89(7), 852–857 (1982). [CrossRef]  

47. R. H. Webb, G. W. Hughes, and F. C. Delori, “Confocal scanning laser ophthalmoscope,” Appl. Opt. 26(8), 1492 (1987). [CrossRef]  

48. G. T. Timberlake, M. A. Mainster, E. Peli, R. A. Augliere, E. A. Essock, and L. E. Arend, “Reading with a macular scotoma. I. Retinal location of scotoma and fixation area,” Invest. Ophthalmol. Vis. Sci. 27, 1137–1147 (1986).

49. D. P. Wornson, G. W. Hughes, and R. H. Webb, “Fundus tracking with the scanning laser ophthalmoscope,” Appl. Opt. 26(8), 1500 (1987). [CrossRef]  

50. M. Stetter, R. A. Sendtner, and G. T. Timberlake, “A novel method for measuring saccade profiles using the scanning laser ophthalmoscope,” Vision Res. 36(13), 1987–1994 (1996). [CrossRef]  

51. J. B. Mulligan, “Recovery of motion parameters from distortions in scanned images,” in Proceedings of the NASA Image Registration Workshop (IRW97), NASA Goddard Space Flight Center, MD (NASA, 1997), pp. 281–292.

52. A. Roorda, F. Romero-Borja, W. J. Donnelly III, H. Queener, T. J. Hebert, and M. C. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405 (2002). [CrossRef]  

53. D. W. Arathorn, Q. Yang, C. R. Vogel, Y. Zhang, P. Tiruveedhula, and A. Roorda, “Retinally stabilized cone-targeted stimulus delivery,” Opt. Express 15(21), 13731 (2007). [CrossRef]  

54. C. K. Sheehy, Q. Yang, D. W. Arathorn, P. Tiruveedhula, J. F. de Boer, and A. Roorda, “High-speed, image-based eye tracking with a scanning laser ophthalmoscope,” Biomed. Opt. Express 3(10), 2611 (2012). [CrossRef]  

55. S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high-resolution scanning laser ophthalmoscopy,” Ophthalmic Technologies XI, F. Manns, P. Soderberg, and A. Ho, eds. (SPIE, Bellingham, WA, 2005), Proceedings of SPIE, Vol. 5688A, 145–151. [CrossRef]  

56. S. B. Stevenson, A. Roorda, and G. Kumar, “Eye tracking with the adaptive optics scanning laser ophthalmoscope,” in Eye Tracking Research and Applications Symposium (ETRA), (ACM, New York, NY, USA, 2010), pp. 195-198.

57. Q. Yang, D. W. Arathorn, P. Tiruveedhula, C. R. Vogel, and A. Roorda, “Design of an integrated hardware interface for AOSLO image capture and cone-targeted stimulus delivery,” Opt. Express 18(17), 17841 (2010). [CrossRef]  

58. D. W. Arathorn, S. B. Stevenson, Q. Yang, P. Tiruveedhula, and A. Roorda, “How the unstable eye sees a stable and moving world,” J. Vis. 13(10), 22 (2013). [CrossRef]  

59. W. S. Tuten, P. Tiruveedhula, and A. Roorda, “Adaptive optics scanning laser ophthalmoscope-based microperimetry,” Optom Vis Sci 89(5), 563–574 (2012). [CrossRef]  

60. W. M. Harmening, P. Tiruveedhula, A. Roorda, and L. C. Sincich, “Measurement and correction of transverse chromatic offsets for multi-wavelength retinal microscopy in the living eye,” Biomed. Opt. Express 3(9), 2066 (2012). [CrossRef]  

61. K. S. Bruce, W. M. Harmening, B. R. Langston, W. S. Tuten, A. Roorda, and L. C. Sincich, “Normal perceptual sensitivity arising from weakly reflective cone photoreceptors,” Invest. Ophthalmol. Vis. Sci. 56(8), 4431–4438 (2015). [CrossRef]  

62. X. Yao and B. Wang, “Intrinsic optical signal imaging of retinal physiology: a review,” J. Biomed. Opt. 20(9), 090901 (2015). [CrossRef]  

63. R. Sabesan, B. P. Schmidt, W. S. Tuten, and A. Roorda, “The elementary representation of spatial and color vision in the human retina,” Sci. Adv. 2, e1600797 (2016). [CrossRef]  

64. J. H. Tu, K. G. Foote, B. J. Lujan, K. Ratnam, J. Qin, M. B. Gorin, E. T. Cunningham, W. S. Tuten, J. L. Duncan, and A. Roorda, “Dysflective cones: Visual function and cone reflectivity in long-term follow-up of acute bilateral foveolitis,” Am. J. Ophthalmol. Case Reports 7, 14–19 (2017). [CrossRef]  

65. B. P. Schmidt, A. E. Boehm, W. S. Tuten, and A. Roorda, “Spatial summation of individual cones in human color vision,” PLoS One 14(7), e0211397 (2019). [CrossRef]  

66. B. P. Schmidt, R. Sabesan, W. S. Tuten, J. Neitz, and A. Roorda, “Sensations from a single M-cone depend on the activity of surrounding S-cones,” Sci. Rep. 8, 8561 (2018). [CrossRef]  

67. A. G. Anderson, K. Ratnam, A. Roorda, and B. A. Olshausen, “High-acuity vision from retinal image motion,” J. Vis. 20(7), 34 (2020). [CrossRef]  

68. K. Ratnam, N. Domdei, W. M. Harmening, and A. Roorda, “Benefits of retinal image motion at the limits of spatial vision,” J. Vis. 17(1), 30 (2017). [CrossRef]  

69. W. M. Harmening, W. S. Tuten, A. Roorda, and L. C. Sincich, “Mapping the perceptual grain of the human retina,” J. Neurosci. 34(16), 5667–5677 (2014). [CrossRef]  

70. L. C. Sincich, Y. Zhang, P. Tiruveedhula, J. C. Horton, and A. Roorda, “Resolving single cone inputs to visual receptive fields,” Nat. Neurosci. 12(8), 967–969 (2009). [CrossRef]  

71. W. S. Tuten, W. M. Harmening, R. Sabesan, A. Roorda, and L. C. Sincich, “Spatiochromatic interactions between individual cone photoreceptors in the human retina,” J. Neurosci. 37(39), 9498–9509 (2017). [CrossRef]  

72. C. K. Sheehy, P. Tiruveedhula, R. Sabesan, and A. Roorda, “Active eye-tracking for an adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 6(7), 2412 (2015). [CrossRef]  

73. N. R. Bowers, A. E. Boehm, and A. Roorda, “The effects of fixational tremor on the retinal image,” J. Vis. 19(11), 8 (2019). [CrossRef]  

74. S. Mozaffari, V. Jaedicke, F. Larocca, P. Tiruveedhula, and A. Roorda, “Versatile multi-detector scheme for adaptive optics scanning laser ophthalmoscopy,” Biomed. Opt. Express 9(11), 5477–5488 (2018). [CrossRef]  

75. S. Mozaffari, F. Larocca, V. Jaedicke, P. Tiruveedhula, and A. Roorda, “Wide-vergence, multi-spectral adaptive optics scanning laser ophthalmoscope with diffraction-limited illumination and collection,” Biomed. Opt. Express 11(3), 1617 (2020). [CrossRef]  

76. A.-H. Dhalla, D. Nankivil, and J. A. Izatt, “Complex conjugate resolved heterodyne swept source optical coherence tomography using coherence revival,” Biomed. Opt. Express 3(3), 633 (2012). [CrossRef]  

77. A. H. Dhalla, D. Nankivil, T. Bustamante, A. Kuo, and J. A. Izatt, “Simultaneous swept source optical coherence tomography of the anterior segment and retina,” Opt. Lett. 37(11), 1883–1885 (2012). [CrossRef]  

78. V. P. Pandiyan, X. Jiang, J. A. Kuchenbecker, and R. Sabesan, “Reflective mirror-based line-scan adaptive optics OCT for imaging retinal structure and function,” Biomed. Opt. Express 12(9), 5865–5880 (2021). [CrossRef]  

Supplementary Material (1)

Visualization 1: Animation of the overlay of an AOSLO frame with an AOOCT en face image obtained from the same location shown in Fig. 5(b).


Figures (8)

Fig. 1.
Fig. 1. The system schematic is depicted here with the red path corresponding to the AOSLO, the blue path shows the AOOCT and purple depicts where these two beams are combined with a dichroic mirror. Within the AOSLO path we can appreciate the three imaging channels and the WFS channel with the independent delivery and collection. In the AOOCT path the main components consist of the achromatizing lens, the focus adjustment, and the active scanning mirror.
Fig. 2.
Fig. 2. The schematic above displays how the sync signals are integrated between the AOSLO and AOOCT. The main clocks are originating from the resonance frequency of the AOSLO resonant scanner (H-sync) and the start of the AOOCT laser sweep as detected by the fiber Bragg grating (AOOCT FBG trigger). BPR balanced photoreceiver, PMT photomultiplier tube, DAQ data acquisition board, H Horizontal, V Vertical
Fig. 3.
Fig. 3. Demonstration of motion compensation. a) Average of 720 B-scans acquired while the moving model eye actuated with a 1-Hz sinusoidal waveform. Top image is without motion compensation, bottom image is with active motion compensation. Images are in linear greyscale. b) Time traces of motion in the moving model eye, obtained from the depth location marked by the dashed line in a). Top image is without motion compensation while the bottom image is with active compensation. c) The amplitude of sinusoidal motion in a moving model eye as extracted from the images without and with active compensation are shown in the orange and blue traces respectively. d) Quantification of motion compensation across a frequency spectrum, from 1 Hz to 64 Hz. The 3-dB bandwidth of motion compensation was estimated to be 10 Hz.
Fig. 4.
Fig. 4. An AOSLO montage from the fovea to 12 degrees eccentricity was made as a reference to indicate the locations for the AOOCT results.
Fig. 5.
Fig. 5. Comparison of AOOCT images with and without active eye tracking. a) shows an AOSLO image of the area being imaged. b) shows an overlay of the same AOSLO image with the corresponding en face AOOCT. An animation of the overlay can be found in Supplementary Material (See Visualization 1). c) shows an 8-second en face AOOCT view of the IS/OS junction without tracking while d) presents one with active tracking. The B-scans were flattened at the IS/OS layer to produce the en face images. e,g) show time traces of B-scans of the IS/OS junction, with and without tracking, respectively. An average of the registered B-scan is shown below each time trace (f,h). Without tracking, the en face image is distorted as the B-scan continuously shifts to different locations. Active stabilization yields high-fidelity en face views with no need for post-processing and consistent B-scans from a single location which, after averaging reveals a cross-section with resolved cones. Scale bar: 20 µm.