Optica Publishing Group

Simultaneous directional full-field OCT using path-length and carrier multiplexing

Open Access

Abstract

Full-field swept-source optical coherence tomography (FF-SS-OCT) is an emerging technology with potential applications in ophthalmic imaging, microscopy, metrology, and other domains. Here we demonstrate a novel method of multiplexing FF-SS-OCT signals using carrier modulation (CM). The principle of CM could be used to inspect various properties of the scattered light, e.g. its spectrum, polarization, Doppler shift, or distribution in the pupil. The last of these will be explored in this work, where CM was used to acquire images passing through two different optical pupils. The two pupils contained semicircular optical windows with perpendicular orientations, with each window permitting measurement of scattering anisotropy in one dimension by inducing an optical delay between the images formed by the two halves of the pupil. Together, the two forms of multiplexing permit measurement of differential scattering anisotropy in the x and y dimensions simultaneously. To demonstrate the feasibility of this technique our carrier multiplexed directional FF-OCT (CM-D-FF-OCT) system was used to acquire images of a microlens array, human hair, onion skin and in vivo human retina. The results of these studies are presented and briefly discussed in the context of future development and application of this technique.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Full-field swept-source optical coherence tomography (FF-SS-OCT) has emerged in recent years as an alternative to traditional scanning confocal implementations [1–3]. The catalyst for this development has been the advent of complementary metal-oxide-semiconductor (CMOS) cameras with high sensitivity and frame rates in the hundreds of kilohertz. These fast cameras permit the parallel detection of thousands of A-scans, with resulting volume rates an order of magnitude higher than competing scanning implementations.

On the other hand, FF-OCT is not a confocal imaging modality, and is thus more susceptible than scanning systems to common-path coherent artifacts such as cross-talk between pixels and interference between optical surfaces within the same channel. To mitigate these sources of noise, investigators have developed methods to reduce the spatial coherence of the imaging light [4,5] and to introduce a small angle between the optical axes of the sample and reference arms [6–8]. The latter generates a carrier fringe on the sensor that modulates the signal from interference between the sample and reference beams, but does not modulate fringes generated by common path interference. This difference permits separation of signal from common path artifacts during demodulation.

Here we present a method of multiplexing off-axis FF-OCT signals by splitting the sample beam into two beams and delivering them at the detector plane with different angles of incidence, similar to what has been done in digital holography [9–11]. The two angles generate carrier fringes with orthogonal orientations. In post-processing, these are mutually separable from each other and from the common path component, creating two independent OCT channels that could be used to measure different aspects of the light backscattered by the sample, such as wavelength-dependence, polarization or scattering angle. We refer to this method as carrier multiplexing (CM). In our experiment, the CM channels were used to acquire simultaneous directional imaging in $x$ and $y$ using crossed pupil apodizations.

While conventional OCT is typically based on the detection of directly backscattered light, methods have been suggested to explore anisotropic scattering distributions. In ophthalmoscopy, directional OCT (dOCT) is usually performed by varying the pupil entrance position of the OCT beam, resulting in a change of the orientation of light incident on the retina. This method has been used to measure the pointing direction of angular scatterers such as the photoreceptors [12,13], visualize structures such as Henle’s fiber layer [14], and characterize other directionally scattering retinal layers [15]. It is attractive for its simplicity and applicability to commercial OCT systems, but it requires moving parts with limited precision and sequential measurement. Changes in fixation or other experimental parameters between acquisitions may cause spurious variation in the resulting images. In multi-channel OCT multiple beams and sensors are used in parallel, enabling simultaneous probing of linearly independent orientations [16], with applications in 3D Doppler [17,18], but the complexity of those systems is increased by their parallelization, making them costly and prone to misalignment.

Scattering anisotropy has also been studied in OCT using optical path length multiplexing (OPLM). In that approach, a path-length multiplexing element (PME) such as an annular glass window was used at the pupil plane to separate the backscattered light from the sample into low and high angles [19–21]. In another experiment, a custom beam-dividing glass plate was used in a scanning OCT system to encode three different Doppler angles [22]. The concept of PME was similarly applied with a few-mode optical fiber by exploiting the angular dependence of coupling efficiency and modal propagation in the fiber to achieve bright and dark field OCT [23]. More recently, the concept of computational pupil apodization to access scattering anisotropy was demonstrated using FF-SS-OCT [24].

In our experiment, semicircular optical windows were placed in the pupil plane of each CM channel, with perpendicular orientations. Each of the two windows serves as a PME, subdividing the two CM channels and resulting in four channels measuring light passing through the top, bottom, left, and right halves of the pupils. This configuration permits concurrent measurements of the $x$ and $y$ angular scattering distributions without moving parts and with just one CMOS camera. To validate the system, measurements were acquired of a microlens array, human hair, onion epidermis and in vivo human retinal photoreceptors.

2. Experimental setup

The setup is a modification of a system previously reported by our group [8]. Briefly, it consists of a Mach-Zehnder interferometer with a tunable light source ($800-875$ nm, BS-840-2-HP, Superlum, Cork, Ireland) divided into sample and reference arms by a polarizing beamsplitter (PBS). Light reflected by the PBS enters the sample channel. In the current configuration, the light backscattered by the sample is split using a 1.5", 30R:70T non-polarizing cube beamsplitter (Lambda Research Optics, Costa Mesa, CA, USA) rotated 45$^{\circ }$, with the cube’s central coating layer parallel to the collimated input beam [Fig. 1(a)]. The low reflection ratio was chosen to compensate for the high angle of incidence at the cube’s reflecting surface, and the ratio resulting from this configuration was measured as 58R:42T. The sample light emerges from the cube in two parallel beams with separation controlled by the position of the prism with respect to the incident beam. The two beams are then focused by an achromatic doublet lens ($f = {500} \textrm {mm}$), hitting the high-speed 2D CMOS sensor (FASTCAM NOVA S-12, Photron, Tokyo, Japan) at angles of $-1^{\circ }$ and $+1^{\circ }$ in $x$, respectively, as shown in Fig. 1(a).
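As a rough cross-check of this geometry, the expected carrier fringe period on the sensor can be estimated from the quoted beam angles. The sketch below is illustrative only: it assumes ideal plane waves, the 837 nm center wavelength quoted later in the paper, and the $\pm 1^{\circ }$ sample and $+1^{\circ }$ reference tilts described in the text.

```python
import numpy as np

# Illustrative estimate of the carrier fringe period from the beam geometry.
# Assumptions (not measured system values): ideal plane waves, 837 nm center
# wavelength, sample beams tilted 1 deg in x, reference tilted 1 deg in y.
lam = 837e-9             # center wavelength (m)
theta = np.deg2rad(1.0)  # off-axis tilt of each beam (rad)

k = 2 * np.pi / lam
k_par_sample = k * np.sin(theta)  # in-plane wave-vector component (x)
k_par_ref = k * np.sin(theta)     # in-plane wave-vector component (y)

# The carrier frequency is the magnitude of the in-plane k-vector difference
# between a sample beam and the reference beam (orthogonal in-plane components).
k_carrier = np.hypot(k_par_sample, k_par_ref)
period_um = 2 * np.pi / k_carrier * 1e6
print(f"carrier fringe period ~ {period_um:.1f} um")
```

Because the two sample beams share the same fringe frequency but orthogonal orientations, their sidebands land in different regions of the 2D Fourier plane, which is what makes them separable.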


Fig. 1. (a) Schematic of the carrier and path length multiplexed directional full-field OCT (CM-D-FF-OCT) (not to scale). The dashed red box highlights the beamsplitter used to generate the two carrier fringes, as well as the optical windows used to separate the left-right (L-R) and top-bottom (T-B) pupil halves. BS: beamsplitter; DBS: dichroic beamsplitters; PBS: polarizing beamsplitter; HWP: half wave plate; QWP: quarter wave plate; DM: deformable mirror; SHWS: Shack-Hartmann wavefront sensor; (b) Fourier transform of the intensity interference pattern on the camera for a single frame, illuminated by a narrow 0.15 nm portion of the light source’s spectral sweep. The two channels, represented in blue and orange, have $k_{\parallel }$ in opposing directions in $x$, and interfere with the reference beam with $k_{\parallel }$ in $y$. This scheme results in independent orthogonal spatial modulation of each of the two OCT signals in the detection plane of the CMOS camera. By numerically filtering in the Fourier domain, carrier multiplexing allows complete separation of signals from channels 1 and 2. The edges of the semi-circular optical windows are visible in the images; (c) Imaging a model eye reveals two volumetric images, formed by light refracted (blue) and reflected (orange) at the BS, respectively, and extracted after carrier demodulation; representative B-scans extracted from the volumes are presented to visualize the axial separation due to the path length multiplexing.


In turn, the beam transmitted at the PBS forms the reference arm and hits the camera at an angle of $+1^{\circ }$ in the $y$ direction. This creates carrier fringes for each imaging beam with equal spatial frequency but orthogonal orientations. The 2D Fourier transform of each camera frame reveals two conjugate pairs, with each pair originating from one orientation of the carrier. Demodulating and demultiplexing the OCT signals consists of spatially filtering one region from each carrier pair (i.e. one blue region and one orange region in Fig. 1(b)) followed by inverse Fourier transformation. Two independent OCT volumes are thus reconstructed from the series of camera frames acquired during one spectral sweep of the light source.
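The demodulation procedure can be illustrated with a toy numerical example: build one frame carrying two envelopes on orthogonally oriented carriers, take the 2D FFT, keep one sideband per carrier, and inverse transform. The frame size, carrier frequency, and envelopes below are arbitrary stand-ins, not the system’s parameters.

```python
import numpy as np

# Toy carrier demultiplexing: two envelopes ride on orthogonally oriented
# carriers in a single frame; masking one Fourier sideband per carrier and
# inverse-transforming recovers each channel independently.
N = 256
y, x = np.mgrid[0:N, 0:N]
fc = 32  # carrier frequency, cycles per frame (assumed)

o1 = np.ones((N, N))                          # channel 1: uniform envelope
o2 = np.zeros((N, N)); o2[:, :N // 2] = 1.0   # channel 2: lit on the left half

# Orthogonal carrier orientations: along x+y for channel 1, x-y for channel 2
frame = (o1 * np.cos(2 * np.pi * fc * (x + y) / N)
         + o2 * np.cos(2 * np.pi * fc * (x - y) / N))

F = np.fft.fftshift(np.fft.fft2(frame))
ky, kx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

def demodulate(cy, cx):
    """Keep one circular sideband region and inverse-transform."""
    mask = (ky - cy) ** 2 + (kx - cx) ** 2 < (fc // 2) ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * mask)))

ch1 = demodulate(fc, fc)    # sideband of the (x + y) carrier
ch2 = demodulate(-fc, fc)   # sideband of the (x - y) carrier

# Channel 2 recovers its half-frame envelope; channel 1 stays nearly uniform
lit = ch2[:, N // 8:3 * N // 8].mean()
dark = ch2[:, 5 * N // 8:7 * N // 8].mean()
print(f"channel-2 lit/dark contrast: {lit / dark:.1f}")
```

Because the two sidebands sit in different Fourier quadrants, neither mask captures energy from the other channel, mirroring the separation shown in Fig. 1(b).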

We used these two CM channels to implement directional imaging by incorporating semicircular optical windows (WTS Photonics; BK7, 1 mm thickness, $\lambda /10$ wavefront distortion), with perpendicular orientations, into each of the two beams. The path-length difference induced by each window results in two separate images of the sample, apparently displaced in depth, which encode two directions of scattering. Thus each CM channel can extract directional scattering in the direction orthogonal to the edge of its semicircular optical window.

The sample channel also incorporated a hardware adaptive optics (AO) subsystem for real-time aberration correction. The AO system incorporated a super-luminescent diode beacon (740–770 nm, IPSDM0701-C, Inphenix, Livermore CA, USA), a deformable mirror (DM) (DM-97-15, ALPAO, Montbonnot-Saint-Martin, France) and a custom-built Shack-Hartmann wavefront sensor (SHWS) consisting of a CMOS camera (acA2040-90um, Basler, Ahrensburg, Germany) and microlens array (MLA300-14AR-M, Thorlabs, Newton NJ, USA). The AO system was operated in closed-loop with a gain of 0.3 and rate of 30 Hz, using open source software developed in Python/Cython by our lab [25]. Wavefront slopes were measured relative to reference coordinates generated by a planar wavefront propagated through the non-common path.

In our previous work [8] the real-time AO subsystem provided diffraction-limited imaging (by the Maréchal criterion), with an expected lateral resolution at the retina of approximately ${2.6}\;\mu \textrm {m}$. In the present work, each image is formed through a semicircular pupil, and the corresponding impacts are seen in the PSFs and images. To quantify the impact on image quality, the wavefront was measured after each PME using a commercial SHWS (HASO3 128-GE2, Imagine Optic, Orsay, France) to verify that no significant optical aberration was introduced by the beamsplitter and PME. The measured RMS wavefront error was $\sim 0.04\lambda$, and thus still diffraction-limited by the Maréchal criterion. However, the splitting of the pupil by the PME resulted in an elongated point spread function (PSF) for each subset of images (Fig. 2), reducing the system’s lateral resolution and distorting the visual appearance of small structures.
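The PSF elongation can be reproduced with a small Fourier-optics sketch: the PSF is proportional to the squared magnitude of the Fourier transform of the pupil function, so halving the pupil along one axis roughly doubles the PSF width along that axis. The grid size and pupil radius below are arbitrary simulation choices, not the system’s actual pupil sampling.

```python
import numpy as np

# PSF from a pupil function via FFT: compare a full circular pupil with a
# semicircular one. Grid size and radius are arbitrary simulation choices.
N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = 40  # pupil radius in grid samples (assumed)

full = (x ** 2 + y ** 2 < r ** 2).astype(float)
half = full * (y < 0)  # semicircular aperture with its straight edge along x

def psf(pupil):
    """Normalized PSF as the squared magnitude of the pupil's FFT."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
    return p / p.max()

def fwhm(profile):
    """Full width at half maximum of a 1D profile, in samples."""
    return np.count_nonzero(profile >= 0.5 * profile.max())

p_full, p_half = psf(full), psf(half)
c = N // 2
print("full pupil FWHM (x, y):", fwhm(p_full[c, :]), fwhm(p_full[:, c]))
print("half pupil FWHM (x, y):", fwhm(p_half[c, :]), fwhm(p_half[:, c]))
```

The half-pupil PSF widens along the axis perpendicular to the straight edge, matching the elongation shown in Fig. 2.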


Fig. 2. PSF for different pupil functions calculated using Zemax eye retinal image model and a pupil size of 6.75 mm. The splitting of the signal by the PME into two solid angles causes each of the multiplexed images to be generated by a semicircular pupil, which elongates the PSF and reduces the lateral resolution of the system; (left) full pupil; (center) semi-circular aperture oriented horizontally; (right) semi-circular aperture oriented vertically.



Fig. 3. The signal measured at the camera sensor plane is the coherent sum of three waves, two originating from the object and one originating from the reference mirror. The central k-vectors of the object field for each channel are designated $k_1$ and $k_2$, and the wave vector of the reference beam is designated $k_{\textrm {ref}}$. The shaded blue and orange cones represent the wave vector bandwidth, limited by the system NA. None of the waves propagates in a direction normal to the sensor surface; thus each has a component perpendicular to the sensor, designated with $\bot$, and a component parallel to the sensor, designated $\parallel$. The two waves described by vectors $k_1$ and $k_2$ interfere with the reference wave, generating two carrier modulations with different orientations.


3. Theory

3.1 Carrier multiplexing (CM)

A Fourier-domain representation of a single OCT volume is recorded as the spectral interference fringes in a series of 500 images, $I(x,y,k)$, collected during a single sweep of the source (800 nm to 875 nm). Each image in the series records the intensity, which is the time-averaged square of the coherent sum of the electric fields from the two CM channels and the reference beam:

$$\begin{aligned} I(x,y,k) \propto & \left\langle |R(x,y,k) + O_{1}(x,y,k) + O_{2}(x,y,k) |^2\right\rangle \\ = &\langle|R(x,y,k)|^2\rangle + \langle|O_{1}(x,y,k)|^2\rangle + \langle|O_{2}(x,y,k)|^2\rangle\\ &+ \langle R^*O_{1}(x,y,k)\rangle + \langle R^*O_{2}(x,y,k)\rangle\\ &+ \langle O_{1}^*R(x,y,k)\rangle + \langle O_{2}^*R(x,y,k)\rangle\\ &+ \langle O_{1}^*O_{2}(x,y,k)\rangle + \langle O_{2}^*O_{1}(x,y,k)\rangle. \end{aligned}$$
The sum of the first three terms $\langle |R(x,y,k)|^2\rangle + \langle |O_{1}(x,y,k)|^2\rangle + \langle |O_{2}(x,y,k)|^2\rangle$ represents the DC and autocorrelation components of the detected signal, while the last two terms $\langle O_{1}^*O_{2}(x,y,k)\rangle$ and $\langle O_{2}^*O_{1}(x,y,k)\rangle$ represent interference between the multiplexed object fields. The remaining terms describe interference between the multiplexed object fields and the reference field, and thus encode the object structure.
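The expansion can be sanity-checked with scalar complex amplitudes standing in for the fields at one pixel and wavenumber (the values below are arbitrary): three autocorrelation terms, four reference cross terms, and the two mutual object terms sum back to the detected intensity.

```python
import numpy as np

# Scalar check of the nine-term intensity expansion. R, O1, O2 are arbitrary
# complex amplitudes standing in for the fields at one pixel and wavenumber.
R = 1.0 + 0.5j
O1 = 0.2 - 0.3j
O2 = -0.1 + 0.4j

total = abs(R + O1 + O2) ** 2
terms = (abs(R) ** 2 + abs(O1) ** 2 + abs(O2) ** 2       # DC / autocorrelation
         + np.conj(R) * O1 + np.conj(R) * O2             # signal terms
         + np.conj(O1) * R + np.conj(O2) * R             # conjugate signal terms
         + np.conj(O1) * O2 + np.conj(O2) * O1)          # mutual object terms
print(np.isclose(total, terms.real), abs(terms.imag) < 1e-12)
```

Note that the conjugate pairs cancel the imaginary parts, so the sum is real, as an intensity must be.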

The solutions of the Helmholtz-equation for reference and object can be expressed in camera plane as:

$$R=R_0e^{i\vec{k}_{\textrm{ref}}\cdot\vec{r}}=R_0e^{ik_{({\parallel},\textrm{ref})}y}e^{ik_{(\bot,\textrm{ref})}z}|_{z=z_0}$$
$$O_{n}=O_{0,n}(x,y)e^{i\vec{k}_{\textrm{n}}\cdot\vec{r}}=O_{0,n}(x,y)e^{ik_{({\parallel},n)}x}e^{ik_{(\bot,n)}z}|_{z=z_0},$$
where $R_0$ and $O_{0,n}$ are the respective reference and object wave amplitudes, $\vec {r}$ is a spatial position, $k_{\parallel }$ and $k_{\bot }$ are the respective components of the wave vector parallel and orthogonal to the camera surface, $z_0$ is the camera plane and $n$ refers to the CM channel 1 or 2. The spatial field $O$ has a spectral extent, with lateral bandwidth limited by the numerical aperture (NA) of the system (Fig. 3). The $\vec {k}_n$ refers to the central k-vector of the object field.

The Fourier transform of the interference signal (time averaged intensity) in the camera plane generated by the $n^{\textrm {th}}$ sample beam $R^*O_{n}$ with respect to $x$ and $y$ is given by:

$$\begin{aligned}\mathcal{F}( R^*O_{n})|_{z=z_0}=& R_0e^{i\left(k_{\bot,n}-k_{\bot,ref}\right)z} \mathcal{F}_{xy}\left(O_{0,n}(x,y)\right) \circledast \int\limits_{-\infty}^{+\infty} \int\limits_{-\infty}^{+\infty} e^{i\left(k_{{\parallel},n}-k_x\right)x}e^{i\left({-}k_{{\parallel},\textrm{ref}}-k_y\right)y} dxdy |_{z=z_0}\\ =& R_0e^{i\left(k_{\bot,n}-k_{\bot,ref}\right)z} \mathcal{F}_{xy}\left(O_{0,n}(x,y)\right) \circledast\delta\left(k_{{\parallel},n}-k_x\right)\delta\left({-}k_{{\parallel},\textrm{ref}}-k_y\right)|_{z=z_0}, \end{aligned}$$
with $\circledast$ and $\delta$ being the convolution operation and Dirac delta function, respectively. The coordinates of the $\delta$ product, $k_{\parallel ,n}$ and $k_{\parallel ,\textrm {ref}}$, represent the modulation of the signal by the carrier frequency, and thus the lateral shift of the OCT interference signal in the Fourier domain. The concept of carrier modulation and shift is illustrated in Fig. 4. Conjugated terms are shifted in the opposite direction in the Fourier plane, while the DC and autocorrelation terms $\mathcal {F}_{xy}\left (|R(x,y,k)|^2+|O_{1}(x,y,k)|^2 + |O_{2}(x,y,k)|^2\right )$ remain at the center. Meanwhile, $O_{1}^*O_{2}$ and its conjugated term are shifted along the $x$-axis but, since the amplitude of each object field is typically much smaller than the reference field, those terms can be ignored. This tilted-beam approach results in a wavelength-dependent carrier frequency in Fourier space; diffraction gratings could be used instead to introduce fixed carrier frequencies [7].


Fig. 4. (a) After the sample beam is split, the average $x$ direction components of the resulting wave vectors have opposite directions (blue and orange solid arrows). These interfere with the reference beam, whose $k_\parallel$ lies in $y$ (gray arrow). The relative angles of sample and reference arms are represented by dashed arrows. (b) The off-axis interference of sample and reference beams creates lateral modulations with orthogonal orientations; the carrier frequency of the $n^{\textrm {th}}$ sample beam is given by $\sqrt {k_{\parallel ,n}^2+k_{\parallel ,\textrm {ref}}^2}$. Choice of orientation and frequency should be informed by the size of the sensor and the density of its pixels. Notwithstanding these constraints, carrier modulations for an arbitrary number of channels may be realized. (c) The intensity interference pattern in $x$ and $y$ shifts the signals from each portion of the beam and its complex conjugate term (dashed ellipses) into different quadrants of the Fourier space. By spatially filtering the 2D Fourier transform to only one ellipse in each camera frame, the information encoded in each sample beam is completely decoupled, creating two independent channels. (d) The image from one channel is left-right reversed with respect to the other due to its reflection in the beamsplitter.


Finally, numerical filtering in the Fourier domain permits complete separation of the signals from channels 1 and 2, as well as suppression of the autocorrelation terms, DC, and complex conjugate pairs [6,7]. The modulation frequency $\sqrt {k_{\parallel ,n}^2+k_{\parallel ,\textrm {ref}}^2}$ establishes the magnitude of the shift, but the corresponding off-axis angle is limited by the pixel size of the camera, since the carrier frequency must be sampled with at least two pixels per cycle. The high NA of our system and the limited carrier angle impede complete separation of the cross- and autocorrelation terms, so a trade-off arises between noise suppression and lateral resolution [8]. However, because all of the terms are recorded, the size of the filter can be adjusted in an ad hoc manner, as needed.
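The two-pixels-per-cycle constraint translates into a maximum off-axis angle. As a simplified example, for a single beam tilted by $\theta$ interfering with an on-axis wave, the fringe period is $\lambda /\sin \theta$ and must exceed twice the pixel pitch. The 20 µm pitch below is a hypothetical value for illustration, not the specification of the camera used here.

```python
import math

# Nyquist bound on the off-axis angle: the fringe period lambda / sin(theta)
# must exceed two pixel pitches. The pixel pitch is a hypothetical value for
# illustration, not the actual camera specification.
lam = 837e-9   # center wavelength (m)
pitch = 20e-6  # assumed pixel pitch (m)

theta_max = math.degrees(math.asin(lam / (2 * pitch)))
print(f"maximum off-axis angle ~ {theta_max:.2f} deg")
```

For these assumed numbers the bound lands near 1.2°, which suggests why off-axis angles on the order of 1° are a reasonable operating point.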

Note that the image from channel 2 is flipped laterally [Fig. 4(d)] due to the extra reflection that this channel experiences in the beamsplitter [Fig. 1(a)].

The approach outlined above allows the creation of two independent OCT detection channels, operating in parallel, while using only one sensor and improving utilization of the sensor area. It has the potential to characterize and measure different aspects of the light back-scattered by the sample including, but not limited to, wavelength-dependence, polarization, Doppler shift or scattering angle.

3.2 Path-length multiplexing

Downstream of the CM beamsplitter, a semi-circular optical window was introduced in each of the two channels, optically conjugated to the pupil plane, to split the pupil in orthogonal directions. Each optical window acts as a PME [19–21]. In this way, sample field components backscattered with positive wavevector in $y$ travel an optical distance of $n_{\textrm {air}}d = {1}$ mm through the PME in channel 1, while the components with negative wave vector in $y$ propagate through a longer optical distance given by $n_{\textrm {BK7}}d = {1.51}$ mm, where $n_{\textrm {BK7}}= 1.510$ is the refractive index of BK7 glass at $\lambda = {837}$ nm. This optical path difference makes the backscattered light that passes through each half of the PME appear at different depths in the OCT tomogram, separated by $(n_{\textrm {BK7}}-n_{\textrm {air}})d = {510}\;\mu \textrm {m}$ upon interference with the reference beam, as shown in Fig. 1(c). Similarly, the field was multiplexed in the left-right direction in channel 2.
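The apparent depth separation quoted above follows directly from the window parameters; a one-line check, using the values given in the text:

```python
# Depth separation introduced by the 1 mm BK7 path-length multiplexing
# element: light through the glass gains (n_BK7 - n_air) * d of optical path
# relative to light through the open half of the pupil (values from the text).
d = 1.0e-3     # window thickness (m)
n_bk7 = 1.510  # refractive index of BK7 at 837 nm
n_air = 1.000

opd = (n_bk7 - n_air) * d
print(f"apparent depth separation: {opd * 1e6:.0f} um")
```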

In this way, the CM-D-FF-OCT system simultaneously acquires multiplexed OCT volumes of light scattered into different solid angles, namely $I_R$, $I_L$, $I_T$ and $I_B$, referring to the right, left, top and bottom volumes, respectively. To correct for frame-wide changes in image intensity due to the uneven beamsplitting ratio, as well as reduction in pixel brightness due to signal roll-off with increasing scan depth, each en face projection was divided by its mean intensity: $\widetilde {I}=I/\overline {I}$.

The vectorial pointing of the scattering structures at each pixel can be determined by calculating a difference vector ($\Delta _{x,mn}$, $\Delta _{y,mn}$) [26]

$$\Delta_{x,mn}=\frac{\widetilde{I}_{R,mn}-\widetilde{I}_{L,mn}}{\widetilde{I}_{R,mn}+\widetilde{I}_{L,mn}} \quad \mbox{and} \quad \Delta_{y,mn}=\frac{\widetilde{I}_{T,mn}-\widetilde{I}_{B,mn}}{\widetilde{I}_{T,mn}+\widetilde{I}_{B,mn}},$$
where $mn$ are the pixel coordinates. The length of the arrows in the quiver plots was determined by $\Delta _{x,mn}$ and $\Delta _{y,mn}$, with all arrows in each plot scaled equally to optimize visibility. The average values of $\Delta _{x,mn}$ and $\Delta _{y,mn}$ were subtracted to compensate for global tip and tilt in the volumes.
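A minimal sketch of this normalization and of Eq. (5), with random arrays standing in for the four directional en face projections (in practice these come from the segmented OCT volumes):

```python
import numpy as np

# Sketch of Eq. (5): normalize each directional en face projection by its
# mean, form the per-pixel difference vector, then remove global tip/tilt by
# subtracting the mean vector. Random arrays stand in for I_R, I_L, I_T, I_B.
rng = np.random.default_rng(1)
I_R, I_L, I_T, I_B = (rng.uniform(0.5, 1.5, size=(64, 64)) for _ in range(4))

def norm(img):
    """Divide a projection by its mean intensity (I-tilde in the text)."""
    return img / img.mean()

nR, nL, nT, nB = map(norm, (I_R, I_L, I_T, I_B))
dx = (nR - nL) / (nR + nL)  # Delta_x per pixel
dy = (nT - nB) / (nT + nB)  # Delta_y per pixel

# Compensate global tip and tilt, as described above
dx -= dx.mean()
dy -= dy.mean()
print(dx.shape, float(abs(dx.mean())) < 1e-9)
```

The normalized difference is bounded, which makes the per-pixel vectors directly comparable across the four channels despite their different mean brightnesses.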

4. Signal processing

OCT volumes were acquired with the tunable light source sweeping from 800 nm to 875 nm at volume rates of 100–400 Hz, with 500 spectral frames per volume. After the data was collected, each frame was Fourier transformed in $x$ and $y$. The images were filtered in the Fourier domain to separate the CM channels, as described in §3.1, and inverse Fourier transformed to generate the spectral stack of interferograms, which encodes the structure of the object, as seen through one channel. Next, this stack was processed using a standard swept-source OCT post-processing approach.

Since the source’s sweep speed is constant with respect to $\lambda$, after spatial filtering the stack was resampled using linear interpolation to be uniform with respect to wavenumber $k = 2\pi /\lambda$. Short-time Fourier transformation was then performed to estimate and remove chirp due to dispersion mismatch and path length variations caused by vibrations and axial sample movements [8]. The OCT volume was then flattened by fitting a polynomial surface of degree 1 in $x$ and $y$ to remove the slope due to the off-axis interference with the reference arm, and the sample layers were identified to create en face projections. En face images were extracted at two depths for each channel, corresponding to the top and bottom scattering cone angles in channel 1 and left and right in channel 2. These pairs of images were used to calculate the directional scattering vector (Eq. (5)).
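The $\lambda$-to-$k$ resampling step can be sketched as follows, with a synthetic single-scatterer fringe standing in for real data; the depth value and grid sizes are illustrative assumptions only.

```python
import numpy as np

# Sketch of the lambda-to-k resampling: frames are uniform in wavelength
# (800-875 nm) but depth reconstruction requires samples uniform in
# wavenumber k = 2*pi/lambda. A synthetic fringe from a single scatterer at
# an assumed depth z stands in for real data.
n = 500
lam = np.linspace(800e-9, 875e-9, n)  # uniform in wavelength
k = 2 * np.pi / lam                   # non-uniform (and decreasing) in k
z = 100e-6                            # assumed scatterer depth (m)
spectrum = np.cos(2 * z * k)          # interference fringe vs. wavenumber

# np.interp requires an increasing abscissa, so flip the decreasing k axis
k_uniform = np.linspace(k.min(), k.max(), n)
resampled = np.interp(k_uniform, k[::-1], spectrum[::-1])

# After resampling, the fringe is a single tone in k; the FFT peak index
# equals the number of fringe cycles across the sweep, 2*z*(k span)/(2*pi)
peak = int(np.argmax(np.abs(np.fft.rfft(resampled * np.hanning(n)))))
print("fringe cycles across sweep:", peak)
```

Without the resampling step the fringe is chirped in the FFT coordinate and the depth peak broadens, which is why this interpolation precedes the depth transform.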

5. Results

5.1 Microlens array

The CM-D-FF-OCT can be used to analyze the topography of biological or non-biological samples. To illustrate the latter, images were acquired of a microlens array (MLA150-7AR-M, Thorlabs, New Jersey, USA) in the back focal plane of a 30 mm achromatic lens. The microlens array consists of a square grid of fused silica plano-convex microlenses with a lens pitch of ${150}\;\mu \textrm {m}$ and radius of curvature of 25.4 mm. Figure 5 shows the differential images color coded to aid visualization. The spherical contour of the microlenses is evident in the differential images.


Fig. 5. Directional light scattering captured with a microlens array in the back focal plane of a 30 mm lens. (a) $\tilde {I}_T - \tilde {I}_B$; (b) $\tilde {I}_R - \tilde {I}_L$; (c) Superimposed differential images. Color identification shows local pointing direction: up (yellow), down (blue), left (green), right (red). The inset miniature shows the microscopy image of the microlens array (courtesy of Thorlabs, New Jersey, USA), with arrows superimposed to show the expected pointing direction of the backscattered light.


5.2 Human hair

Two crossed strands of human hair were mounted on a lens tube and placed in the posterior focal plane of an achromatic lens ($f = {30}$ mm). Volumes were acquired at 400 Hz and the data were processed as described in §4.

The multiplexing of the signal by the PME can be appreciated in the 3D projection of channel 1, showing two pairs of crossed strands (Fig. 6). Images obtained through the two channels and PMEs are shown in Fig. 7(a), with the algebraic sum of intensities in the middle. Figure 7(b) gives the differential images, with each sub-image scaled to maximum contrast. The quiver plot, with scattering pointing direction calculated by Eq. (5), is shown at the center.


Fig. 6. Three-dimensional projection view of a pair of crossed strands of human hair imaged through a single CM channel (channel 1). The PME creates two copies of the image, separated by ${510}\;\mu \textrm {m}$, formed by light passing through the top and bottom halves of the pupil. Although directional differences are inconspicuous in this view, it illustrates the principle of path-length multiplexing. The image generated from CM channel 2 contains two copies of the image as well, originating in light passing through the left and right halves of that channel’s pupil plane.



Fig. 7. (a) En face OCT projection of the strands of hair in each subset of solid angles and their algebraic sum; (b) Directional differential images are computed through linear combinations of the four directional images. The operation used to generate each is displayed on the image. Directional differences are evident in the differential images, as well as in the quiver plot at the center.


Although some directional differences are visible in the individual images [Fig. 7(a)], the differential images highlight these contrasts. In the horizontal strand, the upper part of the hair backscatters light mainly upward while the lower part scatters downward. This is expected, due to the highly reflective cuticle layer and the cylindrical geometry of the strand, which result in a shadow-like effect in the differential images [Fig. 7(b)]. Meanwhile, the differential images in the horizontal direction are dominated by the overall tilt of each hair, with the horizontal strand pointing to the right and the vertical strand pointing to the left.

5.3 Onion epidermis

OCT volumes were collected of onion epidermis. The large size and simple structure of these cells make them attractive for microscopic imaging. The skin of a yellow onion (Allium cepa) was peeled from an inner layer (adaxial epidermis) of the onion bulb using tweezers. The removed membrane was large enough to be stretched over a lens tube, without the need for a microscope glass slide to hold the specimen. The epidermis was placed at the focal plane of the achromatic lens.

Cell nuclei were not visible, suggesting that the peeled layer contained only the outer half of the epidermal cells. Figure 8 shows the directional scattering of the cell walls. One of the cells is highlighted with blue and green arrows. The blue arrows indicate the left margin of the cell, which points leftward and downward. The green arrows indicate the central portion, which points rightward and upward. The quiver plot in the center of Fig. 8 is a spatial map of directional backscattering, and likely bears a relationship to the contour of the surface.


Fig. 8. (a) En face OCT projections of $I_T$, $I_B$, $I_L$ and $I_R$ of the adaxial epidermis of a yellow onion, with the algebraic sum of the images in the center; (b) Differential images. The blue arrows show a portion of the cell wall pointing down and left while the green arrow shows another portion pointing up and right.


5.4 Retina

The photoreceptors of the human retina are the cells responsible for phototransduction, converting the light that enters the eye into an electrical signal to be delivered to the brain. These elongated cells are well known for having strong directionality, being more sensitive to light rays that enter the eye at the center of the pupil, hitting the retina at low angle, than to those entering near the edge. This angular selectivity, referred to as the psychophysical Stiles-Crawford effect (SCE) [27,28], is also observed in light backscattered by the retina, the optical Stiles-Crawford effect (OSCE) [29,30]. The precise origin of this effect is still unknown, with some attributing it to waveguiding properties of the photoreceptors [28,31] and others proposing alternative mechanisms [32,33]. Regardless of its origins, measurement of cellular disorientation using the OSCE has potential for clinical application in diseases like age-related macular degeneration [34,35], central serous retinopathy [36], diabetic retinopathy [37] and optic neuropathies [38].

Several methods have been implemented to analyze the pointing directions in the photoreceptor mosaic including shifting of entrance and exit pupils in fundus imaging [30,39,40], scanning laser ophthalmoscopy (SLO) [36,41] and OCT [12,13]. To avoid the mechanical shifting of the pupil position, Wartak et al. demonstrated a multi-directional SD-OCT system using three separate beams and spectrometers [16] while Qaysi et al. used a pyramidal prism in a pupil conjugate plane in the imaging path of a fundus camera [26]. Here, the pointing direction of the cones in the retinal mosaic was assessed using CM-D-FF-OCT.

One subject free of known ocular disease was imaged in the temporal (T) retina at 4$^{\circ }$ eccentricity. The eye was dilated and cyclopleged using topical drops of phenylephrine (2.5 %) and tropicamide (1.0 %). A bite bar and a forehead rest were employed to position and stabilize the subject’s pupil during imaging. Subject fixation was guided with a calibrated target. The imaging light source was focused 3 cm in front of the subject’s cornea, illuminating a 2° field-of-view (FOV) on the retina with a converging (but not focused) beam [8] with a power of 6.5 mW measured at the cornea. Meanwhile, the beacon for the AO subsystem was focused at the retina and had a power of ${100}\;\mu \textrm {W}$, measured at the cornea. The simultaneous illumination from the two sources was in accordance with laser safety standards, and deemed safe for continuous retinal and corneal exposure [42]. All procedures were in accordance with the Declaration of Helsinki and approved by the University of California Davis Institutional Review Board.

After signal processing, the volumes were segmented axially, and the photoreceptor inner segment - outer segment junction (IS/OS) and cone outer segment tip (COST) layers were identified in each multiplexed volume, projected en face, registered using histogram-based bulk motion correction [43], and averaged. 150 projections were averaged to produce each en face OCT image [Fig. 9(a)]. The differential images and quiver plot are also shown [Fig. 9(b)]. Bright dots correspond to photoreceptors pointing in the corresponding direction, while dark dots indicate scatterers oriented in the opposite direction. Examples of such photoreceptors are highlighted in the images.
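
The register-then-average step can be sketched in a few lines. The histogram-based bulk motion correction of [43] is more involved; as a simplified stand-in (our assumption, not the authors’ pipeline), a rigid integer-pixel translation can be estimated by FFT cross-correlation before averaging:

```python
import numpy as np

def register_translation(ref, img):
    # Cross-correlate via FFT; the correlation peak gives the integer
    # (dy, dx) shift that aligns img to ref.
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(xc)), xc.shape))
    size = np.array(xc.shape)
    peak[peak > size // 2] -= size[peak > size // 2]  # wrap to signed shifts
    return int(peak[0]), int(peak[1])

def average_projections(projections):
    # Register every en face projection to the first one, then average.
    ref = projections[0]
    acc = np.zeros_like(ref, dtype=float)
    for p in projections:
        dy, dx = register_translation(ref, p)
        acc += np.roll(p, (dy, dx), axis=(0, 1))
    return acc / len(projections)
```

Subpixel registration and nonrigid correction would improve on this sketch, but the structure (estimate shift, resample, accumulate) is the same.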


Fig. 9. (a) En face projections through the cones for each solid angle (top, bottom, left and right) and the corresponding algebraic sum of them in the center; differences in image quality among the four channels may be due to PSF elongation combined with a non-centered Stiles-Crawford peak. (b) The differential images show the orientation of different photoreceptors. The blue arrows show a photoreceptor pointing left and up, the green arrow shows a photoreceptor pointing right and up, while the magenta arrow shows a photoreceptor pointing down with no significant angle in the $x$ direction.


6. Discussion

FF-SS-OCT carrier modulation was implemented to multiplex the OCT signal. Two degrees of freedom in the optical design (in this case, the horizontal angles of the object beams and the vertical angle of the reference beam) yield independent control of the carrier fringe orientation and frequency. Demodulation of the channels was performed in post-processing via the two-dimensional Fourier transform of the image plane, which corresponds to the pupil plane.
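
As a toy illustration of this quadrant filtering (synthetic fringes, not the actual camera data), two carriers with opposite horizontal components place their spectral peaks in different Fourier quadrants, and masking one quadrant before the inverse transform recovers the corresponding complex channel:

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N]

# Two synthetic carrier fringes with opposite horizontal components;
# their spectral peaks land in different quadrants of the Fourier plane.
frame = (2.0 * np.cos(2 * np.pi * (8 * x + 6 * y) / N)      # channel 1
         + 3.0 * np.cos(2 * np.pi * (-8 * x + 6 * y) / N))  # channel 2

def demodulate(frame, sign_x):
    # Keep only the quadrant with kx*sign_x > 0 and ky > 0, then
    # inverse transform to recover the complex field of one channel.
    F = np.fft.fftshift(np.fft.fft2(frame))
    ny, nx = F.shape
    KX, KY = np.meshgrid(np.arange(nx) - nx // 2, np.arange(ny) - ny // 2)
    mask = (KX * sign_x > 0) & (KY > 0)
    return np.fft.ifft2(np.fft.ifftshift(F * mask))

ch1 = demodulate(frame, +1)  # |ch1| = 1.0 (half the 2.0 fringe amplitude)
ch2 = demodulate(frame, -1)  # |ch2| = 1.5
```

Selecting a single sideband halves the recovered amplitude, as in any off-axis holographic demodulation; in the real system the mask is an ellipse around each channel’s spectral support rather than a full quadrant.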

Directional OCT was then achieved by subdividing each CM channel using a PME. Measurements were acquired in a lenslet array, crossed strands of human hair, onion epidermis, and the in vivo human retinal cone mosaic. For all the selected samples, differences were observed among the four resulting directional images, revealing details of the sample’s topography or scattering angle.

In this particular application of CM, a significant reduction in lateral resolution was observed, due to the semi-circular apodization imposed by the PME on each channel’s pupil. The impact was more pronounced when imaging smaller structures, like the lenslet array and the photoreceptors in the retina. In addition to the resolution loss, multiplexing also results in an overall loss of signal. As the power returning from the object is divided into multiple channels, commensurate SNR losses are expected in each channel. Thus, the proposed methods for multiplexing are not intended to improve the performance of FF-OCT systems, but rather to broaden their scope of applications.
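
The expected penalty is easy to quantify under the assumption (ours, for illustration) of shot-noise-limited detection, where per-channel SNR scales linearly with the object power reaching that channel:

```python
import math

def snr_penalty_db(n_channels):
    # Dividing the returning object power among n channels reduces each
    # channel's signal n-fold; in the shot-noise limit this costs
    # 10*log10(n) dB of SNR per channel.
    return 10.0 * math.log10(n_channels)
```

For the four directional sub-images used here (two CM channels, each split into two pupil halves), this amounts to roughly 6 dB per channel relative to an unmultiplexed system, before accounting for the additional apodization losses.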

Another shortcoming of the system was the short optical path difference imposed by the PME, which limited the thickness of structures that could be imaged with the current configuration. Thicker semi-circular optical windows or windows with higher refractive index could be utilized instead to overcome this limitation, albeit at the expense of additional signal loss due to roll-off.
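
The relationship between window thickness and the induced delay is simple to estimate. A minimal sketch (the refractive index below is an assumed value for a typical glass such as BK7, not a specification of this system):

```python
def window_opd_um(thickness_um, n_glass):
    # A window of thickness t and index n adds an optical path of
    # t * (n - 1) relative to the empty half of the pupil.
    return thickness_um * (n_glass - 1.0)

def thickness_for_opd_um(opd_um, n_glass):
    # Thickness required to produce a desired path-length offset.
    return opd_um / (n_glass - 1.0)
```

For example, the ~510 µm axial separation seen in Fig. 6 would correspond to a window roughly 1 mm thick at n = 1.51; doubling the offset requires doubling the thickness or raising the index, at the cost of pushing the signal deeper into the sensitivity roll-off.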

This experiment illustrates the implementation of CM for simultaneous directional imaging, which has potential applications in a variety of domains, such as microscopy [44], metrology [45,46], and medical diagnosis [47]. Goniometric measurements, for instance, have been used for years to detect changes in the scattering properties of tissues in order to predict the presence of early carcinogenesis [48,49]. Other approaches, like dark-field spectral scatter imaging, split-detector SLO, and offset-pinhole methods, reject ballistic photons and facilitate the imaging of multiply scattering structures [50–52]. Among these, OCT emerges as an alternative that provides depth-resolved scattering maps, with the potential to improve angiograms [53] and 3D Doppler imaging.

For samples consisting of densely packed scatterers, i.e., those spaced closely relative to the OCT’s point spread function, the apparent directional differences may be confounded by other effects, namely coherent interference among the scattered light fields. As the angle of measurement changes, the phase delay between these fields may change as well, creating variations in the light distribution in the pupil that do not have directional origins. The proposed method would work better for samples consisting of isolated scatterers or reflective surfaces (such as the microlens array). However, in living samples, where movement of scattering objects causes decorrelation of speckle [54,55], averaging of OCT images may reduce the contribution of coherent effects and permit measurement of directional differences. Moreover, spatial averaging over multiple speckles may also permit separation and measurement of directional scattering.

In the specific case of directional imaging of photoreceptors, displacement of the Stiles-Crawford peak from the center of the pupil may result in large differences in the amount of light passing through different parts of the pupil. This may result in SNR variations that should be considered when interpreting results. Combined with the elongated PSFs caused by pupil apodization and overall loss of SNR due to multiplexing, this may result in image quality lower than what is seen in comparable single-channel AO-OCT images of the photoreceptors [54,55].

In addition to directional imaging, CM could be used in a variety of other modalities, such as polarization-sensitive imaging [56,57], multispectral imaging [58,59], simultaneous imaging through multiple apodized pupils or a combination of different techniques.

Furthermore, CM could be implemented with a higher number of channels. However, the complete separation of those multiplexed channels would depend on the sensor size, pixel size and numerical aperture of the imaging system [8], and thus the number of possible independent channels is limited by the sensor characteristics and resolution requirements.

7. Conclusions

We have presented a novel method for multiplexing signals in off-axis full-field swept-source OCT by manipulating the orientation and frequency of the carrier modulations generated by each channel. Carrier multiplexing represents a way to record multiple OCT volumes at once, each acquired through an independent optical channel, while optimizing utilization of the sensor’s pixels without substantial additional cost or modifications to the system. In the present paper, we have illustrated the concept by multiplexing just two channels and implemented simultaneous differential imaging in two dimensions.

To demonstrate the utility of CM, we utilized two CM channels to measure backscattering anisotropy in the $x$ and $y$ directions simultaneously. Anisotropy in each dimension was measured in a single CM channel by introducing a semicircular optical window in half of the pupil. This window created an optical delay between light passing through the empty half of the pupil and light passing through the glass, thus generating separate OCT signals for each half of the pupil, detected simultaneously. One CM channel measured anisotropy in the vertical dimension, and the other in the horizontal dimension. Linear combinations of the four resulting OCT volumes permitted assessment of directional backscattering.

This implementation of differential imaging has the advantage of having no moving parts, although it does not benefit from the directional illumination achieved in double-pass implementations.

Other forms of imaging could also benefit from the CM approach, including spectroscopy, polarization-sensitive OCT, Doppler imaging, or a combination of different methods. Some of these applications may require more than two CM channels, which can be implemented by further subdivision of the sample beam, limited by the sensor size, pixel size and numerical aperture of the imaging system. Consideration should also be given to the SNR losses caused by these subdivisions.

Funding

National Institutes of Health (R00-EY-026068, R01-EY-026556, R01-EY-031098); UC Davis Eye Center Startup Funds.

Acknowledgments

We gratefully acknowledge the assistance of Susan Garcia and the support of VSRI and EyePod Lab members.

Disclosures

RSJ and RJZ have patents related to AO-OCT. DV and KVV declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper can be provided upon request from the authors.

References

1. B. Považay, A. Unterhuber, B. Hermann, H. Sattmann, H. Arthaber, and W. Drexler, “Full-field time-encoded frequency-domain optical coherence tomography,” Opt. Express 14(17), 7661–7669 (2006). [CrossRef]  

2. T. Bonin, G. Franke, M. Hagen-Eggert, P. Koch, and G. Hüttmann, “In vivo Fourier-domain full-field OCT of the human retina with 1.5 million A-lines/s,” Opt. Lett. 35(20), 3432–3434 (2010). [CrossRef]  

3. D. Hillmann, C. Lührs, T. Bonin, P. Koch, and G. Hüttmann, “Holoscopy-holographic optical coherence tomography,” Opt. Lett. 36(13), 2390–2392 (2011). [CrossRef]  

4. E. Auksorius, D. Borycki, and M. Wojtkowski, “Crosstalk-free volumetric in vivo imaging of a human retina with Fourier-domain full-field optical coherence tomography,” Biomed. Opt. Express 10(12), 6390–6407 (2019). [CrossRef]  

5. E. Auksorius, D. Borycki, and M. Wojtkowski, “Multimode fiber enables control of spatial coherence in Fourier-domain full-field optical coherence tomography for in vivo corneal imaging,” Opt. Lett. 46(6), 1413–1416 (2021). [CrossRef]  

6. D. Hillmann, H. Spahr, H. Sudkamp, C. Hain, L. Hinkel, G. Franke, and G. Hüttmann, “Off-axis reference beam for full-field swept-source OCT and holoscopy,” Opt. Express 25(22), 27770–27784 (2017). [CrossRef]  

7. H. Sudkamp, D. Hillmann, P. Koch, M. vom Endt, H. Spahr, M. Münst, C. Pfäffle, R. Birngruber, and G. Hüttmann, “Simple approach for aberration-corrected OCT imaging of the human retina,” Opt. Lett. 43(17), 4224–4227 (2018). [CrossRef]  

8. D. Valente, K. V. Vienola, R. J. Zawadzki, and R. S. Jonnal, “Kilohertz retinal FF-SS-OCT and flood imaging with hardware-based adaptive optics,” Biomed. Opt. Express 11(10), 5995–6011 (2020). [CrossRef]  

9. X. Wang, H. Zhai, and G. Mu, “Pulsed digital holography system recording ultrafast process of the femtosecond order,” Opt. Lett. 31(11), 1636–1638 (2006). [CrossRef]  

10. C. Yuan, H. Zhai, and H. Liu, “Angular multiplexing in pulsed digital holography for aperture synthesis,” Opt. Lett. 33(20), 2356–2358 (2008). [CrossRef]  

11. M. Paturzo, P. Memmolo, A. Tulino, A. Finizio, and P. Ferraro, “Investigation of angular multiplexing and de-multiplexing of digital holograms recorded in microscope configuration,” Opt. Express 17(11), 8709–8718 (2009). [CrossRef]  

12. A. Roorda and D. R. Williams, “Optical fiber properties of individual human cones,” J. Vis. 2(5), 4 (2002). [CrossRef]  

13. W. Gao, B. Cense, Y. Zhang, R. S. Jonnal, and D. T. Miller, “Measuring retinal contributions to the optical Stiles-Crawford effect with optical coherence tomography,” Opt. Express 16(9), 6486–6501 (2008). [CrossRef]  

14. B. J. Lujan, A. Roorda, J. A. Croskrey, A. M. Dubis, R. F. Cooper, J.-K. Bayabo, J. L. Duncan, B. J. Antony, and J. Carroll, “Directional optical coherence tomography provides accurate outer nuclear layer and henle fiber layer measurements,” Retina 35(8), 1511–1520 (2015). [CrossRef]  

15. R. K. Meleppat, P. Zhang, M. J. Ju, S. K. K. Manna, Y. Jian, E. N. Pugh, and R. J. Zawadzki, “Directional optical coherence tomography reveals melanin concentration-dependent scattering properties of retinal pigment epithelium,” J. Biomed. Opt. 24(06), 1 (2019). [CrossRef]  

16. A. Wartak, M. Augustin, R. Haindl, F. Beer, M. Salas, M. Laslandes, B. Baumann, M. Pircher, and C. K. Hitzenberger, “Multi-directional optical coherence tomography for retinal imaging,” Biomed. Opt. Express 8(12), 5560–5578 (2017). [CrossRef]  

17. R. M. Werkmeister, N. Dragostinoff, M. Pircher, E. Götzinger, C. K. Hitzenberger, R. A. Leitgeb, and L. Schmetterer, “Bidirectional Doppler Fourier-domain optical coherence tomography for measurement of absolute flow velocities in human retinal vessels,” Opt. Lett. 33(24), 2967–2969 (2008). [CrossRef]  

18. R. Haindl, W. Trasischker, A. Wartak, B. Baumann, M. Pircher, and C. K. Hitzenberger, “Total retinal blood flow measurement by three beam Doppler optical coherence tomography,” Biomed. Opt. Express 7(2), 287–301 (2016). [CrossRef]  

19. B. Wang, B. Yin, J. Dwelle, H. G. Rylander, M. K. Markey, and T. E. Milner, “Path-length-multiplexed scattering-angle-diverse optical coherence tomography for retinal imaging,” Opt. Lett. 38(21), 4374–4377 (2013). [CrossRef]  

20. M. R. Gardner, N. Katta, A. S. Rahman, H. G. Rylander, and T. E. Milner, “Design considerations for murine retinal imaging using scattering angle resolved optical coherence tomography,” Appl. Sci. 8(11), 2159 (2018). [CrossRef]  

21. M. R. Gardner, V. Baruah, G. Vargas, M. Motamedi, T. E. Milner, I. Rylander, and G. Henry, “Scattering angle resolved optical coherence tomography detects early changes in 3xTg Alzheimer’s disease mouse model,” Trans. Vis. Sci. Tech. 9(5), 18 (2020). [CrossRef]  

22. Y.-C. Ahn, W. Jung, and Z. Chen, “Quantification of a three-dimensional velocity vector using spectral-domain Doppler optical coherence tomography,” Opt. Lett. 32(11), 1587–1589 (2007). [CrossRef]  

23. P. Eugui, A. Lichtenegger, M. Augustin, D. J. Harper, M. Muck, T. Roetzer, A. Wartak, T. Konegger, G. Widhalm, C. K. Hitzenberger, A. Woehrer, and B. Baumann, “Beyond backscattering: optical neuroimaging by BRAD,” Biomed. Opt. Express 9(6), 2476–2494 (2018). [CrossRef]  

24. H. Spahr, C. Pfäffle, P. Koch, H. Sudkamp, G. Hüttmann, and D. Hillmann, “Interferometric detection of 3D motion using computational subapertures in optical coherence tomography,” Opt. Express 26(15), 18803–18816 (2018). [CrossRef]  

25. R. S. Jonnal, “CIAO: Community Inspired Adaptive Optics v1.0,” Zenodo, https://doi.org/10.5281/ZENODO.3903941.

26. S. Qaysi, D. Valente, and B. Vohnsen, “Differential detection of retinal directionality,” Biomed. Opt. Express 9(12), 6318–6330 (2018). [CrossRef]  

27. W. S. Stiles, B. H. Crawford, and J. H. Parsons, “The luminous efficiency of rays entering the eye pupil at different points,” Proc. R. Soc. Lond. B. 112(778), 428–450 (1933). [CrossRef]  

28. G. Westheimer, “Directional sensitivity of the retina: 75 years of Stiles-Crawford effect,” Proc. R. Soc. B. 275(1653), 2777–2786 (2008). [CrossRef]  

29. G. Van Blokland, “Directionality and alignment of the foveal receptors, assessed with light scattered from the human fundus in vivo,” Vision Res. 26(3), 495–500 (1986). [CrossRef]  

30. S. A. Burns, S. Wu, F. Delori, and A. E. Elsner, “Direct measurement of human-cone-photoreceptor alignment,” J. Opt. Soc. Am. A 12(10), 2329–2338 (1995). [CrossRef]  

31. A. W. Snyder and C. Pask, “The Stiles-Crawford effect—explanation and consequences,” Vision Res. 13(6), 1115–1137 (1973). [CrossRef]  

32. B. Vohnsen, “Directional sensitivity of the retina: A layered scattering model of outer-segment photoreceptor pigments,” Biomed. Opt. Express 5(5), 1569–1587 (2014). [CrossRef]  

33. B. Vohnsen, A. Carmichael, N. Sharmin, S. Qaysi, and D. Valente, “Volumetric integration model of the Stiles-Crawford effect of the first kind and its experimental verification,” J. Vis. 17(12), 18 (2017). [CrossRef]  

34. V. C. Smith, J. Pokorny, and K. R. Diddie, “Color matching and the Stiles-Crawford effect in observers with early age-related macular changes,” J. Opt. Soc. Am. A 5(12), 2113–2121 (1988). [CrossRef]  

35. M. J. Kanis, R. P. L. Wisse, T. T. J. M. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone-photoreceptor integrity in aging macula disorder,” Invest. Ophthalmol. Vis. Sci. 49(5), 2077–2081 (2008). [CrossRef]  

36. P. J. DeLint and T. T. Berendschot, “A comparison of the optical Stiles-Crawford effect and retinal densitometry in a clinical setting,” Invest. Ophthalmol. Visual Sci. 39(8), 1519–1523 (1998).

37. N. P. Zagers, M. C. Pot, and D. van Norren, “Spectral and directional reflectance of the fovea in diabetes mellitus: Photoreceptor integrity, macular pigment and lens,” Vision Res. 45(13), 1745–1753 (2005). [CrossRef]  

38. S. S. Choi, R. J. Zawadzki, J. L. Keltner, and J. S. Werner, “Changes in cellular structures revealed by ultra-high resolution retinal imaging in optic neuropathies,” Invest. Ophthalmol. Visual Sci. 49(5), 2103–2119 (2008). [CrossRef]  

39. J.-M. Gorrand and F. Delori, “A reflectometric technique for assessing photoreceptor alignment,” Vision Res. 35(7), 999–1010 (1995). [CrossRef]  

40. S. Marcos, S. A. Burns, and J. C. He, “Model for cone directionality reflectometric measurements based on scattering,” J. Opt. Soc. Am. A 15(8), 2012–2022 (1998). [CrossRef]  

41. D. Rativa and B. Vohnsen, “Analysis of individual cone-photoreceptor directionality using scanning laser ophthalmoscopy,” Biomed. Opt. Express 2(6), 1423–1431 (2011). [CrossRef]  

42. ANSI Z136.1, American National Standard for Safe Use of Lasers (Laser Institute of America, Orlando, USA, 2014).

43. S. Makita, Y. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno, “Optical coherence angiography,” Opt. Express 14(17), 7821–7840 (2006). [CrossRef]  

44. J. W. Pyhtila, J. D. Boyer, K. J. Chalut, and A. Wax, “Fourier-domain angle-resolved low coherence interferometry through an endoscopic fiber bundle for light-scattering spectroscopy,” Opt. Lett. 31(6), 772–774 (2006). [CrossRef]  

45. J. Bischoff, J. W. Baumgart, H. Truckenbrodt, and J. J. Bauer, “Photoresist metrology based on light scattering,” Proc. SPIE 2725, 678–689 (1996). [CrossRef]  

46. R. Hocken, N. Chakraborty, and C. Brown, “Optical metrology of surfaces,” CIRP Ann. 54(2), 169–183 (2005). [CrossRef]  

47. Z. A. Steelman, D. S. Ho, K. K. Chu, and A. Wax, “Light-scattering methods for tissue diagnosis,” Optica 6(4), 479–489 (2019). [CrossRef]  

48. M. Arnfield, J. Tulip, and M. McPhee, “Optical propagation in tissue with anisotropic scattering,” IEEE Trans. Biomed. Eng. 35(5), 372–381 (1988). [CrossRef]  

49. J. R. Mourant, J. P. Freyer, A. H. Hielscher, A. A. Eick, D. Shen, and T. M. Johnson, “Mechanisms of light scattering from biological cells relevant to noninvasive optical-tissue diagnostics,” Appl. Opt. 37(16), 3586–3593 (1998). [CrossRef]  

50. T. Y. P. Chui, M. Dubow, A. Pinhas, N. Shah, A. Gan, R. Weitz, Y. N. Sulai, A. Dubra, and R. B. Rosen, “Comparison of adaptive optics scanning light ophthalmoscopic fluorescein angiography and offset pinhole imaging,” Biomed. Opt. Express 5(4), 1173–1189 (2014). [CrossRef]  

51. D. Scoles, Y. N. Sulai, C. S. Langlo, G. A. Fishman, C. A. Curcio, J. Carroll, and A. Dubra, “In vivo imaging of human cone photoreceptor inner segments,” Invest. Ophthalmol. Visual Sci. 55(7), 4244–4251 (2014). [CrossRef]  

52. E. A. Rossi, C. E. Granger, R. Sharma, Q. Yang, K. Saito, C. Schwarz, S. Walters, K. Nozato, J. Zhang, T. Kawakami, W. Fischer, L. R. Latchney, J. J. Hunter, M. M. Chung, and D. R. Williams, “Imaging individual neurons in the retinal ganglion cell layer of the living eye,” Proc. Natl. Acad. Sci. 114(3), 586–591 (2017). [CrossRef]  

53. M. Chlebiej, I. Gorczynska, A. Rutkowski, J. Kluczewski, T. Grzona, E. Pijewska, B. L. Sikorski, A. Szkulmowska, and M. Szkulmowski, “Quality improvement of OCT angiograms with elliptical directional filtering,” Biomed. Opt. Express 10(2), 1013–1031 (2019). [CrossRef]  

54. R. S. Jonnal, O. P. Kocaoglu, R. J. Zawadzki, Z. Liu, D. T. Miller, and J. S. Werner, “A Review of Adaptive Optics Optical Coherence Tomography: Technical Advances, Scientific Applications, and the Future,” Invest. Ophthalmol. Visual Sci. 57(9), OCT51–OCT68 (2016). [CrossRef]  

55. M. Pircher and R. J. Zawadzki, “Review of Adaptive Optics OCT (AO-OCT): Principles and Applications for Retinal Imaging [Invited],” Biomed. Opt. Express 8(5), 2536–2562 (2017). [CrossRef]  

56. J. F. de Boer, C. K. Hitzenberger, and Y. Yasuno, “Polarization sensitive optical coherence tomography – a review [invited],” Biomed. Opt. Express 8(3), 1838–1873 (2017). [CrossRef]  

57. D. Zhu, J. Wang, M. Marjanovic, E. J. Chaney, K. A. Cradock, A. M. Higham, Z. G. Liu, Z. Gao, and S. A. Boppart, “Differentiation of breast tissue types for surgical margin assessment using machine learning and polarization-sensitive optical coherence tomography,” Biomed. Opt. Express 12(5), 3021–3036 (2021). [CrossRef]  

58. U. Morgner, W. Drexler, F. X. Kärtner, X. D. Li, C. Pitris, E. P. Ippen, and J. G. Fujimoto, “Spectroscopic optical coherence tomography,” Opt. Lett. 25(2), 111–113 (2000). [CrossRef]  

59. R. Leitgeb, M. Wojtkowski, A. Kowalczyk, C. K. Hitzenberger, M. Sticker, and A. F. Fercher, “Spectral measurement of absorption by spectroscopic frequency-domain optical coherence tomography,” Opt. Lett. 25(11), 820–822 (2000). [CrossRef]  




Figures (9)

Fig. 1. (a) Schematic of the carrier and path-length multiplexed directional full-field OCT (CM-D-FF-OCT) (not to scale). The dashed red box highlights the beamsplitter used to generate the two carrier fringes, as well as the optical windows used to separate the left-right (L-R) and top-bottom (T-B) pupil halves. BS: beamsplitter; DBS: dichroic beamsplitters; PBS: polarizing beamsplitter; HWP: half wave plate; QWP: quarter wave plate; DM: deformable mirror; SHWS: Shack-Hartmann wavefront sensor. (b) Fourier transform of the intensity interference pattern on the camera for a single frame, illuminated by a narrow 0.15 nm portion of the light source’s spectral sweep. The two channels, represented in blue and orange, have $k_{\parallel }$ in opposing directions in $x$, and interfere with the reference beam with $k_{\parallel }$ in $y$. This scheme results in independent orthogonal spatial modulation of each of the two OCT signals in the detection plane of the CMOS camera. By numerically filtering in the Fourier domain, carrier multiplexing allows complete separation of signals from channels 1 and 2. The edges of the semi-circular optical windows are visible in the images. (c) Imaging a model eye reveals two volumetric images, formed by light refracted (blue) and reflected (orange) at the BS, respectively, and extracted after carrier demodulation; representative B-scans extracted from the volumes are presented to visualize the axial separation due to the path-length multiplexing.
Fig. 2. PSF for different pupil functions calculated using the Zemax eye retinal image model and a pupil size of 6.75 mm. The splitting of the signal by the PME into two solid angles causes each of the multiplexed images to be generated by a semicircular pupil, which elongates the PSF and reduces the lateral resolution of the system; (left) full pupil; (center) semi-circular aperture oriented horizontally; (right) semi-circular aperture oriented vertically.
Fig. 3. The signal measured at the camera sensor plane is the coherent sum of three waves, two originating from the object and one originating from the reference mirror. The central k-vectors of the object fields for the two channels are designated $k_1$ and $k_2$, and the wave vector of the reference beam is designated $k_{\textrm {ref}}$. The shaded blue and orange cones represent the wave vector bandwidth, limited by the system NA. None of the waves propagates in a direction normal to the sensor surface; thus each has a component perpendicular to the sensor, designated with $\bot$, and a component parallel to the sensor, designated $\parallel$. The two waves described by vectors $k_1$ and $k_2$ interfere with the reference wave, generating two carrier modulations with different orientations.
Fig. 4. (a) After the sample beam is split, the average $x$ direction components of the resulting wave vectors have opposite directions (blue and orange solid arrows). These interfere with the reference beam, whose $k_\parallel$ lies in $y$ (gray arrow). The relative angles of the sample and reference arms are represented by dashed arrows. (b) The off-axis interference of the sample and reference beams creates lateral modulations with orthogonal orientations; the carrier frequency of the $n^{\textrm {th}}$ sample beam is given by $\sqrt {k_{\parallel ,n}^2+k_{\parallel ,\textrm {ref}}^2}$. Choice of orientation and frequency should be informed by the size of the sensor and density of its pixels. Notwithstanding these constraints, carrier modulations for an arbitrary number of channels may be realized. (c) The intensity interference pattern in $x$ and $y$ shifts the signals from each portion of the beam and its complex conjugate term (dashed ellipses) into different quadrants in the Fourier space. By spatially filtering the 2D Fourier transform to only one ellipse in each camera frame, the information encoded in each sample beam is completely decoupled, creating two independent channels. (d) The image from one channel is left-right reversed with respect to the other due to its reflection in the beamsplitter.
Fig. 5. Directional light scattering captured with a microlens array in the back focal plane of a 30 mm lens. (a) $\tilde {I}_T - \tilde {I}_B$; (b) $\tilde {I}_R - \tilde {I}_L$; (c) Superimposed differential images. Color identification shows local pointing direction: up (yellow), down (blue), left (green), right (red). The inset shows a microscopy image of the microlens array (courtesy of Thorlabs, New Jersey, USA), with arrows superimposed to show the expected pointing direction of the backscattered light.
Fig. 6. Three-dimensional projection view of a pair of crossed strands of human hair imaged through a single CM channel (channel 1). The PME creates two copies of the image, separated by ${510}\;\mu \textrm {m}$, formed by light passing through the top and bottom halves of the pupil. Although directional differences are inconspicuous in this view, it illustrates the principle of path-length multiplexing. The image generated from CM channel 2 contains two copies of the image as well, originating in light passing through the left and right halves of that channel’s pupil plane.
Fig. 7. (a) En face OCT projection of the strands of hair in each subset of solid angles and their algebraic sum; (b) Directional differential images are computed through linear combinations of the four directional images. The operation used to generate each is displayed on the image. Directional differences are evident in the differential images, as well as in the quiver plot at the center.
Fig. 8. (a) En face OCT projections of $I_T$, $I_B$, $I_L$ and $I_R$ of the adaxial epidermis of a yellow onion and the algebraic sum of the images in the center; (b) Differential images. The blue arrows show a portion of the cell wall pointing down and left, while the green arrow shows another portion pointing up and right.
Fig. 9. (a) En face projections through the cones for each solid angle (top, bottom, left and right) and the corresponding algebraic sum of them in the center; differences in image quality among the four channels may be due to PSF elongation combined with a non-centered Stiles-Crawford peak. (b) The differential images show the orientation of different photoreceptors. The blue arrows show a photoreceptor pointing left and up, the green arrow shows a photoreceptor pointing right and up, while the magenta arrow shows a photoreceptor pointing down with no significant angle in the $x$ direction.

Equations (5)


$$I(x,y,k) \propto \left| R(x,y,k)+O_1(x,y,k)+O_2(x,y,k) \right|^2 = |R(x,y,k)|^2 + |O_1(x,y,k)|^2 + |O_2(x,y,k)|^2 + RO_1^{*}(x,y,k) + RO_2^{*}(x,y,k) + O_1R^{*}(x,y,k) + O_2R^{*}(x,y,k) + O_1O_2^{*}(x,y,k) + O_2O_1^{*}(x,y,k).$$
$$R = R_0\, e^{i \mathbf{k}_{\textrm{ref}} \cdot \mathbf{r}} = R_0\, e^{i k_{\parallel,\textrm{ref}}\, y}\, e^{i k_{\perp,\textrm{ref}}\, z}\,\Big|_{z=z_0}$$
$$O_n = O_{0,n}(x,y)\, e^{i \mathbf{k}_n \cdot \mathbf{r}} = O_{0,n}(x,y)\, e^{i k_{\parallel,n}\, x}\, e^{i k_{\perp,n}\, z}\,\Big|_{z=z_0},$$
$$\mathcal{F}\!\left(RO_n^{*}\right)\Big|_{z=z_0} = R_0\, e^{i(k_{\perp,n}-k_{\perp,\textrm{ref}})z}\, \mathcal{F}_{xy}\!\left(O_{0,n}(x,y)\right) \otimes \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} e^{i(k_{\parallel,n}-k_x)x}\, e^{i(k_{\parallel,\textrm{ref}}-k_y)y}\, dx\, dy\,\Big|_{z=z_0} = R_0\, e^{i(k_{\perp,n}-k_{\perp,\textrm{ref}})z}\, \mathcal{F}_{xy}\!\left(O_{0,n}(x,y)\right) \otimes \delta(k_{\parallel,n}-k_x)\,\delta(k_{\parallel,\textrm{ref}}-k_y)\Big|_{z=z_0},$$
$$\Delta_{x,mn} = \frac{\tilde{I}_{R,mn}-\tilde{I}_{L,mn}}{\tilde{I}_{R,mn}+\tilde{I}_{L,mn}} \quad \textrm{and} \quad \Delta_{y,mn} = \frac{\tilde{I}_{T,mn}-\tilde{I}_{B,mn}}{\tilde{I}_{T,mn}+\tilde{I}_{B,mn}},$$
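
In code, the normalized differential metric is a one-line operation per axis (the small eps guard against empty pixels is our addition, not part of the published definition):

```python
import numpy as np

def differential_maps(I_T, I_B, I_L, I_R, eps=1e-12):
    # Normalized directional differences: +1 means all light in the
    # R (or T) half-pupil image, -1 means all light in L (or B).
    dx = (I_R - I_L) / (I_R + I_L + eps)
    dy = (I_T - I_B) / (I_T + I_B + eps)
    return dx, dy
```

The resulting per-pixel (dx, dy) pairs are what feed the quiver plots shown in Figs. 7–9.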