Optica Publishing Group

Efficient framework of solving time-gated reflection matrix for imaging through turbid medium

Open Access

Abstract

Imaging through turbid media is a long-standing pursuit in many research fields, such as biomedicine, astronomy, and autonomous vehicles, in which reflection matrix-based methods are a promising solution. However, the epi-detection geometry suffers from round-trip distortion, and in non-ideal cases it is challenging to isolate the input and output aberrations due to system imperfections and measurement noise. Here, we present an efficient framework based on single scattering accumulation together with phase unwrapping that can accurately separate input and output aberrations from a noise-affected reflection matrix. We propose to correct only the output aberration while suppressing the input aberration by incoherent averaging. The proposed method converges faster and is more robust against noise, avoiding precise and tedious system adjustments. In both simulations and experiments, we demonstrate diffraction-limited resolution at optical thicknesses beyond 10 scattering mean free paths, showing the potential for applications in neuroscience and dermatology.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In various fields, from life science to industrial inspection, optical methods are favored as they offer high resolution with low cost, fast speed, and safe usage. However, these methods face great challenges when imaging at depth, as light is scattered multiple times by spatial variations of the refractive index of the medium. For example, if we intend to image deep tissues directly in their original surroundings, rather than as pathological tissue slices, the detected signals become blurred and faint owing to multiple-scattering events, and a sharp image can no longer be obtained.

To deal with such problems, researchers proposed adaptive optics techniques [1] that originated from astronomical observation [2–4], in which astronomers compensated for atmospheric turbulence to obtain clearer images of stars and satellites. Adaptive optics uses a wavefront sensor to measure the distortion of the incoming wavefront, then physically counteracts the distortion with a phase modulator, such as a deformable mirror (DM) or spatial light modulator (SLM). This technique was subsequently proven useful in many microscopy scenarios, such as wide-field microscopy [5–7], confocal microscopy [8–11], multi-photon microscopy [12–18], and optical coherence tomography [19,20]. In these techniques, the key step is accurate wavefront measurement. Direct wavefront measurement with a dedicated wavefront sensor is time-saving but structurally complex [16,21], whereas indirect wavefront measurement without a wavefront sensor, at the cost of computational iterations, is easier to implement in other systems [22,23].

So far, most adaptive optics techniques are limited to low-order aberration correction. Fortunately, recent work found that an almost random wavefront could still be corrected to recover a sharp focus by iteratively finding the corresponding correction pattern [24]. In the field of view (FOV) determined by the memory effect range [25,26], once the focus is formed, we can scan the focus by tilting the incident light and obtain a clear image [27]. More efficient compensation methods without iterations use optical phase conjugation [28] and the transmission matrix [29,30] to determine the correction pattern, or speckle correlography [31,32] to directly recover the object. However, all these methods require access to both sides of the scattering medium, which is impractical in most real-world imaging scenarios. The desired setting for imaging in deep tissue is the epi-detection mode [33–35], in which the light source and the sensor are placed on the same side of the sample.

Contrary to the transmission matrix, reflection matrix measurement is an epi-detection method, which has already been proposed for light transmission enhancement [36,37], selective focusing [38–40], and diffraction-limited imaging [41–45]. With coherence gating techniques [14,46,47], we can directly measure the wavefront from a specific target plane deep inside the turbid medium and build the reflection matrix. For reconstruction, Singular Value Decomposition (SVD) based reflection matrix analysis, originally designed for time-reversal acoustics [48,49], can immediately separate object and aberration after the matrix decomposition, achieving selective focusing on different areas of the object [37,39] or recovering the whole object image [45]. However, SVD-based methods rely on the sparsity of the object to achieve correct focusing or imaging. When imaging continuous objects, the deduced singular vectors are influenced by both the input aberration and the output aberration, which need explicit separation to achieve better recovery.

The CLASS (Closed-Loop Accumulation of Single Scattering) algorithm is another effective way to recover the object from the reflection matrix [42,44]. It applies coherent summation to the reflection matrix to obtain the spatial spectrum of the object, then compares the spectrum with the rows and columns of the reflection matrix to determine the input and output aberrations separately. However, various system imperfections such as inaccurate alignments or uncontrolled shot-to-shot noise will violate the CLASS model [44], resulting in a blurred reconstruction. Here, we present a modified CLASS algorithm for reflection matrix-based imaging that can correct high-order aberrations and facilitate imaging in deep turbid tissue. First, we provide an efficient framework based on single scattering accumulation and phase unwrapping to separate the input and output aberrations from the measured noisy reflection matrix. Then, we propose to correct only the output aberration while suppressing the input aberration by incoherent averaging to enhance the SNR (signal-to-noise ratio) of the final image. We compare our method with state-of-the-art reflection matrix-based imaging methods [44,50] through simulations and experiments, demonstrating that our proposed algorithm converges faster and is more robust in reconstruction.

2. Materials and methods

2.1 Principle

In a coherent laser scanning epi-detection imaging scenario, after passing through several relay lenses and the objective lens, light penetrates the turbid medium and focuses on the target plane. Then it is backscattered from that plane and traverses through the medium again, forming a degraded 2D image of the illuminated part of the object on the camera. Without the turbid medium, this image is solely a bright focal point in the center. After scanning, we can recover the object by simply stitching these points. However, the inhomogeneity of the refractive index of the sample causes aberration and multiple scattering that could severely distort the wavefront of received light, forming random patterns in the center of the image.

The multiple-scattering events introduce speckles that follow statistical distributions [51], which add random and complex noise background to the image and lower the SNR of the image. The multiple-scattering noise can be reduced by coherence gating which eliminates noise signals arriving at different flight times from the desired signals, or by confocal gating with a pinhole placed in front of the camera. Further noise suppression methods include applying coherent summation in the far field [41,52] or multiplying a filter matrix [39].

Besides, the system-induced and sample-induced aberrations are also non-trivial, and have concerned specialists in the adaptive optics domain for decades. Aberration extends the point spread function (PSF) from a diffraction-limited spot to a more complex pattern. In coherence-gated imaging scenarios, as we can directly measure the complex field originating from the gated plane (object plane) by digital holography, we now consider the time-gated coherent spread function (CSF, also termed E-field PSF) defined as $P(r)$. The output complex field $E(r_{out};r_{in})$ from the coherence-gated object plane can be formulated as:

$$E(r_{out};r_{in})=\int P_{out}(r_{out},r)O(r)P_{in}(r,r_{in})d^2 r+N_m(r_{out};r_{in})$$

Here, $r_{in}$ describes the spatial coordinates of the input plane that the laser intends to focus on, $r_{out}$ describes the spatial coordinates of the output plane that is conjugate to the image sensor. $r$ describes the spatial coordinates of the plane where the object lies and the object is termed by $O(r)$. If the object is placed in focus, these coordinates are the same. $P_{in}$ and $P_{out}$ are CSFs describing the aberrations originating from the input (illumination) optical path and the output (detection) optical path. $E(r_{out};r_{in})$ is the measured complex field after digital holography. $N_m(r_{out};r_{in})$ is aggregated noise background including speckle noise, measurement noise, and system noise.

The CSFs are approximately shift-invariant in a region called "isoplanatic patch (IP)" [53]. IP can be defined by the region where a single correction is valid in adaptive optics or shift memory effect range in wavefront-shaping [54]. Therefore, in each IP, we can approximate Eq. (1) as:

$$\begin{aligned} E(r_{out};r_{in}) & =\int P_{out}(r_{out}-r)O(r)P_{in}(r-r_{in})d^2 r+N_m(r_{out};r_{in})\\ & = (P_{out}\ast O_{sca})(r_{out})+N_m(r_{out};r_{in}) \end{aligned}$$

where $O_{sca}(r;r_{in})=O(r) \cdot P_{in}(r-r_{in})$ describes the object illuminated by the degraded scan point.

We rewrite Eq. (2) in a 2D matrix representation, where the row and column coordinates are $r_{out}$ and $r_{in}$, respectively. We now term $E$ the reflection matrix $R$, so that $R=P_oOP_i+N_m$, where $O$ is a diagonal matrix representing the reflectivity of the object. Without sample inhomogeneity, $R$ is a diagonal matrix, and the optical coherence microscopy (OCM) [55] image can be directly obtained by selecting the diagonal elements of the reflection matrix. However, aberration and scattering spread the diagonal elements to off-diagonal positions, and $R$ becomes a banded matrix, degrading the resolved details of the OCM image. To address this, consider an IP region where $P_{in}(r,r_{in})$ and $P_{out}(r_{out},r)$ are shift-invariant: $P_i$ and $P_o$ are then Toeplitz matrices, and their Fourier transforms $\widetilde {P}_i(u)$ and $\widetilde {P}_o(u)$ are diagonal matrices, which greatly simplifies the calculation. The Fourier transform of the reflection matrix can be easily calculated using the discrete Fourier matrix $\mathcal {F}$ as $\widetilde {P}=\mathcal {F}P\mathcal {F}^{-1}$. Therefore, in the Fourier basis representation, we have:

$$\widetilde{R}=\widetilde{P}_o\widetilde{O}\widetilde{P}_i+\widetilde{N}_m$$
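This diagonalization can be checked numerically. The sketch below uses a hypothetical shift-invariant CSF represented by a circulant matrix (a Toeplitz matrix with periodic boundaries, which the DFT diagonalizes exactly):

```python
import numpy as np
from scipy.linalg import circulant, dft

N = 8
rng = np.random.default_rng(0)
# hypothetical shift-invariant CSF: a circulant stand-in for the Toeplitz P
p = rng.standard_normal(N) + 1j * rng.standard_normal(N)
P = circulant(p)

F = dft(N) / np.sqrt(N)        # unitary discrete Fourier matrix
F_inv = F.conj().T             # its inverse
P_tilde = F @ P @ F_inv        # Fourier-basis representation of P

# all off-diagonal entries vanish up to round-off
off_diag = P_tilde - np.diag(np.diag(P_tilde))
print(np.max(np.abs(off_diag)))  # prints a value near machine precision
```

A true Toeplitz (non-periodic) CSF is only approximately diagonalized, which is consistent with treating the approximation as valid within an isoplanatic patch.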

To separate the spectrum of the object $\widetilde {O}$ from the Fourier basis reflection matrix $\widetilde {R}$, various methods have been proposed. As $\widetilde {P}_i$ and $\widetilde {P}_o$ are diagonal matrices, together they contain only $2N$ unknowns, where $N$ is the number of free modes in the scan area, so they can be inferred by iterative methods [24,42,43], which are computationally slow. A faster estimation method has also been proposed [44]. However, as the time-gated complex field $E(r_{out};r_{in})$ is measured by interferometry, which is highly sensitive to the optical path length of the sample arm, inaccuracies in system adjustment, together with mechanical and environmental instability, introduce shot-to-shot phase noise to the measurement, especially in a scanning system. Mathematically, in a simplified scenario, the shot-to-shot noise is equivalent to multiplying each column of $R$ by a random $e^{i\phi _i}$; $\widetilde {P}_i(u)$ is then no longer a diagonal matrix, resulting in inaccurate estimation of the input and output aberrations $P_{in}$ and $P_{out}$.
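This column-wise noise model is easy to state in code (a minimal sketch; the matrix size and noise level are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
R = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# one random phase per scan position, i.e. per column of R
phi = rng.normal(0.0, 0.5, N)
R_noisy = R * np.exp(1j * phi)[None, :]

# the magnitudes are untouched; only the column phases are scrambled
print(np.allclose(np.abs(R_noisy), np.abs(R)))  # True
```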

To solve the problem, we propose an effective modification to the CLASS algorithm [44] that calculates the aberrations more precisely from $\widetilde {R}$. As seen in Eq. (2), the output signal $E(r_{out};r_{in})$ contains the aberrations from both the input and output paths. In a linear scalar scattering system, the reciprocity relation holds [56]. As our system uses an epi-detection geometry, the output aberration matrix $P_o$ is theoretically identical to the transpose of the input aberration matrix $P_i$. Thus, their Fourier transforms $\widetilde {P}_i(u)$ and $\widetilde {P}_o(u)$ are nearly the same, with $\widetilde {P}_o(u)\approx \widetilde {P}^T_i(u)=\widetilde {P}_i(u)$. The approximation, rather than strict equality, is due to the slight mismatch between the illumination and detection optics.

In fact, during each iteration of the CLASS algorithm, the estimated input and output aberrations are each approximately twice the actual aberration. To clarify, we consider the reconstruction process of the CLASS algorithm. First, it estimates the spectrum of the object from the Fourier basis reflection matrix $\widetilde {R}$ by coherent summation with respect to $\Delta v$, which was originally proposed as the CASS (Collective Accumulation of Single Scattering) algorithm [41]:

$$\begin{aligned} \widetilde{O}_{\mathrm{CASS}}(\Delta v) & =\sum_{v_{in}}\widetilde{R}(v_{in}+\Delta v,v_{in})\\ & =\widetilde{O}(\Delta v)\cdot \sum_{v_{in}}\widetilde{P}_{out}(v_{in}+\Delta v)\widetilde{P}_{in}(v_{in})+\sum_{v_{in}}\widetilde{N}_m(v_{in}+\Delta v;v_{in}) \end{aligned}$$
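In matrix form, this accumulation sums the $\Delta v$-th diagonal of $\widetilde{R}$. A minimal numpy check with hypothetical pupil phases and object spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12
P_out = np.exp(1j * rng.uniform(-np.pi, np.pi, N))   # diagonal of P~_out
P_in = np.exp(1j * rng.uniform(-np.pi, np.pi, N))    # diagonal of P~_in
o_spec = rng.standard_normal(2 * N - 1) + 1j * rng.standard_normal(2 * N - 1)

# Fourier-basis reflection matrix (noise-free): R[i, j] = P_out[i] * O(i - j) * P_in[j]
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
R = P_out[:, None] * o_spec[i - j + N - 1] * P_in[None, :]

# CASS accumulation: sum the dv-th diagonal of R
dv = 3
o_cass = sum(R[k + dv, k] for k in range(N - dv))

# equals O(dv) times the pupil cross-correlation at lag dv
expected = o_spec[dv + N - 1] * sum(P_out[k + dv] * P_in[k] for k in range(N - dv))
print(np.allclose(o_cass, expected))  # True
```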

This calculation enhances the intensity of the single-scattering term over the multiple-scattering noise. Omitting the noise term, the estimated spectrum is the ideal object spectrum multiplied by the autocorrelation of the aberration $P$. Furthermore, for a highly turbid medium, the aberration is complex and its autocorrelation approaches a delta function $\delta (\Delta v)$, so the estimated object spectrum becomes:

$$\widetilde{O}_{\mathrm{CASS}}(\Delta v) \approx \widetilde{O}(\Delta v)\delta(\Delta v)$$

The second step is the recovery of the aberration from the reflection matrix using the above spectrum. We expand the input aberration calculation proposed by Yoon and colleagues [44] to clarify what this equation estimates; the output aberration calculation is similar.

$$\begin{aligned} \phi_i^n(v_{in}) & =\mathrm{arg}\{\sum_{\Delta v}\widetilde{R}(v_{in}+\Delta v;v_{in})^*\cdot \widetilde{O}_{\mathrm{CASS}}(\Delta v)\}\\ & \approx \mathrm{arg}\{\sum_{\Delta v}\widetilde{P}_{out}^*(v_{in}+\Delta v)\widetilde{O}^*(\Delta v)\widetilde{P}_{in}^*(v_{in}) \cdot\widetilde{O}(\Delta v)\delta(\Delta v)\}\\ & =\mathrm{arg}\{\widetilde{P}_{in}^*(v_{in})\sum_{\Delta v}\widetilde{P}_{out}^*(v_{in}+\Delta v)\delta(\Delta v)\}\\ & =\mathrm{arg}\{\widetilde{P}_{in}^*(v_{in})\widetilde{P}_{out}^*(v_{in})\}\\ & =\mathrm{arg}\{\widetilde{P}_{in}^*(v_{in})\}+\mathrm{arg}\{\widetilde{P}_{out}^*(v_{in})\} \end{aligned}$$

Therefore, we apply a fast phase-unwrapping algorithm [57], divide the unwrapped phase by two, and rewrap it to obtain the desired aberration for compensation. We term this the Divide-by-two strategy. So far, we have not yet considered shot-to-shot noise, so even in well-adjusted systems, this strategy accelerates convergence and yields more accurate results, as it provides a better initialization in the first few iterations.
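A 1D numpy sketch of the Divide-by-two strategy, with a hypothetical smooth aberration; the paper uses a fast 2D unwrapping algorithm [57], for which `np.unwrap` stands in here. Note the recovered phase is defined only up to a global constant (possibly $\pi$), which is harmless for compensation:

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
phi = 6 * x**2 + 3 * np.sin(4 * x)            # hypothetical single-pass aberration phase

# the estimate carries the sum of input and output phases, i.e. ~2*phi, wrapped
measured = np.angle(np.exp(1j * 2 * phi))

unwrapped = np.unwrap(measured)               # 1D stand-in for the 2D unwrapping in [57]
compensation = np.angle(np.exp(1j * unwrapped / 2.0))   # divide by two, rewrap

# check: compensation matches phi up to a global phase constant
coherence = np.abs(np.mean(np.exp(1j * (compensation - phi))))
print(coherence)  # ~1.0
```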

In an imperfect, noisy system where the input aberration cannot be well resolved, we estimate and correct only the output aberration to avoid input phase-noise perturbations. This provides more robust results at the price of incomplete correction. As shown in Eq. (2), after correction of the output aberration, the residual 2D image $O_{sca}$ (omitting the noise term $N_m$) is the product of $O(r)$ and $P_{in}(r-r_{in})$. The framework is presented in Fig. 1. Although the original CLASS algorithm can provide a more confined PSF after the correction, the whole image is equivalently convolved with the inverse input PSF $P_{in}^{-1}$. For a highly scattered PSF, we can approximate $P_{in}^{-1}$ by the complex conjugate of $P_{in}$, i.e. $P_{in}^{-1}\sim P_{in}^{\dagger }$ [31,32,58], and find that it re-blurs the result.


Fig. 1. The proposed output aberration correction framework for the laser scanning epi-detection system.


From the corrected 2D image, we extract the center point $r=r_{in}$ and obtain the object image $I$ after the scan as $I(r_{in})=O(r_{in})P_{in}(0)$, which is also the OCM image deduced from the corrected reflection matrix. After the correction, as we coherently gather part of the non-confocal signals (off-diagonal elements) into the confocal detection position (diagonal elements), the OCM image becomes sharper and clearer. However, the unwanted term $P_{in}(0)$ decreases the SNR of the images. Therefore, rather than extracting only the center point, we can further enhance the SNR by incoherently averaging the periphery of the center point [45], controlled by the hyper-parameter $\sigma$:

$$I(r)=\sum_{r_{in}} O(r)P_{in}(r-r_{in})e^{-(r-r_{in})^2/2\sigma^2}$$

In noisy systems, a few iterations of the modified CLASS algorithm followed by this post-processing step are sufficient to obtain a good result. More iterations of aberration correction would produce the same blurred result as the original CLASS algorithm.
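A sketch of this post-processing step, with a hypothetical function and array layout (each corrected scan is a 1D image focused at pixel $r_{in}$; intensities are accumulated with Gaussian weights, in the spirit of Eq. (7)):

```python
import numpy as np

def incoherent_average(E_cor, sigma):
    """Gaussian-weighted incoherent accumulation around each confocal point.

    E_cor : (n_scan, n_pix) complex array, scan k nominally focused at pixel k.
    Returns the post-processed intensity image I(r).
    """
    n_scan, n_pix = E_cor.shape
    r = np.arange(n_pix)
    I = np.zeros(n_pix)
    for k in range(n_scan):                      # k plays the role of r_in
        w = np.exp(-((r - k) ** 2) / (2 * sigma**2))
        I += w * np.abs(E_cor[k]) ** 2           # intensities add incoherently
    return I

# sanity check: with an ideal (delta-like) input PSF, the result reduces to |O|^2
rng = np.random.default_rng(3)
O = rng.standard_normal(32) + 1j * rng.standard_normal(32)
E_ideal = np.diag(O)                             # E_cor[k, r] = O(r) only at r = k
I = incoherent_average(E_ideal, sigma=2.0)
print(np.allclose(I, np.abs(O) ** 2))  # True
```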

2.2 Experimental setup

Figure 2 shows the schematic layout of our time-gated laser scanning imaging system, similar to [44]. We use a super-luminescent diode laser as the light source (cBLMD-S-341, Superlum, center wavelength: 810 nm, bandwidth: 27 nm). After the condenser, the light enters a spatial filter and is then separated by a polarizing beam splitter (PBS) into the sample arm and the reference arm. A linear polarizer (LPVIS100, Thorlabs) controls the intensity ratio between the two arms. In the sample arm, the plane wave is scanned by a 2D Galvo mirror (GVS012, Thorlabs) and impinges on the back focal plane of the objective (N20X-PF, Nikon, 20$\times$, NA=0.5) through two 4-f systems. The objective focuses light on the target plane and collects the backscattered signals, which then travel back and enter the camera (Zyla-4.2-plus, Andor) with a magnification of 40$\times$. In the reference arm, two 4-f systems identical to those in the sample arm are deployed to match the optical path length. The residual optical path length difference is adjusted by a motorized stage (MTS25/M-Z8, Thorlabs) on which a silver mirror (MR4) is mounted. The light reflected from the mirror travels along a different path via a beam splitter (BS2) and enters the diffraction grating (DG). The first-order diffraction is selected by an iris diaphragm. Using a diffraction grating, rather than tilting the mirror, ensures interference of the low-coherence light throughout the whole image plane [59]. The sample beam and reference beam interfere on the camera sensor plane, forming an off-axis interferogram. The interferogram is Fourier transformed, the AC component is selected, and an inverse Fourier transform generates the complex field of the target area. This 2D complex image is then reshaped into a vector and fills one column of the reflection matrix $R$. After scanning the whole FOV, we obtain the complete reflection matrix.
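The off-axis demodulation step can be sketched as follows (the carrier position and window size here are illustrative assumptions; in the real system they are set by the grating period and the pupil NA):

```python
import numpy as np

def demodulate(interferogram, carrier, half):
    """Select the AC sideband of an off-axis interferogram, return the complex field.

    carrier : (row, col) position of the chosen sideband in the fftshifted spectrum.
    half    : half-size of the square crop window (must isolate the sideband).
    """
    spec = np.fft.fftshift(np.fft.fft2(interferogram))
    cy, cx = carrier
    ac = spec[cy - half:cy + half, cx - half:cx + half]
    # inverse transform of the cropped sideband gives a (downsampled) complex field
    return np.fft.ifft2(np.fft.ifftshift(ac))

# synthetic check: known smooth field plus a tilted reference beam
N, f, half = 64, 16, 8
y, x = np.mgrid[0:N, 0:N]
E = np.exp(1j * (0.5 * np.sin(2 * np.pi * x / N) + 0.3 * np.cos(2 * np.pi * y / N)))
ref = np.exp(1j * 2 * np.pi * f * x / N)
I = np.abs(E + ref) ** 2

field = demodulate(I, carrier=(N // 2, N // 2 - f), half=half)  # E*conj(ref) sideband
E_ds = E[::N // (2 * half), ::N // (2 * half)]                  # reference: downsampled E
corr = np.abs(np.vdot(field, E_ds)) / (np.linalg.norm(field) * np.linalg.norm(E_ds))
print(corr)  # close to 1
```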

 figure: Fig. 2.

Fig. 2. Schematic layout of our coherence-gated laser scanning imaging system. SLD: super-luminescent diode, C: condenser, PL: polarizer, BS: beam splitter, PBS: polarizing beam splitter, MR: mirror, GM: 2D Galvo mirror, DG: diffraction grating, f: focal length of lens in mm.

Download Full Size | PDF

The exposure time is set to 0.001 s for the defocus imaging experiment and 0.05 s for the scattering imaging experiment, with a frame rate of about 20 Hz. As the diffraction limit of the system is d=$\lambda$/2NA=0.81 µm, and the magnification of the system is 40$\times$ with a pixel size of 6.5 µm, we downsample the captured image by 5 to achieve an output spatial step of 6.5/40$\times$5=0.8125 µm. The input sampling is set equal to the output spatial step when building the reflection matrix by controlling the scan angle of the Galvo mirror. The number of vectors contained in the matrix is set to 61, resulting in a FOV of 50 µm$\times$50 µm. Thus, the acquisition time is more than 5 minutes for a 2D scan in the scattering imaging experiment. Once the $61^2\times 61^2$ reflection matrix is obtained, the processing time is around tens of seconds.
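The sampling choices above can be double-checked with a few lines (all values taken from the text):

```python
wavelength_um = 0.81          # center wavelength, µm
NA = 0.5
d = wavelength_um / (2 * NA)  # diffraction limit: 0.81 µm

pixel_um = 6.5                # camera pixel size, µm
mag = 40
downsample = 5
step = pixel_um / mag * downsample   # output spatial step: 0.8125 µm

n = 61                        # scan positions per axis
fov = n * step                # ~49.6 µm, i.e. the quoted 50 µm x 50 µm FOV
print(d, step, fov)
```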

3. Experimental results

3.1 Simulation

We first investigate the robustness and convergence of the proposed algorithm by simulation. We consider various aberrations, using different orders of Zernike polynomials with Poisson noise and scattering-induced speckle noise. The target object for simulation is a binary USAF image, of which Elements 5 and 6 of Group 7 are selected (Fig. 3(a)). We first simulate the diffraction-limited case of the target as the ground truth, using the same physical parameters as the experimental setup in Fig. 2. The scale bar indicates the intensity of the image; diffraction causes roughly a halving of the intensity in the OCM image, with non-uniformity across edges (Fig. 3(b)).


Fig. 3. Simulation of reflection matrix-based laser scanning imaging with various aberrations and noises. a). Raw image for simulation. b). Simulated diffraction-limited image with a uniform circular pupil; only Poisson noise is considered. c). Simulated OCM image obtained from the diagonal elements of the uncorrected reflection matrix. The simulated pupil using Zernike modes 1-10 and the corresponding PSF are displayed on the left. d). Reconstruction result using the CLASS algorithm [44]. Although the estimated input (top left) and output (bottom left) aberrations differ from the simulated aberrations, their shapes are complementary, so they can still provide a relatively good recovery. e). Reconstruction result using the proposed algorithm. The estimated input and output aberrations are closer to the ground-truth simulated aberrations owing to better initialization with the Divide-by-two strategy. f). The maximum of the cross-correlation between the recovered image and the diffraction-limited reference image versus iterations, arbitrary units. The proposed algorithm shows faster convergence with more accurate recovery. g,h,i,j). Same as c-f, except that higher Zernike orders (1-25) are applied and the magnitudes of the defocus and spherical terms are increased to 10 and 5, simulating the aberrations most commonly encountered in biophotonics imaging [60]. As the aberration becomes more severe, the improvement in convergence is more apparent. k,l,m,n). Same as c-f, except that a random phase term is uniformly added to the pupil in each scan and the magnitude of the defocus term is increased to 5. A simple Divide-by-two strategy is sufficient to provide a correct result, demonstrating its robustness against mild random phase noise. o,p,q). Same as k-m, except that the standard deviation of the random phase noise is multiplied by four, and we add a random phase background to the pupil, simulating speckle noise. The reflection matrix is so severely corrupted that we correct only the output aberration with one iteration, using the Divide-by-two strategy. r). The final reconstruction result after post-processing using Eq. (7).


We then generate aberrations using low Zernike orders (1-10), with magnitudes sampled from the standard normal distribution. We find that the aberration causes a large drop in intensity (Fig. 3(c)), and after correction, both methods recover the diffraction-limited result with an enhancement of image intensity. The proposed method is more accurate in the estimation of aberration (Fig. 3(d),e), with a 3$\times$ speed-up in convergence (Fig. 3(f)). As the result of the CLASS algorithm contains a shift error, we compare reconstruction quality by calculating the maximum cross-correlation with the diffraction-limited image (Fig. 3(b)).
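This shift-tolerant quality metric can be implemented as an FFT-based normalized cross-correlation (a sketch; the normalization choice is ours):

```python
import numpy as np

def max_xcorr(a, b):
    """Peak of the normalized circular cross-correlation between two images.

    Insensitive to a global translation of b relative to a."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    xc = np.fft.ifft2(np.fft.fft2(a0) * np.conj(np.fft.fft2(b0))).real
    return xc.max() / (np.linalg.norm(a0) * np.linalg.norm(b0))

# a shifted copy scores ~1; an unrelated image scores much lower
rng = np.random.default_rng(4)
img = rng.random((32, 32))
shifted = np.roll(img, (3, 5), axis=(0, 1))
print(max_xcorr(img, shifted))   # ~1.0
```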

For higher-order aberrations, we apply Zernike orders up to 25; the magnitudes are also sampled from the standard normal distribution, except that the defocus and spherical aberration terms are set to 10 and 5 (Fig. 3(g)), as they are more likely to occur in biophotonics imaging [60]. As both the complexity and magnitude of the aberration increase, the proposed method achieves a speed-up of 5$\times$ (Fig. 3(j)) with more accurate aberration estimation compared to the original method (Fig. 3(h),i).

Phase error is nontrivial in a focused laser scanning system, so we add to the pupil in each scan a uniform phase noise sampled from a normal distribution with a standard deviation of 0.5. The aberration is restricted to 10 Zernike orders, but the magnitude of the defocus term is increased to 5. The phase error corrupts the reflection matrix, resulting in a null estimate of the input pupil aberration and an overestimate of the output pupil aberration (Fig. 3(l)). In this case, only the proposed method works (Fig. 3(m)), while the CLASS algorithm (Fig. 3(l)) cannot separate the input and output aberrations. The Divide-by-two strategy is more robust against the phase noise, distinguishing the input and output aberrations well enough to provide a correct result, and the algorithm still converges (Fig. 3(n)) after just 3 iterations.

Finally, building on the previous case, we add a speckle noise map to the pupil, in which each pixel is generated from the uniform distribution between $-\pi$ and $\pi$ and smoothed by a Gaussian kernel with a standard deviation of 0.45 pixels, simulating the multiple-scattering process (Fig. 3(o)). Additionally, the standard deviation of the random phase noise is set to 2. In this case, we can no longer estimate the input aberration from the reflection matrix. Thus, we run only one iteration with the Divide-by-two strategy and obtain the intermediate result (Fig. 3(q)). More iterations would attribute the input aberration to the output path, causing a blurred result similar to Fig. 3(p). However, the result is still noisy, as only the output aberration is roughly compensated. To compensate for the input distortion, the post-processing step using Eq. (7) provides a high-contrast, diffraction-limited result (Fig. 3(r)).

3.2 Experiment

We then test the proposed method using real samples. We first investigate an imaging scenario that contains only aberrations. We place a USAF resolution test target (R1DS1N, Group 7 with Elements 5 and 6, Thorlabs) as the sample. We add defocus aberration by shifting the mounting stage 15 µm away from the focal plane. The diffraction of light introduces an extended illumination area (Fig. 4(a)). From the interferogram, we deduce the amplitude and phase images (Fig. 4(b)), forming a complex image. The irregular shape of the pupil is due to slight clipping by the 2D Galvo mirror. The exact shape of the pupil is calculated by averaging the phase images, and we filter out the noise outside the pupil. As the Galvo mirror descans the backscattered signals, we shift the complex images in different scans to their corresponding positions (Fig. 4(c)). After that, we reshape the complex image into a vector and fill one column of the reflection matrix. After scanning the whole field of view, we obtain the complete reflection matrix $R$ (Fig. 4(d)) and transform it into the Fourier basis reflection matrix $\widetilde {R}$ for easier identification of the input aberration $\widetilde {P}_i(u)$ and output aberration $\widetilde {P}_o(u)$ using the CLASS algorithm (Fig. 4(e)). We unwrap the output aberration (Fig. 4(f)), divide it by two, and rewrap it (Fig. 4(g)), to separate the input and output aberrations. We apply phase conjugation to the output aberration and multiply it back into the Fourier basis reflection matrix $\widetilde {R}$, forming the corrected $\widetilde {R}_{cor}$, which is then basis-transformed to $R_{cor}$. We compare the OCM image directly inferred from the diagonal of the reflection matrix before (Fig. 4(h)) and after correction (Fig. 4(l)), and the spread width of the reflection matrix (Fig. 4(d),k), demonstrating that the output aberration has been clearly eliminated. The reconstructed image shows diffraction-limited resolution. Consistent with the simulation, compensation with the original input and output aberrations from the CLASS algorithm results in a higher peak intensity of the image (Fig. 4(i)) and a more confined reflection matrix (Fig. 4(j)), but at the price of a re-blurred result. As the input aberration is entangled with random phase noise, even after iterations, the estimated aberration cannot converge to the correct result.


Fig. 4. Details of the reconstruction with strong defocus aberration. a). The captured interferogram in each scan. Scale bar, 20 µm. b). The amplitude and phase images deduced from the interferogram; the defocus aberration is apparent in the phase image. Scale bar, 10 µm and 0.2 NA. c). The complex images are shifted to their corresponding positions for coordinate transformation. Scale bar, 10 µm. d). The measured reflection matrix after scanning. e). Input and output aberrations calculated from the reflection matrix using the CLASS algorithm in the first iteration. f). Unwrapped output aberration. g). Rewrapped output aberration from half of f. h). OCM image obtained from the diagonal elements of the uncorrected reflection matrix in d. Scale bar, 10 µm. i). The corrected OCM image using the CLASS algorithm. j). The corrected reflection matrix using the CLASS algorithm [44]. k). The corrected reflection matrix using the proposed algorithm. l). The corrected OCM image using the proposed algorithm. m). The final reconstruction after post-processing.


Then, we investigate the imaging ability through the turbid medium. We place a 1.5-mm-thick phantom above the USAF target. The phantom is made by mixing 110 µL of 2.5% polystyrene bead solution (1 µm bead size) with 1 mL of 1.5% agarose (Fig. 5(c)). The optical thickness is measured using a method similar to [39]. The laser used for the measurement is a 785 nm coherent laser with a maximum power of 80 mW (MSL-III-785L, CNI Laser), slightly different from the light source used in the experiment. The measured power is 54 mW without the scattering phantom and 1.71 µW on average with the phantom. Therefore, the optical thickness is approximately 10.5 scattering mean free paths.
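The quoted optical thickness follows from the Beer–Lambert attenuation of the ballistic beam; with the measured powers, this simple estimate lands close to the quoted ~10.5 (the small difference presumably reflects corrections, e.g. interface losses, not modeled here):

```python
import math

P0 = 54e-3    # power without the phantom, W
P = 1.71e-6   # mean power with the phantom, W

# ballistic attenuation: P = P0 * exp(-L / l_s), so L / l_s = ln(P0 / P)
optical_thickness = math.log(P0 / P)
print(optical_thickness)  # ~10.4
```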


Fig. 5. Imaging through the turbid medium. a). The wide-field imaging geometry with incoherent light illuminating from the bottom up. b). In the bright-field image of the USAF target (Group 7 with Elements 5 and 6), the structure is clearly visible without the turbid medium. c). The turbid medium used in the experiment (left) and a $10^{\circ }$ diffuser (10DKIT-C1, Newport) for comparison. d). When the target is covered with the 1.5-mm-thick phantom, the bright-field image is totally corrupted. e). The laser scanning epi-detection geometry. f,g). Typical amplitude and phase images in each scan. Scale bar, 10 µm and 0.2 NA. h). The OCM image deduced from the reflection matrix. Scale bar, 10 µm. i). The measured reflection matrix. j,k). The estimated input and output aberrations using the CLASS algorithm. l). The OCM image deduced from the corrected reflection matrix using the CLASS algorithm. m). The corrected reflection matrix using the CLASS algorithm. n). The estimated output aberration using the proposed method. As we run only one iteration, the aberration is slightly underestimated. o). The OCM image deduced from the corrected reflection matrix using the proposed algorithm. p). The post-processing result from the CLASS algorithm. q). The final result from the proposed algorithm. r). The corrected reflection matrix using the proposed algorithm.


Without the phantom, when we illuminate the object from the bottom up (Fig. 5(a)), the structure is clearly visible in the wide-field image (Fig. 5(b)). When the target is covered by the phantom, the severe deterioration caused by multiple scattering and aberration hides all useful information about the target (Fig. 5(d)). With the focused laser scanning geometry (Fig. 5(e)), which concentrates more energy so that light can penetrate deeper, a faint and corrupted focus pattern appears in the center of the image. We display both amplitude and phase images of a typical scan (Fig. 5(f),(g)). The multiple scattering events produce random phase maps in the pupil plane, severely degrading the focus and lowering the image contrast. The reflection matrix is thus a banded matrix with a strong background (Fig. 5(i)), and the OCM image deduced from it is severely corrupted (Fig. 5(h)). Owing to shot-to-shot phase noise, the input and output aberrations deduced by the CLASS algorithm are still erroneous (Fig. 5(j),(k)). Since we correct only the output aberration, and the phase unwrapping algorithm is also affected by multiple-scattering noise, the contrast of the OCM image after a few iterations of correction (Fig. 5(o)) is not yet as clear as the CLASS result (Fig. 5(l)); the same holds for the corrected reflection matrices (Fig. 5(m),(r)). However, because the proposed method cleanly separates the input and output aberrations, after the post-processing step that compensates for the incomplete correction using Eq. (7), the final result (Fig. 5(q)) is clearer and sharper than that of the CLASS algorithm both with (Fig. 5(p)) and without (Fig. 5(l)) post-processing.
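For reference, the OCM image discussed above is read off the confocal (diagonal) elements of the position-space reflection matrix, where the output position coincides with the input scan position. A toy sketch of that extraction, with a hypothetical n×n scan grid and a randomly generated matrix standing in for the measurement:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32                                   # hypothetical n x n grid of scan positions
N = n * n
# Toy reflection matrix: a bright confocal diagonal over a weak
# multiple-scattering background (stand-in for the measured banded matrix).
R = 0.1 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
object_amp = rng.uniform(0.5, 1.0, N)    # stand-in object reflectivity
R[np.arange(N), np.arange(N)] += object_amp

# OCM image: magnitude of the diagonal (output position == input position),
# reshaped back onto the 2D scan grid.
ocm = np.abs(np.diag(R)).reshape(n, n)
assert ocm.shape == (n, n)
```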

4. Conclusion

We present an epi-detection laser scanning method for imaging through a turbid medium based on the reflection matrix, together with a fast and efficient reconstruction algorithm that eases the implementation of coherence-gated laser scanning microscopy systems. The proposed method is validated in both simulations and real experiments, showing superior reconstruction speed and robustness against various noise sources. The demonstration is performed under extreme scattering conditions, with an optical thickness beyond 10 scattering mean free paths, while offering diffraction-limited resolution. We believe such techniques can lower the barrier to building coherence-gated systems and will thus stimulate interest in microscopy under highly scattering conditions.

The system is compatible with scanning-based microscopy such as confocal and multi-photon imaging. Instead of measuring a small area defined by the pinhole in front of the photomultiplier tube (PMT), we measure the entire backscattered field, from which the aberration can be estimated directly by digital holography. This enables us to evaluate high-order aberrations that are beyond the reach of many traditional adaptive optics methods, and thus to image deeper. The whole procedure is label-free, avoiding problems such as precise labeling and photo-bleaching, and showing potential for applications in neuroscience and dermatology. For scenarios that require specific labeling to distinguish different tissues or organelles, the proposed method can serve as a pre-processing step that estimates the aberrations and thereby enhances the fluorescence image quality.

Our method is currently limited by the frame rate of the camera. Because the camera requires more photons than a PMT to reach a reasonable signal-to-noise ratio, the imaging speed is inferior to that of traditional scanning-based systems, which hinders dynamic imaging. To address this, one could use a high-speed camera or a multi-focus or line-scan system with a high-power light source to accelerate acquisition, and leverage computational techniques such as compressive imaging or AI-based imaging methods [58,61] to increase the overall throughput.

Funding

National Natural Science Foundation of China (61620106005, 61827804, 62131011).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. J. Booth, “Adaptive optics in microscopy,” Philos. Trans. R. Soc., A 365(1861), 2829–2843 (2007). [CrossRef]  

2. R. Q. Fugate, D. L. Fried, G. A. Ameer, B. Boeke, S. Browne, P. H. Roberts, R. Ruane, G. A. Tyler, and L. Wopat, “Measurement of atmospheric wavefront distortion using scattered light from a laser guide-star,” Nature 353(6340), 144–146 (1991). [CrossRef]  

3. J. W. Hardy, Adaptive optics for astronomical telescopes (Oxford University, 1998).

4. R. K. Tyson and B. W. Frazier, Principles of adaptive optics (CRC Press, 2022).

5. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef]  

6. D. Débarre, M. J. Booth, and T. Wilson, “Image based adaptive optics through optimisation of low spatial frequencies,” Opt. Express 15(13), 8176–8190 (2007). [CrossRef]  

7. Z. Kam, P. Kner, D. Agard, and J. W. Sedat, “Modelling the application of adaptive optics to wide-field microscope live imaging,” J. Microsc. 226(1), 33–42 (2007). [CrossRef]  

8. M. J. Booth, M. A. Neil, and T. Wilson, “Aberration correction for confocal imaging in refractive-index-mismatched media,” J. Microsc. 192(2), 90–98 (1998). [CrossRef]  

9. A. Roorda, F. Romero-Borja, W. J. Donnelly III, H. Queener, T. J. Hebert, and M. C. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002). [CrossRef]  

10. M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. 99(9), 5788–5792 (2002). [CrossRef]  

11. X. Tao, B. Fernandez, O. Azucena, M. Fu, D. Garcia, Y. Zuo, D. C. Chen, and J. Kubby, “Adaptive optics confocal microscopy using direct wavefront sensing,” Opt. Lett. 36(7), 1062–1064 (2011). [CrossRef]  

12. O. Albert, L. Sherman, G. Mourou, T. Norris, and G. Vdovin, “Smart microscope: an adaptive optics learning system for aberration correction in multiphoton confocal microscopy,” Opt. Lett. 25(1), 52–54 (2000). [CrossRef]  

13. L. Sherman, J. Y. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002). [CrossRef]  

14. M. Rueckel, J. A. Mack-Bucher, and W. Denk, “Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing,” Proc. Natl. Acad. Sci. 103(46), 17137–17142 (2006). [CrossRef]  

15. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010). [CrossRef]  

16. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014). [CrossRef]  

17. J.-H. Park, L. Kong, Y. Zhou, and M. Cui, “Large-field-of-view imaging by multi-pupil adaptive optics,” Nat. Methods 14(6), 581–583 (2017). [CrossRef]  

18. W. Zheng, Y. Wu, P. Winter, R. Fischer, D. Dalle Nogare, A. Hong, C. McCormick, R. Christensen, W. P. Dempsey, D. B. Arnold, J. Zimmerberg, A. Chitnis, J. Sellers, C. Waterman, and H. Shroff, “Adaptive optics improves multiphoton super-resolution imaging,” Nat. Methods 14(9), 869–872 (2017). [CrossRef]  

19. B. Hermann, E. Fernández, A. Unterhuber, H. Sattmann, A. Fercher, W. Drexler, P. Prieto, and P. Artal, “Adaptive-optics ultrahigh-resolution optical coherence tomography,” Opt. Lett. 29(18), 2142–2144 (2004). [CrossRef]  

20. L. Ginner, A. Kumar, D. Fechtig, L. M. Wurster, M. Salas, M. Pircher, and R. A. Leitgeb, “Noniterative digital aberration correction for cellular resolution retinal optical coherence tomography in vivo,” Optica 4(8), 924–931 (2017). [CrossRef]  

21. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). [CrossRef]  

22. X. Hao, E. S. Allgeyer, D.-R. Lee, J. Antonello, K. Watters, J. A. Gerdes, L. K. Schroeder, F. Bottanelli, J. Zhao, P. Kidd, M. D. Lessard, J. E. Rothman, L. Cooley, T. Biederer, M. J. Booth, and J. Bewersdorf, “Three-dimensional adaptive optical nanoscopy for thick specimen imaging at sub-50-nm resolution,” Nat. Methods 18(6), 688–693 (2021). [CrossRef]  

23. J. Wu, Z. Lu, D. Jiang, et al., “Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3d subcellular dynamics at millisecond scale,” Cell 184(12), 3318–3332.e17 (2021). [CrossRef]  

24. I. M. Vellekoop and A. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef]  

25. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61(7), 834–837 (1988). [CrossRef]  

26. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]  

27. I. M. Vellekoop and C. M. Aegerter, “Scattered light fluorescence microscopy: imaging through turbid layers,” Opt. Lett. 35(8), 1245–1247 (2010). [CrossRef]  

28. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2(2), 110–115 (2008). [CrossRef]  

29. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1(1), 81 (2010). [CrossRef]  

30. Y. Choi, T. D. Yang, C. Fang-Yen, P. Kang, K. J. Lee, R. R. Dasari, M. S. Feld, and W. Choi, “Overcoming the diffraction limit using multiple light scattering in a highly disordered medium,” Phys. Rev. Lett. 107(2), 023902 (2011). [CrossRef]  

31. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

32. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

33. X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photonics 5(3), 154–157 (2011). [CrossRef]  

34. J.-H. Park, W. Sun, and M. Cui, “High-resolution in vivo imaging of mouse brain through the intact skull,” Proc. Natl. Acad. Sci. 112(30), 9236–9241 (2015). [CrossRef]  

35. I. N. Papadopoulos, J.-S. Jouhanneau, J. F. Poulet, and B. Judkewitz, “Scattering compensation by focus scanning holographic aberration probing (f-sharp),” Nat. Photonics 11(2), 116–123 (2017). [CrossRef]  

36. S. Jeong, Y.-R. Lee, W. Choi, S. Kang, J. H. Hong, J.-S. Park, Y.-S. Lim, H.-G. Park, and W. Choi, “Focusing of light energy inside a scattering medium by controlling the time-gated multiple light scattering,” Nat. Photonics 12(5), 277–283 (2018). [CrossRef]  

37. J. Cao, Q. Yang, Y. Miao, Y. Li, S. Qiu, Z. Zhu, P. Wang, and Z. Chen, “Enhance the delivery of light energy ultra-deep into turbid medium by controlling multiple scattering photons to travel in open channels,” Light: Sci. Appl. 11(1), 108 (2022). [CrossRef]  

38. S. M. Popoff, A. Aubry, G. Lerosey, M. Fink, A.-C. Boccara, and S. Gigan, “Exploiting the time-reversal operator for adaptive optics, selective focusing, and scattering pattern analysis,” Phys. Rev. Lett. 107(26), 263901 (2011). [CrossRef]  

39. A. Badon, D. Li, G. Lerosey, A. C. Boccara, M. Fink, and A. Aubry, “Smart optical coherence tomography for ultra-deep imaging through highly scattering media,” Sci. Adv. 2(11), e1600370 (2016). [CrossRef]  

40. Q. Yang, Y. Miao, T. Huo, Y. Li, E. Heidari, J. Zhu, and Z. Chen, “Deep imaging in highly scattering media by combining reflection matrix measurement with bessel-like beam based optical coherence tomography,” Appl. Phys. Lett. 113(1), 011106 (2018). [CrossRef]  

41. S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J.-S. Lee, Y.-S. Lim, Q.-H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics 9(4), 253–258 (2015). [CrossRef]  

42. S. Kang, P. Kang, S. Jeong, Y. Kwon, T. D. Yang, J. H. Hong, M. Kim, K.-D. Song, J. H. Park, J. H. Lee, M. J. Kim, K. H. Kim, and W. Choi, “High-resolution adaptive optical imaging within thick scattering media using closed-loop accumulation of single scattering,” Nat. Commun. 8(1), 2157 (2017). [CrossRef]  

43. M. Kim, Y. Jo, J. H. Hong, S. Kim, S. Yoon, K.-D. Song, S. Kang, B. Lee, G. H. Kim, H.-C. Park, and W. Choi, “Label-free neuroimaging in vivo using synchronous angular scanning microscopy with single-scattering accumulation algorithm,” Nat. Commun. 10, 1–9 (2019). [CrossRef]  

44. S. Yoon, H. Lee, J. H. Hong, Y.-S. Lim, and W. Choi, “Laser scanning reflection-matrix microscopy for aberration-free imaging through intact mouse skull,” Nat. Commun. 11(1), 5721 (2020). [CrossRef]  

45. A. Badon, V. Barolle, K. Irsch, A. C. Boccara, M. Fink, and A. Aubry, “Distortion matrix concept for deep optical imaging in scattering media,” Sci. Adv. 6(30), eaay7170 (2020). [CrossRef]  

46. M. Feierabend, M. Rückel, and W. Denk, “Coherence-gated wave-front sensing in strongly scattering samples,” Opt. Lett. 29(19), 2255–2257 (2004). [CrossRef]  

47. R. Fiolka, K. Si, and M. Cui, “Complex wavefront corrections for deep tissue focusing using low coherence backscattered light,” Opt. Express 20(15), 16532–16543 (2012). [CrossRef]  

48. C. Prada and M. Fink, “Eigenmodes of the time reversal operator: A solution to selective focusing in multiple-target media,” Wave Motion 20(2), 151–163 (1994). [CrossRef]  

49. A. Aubry and A. Derode, “Random matrix theory applied to acoustic backscattering and imaging in complex media,” Phys. Rev. Lett. 102(8), 084301 (2009). [CrossRef]  

50. Y. Jo, Y.-R. Lee, J. H. Hong, D.-Y. Kim, J. Kwon, M. Choi, M. Kim, and W. Choi, “Through-skull brain imaging in vivo at visible wavelengths via dimensionality reduction adaptive-optical microscopy,” Sci. Adv. 8(30), eabo4366 (2022). [CrossRef]  

51. J. W. Goodman, Statistical optics (John Wiley & Sons, 2015).

52. P. Kang, S. Kang, Y. Jo, H. Ko, G. Kim, Y.-R. Lee, and W. Choi, “Optical transfer function of time-gated coherent imaging in the presence of a scattering medium,” Opt. Express 29(3), 3395–3405 (2021). [CrossRef]  

53. D. L. Fried, “Anisoplanatism in adaptive optics,” J. Opt. Soc. Am. A 72(1), 52–61 (1982). [CrossRef]  

54. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–689 (2015). [CrossRef]  

55. Y. Chen, S.-W. Huang, C. Zhou, B. Potsaid, and J. G. Fujimoto, “Improved detection sensitivity of line-scanning optical coherence microscopy,” IEEE J. Sel. Top. Quantum Electron. 18(3), 1094–1099 (2011). [CrossRef]  

56. M. Born and E. Wolf, Principles of optics: electromagnetic theory of propagation, interference and diffraction of light (Elsevier, 2013).

57. M. A. Herráez, D. R. Burton, M. J. Lalor, and M. A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path,” Appl. Opt. 41(35), 7437–7444 (2002). [CrossRef]  

58. H. Lee, S. Yoon, P. Loohuis, J. H. Hong, S. Kang, and W. Choi, “High-throughput volumetric adaptive optical imaging using compressed time-reversal matrix,” Light: Sci. Appl. 11(1), 16 (2022). [CrossRef]  

59. Y. Choi, T. D. Yang, K. J. Lee, and W. Choi, “Full-field and single-shot quantitative phase microscopy using dynamic speckle illumination,” Opt. Lett. 36(13), 2465–2467 (2011). [CrossRef]  

60. M. Booth and T. Wilson, “Strategies for the compensation of specimen-induced spherical aberration in confocal microscopy of skin,” J. Microsc. 200(1), 68–74 (2000). [CrossRef]  

61. M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in International Conference on Image Processing, (IEEE, 2006), pp. 1273–1276.




Figures (5)

Fig. 1.
Fig. 1. The proposed output aberration correction framework for laser scanning epi-detection system.
Fig. 2.
Fig. 2. Schematic layout of our coherence-gated laser scanning imaging system. SLD: super-luminescent diode, C: condenser, PL: polarizer, BS: beam splitter, PBS: polarizing beam splitter, MR: mirror, GM: 2D Galvo mirror, DG: diffraction grating, f: focal length of lens in mm.
Fig. 3.
Fig. 3. Simulation of reflection matrix-based laser scanning imaging with various aberrations and noises. a). Raw image for simulation. b). Simulated diffraction-limited image with a uniform circle-shaped pupil; only Poisson noise is considered. c). Simulated OCM image obtained from the diagonal elements of the uncorrected reflection matrix. The simulated pupil using Zernike modes 1-10 and the corresponding PSF are displayed on the left. d). Reconstruction result using the CLASS algorithm [44]. Although the estimated input (top left) and output (bottom left) aberrations differ from the simulated aberrations, their shapes are complementary, so they can still provide a relatively good recovery. e). Reconstruction result using the proposed algorithm. The estimated input and output aberrations are closer to the ground-truth simulated aberrations owing to better initialization with the Divide-by-two strategy. f). The maximum of the cross-correlation between the recovered image and the diffraction-limited reference image over iterations, arbitrary units. The proposed algorithm shows faster convergence with more accurate recovery. g,h,i,j). Same as c-f, except that higher Zernike orders (1-25) are applied, and the magnitudes of the defocus and spherical terms are further increased to 10 and 5, respectively, simulating the most common aberrations in biophotonics imaging scenarios [60]. As the aberration becomes more severe, the improvement in convergence is more apparent. k,l,m,n). Same as c-f, except that a random phase term is uniformly added to the pupil in each scan and the magnitude of the defocus term is increased to 5. A simple Divide-by-two strategy is sufficient to provide a correct result, demonstrating its robustness against mild random phase noise. o,p,q). Same as k-m, except that the standard deviation of the random phase noise is multiplied by four, and a random phase background is added to the pupil, simulating speckle noise. The reflection matrix is so seriously corrupted that we correct only the output aberration, with one iteration, using the Divide-by-two strategy. r). The final reconstruction result after post-processing using Eq. (7).
Fig. 4.
Fig. 4. Details of the reconstruction with strong defocus aberration. a). The captured interferogram in each scan. Scale bar, 20 µm. b). The amplitude and phase images deduced from the interferogram; the defocus aberration is apparent in the phase image. Scale bar, 10 µm and 0.2 NA. c). The complex images are shifted to the corresponding positions for coordinate transformation. Scale bar, 10 µm. d). The measured reflection matrix after scanning. e). Input and output aberrations calculated from the reflection matrix using the CLASS algorithm in the first iteration. f). Unwrapped output aberration. g). Rewrapped output aberration from half of f. h). OCM image obtained from the diagonal elements of the uncorrected reflection matrix in d. Scale bar, 10 µm. i). The corrected OCM image using the CLASS algorithm. j). The corrected reflection matrix using the CLASS algorithm [44]. k). The corrected reflection matrix using the proposed algorithm. l). The corrected OCM image using the proposed algorithm. m). The final reconstruction after post-processing.
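The Divide-by-two strategy referenced in the captions (unwrap the estimated round-trip phase, halve it, and rewrap to obtain a one-way aberration estimate) can be sketched in one dimension. This is a toy illustration under the assumption that the round-trip phase is approximately twice a smooth one-way aberration; all names are hypothetical:

```python
import numpy as np

# Hypothetical smooth one-way aberration profile along one pupil axis.
true_phase = np.linspace(0, 6 * np.pi, 200)

# Epi-detection sees roughly the round-trip (doubled) phase, wrapped to (-pi, pi].
round_trip = np.angle(np.exp(1j * 2 * true_phase))

unwrapped = np.unwrap(round_trip)        # remove the 2*pi discontinuities
halved = unwrapped / 2                   # divide by two: one-way estimate
rewrapped = np.angle(np.exp(1j * halved))  # rewrap for the correction mask

assert np.allclose(rewrapped, np.angle(np.exp(1j * true_phase)))
```

In the experiment a 2D quality-guided unwrapping algorithm [57] plays the role of `np.unwrap`, and residual noise in the unwrapping is what the post-processing step later compensates.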

Equations (7)

$$E(r_{out};r_{in})=\int P_{out}(r_{out},r)\,O(r)\,P_{in}(r,r_{in})\,d^2r+N_m(r_{out};r_{in})\tag{1}$$

$$E(r_{out};r_{in})=\int P_{out}(r_{out}-r)\,O(r)\,P_{in}(r-r_{in})\,d^2r+N_m(r_{out};r_{in})=(P_{out}\ast O_{sca})(r_{out})+N_m(r_{out};r_{in})\tag{2}$$

$$\tilde{R}=\tilde{P}_o\,\tilde{O}\,\tilde{P}_i+\tilde{N}_m\tag{3}$$

$$\tilde{O}_{CASS}(\Delta v)=\sum_{v_{in}}\tilde{R}(v_{out},v_{in})=\tilde{O}(\Delta v)\sum_{v_{in}}\tilde{P}_{out}(v_{in}+\Delta v)\,\tilde{P}_{in}(v_{in})+\sum_{v_{in}}\tilde{N}_m(v_{out};v_{in})\tag{4}$$

$$\tilde{O}_{CASS}(\Delta v)\approx\tilde{O}(\Delta v)\,\delta(\Delta v)\tag{5}$$

$$\begin{aligned}\phi_{in}(v_{in})&=\arg\Big\{\sum_{\Delta v}\tilde{R}(v_{in}+\Delta v;v_{in})\,\tilde{O}_{CASS}^{*}(\Delta v)\Big\}\approx\arg\Big\{\sum_{\Delta v}\tilde{P}_{out}(v_{in}+\Delta v)\,\tilde{O}(\Delta v)\,\tilde{P}_{in}(v_{in})\,\tilde{O}^{*}(\Delta v)\,\delta(\Delta v)\Big\}\\&=\arg\Big\{\tilde{P}_{in}(v_{in})\sum_{\Delta v}\tilde{P}_{out}(v_{in}+\Delta v)\,\delta(\Delta v)\Big\}=\arg\{\tilde{P}_{in}(v_{in})\,\tilde{P}_{out}(v_{in})\}=\arg\{\tilde{P}_{in}(v_{in})\}+\arg\{\tilde{P}_{out}(v_{in})\}\end{aligned}\tag{6}$$

$$I(r)=\sum_{r_{in}}O(r)\,P_{in}(r-r_{in})\,e^{-(r-r_{in})^2/2\sigma^2}\tag{7}$$
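The aberration-extraction step of Eqs. (4)-(6) can be checked on a toy 1D model of Eq. (3), where the reflection matrix in the spatial-frequency domain factors into output pupil, object spectrum, and input pupil. All array names are hypothetical, and only the $\Delta v=0$ term of the correlation survives (the delta in Eq. (5)), so the sum collapses onto the matrix diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# Random phase-only pupils (toy stand-ins for the input/output aberrations).
phi_i = rng.uniform(-1, 1, N)
phi_o = rng.uniform(-1, 1, N)
P_i = np.exp(1j * phi_i)
P_o = np.exp(1j * phi_o)
# Object spectrum over Delta_v = v_out - v_in, indexed 0 .. 2N-2.
O_t = rng.normal(size=2 * N - 1) + 1j * rng.normal(size=2 * N - 1)

# Eq. (3): R~(v_out, v_in) = P~_o(v_out) O~(v_out - v_in) P~_i(v_in).
vo, vi = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
R = P_o[vo] * O_t[vo - vi + N - 1] * P_i[vi]

# Eq. (6) with the delta of Eq. (5): only v_out == v_in contributes, so
# correlating with the conjugated object spectrum leaves arg{P~_i P~_o}.
phase_sum = np.angle(np.diag(R) * np.conj(O_t[N - 1]))
expected = np.angle(np.exp(1j * (phi_i + phi_o)))
assert np.allclose(phase_sum, expected)
```

The recovered phase is the sum of the input and output aberration phases, which is exactly the round-trip quantity the Divide-by-two strategy subsequently halves.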