Single photon single pixel imaging into thick scattering medium

Open Access

Abstract

Imaging into thick scattering media is a long-standing challenge. Beyond the quasi-ballistic regime, multiple scattering scrambles the spatiotemporal information of incident/emitted light, making canonical imaging based on light focusing nearly impossible. Diffuse optical tomography (DOT) is one of the most popular approaches to looking inside scattering media, but quantitatively inverting the diffusion equation is ill-posed, and prior information about the medium, which is nontrivial to obtain, is typically necessary. Here, we show theoretically and experimentally that, by synergizing the one-way light scattering characteristic of single pixel imaging with ultrasensitive single photon detection and a metric-guided image reconstruction, single photon single pixel imaging can serve as a simple and powerful alternative to DOT for imaging into thick scattering media without prior knowledge or inverting the diffusion equation. We demonstrated an image resolution of 12 mm inside a 60 mm thick (∼78 mean free paths) scattering medium.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Looking through and inside scattering media is important in many areas of fundamental and applied science. When passing through a scattering medium, light is scattered multiple times; owing to the random nature of this process, a photon loses its original direction, and the object light field is spatiotemporally broadened on a macroscopic level, making image retrieval challenging. Diffuse optical tomography (DOT) [1–4] is the most popular approach to imaging several centimeters inside scattering media. However, it requires a decent estimate of the medium's scattering and absorption coefficients (with ∼50% accuracy as noted in Ref. [4]) to properly initialize a quantitative inversion of the diffusion equation, be it a linearized [5,6] or a fully nonlinear version [7,8]. Also, a reference measurement with the same imaging geometry on a homogeneous medium is often needed [5,6], which is cumbersome to obtain in practice. Though recent efforts [9,10] demonstrated that a qualitative (rather than quantitative) approach to DOT imaging with time-of-flight measurements can significantly simplify the inverse problem, prior knowledge of the medium (such as its absorption coefficient [10]) and an inversion (analytical or numerical) of the diffusion equation are still needed.

Given that imaging in a clear medium rarely reveals the quantitative scattering or absorption coefficients of an object, it is natural to ask whether a qualitative approach can eliminate the need for prior knowledge when imaging inside scattering media. Indeed, as the coherence, polarization, propagation direction, and time-of-flight of light are all scrambled by multiple scattering, it is possible to design a corresponding filtering mechanism to abate multiply scattered light while leaving ballistic photons largely intact, thereby enabling an image to be formed as in a clear medium. Known as X-gating (e.g., time-gating), these filtering methods [11,12] are effective for looking inside or through relatively thin scattering media without prior knowledge. However, as the thickness of the scattering medium increases, ballistic photons eventually vanish, and it becomes imperative to utilize multiply scattered photons for image retrieval. Fortunately, several seminal works [13–15] demonstrated that time-gating remains useful for boosting imaging resolution inside scattering media by extracting the weakly scattered photons before solving the diffusion equation.

To simplify the inversion of the diffusion equation, computational diffuse optical tomography [9] reformulated the diffusion process inside a homogeneous scattering medium as a convolution model, thereby casting imaging inside a thick scattering medium as a deconvolution problem. However, it is necessary to know the object's depth inside the medium, along with the medium's scattering and absorption coefficients, to calculate the correct convolution kernel from an analytical solution to the diffusion equation. By treating image retrieval from time-of-flight measurements of diffused light as a reverse diffusion wave propagation, the boundary migration model [10] made an important stride toward fast DOT image reconstruction with less prior knowledge. Nonetheless, the absorption coefficient of the medium needs to be accurately measured in order to solve the diffusion equation with an analytical solution. On the other hand, several works demonstrated that single pixel imaging can look through thick scattering media without any prior knowledge [16] or inversion of the diffusion equation. Yet, they all assumed clear access to the object in the illumination path, which is not feasible when imaging inside a scattering medium. Although single pixel imaging has also been adapted to DOT for looking into scattering media, the need for prior knowledge is reinstated [17], making it less appealing. Up to now, single pixel imaging inside scattering media has been restricted to utilizing quasi-ballistic photons via correlation [18,19] or similar filtering techniques [20], limiting its achievable imaging depth to a few millimeters. A qualitative method capable of imaging inside a thick scattering medium (more than 10 mean free paths) without inverting the diffusion equation or any prior knowledge of the medium is currently lacking.

2. Methods

To address this challenge, we present here single photon single pixel imaging, which exploits the one-way light scattering characteristic of single pixel imaging (SPI) and recent advances in time-of-flight single photon detection for imaging in the diffusive regime. Because SPI can assimilate the spatial blurring caused by scattering in the reception path into its spatial integration, only scattering in the illumination path affects image formation. As a result, SPI is similar to multiphoton microscopy [21] in sidestepping one way of light scattering to image deeper into scattering media, a clear advantage over conventional X-gating, which must deal with scattering in both ways. With ultrasensitive single photon detection at picosecond temporal resolution, the weakly scattered photons can be reliably extracted by time-gating to bolster imaging resolution. More importantly, we show that, under a convolutional approximation [5,10], time-gated measurements enable the scattering effects in the illumination path to be accurately modeled by a parametric Gaussian function, which can be jointly recovered with the unknown image. This not only eliminates the need for prior knowledge of the medium or inversion of the diffusion equation, but also further enhances the single pixel imaging resolution via deconvolution for looking inside scattering media. Using single photon single pixel imaging, we demonstrated an imaging resolution of around 12 mm inside a 60 mm thick scattering medium (∼78 mean free paths) without prior knowledge or any reference measurements.

The experimental setup for single photon single pixel imaging is illustrated in Fig. 1(a). A picosecond laser (532 nm, ∼10 ps pulse width, 1.0 W average power, 40 MHz repetition rate) is expanded and collimated by lenses L1-L2 and illuminates the digital micromirror device (DMD, V-7001, Vialux GmbH) at an oblique angle; the DMD encodes the incident beam, which is then expanded to interrogate the scattering medium. The transmitted light is collected by lens L3 onto a single photon avalanche diode (FastGated SPAD, MPD Inc.), which measures the complete time-of-flight signal together with a time-correlated single photon counter (Timeharp 260 Pico, PicoQuant).


Fig. 1. Single pixel imaging into a scattering medium. (a) A picosecond laser is expanded and collimated by lenses L1 and L2 to obliquely illuminate the DMD, which modulates the light for interrogating the scattering medium. An SPAD collects the transmitted light with lens L3. (b) The single pixel detector integrates over the object plane, thus assimilating the diffusion blur in the reception path. (c) For a fixed depth inside the medium, the diffusion blur grows monotonically with respect to the time-gate.


For an approximately homogeneous scattering medium in the illumination path, as in Fig. 1(b), the scattering effects can be modeled as a depth-dependent but laterally shift-invariant blurring kernel $f_z$ [5,9]. Hence, the input illumination $c(m,n)$ becomes $c'(m,n)$ after propagating to the object plane:

$$c'(m,n) = f_z \ast c(m,n),$$
where $(m,n)$ denotes the spatial coordinates and $\ast$ denotes the convolution operation. Equation (1) can be written in matrix form as $c' = F_z c$, with $F_z$, $c$, and $c'$ being the Toeplitz convolution matrix of $f_z$ and the vectorized codes, respectively. Given an unknown (vectorized) image $x$ in the object plane, the SPI measurement for the $k$-th code is:
$$y_k = {c'_k}^T x = (F_z c_k)^T x = c_k^T(F_z^T x),$$
where $c' = F_z c$ has been used and $F_z^T$ is the transpose of $F_z$. Equation (2) states that a single pixel measurement of an object plane inside a scattering medium is equivalent to imaging in a clear medium with the object being $F_z^T x$. The matrix $F_z^T$ corresponds to a convolution kernel $f'_z(r) = f_z(-r)$, a mirrored version of the blurring kernel.
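As a minimal numerical check of the equivalence in Eq. (2) (not the authors' code; the kernel, image size, and code statistics are made up), the sketch below verifies that measuring a blurred illumination code against a clear object gives the same value as measuring the clear code against a mirror-blurred object, using circular convolutions on a periodic grid:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 48
x = rng.random((N, N))                        # unknown object in the plane
c = rng.integers(0, 2, (N, N)).astype(float)  # one binary SPI code
f = rng.random((N, N)); f /= f.sum()          # toy stand-in for the blur f_z

conv = lambda a, b: np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))
f_mirror = np.roll(f[::-1, ::-1], (1, 1), axis=(0, 1))   # f_z(-r) on the grid

lhs = np.sum(conv(c, f) * x)         # (F_z c)^T x : blurred code, clear object
rhs = np.sum(c * conv(x, f_mirror))  # c^T (F_z^T x): clear code, mirrored blur
print(np.allclose(lhs, rhs))         # True: one-way scattering equivalence
```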

Besides scattering strength and propagation depth, the blurring kernel also depends strongly on the time-gate for a time-of-flight measurement: the Monte Carlo simulation of a pulsed pencil beam propagating inside a scattering medium in Fig. 1(c) shows that the kernel grows monotonically with the duration of the time-gate. Note that the time-gate can be implemented in hardware or numerically. For the latter, the full time-of-flight measurement is truncated to a time-gate of $\Delta t$ and then integrated along the time dimension to yield a light intensity. Although the spatial blurring in the reception path also varies with the time-gate, it is consistently assimilated into the spatial integration of SPI. Thus, the model in Eq. (2) remains valid for time-gated measurements after making the blurring kernel a function of the time-gate $\Delta t$:

$$y_k = c_k^T F_z^T(\Delta t)\, x.$$
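A hedged sketch of the numerical time-gate just described: the full time-of-flight histogram recorded by the TCSPC is truncated to a gate of width $\Delta t$ and integrated into a single gated intensity per code. The bin width, gate start, and array names are illustrative, not taken from the paper's code:

```python
import numpy as np

def gated_intensity(tof_counts, bin_ps=25.0, t0_ps=0.0, gate_ps=400.0):
    """Integrate a ToF histogram over a time-gate of width gate_ps,
    starting at t0_ps (e.g., the ballistic arrival time)."""
    t = np.arange(tof_counts.size) * bin_ps
    mask = (t >= t0_ps) & (t < t0_ps + gate_ps)
    return tof_counts[mask].sum()

# One gated measurement y_k per code: gate each histogram the same way, e.g.
# y = np.array([gated_intensity(h, gate_ps=400.0) for h in histograms])
```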

With $m$ codes, the complete measurement can be written as $y = C F_z^T(\Delta t)\, x$, where $C = [c_1^T; c_2^T; \cdots; c_m^T]$. One can therefore retrieve a multiply scattered image $x' = F_z^T(\Delta t)\, x$ from the SPI measurements by solving a regularized optimization problem:

$$\mathop{\mathrm{argmin}}\limits_{x'} \|y - Cx'\|^2 + \beta \|x'\|_{TV},$$
in which $\beta$ is the regularization parameter, and $\|\cdot\|^2$ and $\|x'\|_{TV}$ denote the Frobenius and total variation norms, respectively.

Recovering the clear image $x$ requires knowing the blurring kernel $F_z^T(\Delta t)$, which is unavailable when no prior knowledge of the medium is given. Jointly recovering the clear image with its associated blurring kernel is possible via blind deconvolution algorithms [22]. However, because the multiply scattered image $x'$ is mostly featureless (though $x$ is not), canonical blind deconvolution algorithms typically fail to recover the correct image. A key observation to address this failure is that the blurring kernel $F_z^T(\Delta t)$ for a medium with mild inhomogeneities (∼20% variation in scattering and absorption coefficients) can be approximated by a Gaussian function $G(r) \cong F_z^T(\Delta t)$ parameterized by a scale $r$. This parametric Gaussian approximation can be proved analytically for the studied geometry (Appendix A), and simulations show it remains valid for a large range of scattering coefficients and penetration depths (Appendix B). In fact, only large inhomogeneities break the validity of this approximation and of the convolutional model, as we demonstrate numerically below. Under this approximation, it is possible to jointly recover the blurring kernel $F_z^T(\Delta t)$ and the image $x$ using a metric-guided reconstruction.

Essentially, the parametric Gaussian approximation reduces estimating the blurring kernel to identifying the correct scale of the Gaussian function. To this end, we mimic the camera autofocus mechanism by using a simple image metric to guide the search for the scale, which evaluates the total contrast of an image by summing the local absolute contrast of all pixels:

$$Q = \sum_{m,n} |LoG_h(x)| + |LoG_v(x)|,$$
where $|\cdot|$ denotes the absolute value, and $LoG_h$ and $LoG_v$ represent the Laplacian of Gaussian operator along the image's horizontal and vertical dimensions, respectively. During the search, the scale $r$ is varied from one pixel to half of the image size at a step size of one pixel, and for each scale $r_k$, an image is recovered by a deconvolution step:
$$\mathop{\mathrm{argmin}}\limits_{x} \|x' - G(r_k)x\|^2 + \alpha \|x\|_{TV},$$
where $\alpha$ controls the regularization strength, and $G(r_k)$ is the Gaussian kernel with scale $r_k$. The image metric $Q$ is then evaluated for each reconstructed image, and the scale yielding the maximum $Q$ across the reconstructed image stack is identified as the correct one.
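The sketch below illustrates this metric-guided scale search under stated assumptions: the LoG metric of Eq. (5) is built from directional second derivatives of a Gaussian-smoothed image, and, purely for brevity, the TV-regularized deconvolution of Eq. (6) is replaced by skimage's Wiener filter; `sigma_log`, the PSF construction, and `balance` are assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import wiener

def contrast_metric(img, sigma_log=1.0):
    """|LoG_h| + |LoG_v|: Laplacian of Gaussian along each image dimension."""
    log_h = gaussian_filter(img, sigma_log, order=(0, 2))
    log_v = gaussian_filter(img, sigma_log, order=(2, 0))
    return np.abs(log_h).sum() + np.abs(log_v).sum()

def gaussian_psf(r, size):
    """Normalized Gaussian kernel of scale r on a size x size grid."""
    g = np.zeros((size, size)); g[size // 2, size // 2] = 1.0
    g = gaussian_filter(g, r)
    return g / g.sum()

def metric_guided_deconv(x_blurred, scales, balance=0.1):
    best_Q, best_img, best_r = -np.inf, None, None
    for r in scales:                      # e.g., 1 pixel to half the image size
        psf = gaussian_psf(r, x_blurred.shape[0])
        img = wiener(x_blurred, psf, balance)   # stand-in for Eq. (6)
        Q = contrast_metric(img)
        if Q > best_Q:
            best_Q, best_img, best_r = Q, img, r
    return best_r, best_img
```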

Theoretically, the approximation error can be regarded as an extra object-dependent 'noise' or artefact term. Denoting the approximation error as $\Delta F$ in matrix form and the noise after the single pixel reconstruction of Eq. (4) as $n$, one has:

$$x' = F_z^T(\Delta t)\, x + n = G(r)\, x + \Delta F x + n,$$
where $\Delta F x$ is the object-dependent noise/artefact term. Similar to coded-aperture imaging [23] or image deconvolution [24], a large approximation error in the blurring kernel produces significant image artefacts that drastically reduce the contrast of the recovered image $x$, which is what makes the metric-guided recovery of the blurring kernel possible. A further illustration of how this works is given in Appendix C.

The workflow of our proposed approach is summarized in Fig. 2. After time-gating the raw time-of-flight signals, a first-step conventional single pixel image reconstruction is performed, for which numerous algorithms can be applied [25,26]. Afterwards, a second-step metric-guided reconstruction is invoked to account for the scattering effects. Here, an array of Gaussian kernels of different scales is generated, and each is used to deconvolve the reconstructed image of the first step. The metric is then evaluated for each deconvolved image and compared across the image stack. The maximum metric finally locates the correct scale for the Gaussian kernel and identifies the corresponding deconvolved image. We solve both optimization problems in Eqs. (4) and (6) with the FISTA algorithm [27] using a plug-and-play strategy [28], and tuned the regularization parameters during reconstruction for the best visualization of the target objects. The computational complexity of our method is dominated by the metric-guided recovery, as it involves repeatedly solving Eq. (6), which amounts to over ten times the cost of a conventional single pixel recovery. Because imaging with early photons through a thick scattering medium is a typical light-starved application, and because solving the problems in Eqs. (4) and (6) will aggravate measurement noise due to the inversion of the ill-conditioned matrices $G(r_k)$ and $C$ (when compressive acquisition is adopted) [29], it is preferable to minimize measurement noise. As such, a SPAD detector is chosen owing to its superior noise performance and sensitivity.
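For the first-step reconstruction of Eq. (4), a minimal FISTA sketch with a plug-and-play total variation step (a TV denoiser standing in for the TV proximal operator, as in the plug-and-play strategy above) could look as follows; the step size, TV weight, and iteration count are illustrative, and this is not the authors' implementation:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def fista_tv(y, C, shape, n_iter=100, step=None, tv_weight=0.05):
    """Solve argmin_x ||y - Cx||^2 + TV(x) via FISTA with a plug-and-play
    TV-denoising step; C is the (m x N^2) code matrix, y the measurements."""
    if step is None:
        step = 1.0 / np.linalg.norm(C, 2) ** 2   # 1/L, L = ||C||_2^2
    x = np.zeros(C.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = C.T @ (C @ z - y)                 # gradient of the data term
        x_new = z - step * grad
        # plug-and-play proximal step: a TV denoiser replaces the TV prox
        x_new = denoise_tv_chambolle(x_new.reshape(shape),
                                     weight=tv_weight).ravel()
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
    return x.reshape(shape)
```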

 figure: Fig. 2.

Fig. 2. Flow-chart for metric-guided single photon single pixel imaging recovery.

Download Full Size | PDF

3. Monte Carlo simulations

To validate the parametric Gaussian approximation, we conducted a synthetic study based on Monte Carlo simulations. Using the open-source MCXLAB toolbox [30], a 40 mm thick cubic medium with a reduced scattering coefficient of $\mu_s' = 1.0\ \mathrm{mm}^{-1}$ and an absorption coefficient of $\mu_a = 0.01\ \mathrm{mm}^{-1}$ was built. To simulate inhomogeneities, we divided the cubic medium into blocks of size 2 mm × 2 mm × 2 mm, and randomly varied the scattering and absorption coefficients of these blocks from their nominal values by different amounts, ranging from ±10% to ±40%. To directly render the blurring kernel, a pencil beam with $10^9$ photons was launched into the center of the medium, and the resultant time-of-flight light fluence inside the cube was recorded at 10 ps resolution.
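A hedged sketch of how such a block-wise inhomogeneous volume could be generated before being handed to a Monte Carlo engine such as MCXLAB; the voxel pitch, grid size, and layout of the property maps are assumptions, not the toolbox's actual input format:

```python
import numpy as np

rng = np.random.default_rng(1)
nx = 40                           # 1 mm voxels -> 40 mm cube (assumed pitch)
block = 2                         # 2 mm x 2 mm x 2 mm blocks
mus_p, mua = 1.0, 0.01            # nominal mu_s' and mu_a in mm^-1
delta = 0.20                      # +/-20% inhomogeneity

nb = nx // block
fac_s = 1.0 + rng.uniform(-delta, delta, (nb, nb, nb))  # per-block scattering
fac_a = 1.0 + rng.uniform(-delta, delta, (nb, nb, nb))  # per-block absorption
upsample = lambda f: np.repeat(np.repeat(np.repeat(f, block, 0), block, 1),
                               block, 2)

mus_map = mus_p * upsample(fac_s)  # per-voxel reduced scattering map
mua_map = mua * upsample(fac_a)    # per-voxel absorption map
```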

Figure 3 shows the blurring kernel at a depth of 20 mm, measured with a 400 ps time-gate. Tabulated from left to right in Fig. 3(a) are the ground-truth blurring kernels as the medium inhomogeneity is increased from 0% (homogeneous) to ±40%. The optimized Gaussian approximations are plotted in the insets (scaled to fit), and the corresponding approximation errors are computed in Fig. 3(b). For mild inhomogeneities (<20%), the Gaussian approximation yields a pixelwise maximum absolute error (MAE) smaller than 9% and a normalized mean square error (NMSE) below 0.08%. As the inhomogeneity grows, the approximation error gradually increases: for a medium whose scattering and absorption coefficients vary (block to block) by ±40% from the nominal values, the Gaussian approximation yields an NMSE of 0.2% and a pixelwise MAE of about 10%. As will be shown shortly, the approximation errors for mild inhomogeneities are small enough to allow a successful image recovery. The agreement between the blurring kernel and its Gaussian approximation is illustrated in Fig. 3(c), which compares the central line profiles of the true and approximated kernels. Additional approximation results for blurring kernels at different measurement depths, time-gates, and scattering coefficients are shown in Appendix B (Figs. 10-13).


Fig. 3. Monte Carlo validation of the parametric Gaussian approximation. Top row: the true blurring kernel for a cubic scattering medium with random inhomogeneities. Gaussian approximations are given in the insets (not to scale). The inhomogeneities of the medium from left to right are 0%, ±10%, ±20%, ±30%, and ±40%. Second row: the normalized error of the Gaussian approximations. Bottom: comparison of the true blurring kernel and the Gaussian approximation along the center line.


For the synthetic imaging studies, we modulated the collimated illumination with random binary codes at a resolution of 48 × 48 pixels, and a total of 2304 different codes were used for single pixel imaging. For each code, the time-of-flight signal was simulated within a 5 ns time window at a temporal resolution of 50 ps, and the transmitted light at the exit surface was spatially integrated to compute the measurement data. To accelerate the simulations, we reduced the photon number to $10^8$ and changed the thickness and reduced scattering coefficient of the medium to 30 mm and $\mu_s' = 0.5\ \mathrm{mm}^{-1}$, respectively. The objects to be imaged are alphabetic and numeric characters ($\mu_s' = 10\ \mathrm{mm}^{-1}$) placed at a depth of 15 mm inside the medium, with their scattering coefficients set to $20\ \mathrm{mm}^{-1}$. For comparison, the images obtained by conventional imaging (no time-of-flight information, obtained by integrating the time-of-flight signal along the time dimension; referred to as DC imaging) and by time-gated imaging that directly observes the transmitted image with a two-dimensional area detector at the exit surface were also computed.

Figure 4 shows, from top to bottom, the simulated imaging results for the letter E, two strips, and a triangular block inside a homogeneous medium; additional simulation results are given in Appendix D. Neither the conventional nor the time-gated (400 ps) method shows discernible features of the objects, though time-gating slightly improves the resolution. In contrast, single pixel image reconstruction with time-gated light noticeably reduces the scattering effects despite larger noise, highlighting its one-way light scattering advantage. While its imaging noise can be suppressed at the cost of degraded resolution via stronger regularization, single pixel image reconstruction alone using FISTA [25] is still unable to resolve the shapes of the objects, except for the larger triangular block. On the contrary, the proposed metric-guided deconvolution clearly reveals the objects inside the medium in all cases.


Fig. 4. Synthetic imaging results. Top to bottom: images of letter E, two strips, and triangle block buried in the middle of a 30 mm thick medium. Left to right: ground truth image, conventional image, time-gated image, single pixel reconstruction, and metric guided image reconstruction. Scale bar: 12 mm.


To evaluate the proposed method's ability to handle inhomogeneous media, we added inhomogeneities to the medium by randomly varying the scattering and absorption coefficients of the medium's component blocks as before. The imaging results for the same objects inside media with different inhomogeneities are given in Fig. 5. Compared with the images retrieved from a homogeneous medium in Fig. 4, the results are somewhat distorted, and for stronger inhomogeneities (±30%), only the simple triangular block can be correctly identified. Since larger inhomogeneities cause the true blurring kernel to become shift-variant and to deviate from the Gaussian form, both the convolution model and the parametric Gaussian approximation gradually break down, leading to worse reconstructions in this case. Still, it is noted that the proposed single photon single pixel imaging method can deal with mild inhomogeneities (±20%) reasonably well when looking into a thick scattering medium.


Fig. 5. Synthetic imaging results of the three objects in inhomogeneous media, with the inhomogeneities being ±10%, ±20%, and ±30% from left to right. Scale bar: 4 mm.


4. Experimental results

To validate the proposed imaging method experimentally, two 30 mm thick plastic slabs (white PBS material) were 3D printed and stacked together to form a scattering medium with a total thickness of 60 mm. Both positive (transmission mask) and negative (black tape) targets were fabricated and then sandwiched between the two slabs. The scattering and absorption coefficients of the medium were measured to be $1.3\ \mathrm{mm}^{-1}$ and $0.008\ \mathrm{mm}^{-1}$, respectively, obtained by illuminating the medium with a picosecond pencil beam and fitting the transmitted time-of-flight signal with the analytical solution for a slab geometry [9]. A photograph of the scattering medium and the measured time-of-flight signal, along with the fitted analytical solution, are given in Fig. 6(a)-(b).
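The coefficient estimation can be sketched as a nonlinear fit of the measured time-of-flight signal; for brevity the model below uses the homogeneous fluence of Eq. (9) at a fixed propagation distance rather than the slab-geometry solution with boundary terms actually used (following Ref. [9]), and the refractive index, distance, and initial guesses are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

c_n = 0.214          # speed of light in the medium, mm/ps (n ~ 1.4 assumed)
rho = 60.0           # propagation distance (slab thickness) in mm

def tpsf(t, A, mus_p, mua):
    """Diffusion-theory temporal point spread function at distance rho."""
    D = 1.0 / (3.0 * (mus_p + mua))          # diffusion constant in mm
    return (A * t ** -1.5 * np.exp(-rho ** 2 / (4 * D * c_n * t))
              * np.exp(-mua * c_n * t))

# t_ps: histogram bin centers (ps, > 0); counts: measured ToF signal
# popt, _ = curve_fit(tpsf, t_ps, counts, p0=[counts.max(), 1.0, 0.01])
# popt[1], popt[2] estimate mu_s' and mu_a
```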


Fig. 6. (a) The scattering medium and (b) its measured time-of-flight signal (black dots). The solid red line is the fit with an analytic solution.


During the imaging experiments, the illumination beam was modulated by Hadamard codes at a spatial resolution of 32 × 32, formed by binning 24 DMD pixels into a macro pixel, and then expanded to cover a field of view of ∼110 mm × 110 mm on the scattering medium. With an exposure time of 50 ms per code, the total image acquisition time is about 51 seconds for 1024 codes. The timing jitter of the SPAD and TCSPC system was about 80 ps, and the time-of-flight signal was recorded with a 25 ps time bin across a 10 ns time window. As a thick scattering medium broadens the temporal signal, as in Fig. 6(b), this temporal resolution is sufficient for the current experiments. For imaging in the infrared regime, the reduced scattering might demand a better temporal resolution, which could be obtained by nonlinear optical gating [31,32]. For comparison, image recoveries with DC and time-gated intensities were obtained by integrating the time-of-flight signal over the 10 ns window and over the first 600 ps (about twice the ballistic photon transmission time), respectively, before single pixel image reconstruction. We also compared our first-step single pixel reconstruction algorithm (FISTA) with the popular DAMP algorithm [26], using the same total variation regularizer.
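Generating the illumination codes is straightforward; the sketch below builds the 1024 Hadamard patterns and replicates each macro pixel over a DMD mirror block (the offset to binary 0/1 patterns and the differential-acquisition details are simplifications, not the exact experimental procedure):

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(1024)                 # 1024 orthogonal codes for 32 x 32 pixels
patterns = ((H + 1) // 2).reshape(1024, 32, 32).astype(np.uint8)  # {0, 1}

def dmd_frame(k, bin_size=24):
    """Replicate each macro pixel over a bin_size x bin_size mirror block."""
    return np.kron(patterns[k], np.ones((bin_size, bin_size), dtype=np.uint8))
```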

The imaging results for the positive targets, digit 5, letter U, and a cross shape, are shown in Fig. 7(a)-(c), respectively. The single pixel reconstructions with DC intensities (left column) and the time-gated reconstructions (middle column) did not render any discernible features of the objects. Due to its lower SNR, the time-gated reconstructions are noisier than the reconstructions with DC intensities, though a slightly improved resolution is also evident. This is consistent with the Monte Carlo simulations. The DAMP reconstructions are also very similar to our first-step results. In contrast, by modeling the multiple scattering effects with the parametric Gaussian approximation, the metric-guided deconvolution in our method correctly recovers the objects in all cases.


Fig. 7. Experimental results with positive targets. Top to bottom: digit 5, letter U and X. Left to right: single pixel reconstruction with DC intensities, reconstruction with time-gated measurements, DAMP single pixel reconstruction with time-gated data, and the proposed metric guided image reconstruction. Scale bar: 24 mm.


Imaging with negative targets is demonstrated in Fig. 8 for the letters C and H and the digit 7. While the results show characteristics similar to those of imaging with positive targets, the negative-target reconstructions also tend to suffer from noticeable boundary artefacts and a lower resolution. Boundary artefacts are a common concern in image deblurring [33], and they are more evident here for two reasons. First, the blurring kernels of a thick scattering medium are large compared with the imaging field of view, which is the major culprit for the poor imaging resolution inside thick scattering media. Second, the boundary region typically shows large image intensities for negative targets, as observed in the DC and time-gated reconstructions, and hence tends to visually strengthen the reconstructed boundary. Despite this, the proposed metric-guided reconstruction still manages to recover the shapes of the negative targets, whereas a reference measurement was typically needed in previous imaging studies [9].


Fig. 8. Experimental results with negative targets. Top to bottom: letters C, H, and digit 7. Left to right: single pixel reconstruction, reconstruction with time-gated measurements, DAMP reconstruction with time-gated data, and the proposed metric-guided deconvolution. Scale bar: 24 mm.


Given that the point spread function may not well characterize the resolution of imaging systems that rely on regularization [33], we tested the resolution for positive and negative targets empirically with two rectangular strips, whose separation was varied until they could barely be resolved in the reconstructed image. The resolution is then calculated as twice the minimum resolvable separation. The imaging results for the positive strips are shown in Fig. 9(a), and Fig. 9(b) depicts the line profiles across the strips, averaged along the height direction of the white dashed box. Averaging is used here to obtain a robust resolution estimate because the recovered strips are not uniform along their height. By doing so, the imaging resolution for positive targets is estimated to be around 12 mm in the current experiments. Similarly, Fig. 9(c)-(d) shows the recovered images and the averaged line profiles across the strips for negative targets, where the resolution is estimated to be around 16 mm inside the 60 mm thick scattering medium, slightly worse than for the positive targets.
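A rough sketch of such an empirical resolvability test under an assumed contrast criterion (the paper judges resolvability from the reconstructed images; the 0.8 dip ratio below is an arbitrary illustrative threshold):

```python
import numpy as np

def strips_resolved(img, row_range, col_range, dip_ratio=0.8):
    """Average the profile over the box height; test the valley between strips."""
    r0, r1 = row_range
    c0, c1 = col_range
    profile = img[r0:r1, c0:c1].mean(axis=0)   # average along strip height
    mid = profile.size // 2
    p_left, p_right = profile[:mid].max(), profile[mid:].max()  # one peak each
    dip = profile[mid - 2:mid + 3].min()       # valley between the two strips
    return dip < dip_ratio * min(p_left, p_right)
```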


Fig. 9. Experimental characterization of imaging resolution with two strips at different separations. (a) Reconstructed images for positive targets with separations of 11 mm, 8 mm, and 6 mm from left to right. (b) Line profiles across the two strips at different separations, averaged along the y axis of the white dashed box. (c)-(d) Resolution characterization results for negative strips with separations of 13 mm, 10 mm, and 8 mm from left to right. Scale bar: 24 mm.


We note that the resolution characterization here is sensitive to measurement noise and image regularization strength, and is therefore not yet a definitive, fully objective estimate. This is similar to evaluating the imaging resolution of DOT [13]. Meanwhile, it is possible to further bolster the resolution by using shorter time-gates. However, this will compromise imaging SNR and/or speed, since the weakly scattered photons decay fast and only one photon can be detected per laser pulse in single photon detection, rendering the imaging SNR a linear function of exposure time rather than laser power in this case. As a result, a multifaceted tradeoff needs to be made among SNR, resolution, and imaging speed. In practice, we found a time-gate of about twice the ballistic photon transmission time through the scattering medium to be a reasonable choice. Lastly, we reiterate that this imaging resolution is achieved without prior knowledge of the medium or any reference measurements, which is a major step forward for looking into thick scattering media.

5. Discussion and conclusion

Using a metric to guide image reconstruction is not new. The computer vision and adaptive optics [34,35] communities have long used image metrics to infer depth information of a textured scene [26] or to obtain a wavefront-corrected image, and designing efficient and robust image metrics is still an active research topic [36]. However, looking into a thick scattering medium typically renders images at a low resolution that smooths out texture, which makes metric evaluation unreliable. Our metric is a classic one, but we evaluate it across the whole image domain rather than patch by patch as done conventionally. Equivalently, our metric is a summation of the metrics from all patches, reflecting the fact that the resolution for looking inside scattering media still lags far behind that of imaging in clear media. Additionally, we bolstered the resolution by synergizing the one-way light scattering characteristic of single pixel imaging with single photon detection of time-gated light, thereby improving the robustness of the metric evaluation. That said, it should be pointed out that our metric evaluation requires the object to show some texture, and is therefore not applicable to texture-less objects. Single photon detection is critical here because the weakly scattered photons decay quickly in a thick scattering medium: in our experiments, the maximum photon count of the transmitted time-of-flight signal is ∼200 within a 25 ps time bin, and the total count within a 400 ps time-gate is only 500.

A comparison with DOT [13,37,38] is in order. The proposed single photon single pixel imaging method is qualitative and is built upon the convolution approximation, as in recent works [5,9,10,39]. While unable to handle strong inhomogeneities, it deals with mild inhomogeneities reasonably well, as demonstrated via Monte Carlo simulations. A major improvement over recent works [5,9,10,39] and DOT is that the proposed method needs to solve the diffusion equation neither analytically nor numerically, and needs no prior knowledge of the medium nor any reference measurement, which is cumbersome to obtain in practice. On the other hand, DOT can handle arbitrary imaging geometries via finite element analysis [40,41], whereas our work and recent simplifications of DOT [5,9,10,39] are currently limited to a slab geometry. However, this is not a severe restriction, because compressing the object into a planar chamber [14] or restricting the field of view (FOV) to a small area where the object's surface is relatively flat [42] has been applied to minimize defocus error in non-contact DOT. Lastly, although the qualitative approaches substantially simplify image retrieval inside scattering media, their ability to recover objects showing a small difference from the background (i.e., weak objects) still lags behind classic DOT. If a proper initialization is available, classic DOT's quantitative inversion of the diffusion equation [41,43] makes it more sensitive to weak objects and more robust against medium inhomogeneity. While further studies are needed to compare the two on a quantitative basis, the qualitative and quantitative approaches are largely complementary for looking into thick scattering media owing to their distinct strengths and limitations.

In summary, single photon single pixel imaging can be a simple and powerful platform for imaging into thick scattering media without inverting the diffusion equation or prior knowledge of the medium. We experimentally demonstrated a 12 to 16 mm resolution inside a 60 mm thick scattering medium (∼78 mean free paths) for an approximately homogeneous medium; for mild inhomogeneities, the method can still recover the shapes of the objects despite some distortion and amplified noise. It might hence be conducive to imaging in poor weather conditions such as fog. With an improved imaging resolution, it is possible to extend the proposed method to 3D imaging into scattering media by reverting the metric evaluation to the patch-by-patch method [44], which we are actively exploring.

Appendix A

Here, we prove the Gaussian approximation for the blurring kernel and relate it to the time-gate. For a homogeneous medium in a slab geometry, there is an analytical solution for the light fluence inside the medium [9]:

$$\phi(\rho, t; \rho', t') = \frac{c}{4\pi Dc\,(t - t')^{3/2}} \exp\left[-\frac{\|\rho - \rho'\|^2}{4Dc(t - t')}\right] \exp[-\mu_a c (t - t')],$$
where $\rho$ is the spatial coordinate inside the medium, $\rho'$ is the laser incidence point, $t$ is the measurement time, and $t'$ is the time of laser illumination. $D$ is the diffusion constant of the medium, $c$ is the speed of light, and $\mu_a$ is the absorption coefficient of the medium. For simplicity, and without loss of generality, we let $t' = 0$ and $\rho' = 0$:
$$\phi(\rho, t; \rho', t') = \phi(\rho, t) = \frac{c}{4\pi Dc\, t^{3/2}} \exp\left[-\frac{\rho^2}{4Dct}\right] \exp[-\mu_a c t].$$

Then, the time-gated measurement is:

$$F_z^T(\Delta t) = \int_{t=0}^{t=\Delta t} \phi(\rho, t)\, dt = \int_{t=0}^{t=\Delta t} \frac{c}{4\pi Dc\, t^{3/2}} \exp\left[-\frac{\rho^2}{4Dct}\right] \exp[-\mu_a c t]\, dt,$$
which unfortunately contains the non-elementary exponential integral function $\mathrm{Ei}(x)$ due to the integration of $\exp\left[-\frac{\rho^2}{4Dct}\right]$ with respect to $t$. To proceed with an analytical solution, we invoke the mean value theorem:
$$\begin{aligned} F_z^T(\Delta t) &= \int_{t=0}^{t=\Delta t} \phi(\rho, t)\, dt = \phi(\rho, t_m)\, \Delta t = \frac{c\, \Delta t}{4\pi Dc\, t_m^{3/2}} \exp\left[-\frac{\rho^2}{4Dct_m}\right] \exp[-\mu_a c t_m] \\ &= \frac{c\, \Delta t}{4\pi Dc\, t_m^{3/2}} \exp[-\mu_a c t_m] \exp\left[-\frac{\epsilon^2 + d^2}{4Dct_m}\right] = \mathrm{Const} \times \exp\left[-\frac{\epsilon^2}{4Dct_m}\right] = \mathrm{Const} \times G(\epsilon), \end{aligned}$$
where $t_m$ is a time between 0 and the time-gate $\Delta t$, and the spatial position $\rho$ is decomposed into the in-plane radius $\epsilon$ and the depth $d$ into the scattering medium as $\rho^2 = \epsilon^2 + d^2$. It is hence noted that $F_z^T(\Delta t)$ is of a Gaussian form $G(\epsilon)$ when the time-gate $\Delta t$ (hence $t_m$) and the image depth $d$ are fixed. Defining the scale of the Gaussian function as its standard deviation, we get $r = \sqrt{2Dct_m}$. Still, this is not a direct function of the time-gate $\Delta t$ due to the application of the mean value theorem.
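The Gaussian form can also be checked numerically: integrating the fluence of Eq. (10) over the gate and fitting a Gaussian in the in-plane radius $\epsilon$ recovers an effective scale $r$, consistent with $r = \sqrt{2Dct_m}$ for some intermediate $t_m$. All parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

D, c, mu_a, d = 0.33, 0.214, 0.01, 20.0   # mm, mm/ps, mm^-1, mm (assumed)
gate = 400.0                               # time-gate width in ps

def gated_kernel(eps):
    """Time-gated fluence at in-plane radius eps, following Eq. (10)."""
    rho2 = eps ** 2 + d ** 2
    f = lambda t: t ** -1.5 * np.exp(-rho2 / (4 * D * c * t) - mu_a * c * t)
    return quad(f, 1e-3, gate)[0]

eps = np.linspace(0.0, 30.0, 61)
k = np.array([gated_kernel(e) for e in eps])
gauss = lambda e, a, r: a * np.exp(-e ** 2 / (2 * r ** 2))
(a_fit, r_fit), _ = curve_fit(gauss, eps, k, p0=[k[0], 5.0])
print(r_fit)                               # effective Gaussian scale in mm
```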

Appendix B

Here, we show via several more numerical studies that, for a roughly homogeneous medium, the parametric Gaussian approximation remains valid across different measurement time-gates, imaging depths, and medium scattering coefficients. As it is not possible to go through all combinations, we encourage the reader to run the Monte Carlo simulation codes to evaluate the approximation's validity; the code is available on GitHub [45]. In all such plots (Figs. 10-13), the horizontal axis is the x coordinate and the vertical axis is the normalized light intensity.

Appendix C

Here, we illustrate how the metric-guided image reconstruction works. For each scale $r$, an image is reconstructed by a deconvolution step (Eq. (6)), and a complete focal stack is generated after accounting for all scales. Figure 14(a) shows (as a montage) the reconstructed focal stack for the digit 2 in our simulations, and Fig. 14(b) gives the image metrics for the focal stack, along with the identified image that yields the maximum metric. It is evident that only with the approximately correct scale does the reconstructed image yield a larger metric and become similar to the ground truth object. Image reconstruction using another metric [36] shows a very similar result, as given in Fig. 14(c).


Fig. 10. Monte Carlo validation of the parametric Gaussian approximation at a depth of 5 mm, depicted along the center line of the Gaussian. The scattering and absorption coefficients are fixed at 1 mm⁻¹ and 0.05 mm⁻¹, respectively, unless otherwise stated. From top to bottom: inhomogeneities in the scattering and absorption coefficients are varied from 0% to 40%. Left to right: time-gates are varied from 100 ps to 200 ps, 300 ps, 400 ps, and DC. The true blurring kernel is plotted in solid black and the Gaussian approximation in dashed red. Note that the time-gate cannot be shorter than the ballistic time-of-flight, and should be scaled according to the thickness or imaging depth of the medium.



Fig. 11. Monte Carlo validation of the parametric Gaussian approximation at a depth of 10 mm, depicted along the center line of the Gaussian. From top to bottom: inhomogeneities in the scattering and absorption coefficients are varied from 0% to 40%. Left to right: time-gates are varied from 100 ps to 200 ps, 300 ps, 400 ps, and DC. The true blurring kernel is plotted in solid black and the Gaussian approximation in dashed red.



Fig. 12. Monte Carlo validation of the parametric Gaussian approximation at a depth of 40 mm, depicted along the center line of the Gaussian. From top to bottom: inhomogeneities in the scattering and absorption coefficients are varied from 0% to 40%. Left to right: time-gates are varied from 800 ps to 1000 ps, 1200 ps, 1500 ps, and DC. The true blurring kernel is plotted in solid black and the Gaussian approximation in dashed red.



Fig. 13. Monte Carlo validation of the parametric Gaussian approximation at a depth of 40 mm, with the medium's nominal reduced scattering and absorption coefficients varied to 0.1 mm⁻¹ and 0.05 mm⁻¹, respectively. From top to bottom: inhomogeneities in the scattering and absorption coefficients are varied from 0% to 40%. Left to right: time-gates are varied from 800 ps to 1000 ps, 1200 ps, 1500 ps, and DC. The true blurring kernel is plotted in solid black and the Gaussian approximation in dashed red.



Fig. 14. Illustration of the metric-guided image reconstruction. (a) Image focal stack of the digit 2 using different scales r. (b)-(c) Image metric across the focal stack and the identified (final reconstructed) image.


Appendix D

Additional Monte Carlo simulation results are shown in Fig. 15. For these three simulation studies, we evaluated the structural similarity (SSIM) and the peak signal-to-noise ratio (PSNR) of each reconstruction and averaged the results. The average SSIM and PSNR are 0.81 and 30.5 dB, respectively. While not a large sample size for a rigorous statistical analysis, this demonstrates concretely the effectiveness of the proposed method.


Fig. 15. Synthetic imaging results. Top to bottom: images of letter EF, digit 2, and letter A buried in the middle of a 30 mm thick medium. Left to right: ground truth image, conventional image, time-gated image, single pixel reconstruction, and metric guided image reconstruction. Scale bar: 12 mm.


Funding

Zhejiang Lab Startup Fund (113010-PI2107); National Natural Science Foundation of China (61905152); National Natural Science Foundation of China (62205302).

Acknowledgments

Xiaohua Feng acknowledges fruitful discussions with Dr. Liang Gao from the University of California, Los Angeles, on diffuse optical tomography.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in [45].

References

1. L. Azizi, K. Zarychta, D. Ettori, E. Tinet, and J.-M. Tualle, “Ultimate spatial resolution with Diffuse Optical Tomography,” Opt. Express 17(14), 12132–12144 (2009).

2. D. A. Boas, D. H. Brooks, E. L. Miller, C. A. DiMarzio, M. Kilmer, R. J. Gaudette, and Q. Zhang, “Imaging the body with diffuse optical tomography,” IEEE Signal Process. Mag. 18(6), 57–75 (2001).

3. A. T. Eggebrecht, S. L. Ferradal, A. Robichaux-Viehoever, M. S. Hassanpour, H. Dehghani, A. Z. Snyder, T. Hershey, and J. P. Culver, “Mapping distributed brain function and networks with diffuse optical tomography,” Nat. Photonics 8(6), 448–454 (2014).

4. H. Jiang, Diffuse Optical Tomography: Principles and Applications (CRC Press, 2018).

5. Y. Zhao, A. Raghuram, H. K. Kim, A. H. Hielscher, J. T. Robinson, and A. Veeraraghavan, “High Resolution, Deep Imaging Using Confocal Time-of-Flight Diffuse Optical Tomography,” IEEE Trans. Pattern Anal. Mach. Intell. 43(7), 2206–2219 (2021).

6. H. Jiang, B. W. Pogue, M. S. Patterson, K. D. Paulsen, and U. L. Osterberg, “Optical image reconstruction using frequency-domain data: simulations and experiments,” J. Opt. Soc. Am. A 13(2), 253 (1996).

7. B. W. Pogue and M. S. Patterson, “Frequency-domain optical absorption spectroscopy of finite tissue volumes using diffusion theory,” Phys. Med. Biol. 39(7), 1157–1180 (1994).

8. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15(2), R41–R93 (1999).

9. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019).

10. D. Du, X. Jin, R. Deng, J. Kang, H. Cao, Y. Fan, Z. Li, H. Wang, X. Ji, and J. Song, “A boundary migration model for imaging within volumetric scattering media,” Nat. Commun. 13(1), 3234 (2022).

11. J. J. Selb, D. K. Joseph, and D. A. Boas, “Time-gated optical system for depth-resolved functional brain imaging,” J. Biomed. Opt. 11(4), 044008 (2006).

12. Y. Jauregui-Sánchez, P. Clemente, J. Lancis, and E. Tajahuerce, “Single-pixel imaging with Fourier filtering: application to vision through scattering media,” Opt. Lett. 44(3), 679 (2019).

13. F. Leblond, H. Dehghani, D. Kepshire, and B. W. Pogue, “Early-photon fluorescence tomography: spatial resolution improvements and noise stability considerations,” J. Opt. Soc. Am. A 26(6), 1444–1457 (2009).

14. M. J. Niedre, R. H. de Kleine, E. Aikawa, D. G. Kirsch, R. Weissleder, and V. Ntziachristos, “Early photon tomography allows fluorescence detection of lung carcinomas and disease progression in mice in vivo,” Proc. Natl. Acad. Sci. U. S. A. 105(49), 19126–19131 (2008).

15. G. M. Turner, A. Soubret, and V. Ntziachristos, “Inversion with early photons,” Med. Phys. 34(4), 1405–1411 (2007).

16. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).

17. Q. Pian, R. Yao, N. Sinsuebphon, and X. Intes, “Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging,” Nat. Photonics 11(7), 411–414 (2017).

18. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23(11), 14424 (2015).

19. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22(14), 16945 (2014).

20. X. Zhao, X. Nie, Z. Yi, T. Peng, and M. O. Scully, “Imaging through scattering media via spatial–temporal encoded pattern illumination,” Photonics Res. 10(7), 1689 (2022).

21. A. Escobet-Montalbán, R. Spesyvtsev, M. Chen, W. A. Saber, M. Andrews, C. S. Herrington, M. Mazilu, and K. Dholakia, “Wide-field multiphoton imaging through scattering media without correction,” Sci. Adv. 4(10), eaau1338 (2018).

22. A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1964–1971.

23. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70–es (2007).

24. M. Z. Ali Bhotto, M. O. Ahmad, and M. N. S. Swamy, “An Improved Fast Iterative Shrinkage Thresholding Algorithm for Image Deblurring,” SIAM J. Imaging Sci. 8(3), 1640–1657 (2015).

25. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019).

26. C. A. Metzler, A. Maleki, and R. G. Baraniuk, “From Denoising to Compressed Sensing,” IEEE Trans. Inform. Theory 62(9), 5117–5144 (2016).

27. A. Beck and M. Teboulle, “A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009).

28. U. S. Kamilov, H. Mansour, and B. Wohlberg, “A Plug-and-Play Priors Approach for Solving Nonlinear Imaging Inverse Problems,” IEEE Signal Process. Lett. 24(12), 1872–1876 (2017).

29. R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-Based Compressive Sensing,” IEEE Trans. Inform. Theory 56(4), 1982–2001 (2010).

30. Q. Fang and D. A. Boas, “Monte Carlo Simulation of Photon Migration in 3D Turbid Media Accelerated by Graphics Processing Units,” Opt. Express 17(22), 20178–20190 (2009).

31. K. Huang, Y. Wang, J. Fang, W. Kang, Y. Sun, Y. Liang, Q. Hao, M. Yan, and H. Zeng, “Mid-infrared photon counting and resolving via efficient frequency upconversion,” Photonics Res. 9(2), 259 (2021).

32. K. Huang, J. Fang, M. Yan, E. Wu, and H. Zeng, “Wide-field mid-infrared single-photon upconversion imaging,” Nat. Commun. 13(1), 1077 (2022).

33. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).

34. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. U.S.A. 109(19), 7175–7180 (2012).

35. T. Yeminy and O. Katz, “Guidestar-free image-guided wavefront shaping,” Sci. Adv. 7(21), eabf5364 (2021).

36. J. Surh, H.-G. Jeon, Y. Park, S. Im, H. Ha, and I. S. Kweon, “Noise Robust Depth from Focus Using a Ring Difference Filter,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), pp. 2444–2453.

37. M. Alayed, M. A. Naser, I. Aden-Ali, and M. J. Deen, “Time-resolved diffuse optical tomography system using an accelerated inverse problem solver,” Opt. Express 26(2), 963–979 (2018).

38. D. J. Cuccia, F. Bevilacqua, A. J. Durkin, and B. J. Tromberg, “Modulated imaging: quantitative analysis and tomography of turbid media in the spatial-frequency domain,” Opt. Lett. 30(11), 1354 (2005).

39. D. B. Lindell and G. Wetzstein, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” Nat. Commun. 11(1), 4517 (2020).

40. S. R. Arridge, M. Schweiger, M. Hiraoka, and D. T. Delpy, “A finite element approach for modeling photon transport in tissue,” Med. Phys. 20(2), 299–309 (1993).

41. M. Schweiger, S. R. Arridge, and I. Nissilä, “Gauss–Newton method for image reconstruction in diffuse optical tomography,” Phys. Med. Biol. 50(10), 2365–2386 (2005).

42. N. Deliolanis, T. Lasser, D. Hyde, A. Soubret, J. Ripoll, and V. Ntziachristos, “Free-space fluorescence molecular tomography utilizing 360° geometry projections,” Opt. Lett. 32(4), 382 (2007).

43. S. K. Biswas, R. Kanhirodan, R. M. Vasu, and D. Roy, “Practical fully three-dimensional reconstruction algorithms for diffuse optical tomography,” J. Opt. Soc. Am. A 29(6), 1017–1026 (2012).

44. Y. Y. Schechner and N. Kiryati, “Depth from defocus vs. stereo: how different really are they?” in Proceedings of the Fourteenth International Conference on Pattern Recognition (IEEE, 1998), Vol. 2, pp. 1784–1786.

45. X. Feng, “Single-photon-single-pixel-imaging,” GitHub (2023), https://github.com/fengxiaohua102/Single-photon-single-pixel-imaging.




