Optica Publishing Group

Non-line-of-sight imaging at infrared wavelengths using a superconducting nanowire single-photon detector

Open Access

Abstract

Non-line-of-sight (NLOS) imaging can visualize a remote object out of the direct line of sight and can potentially be used in endoscopy, unmanned vehicles, and robotic vision. In an NLOS imaging system, multiple diffusive reflections of light usually induce large optical attenuation, and therefore a sensitive and efficient photodetector, or an array thereof, is required. Limited by the spectral sensitivity of the light sensors, up to now, most NLOS imaging experiments have been performed in the visible bands, and a few in the near-infrared, at 1550 nm. Here, to break this spectral limitation, we demonstrate a proof-of-principle NLOS imaging system using a fractal superconducting nanowire single-photon detector, which exhibits intrinsic single-photon sensitivity over an ultra-broad spectral range. We showcase NLOS imaging at two wavelengths, 1560 nm and 1997 nm, both technologically important for specific applications. We develop a de-noising algorithm and combine it with the light-cone-transform algorithm to reconstruct the shape of the hidden objects with significantly enhanced signal-to-noise ratios. We believe that the joint advancement of the hardware and the algorithm presented in this paper could further expand the application spaces of NLOS imaging systems.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Non-line-of-sight (NLOS) imaging [1–6] is an emerging technology that can visualize a remote hidden object out of the direct line of sight of the camera or other imaging devices. The optoelectronic hardware [2,7–9] of an NLOS imaging system detects the diffusively reflected light from a relay surface, and computational algorithms [1,8–16] reconstruct the image of the objects from the detected signals. Such a "seeing-around-the-corner" capability can potentially be used in endoscopy [3], unmanned vehicles [17], and robotic vision [17].

Compared with traditional line-of-sight imaging, one of the prominent challenges of NLOS imaging is the large optical attenuation induced by multiple scattering of light. This challenge poses requirements on both the optoelectronic hardware and the computational algorithms of the NLOS imaging system. The hardware is required to detect the echo photons as efficiently as possible; this efficiency limits the distance of NLOS imaging and affects the time needed to acquire the data for image reconstruction [2]. Additionally, for NLOS imaging based on the time-of-flight (ToF) method, the timing resolution, largely dependent on the timing jitter of the photodetector, should be high enough to achieve a decent spatial resolution [7]. More importantly, the spectral range in which the NLOS imaging system can work is largely determined by the spectral response of the photodetector used, which is further bounded by the long-wavelength limit of semiconductor photodetectors. The algorithms are required to process typically large volumes of data efficiently [1], and sometimes de-noising functionality is needed to enhance the signal-to-noise ratios (SNRs) or signal-to-background ratios (SBRs) of the resulting images [17].

Up to now, most NLOS imaging experiments have been performed in the visible bands [1,4–6,12–14], and a few in the near-infrared [2,7], limited by the spectral responsivity of the photodetectors. Early proof-of-principle NLOS imaging experiments were performed in the visible bands to demonstrate the effectiveness of various reconstruction algorithms [18,19]. The photodetectors used in these experiments were silicon single-photon avalanche diodes (SPADs), which cannot detect photons with wavelengths beyond 1.1 $\mathrm{\mu}$m. Recently, researchers made notable advancements by using InGaAs/InP SPADs and up-conversion single-photon detectors, extending the working wavelength of NLOS imaging to the near-infrared. In particular, imaging over a distance of 1.43 km was demonstrated in Ref. [2], and superior spatial resolution was shown in Ref. [7]. A few experiments on passive NLOS imaging were performed at mid- [20] and long-wave IR [21] using thermal cameras.

Moving towards longer wavelengths to perform NLOS imaging has several advantages. From a technological point of view, the near-IR NLOS imaging that has been demonstrated benefits from eye safety [22], low atmospheric losses [23], and a plethora of relatively mature optoelectronic components. At even longer wavelengths, towards the mid-IR, potential applications can benefit from even lower background irradiance from sunlight [24] and a high SBR in biological imaging [25].

In this paper, we demonstrate NLOS imaging at two wavelengths, 1560 nm and 1997 nm, using a fractal superconducting nanowire single-photon detector (SNSPD) [26,27]. Unlike silicon and InGaAs/InP SPADs, SNSPDs feature ultra-broad spectral single-photon sensitivity, ranging from X-rays [28] to 10-$\mathrm{\mu}$m mid-IR [29]. They outperform other single-photon detectors in terms of detection efficiency in the near- and mid-IR spectral ranges. Recently, fractal SNSPDs were proposed [30], demonstrated [31], and improved [32] to significantly reduce the dependence of detection efficiency on the polarization state of the incident photons. Furthermore, the timing jitter of SNSPDs can be as low as a few picoseconds [33,34]. These combined advantages of broad spectral response with single-photon sensitivity, high detection efficiency, low polarization sensitivity, and low timing jitter make them ideal candidates as the sensing elements in near- and mid-IR NLOS imaging systems, and also motivate this work. Additionally, for cases where the hardware of the NLOS imaging system yields lower SNRs due, for example, to limited optical power, reduced detection efficiency, a longer distance, strong atmospheric scattering, and/or a large number of ambient photons, we develop a de-noising algorithm and combine it with the light-cone-transform (LCT) algorithm [1] to reconstruct the shape of the hidden object with enhanced SNRs. The joint advancement of the hardware and the algorithm may further expand the application spaces of NLOS imaging systems.

2. NLOS imaging setup

Figure 1(a) presents the experimental setup for NLOS imaging. We used a femtosecond fiber laser with a central wavelength of 1560 nm, or another with a central wavelength of 1997 nm, as the source. At each wavelength, the light was split into two parts by a 99:1 fiber directional coupler. 1% of the light was sent to a fast photodetector, providing the start signal for the time-to-amplitude converter (TAC); 99% of the light was collimated by a collimator (C). The focal lengths of the collimators for the wavelengths of 1560 nm and 1997 nm were 8.18 mm and 5.91 mm, respectively. The collimated beam passed through a mirror with a hole and was reflected by a beam-steering mirror (BSM) to perform raster scanning. The average optical power reflected from the BSM was 16 mW and 2.2 mW for the wavelengths of 1560 nm and 1997 nm, respectively. After being scattered by the relay wall, light illuminated the hidden scene. The echo light was re-scattered by the relay wall, propagated back along the same optical path, and was then coupled into a single-mode fiber (SMF) by another collimator with the same focal length. The illumination point and the field of view (FoV) on the relay wall of our setup were slightly misaligned to avoid the strong first-scattered echo light from the relay wall that would otherwise latch the SNSPD. Finally, the echo light was detected by the fractal SNSPD, and its output pulses were amplified by a low-noise cryogenic amplifier and then sent into the TAC as the stop signal for coincidence counting.


Fig. 1. Schematics of the NLOS imaging system with a fractal SNSPD. (a) Schematics of the experimental setup. (b) Scanning-electron micrograph of the fractal SNSPD. The white dotted box presents the second-order Peano structure. (c) Measured polarization-maximum and minimum system detection efficiency (SDE) of the fractal SNSPD at the wavelength of 1560 nm. (d) An example histogram of the coincidences at 1560 nm. The full width at half maximum (FWHM) of the direct peak is 52 ps. (e) Measured polarization-maximum and minimum SDE of the fractal SNSPD at the wavelength of 1997 nm. (f) An example histogram of the coincidences at 1997 nm. The FWHM of the direct peak is 60 ps.


One of the key features of the experimental setup is the use of the SNSPD as the light-sensing element. The SNSPD in the fractal design combines the merits of high system detection efficiency (SDE), low polarization sensitivity, and high timing resolution [32,35]; additionally, it intrinsically has broadband single-photon sensitivity. These advantages make fractal SNSPDs suitable for application in LiDAR [36–38] and beyond, for example, the NLOS imaging demonstrated in this work. Figure 1(b) presents the scanning-electron micrograph (SEM) of the fractal SNSPD; the white dashed box shows the second-order Peano fractal nanowires, with a width of 40 nm. Two such structures were connected in parallel, functioning as a two-superconducting-nanowire avalanche photodetector (2-SNAP) [35]. The entire photosensitive area consists of 16 cascaded 2-SNAPs and covers 12.16 µm$\times$12.16 µm. It is the fractal geometry that almost completely eliminates the polarization dependence of the SDE that most meander SNSPDs exhibit. Figure 1(c) presents the measured SDE as a function of bias current ($I_{\textrm{b}}$), at the wavelength of 1560 nm and at the base temperature of 2.1 K. At the bias current of 21.4 µA, the polarization-maximum SDE (SDE$_{\textrm{max}}$) reached 77% and the polarization-minimum SDE (SDE$_{\textrm{min}}$) reached 74%, corresponding to a polarization sensitivity of 1.04. The false-count rate (FCR) [32] was $3\times 10^4$ cps. Figure 1(d) presents an example histogram of coincidences measured at a scanning point on the relay wall. The time interval between the direct peak (left peak) and the indirect peak (right peak) was 1.6 ns, which corresponds to twice the ToF of photons from the hidden object to the scanning point on the relay wall.
The full width at half maximum (FWHM) of the direct peak was 52 ps, indicating the temporal resolution of the entire system, which included the contributions from the timing jitter of the SNSPD and from the timing jitter caused by the spatial broadening of the light spot in the FoV. Similarly, we also characterized the fractal SNSPD at the wavelength of 1997 nm. Figure 1(e) presents the measured SDE as a function of $I_{\textrm{b}}$, at the wavelength of 1997 nm. At the bias current of 21.4 µA, SDE$_{\textrm{max}}$ reached 3.2% and SDE$_{\textrm{min}}$ reached 3.0%, corresponding to a polarization sensitivity of 1.07, and the FCR was $3\times 10^4$ cps. Figure 1(f) presents an example histogram of coincidences measured at a scanning point on the relay wall. The time interval between the direct peak and the indirect peak was 1.8 ns. The FWHM of the direct peak was 60 ps.
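The peak separation in these histograms maps directly to the wall-object distance. A minimal sketch of this conversion (assuming $c \approx 3\times 10^8$ m/s in air; the function name is our own):

```python
C = 3.0e8  # speed of light [m/s]

def peak_separation_to_distance(dt):
    """Convert the direct/indirect peak separation, which equals twice the
    one-way time of flight from the relay wall to the hidden object, into
    the wall-object distance."""
    return C * dt / 2

# the 1.6-ns separation at 1560 nm corresponds to roughly 0.24 m,
# and the 1.8-ns separation at 1997 nm to roughly 0.27 m
```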

3. NLOS imaging results at the wavelength of 1560 nm

We first demonstrate NLOS imaging at the wavelength of 1560 nm. We used two types of hidden objects: (1) three letters, T, J, and U, made of retroreflective tapes [Fig. 2(a)], and (2) a wooden puppet [Fig. 2(b)]. The three letters T, J, and U were placed parallel to the relay wall, 39 cm, 31 cm, and 25 cm away from it, respectively. The NLOS imaging system performed a raster scanning of $64\times 64$ grids over a $0.4\,\textrm {m}\times 0.4\,\textrm {m}$ region on the relay wall, with a pixel-dwell time of 100 ms for each pixel. The $1/e^2$ waist diameter of the collimated beam was 1.6 mm and the full-angle divergence was 0.073$^{\circ }$. Consequently, a set of temporal histograms was acquired by the TAC. Every single histogram contained 4096 time bins with a bin size of 4 ps. Figure 2(c) presents the raw ToF data of the measurements. Figure 2(d) presents the reconstructed albedo volume using the light-cone transform (LCT) algorithm [1]. Appendix A briefly summarizes the LCT algorithm. The color represents the reconstructed albedo of each point in the volume of the hidden object. The three letters, T, J, and U, were clearly recognized, and their relative spatial locations were correctly reconstructed. Similarly, we replaced the retroreflective letters with the second object, the puppet, which was placed 20 cm away from the relay wall. Figure 2(e) presents the raw ToF data of the measurements. Our hardware and the LCT algorithm also successfully imaged the puppet. Figure 2(f) presents the reconstructed albedo volume.


Fig. 2. NLOS imaging results at the wavelength of 1560 nm. (a) Photograph of the hidden objects. Three letters, T, J, and U, made of retro-reflective tapes, are placed 39 cm, 31 cm, and 25 cm away from the relay wall, respectively. (b) Photograph of a hidden object, a puppet. (c) Raw data with dimensional sizes of 64, 64, and 1024 for $x'$, $y'$, and $t$, respectively. (d) Reconstructed results of the three letters using the LCT algorithm. (e) Raw data with dimensional sizes of 64, 64, and 1024 for $x'$, $y'$, and $t$, respectively. (f) Reconstructed results of the puppet using the LCT algorithm.


4. De-noising algorithm and imaging results at the wavelength of 1997 nm

We then demonstrate NLOS imaging at a longer wavelength of 1997 nm. A letter, U, made of retroreflective tapes was used as the hidden object. The $1/e^2$ waist diameter of the collimated beam was 1.2 mm and the full-angle divergence was 0.13$^{\circ }$. Due to the reduced SDE at 1997 nm and the lower optical power, the SNR of the measurements decreased, and we observed that the results reconstructed using the Wiener filter presented a significant amount of artifacts, as shown in the middle panel of Fig. 3(a). The likely reason for these artifacts is the following: the Wiener filter is optimal for de-noising only when both the signal and the noise are Gaussian and independent. However, this assumption is too strong for NLOS imaging, where the signals are coded by the ToF information of the hidden objects and the noise follows a Poissonian distribution. Poissonian noise has an asymmetric probability distribution with signal-dependent variance and cannot be sufficiently suppressed in a Wiener reconstruction designed under the signal-independent Gaussian assumption. One possible remedy is to use a variance-stabilizing transformation (VST) to transform Poissonian data into Gaussian-distributed data before reconstruction. However, VST performs satisfactorily only when each bin collects more than 20 photon counts [39], which is not the case in our NLOS imaging setting. Note that the temporal histograms of the indirect peaks are bell-like pulses with varying widths, which can be well modeled by a wavelet basis. Therefore, we propose removing the noise with a wavelet de-noiser applied to the normalized histograms before reconstruction.


Fig. 3. Flowchart of the proposed de-noising algorithm and the reconstruction results of the NLOS imaging at the wavelength of 1997 nm. (a) Photograph of the scene and reconstructed images using the standard LCT algorithm and our proposed algorithm; (b) The flowchart of our proposed algorithm; (c) Histograms at the marked steps. ZSN: Z-score normalization; IZSN: inverse ZSN.


The overall procedure of our reconstruction algorithm is depicted in Fig. 3(b). First, to remove the direct peak in the histograms of coincidences, we replaced the corresponding portion of the histogram with Poissonian noise simulated with statistics estimated from the segment of the histogram containing only noise, since zero padding [1] would introduce artificial discontinuities. Accordingly, Fig. 3(c)-(i) shows the original histogram, and Fig. 3(c)-(ii) shows the histogram after replacing the direct peak. For a stable data range, we applied Z-score normalization (ZSN, Appendix B) so that each histogram signal has zero mean and unit variance, as shown in Fig. 3(c)-(iii). We de-noised each normalized histogram with wavelet thresholding, as shown in Fig. 3(c)-(iv), and then applied inverse ZSN (IZSN) to map the de-noised data back to its original range, as shown in Fig. 3(c)-(v).

Note that the wavelet-based de-noiser mainly removes high-frequency noise. Non-negative Poissonian noise has a direct-current component, induced by dark counts, that cannot be removed by wavelet thresholding. This component would be projected into the reconstructed volume as severe artifacts, as shown in the middle panel of Fig. 3(a). Therefore, we applied an operation of so-called dark-count compensation (DCC) by subtracting the noise's direct-current component, estimated from the histogram segment containing only noise, as shown in Fig. 3(c)-(vi). Additionally, the de-noised histograms were clipped to ensure non-negativity. After de-noising each histogram of measurement, $\mathbf {\tau }$, we applied the Wiener de-convolution in Eq. (9) to reconstruct the hidden albedo. As shown in Fig. 3(c)-(ii) and 3(c)-(vi), the noise in the histogram has been effectively suppressed. Consequently, as shown in Fig. 3(a), the final 3D volume reconstructed by our method is much cleaner than the one reconstructed by the standard LCT method, in which notable artifacts are distributed over the entire volume. Appendix B presents more details about the de-noising algorithm.
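The per-histogram pipeline described above can be sketched in Python. The sketch below is illustrative rather than the authors' exact code: it uses a hand-rolled Haar wavelet with soft thresholding, and the threshold choice (the universal threshold estimated from the finest-scale details) is our assumption.

```python
import numpy as np

def haar_denoise(x, levels=4, thresh=None):
    """Soft-threshold the detail coefficients of a multilevel Haar DWT.
    Illustrative stand-in for a general wavelet de-noiser."""
    n = len(x)
    assert n % (2 ** levels) == 0
    approx = x.astype(float).copy()
    details = []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)
        approx = a
    if thresh is None:
        # universal threshold, noise level estimated from finest details
        sigma = np.median(np.abs(details[0])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(n))
    details = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0) for d in details]
    for d in reversed(details):
        up = np.empty(2 * len(approx))
        up[0::2] = (approx + d) / np.sqrt(2)
        up[1::2] = (approx - d) / np.sqrt(2)
        approx = up
    return approx

def denoise_histogram(hist, direct_slice, noise_slice, rng=None):
    """Full pipeline: direct-peak replacement -> ZSN -> wavelet
    thresholding -> inverse ZSN -> dark-count compensation -> clipping."""
    rng = np.random.default_rng(rng)
    h = hist.astype(float).copy()
    # (i) replace the direct peak with simulated Poissonian noise whose
    # rate is estimated from a noise-only segment of the histogram
    lam = h[noise_slice].mean()
    h[direct_slice] = rng.poisson(lam, size=h[direct_slice].size)
    # (ii) Z-score normalization
    mu, sd = h.mean(), h.std()
    z = (h - mu) / sd
    # (iii) wavelet soft-thresholding on the normalized signal
    z = haar_denoise(z)
    # (iv) inverse ZSN
    h = z * sd + mu
    # (v) dark-count compensation: subtract the DC noise floor
    h -= lam
    # (vi) clip to enforce non-negativity
    return np.clip(h, 0, None)
```

The de-noised histograms would then be fed to the Wiener de-convolution step in place of the raw measurements.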

We compared our proposed algorithm, the standard LCT, and LCT with 3D smoothing on data with four SNR levels. To this end, the same scene, the letter U, was scanned with four pixel-dwell times: 5 ms, 10 ms, 20 ms, and 100 ms. The 3D smoothing was applied with a 3D Gaussian filter with the bandwidth parameter $\mathrm {\sigma }=1.0$ to the LCT reconstruction. To quantitatively evaluate the SNR of the reconstructed albedos, we used an albedo with high SNR as the ground truth (GT), shown in Fig. 4(a); specifically, we used the one taken at 1560 nm with a 100-ms pixel-dwell time. Then, for each reconstructed albedo taken at 1997 nm, we calculated the root mean square error (RMSE) relative to the GT. RMSE is defined as

$$\textrm{RMSE} = \sqrt{\sum_{i=1}^{n_x}\sum_{j=1}^{n_y}\frac{(\hat{D}_{ij}-D_{ij})^2}{n_xn_y}},$$
where $i$ is the index of $x$-axis, $j$ is the index of $y$-axis, $\hat {D}_{ij}$ and $D_{ij}$ are the depth values of the element with maximum albedo at the same pixel $(i,j)$ in the reconstructed results and the GT, respectively. Note that only pixels in the ideal U-shape region of the GT were selected for the RMSE calculation.
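This depth-map RMSE can be computed directly once the per-pixel depths of maximum albedo are extracted. A numpy sketch (the array layout, with depth along the first axis, and the function name are our own choices; averaging is restricted to the region-of-interest mask, as in the paper):

```python
import numpy as np

def depth_rmse(albedo_rec, albedo_gt, z_vals, mask):
    """RMSE between depth maps, where each pixel's depth is the z-value of
    the voxel with maximum albedo. `albedo_*` have shape (nz, ny, nx),
    `z_vals` has shape (nz,), and `mask` selects the pixels to evaluate
    (e.g. the ideal U-shape region of the ground truth)."""
    d_rec = z_vals[np.argmax(albedo_rec, axis=0)]  # (ny, nx) depth map
    d_gt = z_vals[np.argmax(albedo_gt, axis=0)]
    diff = (d_rec - d_gt)[mask]
    return np.sqrt(np.mean(diff ** 2))
```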


Fig. 4. Comparison of the reconstructed albedos at four pixel-dwell times for letter "U", using our wavelet de-noising (WD) method with LCT, the standard LCT method, and LCT with 3D smoothing. (a) We used the reconstructed albedo, taken at 1560 nm and with 100 ms pixel-dwell time, reconstructed by the standard LCT method, as the "ground truth" (GT). (b) The reconstructed albedos by WD with LCT. (c) The reconstructed albedos by standard LCT method. (d) The reconstructed albedos by LCT with 3D smoothing. The root mean square error (RMSE), compared with the GT, is presented for each reconstructed scene. All the reconstructed scenes share the color bar in (a).


As compared in Fig. 4(b) and (c), for each pixel-dwell time, the albedo reconstructed by our method shows a significantly reduced RMSE compared with the one reconstructed by the standard LCT method. In particular, when the pixel-dwell time is short, meaning that the SNR of the original data obtained by the hardware is relatively low, the enhancement of the SNR by our de-noising algorithm is more prominent. For example, at the pixel-dwell time of 5 ms, the RMSE of the image reconstructed by our method is 1.25 mm, which is only 12.1% of the RMSE of the image reconstructed by the standard LCT method (10.29 mm). Although LCT with 3D smoothing [Fig. 4(d)] also reduces the noise and therefore the RMSE compared with the standard LCT [Fig. 4(c)], the background noise cannot be suppressed because the direct-current component induced by the noise counts in the histograms falls into the passband of the 3D Gaussian lowpass filter. In contrast, our proposed method achieves a better reconstruction by removing the noise counts and the Poissonian noise from the histograms, preventing noise from propagating into the reconstruction. Thus, our algorithm effectively reduces the noise of the reconstructed scene at a given pixel-dwell time and provides opportunities to accelerate NLOS imaging while maintaining a decent SNR of the reconstructed scene.

5. Discussions

First, in our experimental demonstration, the SNR at the wavelength of 1997 nm is relatively low compared with the SNR at 1560 nm. One reason is that the SDE of the SNSPD at 1997 nm is lower; further enhancing the SDE at the longer wavelength would accordingly increase the SNR. We calculated the SNR at three representative wavelengths, 520 nm, 1560 nm, and 1997 nm, under daylight conditions and assuming that the other experimental parameters are the same.

For an NLOS imaging system, the received signals attenuate rapidly with the distance, $D$, from the NLOS imaging system to the relay wall and the distance, $d$, from the relay wall to the hidden object. Furthermore, in an outdoor environment, ambient photons can couple into the detector. Here, we consider a confocal NLOS imaging system with an ideal bandpass filter of width $\Delta \mathrm {\lambda }$ centered at the working wavelength, $\mathrm {\lambda }$. The signal photons detected per second by the detector can be calculated by [42]

$$N_{\mathrm{s}}=\frac{\mathrm{\lambda} P_{\mathrm{out}}}{\mathrm{hc}}\mathrm{\alpha}_{\mathrm{obj}}\mathrm{\alpha}_{\mathrm{wall}}^{2}\cos\mathrm{\theta}_{\mathrm{wall}}\frac{A_{\mathrm{obj}}A_{\mathrm{FoV}}A_{\mathrm{R}}}{\mathrm{\pi}^{3}d^{4}D^{2}}\mathrm{\eta}_{\mathrm{d}}\mathrm{\eta}_{\mathrm{R}}\mathrm{\eta}_{\mathrm{A}}^{2},$$
where $P_{\mathrm {out}}$ is the average optical power of the emitted laser pulse, $\mathrm {\alpha }_{\mathrm {obj}}$ is the reflectivity of the hidden object, $\mathrm {\alpha }_{\mathrm {wall}}$ is the reflectivity of the relay wall, $\mathrm {\theta }_{\mathrm {wall}}$ is the angle between the normal of the relay wall and the axis of the emitted laser beam, $A_{\mathrm {obj}}$ is the area of the hidden object, $A_{\mathrm {FoV}}$ is the FoV area, which relates to the half-angle divergence of the receiving optics, $\mathrm {\theta }_{\mathrm {dev}}$, and $D$ as $A_{\mathrm {FoV}} = \mathrm {\pi }(\mathrm {\theta }_{\mathrm {dev}}D)^2$, $A_{\mathrm {R}}$ is the area of the telescope aperture, $\mathrm {\eta }_{\mathrm {d}}$ is the SDE of the detector, $\mathrm {\eta }_{\mathrm {R}}$ is the transmittance of the receiving system, and $\mathrm {\eta }_{\mathrm {A}}$ is the transmittance of air. The noise counts per second can be calculated by [42]
$$N_{\mathrm{n}}=\frac{\mathrm{\lambda} E_{\mathrm{sun}}\Delta\mathrm{\lambda} A_{\mathrm{FoV}}}{\mathrm{hc}}\cos\mathrm{\theta}_{\mathrm{sun}}\mathrm{\alpha}_{\mathrm{wall}}\cos\mathrm{\theta}_{\mathrm{wall}}\frac{A_{\mathrm{R}}}{\mathrm{\pi} D^{2}}\mathrm{\eta}_{\mathrm{d}}\mathrm{\eta}_{\mathrm{R}}\mathrm{\eta}_{\mathrm{A}}+\mathrm{DCR},$$
where $E_{\mathrm {sun}}$ is the solar spectral irradiance, shown as the black line in Fig. 5(a), and $\mathrm {\theta }_{\mathrm {sun}}$ is the solar incidence angle.
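The two count-rate expressions above translate directly into code. A Python sketch follows; note that the parameter values used in the test are placeholders for illustration, not the entries of Table 1.

```python
import math

H, C = 6.626e-34, 3.0e8  # Planck constant [J s], speed of light [m/s]

def signal_counts(lam, P_out, a_obj, a_wall, th_wall, A_obj, A_R,
                  th_dev, d, D, eta_d, eta_R, eta_A):
    """Detected signal photons per second for a confocal NLOS system,
    following the expression for N_s above."""
    A_fov = math.pi * (th_dev * D) ** 2  # FoV area on the relay wall
    return (lam * P_out / (H * C) * a_obj * a_wall ** 2
            * math.cos(th_wall)
            * A_obj * A_fov * A_R / (math.pi ** 3 * d ** 4 * D ** 2)
            * eta_d * eta_R * eta_A ** 2)

def noise_counts(lam, E_sun, dlam, th_sun, a_wall, th_wall, A_R,
                 th_dev, D, eta_d, eta_R, eta_A, dcr):
    """Detected noise counts per second (solar background plus dark
    counts), following the expression for N_n above."""
    A_fov = math.pi * (th_dev * D) ** 2
    return (lam * E_sun * dlam * A_fov / (H * C)
            * math.cos(th_sun) * a_wall * math.cos(th_wall)
            * A_R / (math.pi * D ** 2)
            * eta_d * eta_R * eta_A + dcr)
```

The SNR plotted in Fig. 5 is then simply `signal_counts(...) / noise_counts(...)`; note the characteristic $1/d^4$ scaling of the signal.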


Fig. 5. Solar irradiance and the simulated SNR of NLOS measurements. (a) The black line shows the spectrum of solar irradiance, from ASTM [40]. The blue line shows the spectrum of atmosphere absorption, from HITRAN 2016 [41]; (b) Simulated SNR as a function of $D$; (c) Simulated SNR as a function of $d$.


Figure 5(b) and (c) present the calculated $\mathrm {SNR}=N_{\mathrm {s}}/N_{\mathrm {n}}$ as a function of $D$ and $d$, respectively, at the three wavelengths, 520 nm, 1560 nm, and 1997 nm. The parameters used in the calculation are listed in Table 1. Our SNR computation assumes that the optical absorption of the atmosphere, induced by air molecules and aerosols [data shown as the blue line in Fig. 5(a)], is approximately the same at each wavelength; therefore, the transmittance is $\mathrm {\eta }_{\mathrm {A}}=0.99$. Thanks to the lower background irradiance from sunlight, the NLOS imaging system would have the highest SNR at the wavelength of 1997 nm among the three cases, assuming the same set of hardware parameters.


Table 1. The parameters used in the calculations of signal-to-noise ratios

Note that the roughness of the relay wall and of the hidden object affects the solid angle of the scattered light and, ultimately, the magnitude of the optical attenuation. At longer wavelengths, for example, in the mid- and long-wave IR, the relay wall produces stronger specular reflections than the diffuse reflections that dominate at shorter wavelengths [21]. The specular nature of the relay wall reduces the optical attenuation of the signal light. Furthermore, the forward model used here assumes diffuse reflection, and we speculate that this assumption would need to be carefully re-evaluated and justified if the wavelength were further extended to the long-wave IR.

Second, we measured and evaluated the lateral resolution at the wavelengths of 1560 nm and 1997 nm. The lateral resolution of NLOS imaging is limited by the temporal characteristics of the detectors, the size of the FoV, the distance, $d$, from the relay wall to the hidden object, and the width of the scanning area on the relay wall. The theoretical lateral resolution, $R_x$, follows a relation given by [1]

$$\sqrt{w^2+d^2}-\sqrt{(w-R_x)^2+d^2}=\frac{c \times \gamma}{2},$$
where $w$ is the half width of the scanning area, and $\gamma$ is the timing jitter of the system.

Figure 6 presents the measurements of the lateral resolution at both 1560 nm and 1997 nm. We used a chart consisting of 5 strips with different spaces between adjacent ones [1,7]. As shown in Fig. 6(b)-(e), the gaps between the strips with spaces of 4 cm, 3 cm, and 2 cm can be resolved, while the strips with the space of 1 cm cannot. Therefore, based on the measurements, we estimated the resolution to be between 1 cm and 2 cm. With the following experimental parameters, $w=0.2$ m, $d=0.27$ m, $\gamma = 52\,\mathrm {ps}$ for 1560 nm and $\gamma = 60\,\mathrm {ps}$ for 1997 nm, we calculated the theoretical lateral resolutions, $R_x$, to be $13.4\,\mathrm {mm}$ and $15.5\,\mathrm {mm}$, for 1560 nm and 1997 nm, respectively.
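The resolution relation above can be inverted in closed form for $R_x$. The sketch below does so and reproduces the quoted 13.4-mm and 15.5-mm values from the stated parameters (assuming $c = 3\times 10^8$ m/s):

```python
import math

C = 3.0e8  # speed of light [m/s]

def lateral_resolution(w, d, gamma):
    """Solve sqrt(w^2 + d^2) - sqrt((w - R_x)^2 + d^2) = c*gamma/2
    for R_x, where w is the half width of the scanning area, d the
    wall-object distance, and gamma the system timing jitter."""
    rhs = math.sqrt(w ** 2 + d ** 2) - C * gamma / 2
    return w - math.sqrt(rhs ** 2 - d ** 2)
```

With $w=0.2$ m and $d=0.27$ m, jitters of 52 ps and 60 ps yield approximately 13.4 mm and 15.5 mm, respectively, consistent with the values in the text.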


Fig. 6. Measurements of the lateral resolutions at 1560 nm and 1997 nm. (a) Photograph of the resolution chart. The spaces between adjacent strips are set to be 1 cm, 2 cm, 3 cm, and 4 cm; (b) The front view of the reconstructed result at 1560 nm; (c) The reconstructed albedo along $y$-direction of the blue dotted line in (b) and the multi-peak fitting results; (d) The front view of the reconstructed result at 1997 nm; (e) The reconstructed albedo along y-direction of the blue dotted line in (d) and the multi-peak fitting results.


To further illustrate how the characteristics of single-photon detectors impact the performance of NLOS imaging systems, we numerically simulated the histograms and reconstructed scenes for three cases: (1) using an "ideal" single-photon detector, (2) using an SNSPD, and (3) using an InGaAs/InP SPAD. The scene used for the simulation is a circular disc with a radius of 1 cm at a distance of 0.5 m from the relay wall [Fig. 7(a)], and the characteristics of the single-photon detectors at 1550 nm are listed in Table 2. We note that, for timing jitter, we only took into account the timing jitter of the single-photon detectors, without considering the timing jitter caused by the spatial broadening of the light spot in the FoV. The photon flux of the indirect light was set to $4 \times 10^4$ photons per second. The scan comprised $64\times 64$ points over an area of 0.5 m $\times$ 0.5 m, with a pixel-dwell time of 100 ms. The forward model, shown in Eq. (5) in Appendix A, was used to generate the ToF of the indirect light received at each scanning point, and the method of Ref. [44] was used to generate the histograms. Figure 7(b), (c), and (d) present the simulated histograms detected by the "ideal" detector, the SNSPD, and the SPAD, respectively. Figure 7(e), (f), and (g) present the corresponding NLOS reconstructions. These simulations illustrate that the timing jitter affects the spatial resolution, while the SDE and DCR affect the SNR of the reconstructed images.


Fig. 7. Numerical simulation of NLOS imaging using different detectors. (a) The hidden scene used for the simulation is a circular disc with a radius of 1 cm at a distance of 0.5 m from the relay wall. (b), (c), and (d) Simulated histograms detected by the "ideal" detector, the SNSPD, and the SPAD, respectively. (e), (f), and (g) The reconstructed NLOS images corresponding to the measurements by the "ideal" detector, the SNSPD, and the SPAD, respectively. All the reconstructions share the color bar on the right.



Table 2. The parameters used in the numerical simulations in Fig. 7

Finally, our de-noising algorithm is quite general, in the sense that it can be combined not only with LCT but also with other reconstruction algorithms to improve the SNR. As an example, we applied our de-noising algorithm to the phasor-field method [13]; the results are presented in Appendix B. In Fig. 8, one can see that the wavelet de-noising further reduces the RMSE.


Fig. 8. Comparison of the reconstructed albedos at four pixel-dwell times for letter "U", using the phasor-field method without and with WD. (a) We used the reconstructed albedo, taken at 1560 nm and with 100 ms pixel-dwell time, reconstructed by the standard Rayleigh–Sommerfeld diffraction (RSD) method, as the "ground truth". (b) The reconstructed albedos by standard RSD. (c) The reconstructed albedos by RSD with our WD method. The root mean square error (RMSE), compared with the GT, is presented for each reconstructed scene. All the reconstructed scenes share the color bar in (a).


6. Conclusions

In summary, using a fractal SNSPD as the light-sensing element, we have demonstrated NLOS imaging at two infrared wavelengths, 1560 nm and 1997 nm. We have improved the LCT algorithm by de-noising the received signals in the wavelet domain. This improved algorithm reduces the artifacts generated by the standard LCT method and enhances the SNR, and is thereby very useful in cases where the received echo light is faint. Based on this work, several further extensions are possible: (1) NLOS imaging can be demonstrated at other wavelengths of interest using fractal SNSPDs. (2) Replacing the single-element SNSPD with an array composed of multiple SNSPDs and coupling the array with a multi-mode fiber can increase the collection efficiency of the echo photons, reduce the scanning time, and/or increase the SNR. (3) Combining the fractal SNSPD with reconstruction algorithms such as f-k migration [12] and D-LCT [11] may further improve the imaging quality and the robustness of the entire system.

Appendix A: light-cone transform algorithm

The light-cone transform (LCT) algorithm [1] was used to reconstruct the albedo of the hidden objects from the measured ToF data. Several assumptions are made in the LCT algorithm [1]: (1) only a single scattering event in the hidden scene is taken into account; (2) the reflectivity of the hidden objects is the same in every direction; and (3) the hidden objects are not occluded.

The NLOS measurements, denoted by $\mathrm {\tau }$, are a set of temporal histograms, each of which is collected from the associated scanning points $\{(x', y')\}$ on the relay wall at $z'=0$:

$$\begin{aligned} & \mathrm{\tau}(x',y',t)=\iiint_{\mathrm{\Omega}}{\quad\frac{1}{r^4}\mathrm{\rho}(x,y,z)\cdot} \\ & {\quad \quad \mathrm{\delta}(2\sqrt{(x'-x)^2+(y'-y)^2+z^2}-t\textrm{c})\textrm{d}x\textrm{d}y\textrm{d}z}, \end{aligned}$$
where $\mathrm {\rho }$ denotes the three-dimensional albedo volume of the hidden scene, and $\textrm {c}$ is the speed of light in vacuum. Assuming diffuse reflection, the factor $1/r^4 = (2/(t\textrm {c}))^4$ accounts for the radiometric decay of the photons over the two diffuse bounces off the relay wall. The Dirac delta function $\mathrm {\delta }(\cdot )$ samples the 3D locations sharing the same round-trip time $t$, which form the surface of a spatio-temporal cone.
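As a numerical illustration of Eq. (5), the discrete forward model below places the echo from a single hidden point target into the time bin matching its round-trip delay, weighted by the $1/r^4$ falloff. This is only a minimal sketch under assumptions of our own: the function name, bin layout, and point-target scene are not from the paper.

```python
import numpy as np

C = 3e8  # speed of light in vacuum (m/s)

def confocal_histogram(xp, yp, point, albedo, n_bins, bin_width):
    """Discrete sketch of Eq. (5) for one hidden point target: the echo
    falls into the time bin matching the round-trip time of the two
    bounces, scaled by the 1/r^4 radiometric decay (hypothetical grid)."""
    x, y, z = point
    r = np.sqrt((xp - x) ** 2 + (yp - y) ** 2 + z ** 2)
    t = 2.0 * r / C                    # round-trip time via the relay wall
    hist = np.zeros(n_bins)
    k = int(t / bin_width)             # time bin hit by the echo
    if k < n_bins:
        hist[k] += albedo / r ** 4     # diffuse-reflection falloff
    return hist
```

Summing such histograms over all points of the albedo volume reproduces the integral of Eq. (5) on a discrete grid.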

Recovering $\mathrm {\rho }$ from the measurements $\mathrm {\tau }$ is an ill-posed inverse problem. We estimate $\mathrm {\rho }$ by solving the regularized least-squares problem

$$\begin{aligned} \operatorname{minimize}_{\mathrm{\rho}} & \left\|\mathrm{\tau}(x',y',t) - \iiint_{\mathrm{\Omega}}{\frac{1}{r^4}\mathrm{\rho}(x,y,z)}\mathrm{\delta}\left(2\sqrt{(x'-x)^2+(y'-y)^2+z^2}-t\textrm{c}\right)\textrm{d}x\textrm{d}y\textrm{d}z\right\|^{2} \\ & + \frac{1}{\mathrm{\alpha}} \iiint_{\mathrm{\Omega}}{\mathrm{\rho}^2(x,y,z)\textrm{d}x\textrm{d}y\textrm{d}z}. \end{aligned}$$

O’Toole et al. [1] solved the problem in Eq. (6) efficiently using the LCT. With the variable substitutions $z=\sqrt {u}$, for which $\textrm {d}z/\textrm {d}u=1/(2\sqrt {u})$, and $v=(t\textrm {c}/2)^2$, Eq. (5) can be rewritten as

$$\underbrace{v^{3 / 2} \mathrm{\tau}\left(x^{\prime}, y^{\prime}, 2 \sqrt{v} / c\right)}_{\boldsymbol{T}_{t}\mathbf{\tau}\left(x^{\prime}, y^{\prime}, v\right)}= \iiint_{\mathrm{\Omega}} \underbrace{\frac{1}{2 \sqrt{u}} \mathrm{\rho}(x, y, \sqrt{u})}_{\boldsymbol{T}_{z}\boldsymbol{\mathrm{\rho}}(x, y, u)} \underbrace{\mathrm{\delta}\left(\left(x^{\prime}-x\right)^{2}+\left(y^{\prime}-y\right)^{2}+u-v\right)}_{h\left(x^{\prime}-x, y^{\prime}-y, v-u\right)} \mathrm{d}x \mathrm{d}y \mathrm{d}u.$$

This equation is essentially a 3D convolution: $\boldsymbol {T}_{t}\mathbf {\tau }=h*\boldsymbol {T}_{z}\boldsymbol{\mathrm{\rho }}$, where the function $h$ is the 3D convolution kernel, and the transforms $\boldsymbol {T}_{z}$ and $\boldsymbol {T}_{t}$ represent re-sampling and scaling along the $z$-axis and the $t$-axis, respectively.

Let $\boldsymbol {H}$ denote the matrix that performs 3D convolution with the kernel $h$. The discretized form of Eq. (6) can then be written in matrix notation as

$$\operatorname{minimize}_{\mathrm{\rho}} \left\|\boldsymbol{T}_t\mathbf{\tau} - \boldsymbol{HT}_z \mathbf{\rho}\right\|^2_2 + \frac{1}{\mathrm{\alpha}} \|\boldsymbol{\mathrm{\rho}}\|^2_2,$$
whose closed-form solution is the Wiener filter described as:
$$\boldsymbol{\mathrm{\rho}}^{{\star}} = \boldsymbol{T}_z^{\top}\Big( \boldsymbol{H}^\top\boldsymbol{H} + \frac{1}{\mathrm{\alpha}} \boldsymbol{I} \Big)^{{-}1} \boldsymbol{H}^\top \boldsymbol{T}_t\mathbf{\tau},$$
where $\mathrm {\alpha }$ is the regularization parameter associated with the SNR level of the measurements and $\boldsymbol {I}$ is the identity matrix. By exploiting the circulant structure of $\boldsymbol{H}$, the Wiener filter can be implemented more efficiently in the Fourier domain.
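The Fourier-domain form of the Wiener filter in Eq. (8) can be sketched as follows. This is our own minimal version: it assumes the kernel $h$ is already sampled on the measurement grid and acts as a circular convolution, and it omits the $\boldsymbol{T}_z$ and $\boldsymbol{T}_t$ re-sampling steps; the function name is hypothetical.

```python
import numpy as np

def wiener_deconvolve(tau_t, h, alpha):
    """Apply the Wiener filter of Eq. (8) in the Fourier domain:
    rho = F^{-1}[ conj(F[h]) F[tau] / (|F[h]|^2 + 1/alpha) ].
    `tau_t` is the resampled measurement volume T_t*tau and `h` the 3D
    convolution kernel on the same grid (circular convolution assumed)."""
    H = np.fft.fftn(h)                 # transfer function of the kernel
    Tau = np.fft.fftn(tau_t)
    rho_hat = np.conj(H) * Tau / (np.abs(H) ** 2 + 1.0 / alpha)
    return np.real(np.fft.ifftn(rho_hat))
```

With a large $\mathrm{\alpha}$ (high SNR) and a delta kernel, the filter reduces to the identity, which is a convenient sanity check.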

Appendix B: more details about the de-noising algorithm

Here, we provide more implementation details for the flowchart presented in Fig. 3(b) of the main text.

Noise generation. To generate a noise histogram, we need a segment of simulated noise. We compute the mean count $\overline {n}$ over the time bins after the indirect peak in the measured histograms and generate Poissonian noise with mean $\overline {n}$.
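This step can be sketched in a few lines (an assumed interface: `peak_end`, marking the last bin of the indirect peak, and the function name are our own):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_noise(histogram, peak_end, length):
    """Estimate the mean background count n_bar from the time bins after
    the indirect peak, then draw a Poissonian noise segment with that mean."""
    n_bar = histogram[peak_end:].mean()   # background level after the echo
    return rng.poisson(n_bar, size=length)
```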

Wavelet de-noising. We suppress noise in the wavelet domain [46]. Denote by $\mathrm {\tau }_{x',y'}$ the measured histogram at scanning position $(x',y')$. Let $\mathbf {W}$ be the wavelet-transform matrix; the wavelet coefficients of $\mathrm {\tau }_{x',y'}$ are obtained as $z_{x',y'}=\mathbf {W}\mathrm {\tau }_{x',y'}$. We then de-noise the coefficients by soft thresholding, $\hat {z}_{x',y'}=S(z_{x',y'},\varepsilon )$, where $S(\cdot,\varepsilon )$ is the soft-thresholding operator with threshold $\varepsilon$. The de-noised signal is recovered by the inverse wavelet transform, $\hat {\mathrm {\tau }}_{x',y'}=\mathbf {W}^{-1}\hat {z}_{x',y'}$, where $\mathbf {W}^{-1}$ is the inverse-wavelet-transform matrix. In our implementation, we used the symmlet wavelet basis with a three-level decomposition, and the threshold $\varepsilon$ is determined using the VisuShrink method [47], which is given by

$$\varepsilon=\sigma\sqrt{2\ln{N}},$$
where $\sigma$ is the standard deviation of the noise and $N$ is the length of the signal. The noise strength $\sigma$ is estimated as $\frac {\textrm {MAD}}{0.6745}$, where MAD is the median of the absolute values of the wavelet coefficients in the finest sub-band. In our experiments, we found that the average MAD value of 0.5526 yields stable performance, from which we obtain the threshold $\varepsilon =3.0$.
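The soft-thresholding operator $S$ and the VisuShrink rule can be written compactly in NumPy. The wavelet transform itself (e.g. a symmlet decomposition via a package such as PyWavelets) is omitted here; only the thresholding rule is shown, and the function names are our own.

```python
import numpy as np

def soft_threshold(z, eps):
    """S(z, eps): shrink each coefficient toward zero by eps,
    zeroing those whose magnitude is below the threshold."""
    return np.sign(z) * np.maximum(np.abs(z) - eps, 0.0)

def visushrink_threshold(finest_coeffs, n):
    """VisuShrink: eps = sigma * sqrt(2 ln N), with the noise level
    sigma estimated as MAD / 0.6745 from the finest sub-band."""
    sigma = np.median(np.abs(finest_coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(n))
```

With the average MAD of 0.5526 quoted above and $N = 1024$ time bins, this rule gives $\varepsilon = (0.5526/0.6745)\sqrt{2\ln 1024} \approx 3.0$, consistent with the threshold used in the experiment.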

Z-score normalization. We employ z-score normalization (ZSN) to normalize the histograms to zero mean and unit variance. The ZSN is defined as

$$\textrm{ZSN}: \mathrm{\tau} \mapsto (\mathrm{\tau} - \mathrm{\mu})/\mathrm{\sigma},$$
where $\mathrm{\mu}$ and $\mathrm {\sigma }$ are the mean and standard deviation of the input, respectively. These two ZSN parameters are estimated from the whole segment of the input data after removing the direct reflections. The inverse ZSN (IZSN) is written as
$$\textrm{IZSN}: \mathrm{\tau} \mapsto \mathrm{\tau}\mathrm{\sigma}+\mathrm{\mu}.$$
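The ZSN/IZSN pair of Eqs. (10) and (11) is a simple affine map and its inverse; a minimal sketch (function names are our own):

```python
import numpy as np

def zsn(tau, mu, sigma):
    """Z-score normalization: map to zero mean and unit variance."""
    return (tau - mu) / sigma

def izsn(tau, mu, sigma):
    """Inverse ZSN: restore the original mean and scale."""
    return tau * sigma + mu
```

Applying `izsn` after `zsn` with the same $\mathrm{\mu}$ and $\mathrm{\sigma}$ returns the original data exactly, which is why the de-noised histograms can be mapped back to the measurement scale before reconstruction.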

We also applied our de-noising method to the phasor-field algorithm [13], implemented via Rayleigh–Sommerfeld diffraction (RSD). Because RSD uses a different reconstruction paradigm from LCT, we evaluated the benefit of our wavelet de-noising (WD) scheme by applying it to the histogram measurements before RSD reconstruction, a combination we denote WD+RSD. As shown in Fig. 8, by incorporating the histogram de-noising, WD+RSD outperforms RSD in RMSE and removes the artifacts in the reconstructed volume.

Funding

National Natural Science Foundation of China (62071322, 62231018).

Acknowledgment

Portions of this work were presented at CLEO 2022 [45].

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018).

2. C. Wu, J. Liu, X. Huang, Z.-P. Li, C. Yu, J.-T. Ye, J. Zhang, Q. Zhang, X. Dou, V. K. Goyal, F. Xu, and J.-W. Pan, “Non–line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118(10), e2024468118 (2021).

3. F. Willomitzer, P. V. Rangarajan, F. Li, M. M. Balaji, M. P. Christensen, and O. Cossairt, “Fast non-line-of-sight imaging with high-resolution and wide field of view using synthetic wavelength holography,” Nat. Commun. 12(1), 6647 (2021).

4. J. H. Nam, E. Brandt, S. Bauer, X. Liu, M. Renna, A. Tosi, E. Sifakis, and A. Velten, “Low-latency time-of-flight non-line-of-sight imaging at 5 frames per second,” Nat. Commun. 12(1), 6526 (2021).

5. C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565(7740), 472–475 (2019).

6. F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26(8), 9945–9962 (2018).

7. B. Wang, M.-Y. Zheng, J.-J. Han, X. Huang, X.-P. Xie, F. Xu, Q. Zhang, and J.-W. Pan, “Non-line-of-sight imaging with picosecond temporal resolution,” Phys. Rev. Lett. 127(5), 053602 (2021).

8. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).

9. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).

10. V. Arellano, D. Gutierrez, and A. Jarabo, “Fast back-projection for non-line of sight reconstruction,” Opt. Express 25(10), 11574–11583 (2017).

11. S. I. Young, D. B. Lindell, B. Girod, D. Taubman, and G. Wetzstein, “Non-line-of-sight surface reconstruction using the directional light-cone transform,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020), pp. 1407–1416.

12. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019).

13. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. Huu Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019).

14. X. Liu, S. Bauer, and A. Velten, “Phasor field diffraction based reconstruction for fast non-line-of-sight imaging systems,” Nat. Commun. 11(1), 1645 (2020).

15. J. Peng, F. Mu, J. H. Nam, S. Raghavan, Y. Li, A. Velten, and Z. Xiong, “Towards non-line-of-sight photography,” arXiv, arXiv:2109.07783 (2021).

16. W. Chen, F. Wei, K. N. Kutulakos, S. Rusinkiewicz, and F. Heide, “Learned feature embeddings for non-line-of-sight imaging and recognition,” ACM Trans. Graph. 39, 1–18 (2020).

17. D. Faccio, A. Velten, and G. Wetzstein, “Non-line-of-sight imaging,” Nat. Rev. Phys. 2(6), 318–327 (2020).

18. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015).

19. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).

20. S. Divitt, D. F. Gardner, and A. T. Watnik, “Imaging around corners in the mid-infrared using speckle correlations,” Opt. Express 28(8), 11051–11064 (2020).

21. T. Maeda, Y. Wang, R. Raskar, and A. Kadambi, “Thermal non-line-of-sight imaging,” in 2019 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–11.

22. R. J. Thomas, B. A. Rockwell, W. J. Marshall, R. C. Aldrich, S. A. Zimmerman, and R. J. Rockwell Jr, “A procedure for multiple-pulse maximum permissible exposure determination under the Z136.1-2000 American National Standard for Safe Use of Lasers,” J. Laser Appl. 13(4), 134–140 (2001).

23. H. Hemmati and D. Caplan, “Optical satellite communications,” Optical Fiber Telecommunications, 121–162 (2013).

24. G. G. Taylor, D. Morozov, N. R. Gemmell, K. Erotokritou, S. Miki, H. Terai, and R. H. Hadfield, “Photon counting lidar at 2.3 µm wavelength with superconducting nanowires,” Opt. Express 27(26), 38147–38158 (2019).

25. Z. Feng, T. Tang, T. Wu, X. Yu, Y. Zhang, M. Wang, J. Zheng, Y. Ying, S. Chen, J. Zhou, X. Fan, D. Zhang, S. Li, M. Zhang, and J. Qian, “Perfecting and extending the near-infrared imaging window,” Light: Sci. Appl. 10(1), 197 (2021).

26. G. N. Gol’tsman, O. Okunev, G. Chulkova, A. Lipatov, A. Semenov, K. Smirnov, B. Voronov, A. Dzardanov, C. Williams, and R. Sobolewski, “Picosecond superconducting single-photon optical detector,” Appl. Phys. Lett. 79(6), 705–707 (2001).

27. X. Hu, N. Hu, K. Zou, Y. Meng, L. Xu, and Y. Feng, “Twenty-year research and development of SNSPDs: Review and prospects,” Laser Technol. 46, 1–37 (2022).

28. S. Guo, J. Tan, H. Zhang, et al., “High-timing-precision detection of single X-ray photons by superconducting nanowires,” Natl. Sci. Rev. (2023). https://doi.org/10.1093/nsr/nwad102.

29. V. Verma, B. Korzh, A. B. Walter, A. E. Lita, R. M. Briggs, M. Colangelo, Y. Zhai, E. E. Wollman, A. D. Beyer, J. P. Allmaras, H. Vora, D. Zhu, E. Schmidt, A. G. Kozorezov, K. K. Berggren, R. P. Mirin, S. W. Nam, and M. D. Shaw, “Single-photon detection in the mid-infrared up to 10 µm wavelength using tungsten silicide superconducting nanowire detectors,” APL Photonics 6(5), 056101 (2021).

30. C. Gu, Y. Cheng, X. Zhu, and X. Hu, “Fractal-inspired, polarization-insensitive superconducting nanowire single-photon detectors,” in Advanced Photonics Conference (Optical Society of America, 2015), paper JM3A–10.

31. X. Chi, K. Zou, C. Gu, J. Zichi, Y. Cheng, N. Hu, X. Lan, S. Chen, Z. Lin, V. Zwiller, and X. Hu, “Fractal superconducting nanowire single-photon detectors with reduced polarization sensitivity,” Opt. Lett. 43(20), 5017–5020 (2018).

32. Y. Meng, K. Zou, N. Hu, L. Xu, X. Lan, S. Steinhauer, S. Gyger, V. Zwiller, and X. Hu, “Fractal superconducting nanowires detect infrared single photons with 84% system detection efficiency, 1.02 polarization sensitivity, and 20.8 ps timing resolution,” ACS Photonics 9(5), 1547–1553 (2022).

33. I. Esmaeil Zadeh, J. W. Los, R. B. Gourgues, et al., “Efficient single-photon detection with 7.7 ps time resolution for photon-correlation measurements,” ACS Photonics 7(7), 1780–1787 (2020).

34. B. Korzh, Q.-Y. Zhao, J. P. Allmaras, et al., “Demonstration of sub-3 ps temporal resolution with a superconducting nanowire single-photon detector,” Nat. Photonics 14(4), 250–255 (2020).

35. Y. Meng, K. Zou, N. Hu, X. Lan, L. Xu, J. Zichi, S. Steinhauer, V. Zwiller, and X. Hu, “Fractal superconducting nanowire avalanche photodetector at 1550 nm with 60% system detection efficiency and 1.05 polarization sensitivity,” Opt. Lett. 45(2), 471–474 (2020).

36. N. Hu, Y. Feng, L. Xu, Y. Meng, K. Zou, S. Gyger, S. Steinhauer, V. Zwiller, and X. Hu, “Photon-counting lidar based on a fractal SNSPD,” in Optical Fiber Communication Conference (Optical Society of America, 2021), paper Tu5E–4.

37. K. Zou, Z. Hao, Y. Feng, Y. Meng, N. Hu, S. Steinhauer, S. Gyger, V. Zwiller, and X. Hu, “Fractal superconducting nanowire single-photon detectors working in dual bands and their applications in free-space and underwater hybrid LIDAR,” Opt. Lett. 48(2), 415–418 (2023).

38. T. Staffas, M. Brunzell, S. Gyger, L. Schweickert, S. Steinhauer, and V. Zwiller, “3D scanning quantum LIDAR,” in CLEO: Applications and Technology (Optica Publishing Group, 2022), paper AM2K–1.

39. J.-L. Starck and F. Murtagh, “Astronomical image and signal processing: looking at noise, information and scale,” IEEE Signal Process. Mag. 18(2), 30–40 (2001).

40. American Society for Testing and Materials, Committee G03 on Weathering and Durability, Standard Tables for Reference Solar Spectral Irradiances: Direct Normal and Hemispherical on 37° Tilted Surface (ASTM International, 2003).

41. R. V. Kochanov, I. Gordon, L. Rothman, P. Wcisło, C. Hill, and J. Wilzewski, “HITRAN application programming interface (HAPI): A comprehensive approach to working with spectroscopic data,” J. Quant. Spectrosc. Radiat. Transfer 177, 15–30 (2016).

42. W. Wagner, A. Ullrich, V. Ducic, T. Melzer, and N. Studnicka, “Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner,” ISPRS J. Photogramm. Remote Sens. 60(2), 100–112 (2006).

43. R. H. Hadfield, J. Leach, F. Fleming, D. J. Paul, C. H. Tan, J. S. Ng, R. K. Henderson, and G. S. Buller, “Single-photon detection for long-range imaging and sensing,” Optica 10(9), 1124–1141 (2023).

44. Q. Hernandez, D. Gutierrez, and A. Jarabo, “A computational model of a single-photon avalanche diode sensor for transient imaging,” arXiv, arXiv:1703.02635 (2017).

45. Y. Feng, X. Cui, Y. Meng, X. Yin, K. Zou, Z. Hao, J. Yang, and X. Hu, “Non-line-of-sight imaging using a fractal superconducting nanowire single-photon detector,” in CLEO: Applications and Technology (Optica Publishing Group, 2022), paper AW5P–5.

46. A. Antoniadis and G. Oppenheim, Wavelets and Statistics, Vol. 103 (Springer Science & Business Media, 2012).

47. D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994).


Figures (8)

Fig. 1. Schematics of the NLOS imaging system with a fractal SNSPD. (a) Schematics of the experimental setup. (b) Scanning-electron micrograph of the fractal SNSPD. The white dotted box presents the second-order Peano structure. (c) Measured polarization-maximum and minimum system detection efficiency (SDE) of the fractal SNSPD at the wavelength of 1560 nm. (d) An example histogram of the coincidences at 1560 nm. The full width at half maximum (FWHM) of the direct peak is 52 ps. (e) Measured polarization-maximum and minimum SDE of the fractal SNSPD at the wavelength of 1997 nm. (f) An example histogram of the coincidences at 1997 nm. The FWHM of the direct peak is 60 ps.

Fig. 2. NLOS imaging results at the wavelength of 1560 nm. (a) Photograph of the hidden objects. Three letters, T, J, and U, made of retro-reflective tapes, are placed 39 cm, 31 cm, and 25 cm away from the relay wall, respectively. (b) Photograph of a hidden object puppet. (c) Raw data with dimensional sizes of 64, 64, and 1024 for $x'$, $y'$, and $t$, respectively. (d) Reconstructed results of three letters using the LCT algorithm. (e) Raw data with dimensional sizes of 64, 64, and 1024 for $x'$, $y'$, and $t$, respectively. (f) Reconstructed results of puppet using the LCT algorithm.

Fig. 3. Flowchart of the proposed de-noising algorithm and the reconstruction results of the NLOS imaging at the wavelength of 1997 nm. (a) Photograph of the scene and reconstructed images using the standard LCT algorithm and our proposed algorithm; (b) The flowchart of our proposed algorithm; (c) Histograms at the marked steps. ZSN: Z-score normalization; IZSN: inverse ZSN.

Fig. 4. Comparison of the reconstructed albedos at four pixel-dwell times for letter "U", using our wavelet de-noising (WD) method with LCT, the standard LCT method, and LCT with 3D smoothing. (a) We used the reconstructed albedo, taken at 1560 nm and with 100 ms pixel-dwell time, reconstructed by the standard LCT method, as the "ground truth" (GT). (b) The reconstructed albedos by WD with LCT. (c) The reconstructed albedos by standard LCT method. (d) The reconstructed albedos by LCT with 3D smoothing. The root mean square error (RMSE), compared with the GT, is presented for each reconstructed scene. All the reconstructed scenes share the color bar in (a).

Fig. 5. Solar irradiance and the simulated SNR of NLOS measurements. (a) The black line shows the spectrum of solar irradiance, from ASTM [40]. The blue line shows the spectrum of atmosphere absorption, from HITRAN 2016 [41]; (b) Simulated SNR as a function of $D$; (c) Simulated SNR as a function of $d$.

Fig. 6. Measurements of the lateral resolutions at 1560 nm and 1997 nm. (a) Photograph of the resolution chart. The spaces between adjacent strips are set to be 1 cm, 2 cm, 3 cm, and 4 cm; (b) The front view of the reconstructed result at 1560 nm; (c) The reconstructed albedo along $y$-direction of the blue dotted line in (b) and the multi-peak fitting results; (d) The front view of the reconstructed result at 1997 nm; (e) The reconstructed albedo along $y$-direction of the blue dotted line in (d) and the multi-peak fitting results.

Fig. 7. Numerical simulation of NLOS imaging by using different detectors. (a) The hidden scene used for the simulation is a circular disc with a radius of 1 cm at a distance of 0.5 m from the relay wall. (b), (c) and (d) Simulated histograms detected by the "ideal" detector, SNSPD, and SPAD, respectively. (e), (f) and (g) The re-constructed NLOS images corresponding to the measurements by the "ideal" detector, SNSPD, and SPAD, respectively. All the reconstructions share the color bar on the right.

Tables (2)

Table 1. The parameters used in the calculations of signal-to-noise ratios

Table 2. The parameters used in the numerical simulations in Fig. 7

Equations (12)

Equations (1)–(4), from the main text:

$$\textrm{RMSE}=\sqrt{\frac{\sum_{i}^{n_x}\sum_{j}^{n_y}\left(\hat{D}_{ij}-D_{ij}\right)^2}{n_x n_y}},$$
$$N_s=\frac{\lambda P_{out}}{hc}\cdot\frac{\alpha_{obj}\alpha_{wall}^2\cos{\theta_{wall}}\,A_{obj}A_{FoV}A_R}{\pi^3 d^4 D^2}\,\eta_d\eta_R\eta_A^2,$$
$$N_n=\frac{\lambda E_{sun}\Delta\lambda\,A_{FoV}}{hc}\cdot\frac{\cos{\theta_{sun}}\,\alpha_{wall}\cos{\theta_{wall}}\,A_R}{\pi D^2}\,\eta_d\eta_R\eta_A+DCR,$$
$$\sqrt{w^2+d^2}-\sqrt{(w-R_x)^2+d^2}=\frac{c\times\gamma}{2}.$$

Equations (5)–(12) are given in Appendices A and B.