Optica Publishing Group

Single photon imaging and sensing of highly obscured objects around the corner

Open Access

Abstract

Non-line-of-sight (NLOS) optical imaging and sensing of objects promise new capabilities valuable to autonomous technology, machine vision, and other applications in which very few informative photons are buried in strong background counts. Here, we introduce a new approach to NLOS imaging and sensing that uses picosecond-gated single photon detection generated by nonlinear frequency conversion. With exceptional signal isolation, this approach reliably achieves imaging and position retrieval of obscured objects around the corner, requiring only 4 × 10⁻³ detected photons per pulse per pixel at high temporal resolution. Furthermore, the vibration frequencies of different objects can be resolved by analyzing the photon number fluctuations received within a ten-picosecond window, enabling NLOS acoustic sensing. Our results highlight the prospect of photon-efficient NLOS imaging and sensing for real-world applications.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The capacity of optical detection and imaging technology is ever expanding to keep pace with emerging autonomous technology and evolving sensing needs. In particular, the desire to see around corners has attracted much research interest from various fields, with the prospect of unlocking new imaging modalities over a breadth of applications, such as non-line-of-sight (NLOS) imaging and NLOS tracking for machine vision and sensing, autonomous driving, and biomedical imaging [1]. The ability to sense, track, and image occluded objects with sufficient resolution and accuracy is valuable for autonomous technology and machine vision when direct line-of-sight is prohibited or split-second decision-making is needed for preemptive safety measures [2,3]. Practical NLOS imaging and sensing is an interdisciplinary problem at the intersection of physics, optics, and signal processing. It requires a sophisticated optical measurement system for capturing information-carrying photons, combined with an appropriate light transport model for efficient and reliable reconstruction of hidden scenes with reasonable computational overhead.

Over the last decade, we have witnessed considerable progress in NLOS imaging and sensing based on advanced measurement systems such as streak cameras [4], single-photon avalanche diodes (SPADs) [5-9], and interferometric detection [10-13]. Leveraging their high sensitivity, a diverse toolbox of NLOS reconstruction algorithms has been developed based on various light transport models for recovering hidden scenes [4,6,14,15]. Arguably, all of these NLOS reconstruction approaches rely on faithful measurements of the optical signals that carry the information of the scene.

In a typical NLOS scenario, the probe laser beam first bounces off a single point on a diffusive wall, with some photons redirected towards the hidden scene. A small portion of those photons are back-scattered by the scene and redirected again by the wall to reach a detector. The detector can consist of a separate receiver, which captures photons from a different point on the wall, or a transceiver that captures photons from the same point, for coaxial NLOS measurement. In either case, as the intensity of light scattered from a diffusive surface is bounded by the inverse-square (distance) law, the triply-bounced information-carrying photons in such an NLOS scenario are attenuated by several orders of magnitude and buried amid the much brighter photons returning directly from the wall. Many existing optical NLOS imaging and tracking systems, built on a single-pixel [7,8,16] or 2D [4-6] single-photon detector, capture the back-scattered photons with a separate receiver to avoid receiving the photons returning directly from the wall, which may saturate the single photon detector and cause the pile-up effect [17]. NLOS with a separate receiver requires accurately aligning both the transmitter and receiver to illuminate and image pairs of distinct points on the wall for time-resolved single photon detection [18], increasing the complexity of the imaging system or reconstruction algorithm [19]. On the other hand, coaxial NLOS systems, which use a monostatic single-transceiver setup, benefit from a straightforward geometrical relation between the time-of-flight measurement and the hidden scene. They can utilize simpler algorithms with much lower computational complexity, such as the light-cone transformation, to reconstruct the scene [8,14,19]. The drawback, however, is the strong pile-up effect.
To avoid this issue, in previous confocal NLOS setups, the targeted object was placed far from the wall, and the receiving field-of-view of the SPAD was carefully aligned slightly off the illumination point of the outgoing probe beam on the wall [14]. Furthermore, in the presence of obscurants between the target and the wall, the information-carrying photons are not only attenuated by the obstacle but also hidden by the increased background photons it scatters back, complicating reconstruction of the hidden scene [20-24]. These restrictions pose a significant challenge to practical NLOS imaging and sensing, preventing deployment in complex scenes with possible obscurants.
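The severity of the triple-bounce loss can be illustrated with a back-of-the-envelope radiometric sketch. The geometry, areas, and albedos below are illustrative assumptions, not the paper's measured parameters; the point is only the compounding of two diffuse-bounce inverse-square factors.

```python
import math

def lambertian_capture(area_rx, r, albedo=1.0):
    """Fraction of light re-scattered from a Lambertian point that lands on a
    small on-axis receiving area at distance r: albedo * A / (pi * r^2)."""
    return albedo * area_rx / (math.pi * r ** 2)

# Hypothetical geometry (not from the paper): 12 cm wall-to-object distance,
# 1 cm^2 effective object cross-section, 3 mm^2 receiving spot on the wall.
f_wall_to_obj = lambertian_capture(1e-4, 0.12, albedo=0.5)  # wall -> object
f_obj_to_wall = lambertian_capture(3e-6, 0.12, albedo=0.5)  # object -> wall spot

triple_bounce = f_wall_to_obj * f_obj_to_wall
print(f"triple-bounce return fraction ~ {triple_bounce:.1e}")
```

Even with these generous assumptions, the return fraction is many orders of magnitude below unity, which is why the directly returning first-bounce photons dominate the detector in a coaxial setup.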

Here, we aim to improve the current state of NLOS imaging and sensing by distinguishing the information-carrying photons from overwhelming background photons in a coaxial NLOS setting, and introduce a new optical detection modality for NLOS imaging and sensing. We demonstrate a single-pixel NLOS imaging and sensing system based on time-correlated single-photon counting through nonlinear optical gating [25]. It achieves an absolute 10-ps temporal resolution for imaging, positioning, and vibration sensing of highly obscured objects around the corner. Our method employs highly efficient and low-noise nonlinear frequency conversion of single photons in a nonlinear waveguide, where a time-correlated 6-ps pump pulse acts as a narrow nonlinear optical gate to up-convert the 6-ps signal photons via sum-frequency generation (SFG). Crucially, the pump pulses create a high-extinction picosecond photon detection window much narrower than the timing jitter of the detector and its associated electronics, thus realizing a mechanism to isolate and distinguish information-carrying NLOS photons from background photons that are usually several orders of magnitude stronger [25,26]. The nonlinear gated single photon detection (NGSPD) system therefore outperforms conventional SPAD-based single photon detection in two respects. First, the picosecond photon detection window of the NGSPD minimizes the photon count distortion from pile-up and detector saturation, which otherwise plague applications based on conventional SPAD detection [17,27]. This is crucial for accurate retrieval of photon arrival time histograms in NLOS applications with limited information-carrying photons. Second, the temporal resolution of the NGSPD system is set by the picosecond width of the optical gate (i.e., the laser pulse width) rather than by the timing jitter of the SPAD and the associated time-tagging instrument of conventional SPAD-based systems [17]. The improved temporal resolution in photon detection translates into improved spatial resolution in the reconstructed NLOS scene [14]. The system thus provides picosecond-precision recovery of NLOS objects even in highly obscured scenarios. The capability of exclusively capturing the photons from the object opens a path toward NLOS vibration sensing, which underpins the prospect of new hybrid imaging modalities, such as acousto-optic imaging or photoacoustic remote sensing, for NLOS applications [28].

2. System setup

The proof-of-principle experiments demonstrating NLOS imaging, positioning and sensing of highly obscured objects are shown in Fig. 1. The setup consists of a mode-locked laser (MLL), a micro-electromechanical-system (MEMS) scanning mirror, a single-mode fiber (SMF) coaxial optical transceiver, a programmable optical delay line (ODL) and a silicon SPAD. The detection system realizes nonlinear gated single photon detection (NGSPD) [29], which distinguishes the information-carrying NLOS photons while rejecting the photons scattered back directly from the diffuser or obscurant even in this coaxial transceiver setup.


Fig. 1. A sketch of the NLOS system. The system transmits the probe laser and receives signal photons using the transceiver; the hidden object obscured by the aluminum mesh is then imaged and sensed. The red dots on the diffuser indicate the scanning points; a grid of points is scanned for NLOS imaging. At each scanning point, the probe laser is redirected onto the hidden object, and the transceiver captures the back-scattered signal photons from the same scanning point used for illumination. MLL: mode-locked laser; WDM: dense-wavelength-division multiplexer, used as an optical filter just after the MLL and as a combiner before the nonlinear optical gated detector; ODL: optical delay line; FPC: fiber polarization controller; MEMS: micro-electromechanical system. The nonlinear gated single photon detector contains a quasi-phase-matched nonlinear waveguide module and a silicon single photon avalanche diode (SPAD).


The nonlinear optical gating NLOS system utilizes the sum-frequency of two picosecond pulse trains: the pump (6.8 ps full-width-half-maximum (FWHM), 1565.5 nm) and the outgoing probe (6.3 ps FWHM, 1554.1 nm). The two pulse trains are generated by carving a mode-locked laser (50 MHz) with a pair of cascaded 200 GHz dense-wavelength-division multiplexing (DWDM) filters for each frequency, so both pulse trains are nearly transform-limited. Their temporal intensity and phase profiles are measured using a frequency resolved optical gating (FROG) pulse analyzer to quantify the temporal gating width, as shown in Fig. 2. The pump serves as the optical gate, efficiently up-converting only the signal in a particular temporal-frequency (TF) mode; it passes through a programmable optical delay line for temporal scanning to facilitate time-resolved photon counting with high temporal resolution. The probe is first amplified by an EDFA to about 0.2 nJ per pulse and then transmitted through the transceiver, a free-space fiber coupler. The probe beam from the angle-polished single-mode fiber is collimated into a Gaussian beam (2.2 mm FWHM) in free space via an aspheric lens ($f = 11.0$ mm, $NA = 0.25$).
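As a consistency check on the detection chain, energy conservation in SFG fixes the sum-frequency wavelength from the two carrier wavelengths quoted above:

```python
# Sum-frequency generation: 1/lambda_SFG = 1/lambda_pump + 1/lambda_probe.
lam_pump, lam_probe = 1565.5, 1554.1  # nm, from the setup described above
lam_sfg = 1.0 / (1.0 / lam_pump + 1.0 / lam_probe)
print(f"SFG wavelength: {lam_sfg:.1f} nm")
```

The result lands near 780 nm, consistent with the silicon SPAD's high detection efficiency at that wavelength.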


Fig. 2. Pulse shapes. The pump and probe pulses are measured by frequency resolved optical gating (FROG). The sum-frequency response is obtained by a time-resolved measurement of the probe pulse attenuated by in-fiber attenuators.


The transceiver guides the collimated probe laser towards the MEMS mirror, which redirects the beam to various scanning points on the optical diffuser. The probe beam is then scattered towards the hidden scene; a few signal photons reach the hidden object and the obscurant and are back-scattered to the diffuser. The transceiver collects these triply-bounced information-carrying photons from the same point used for illumination, so that only photons returning along the same path are collected. In the NLOS imaging and positioning experiments, a time-resolved returning-photon histogram is retrieved at each scanning point by temporally scanning the pump pulse with the optical delay line and counting photons at different times of flight; the histogram is then used for image reconstruction or position retrieval. For the NLOS acoustic sensing experiment, returning photon counts are sampled with the pump pulse temporally aligned to capture the photons off the NLOS target for acoustic frequency retrieval.

The scene around the corner is realized by using a 2-inch diameter metallic diffuser (120 grit, reflectivity $>96\%$) as the wall and an aluminum mesh (1 mm diameter wire grid with 2 mm × 2 mm openings) as the obscurant in front of the hidden object. The diffuser is fixed on a rotational stage to measure the relative angle between its normal and the probe beam. The distance between the MEMS mirror and the diffuser is 90 cm, and the tilt angle between the normal of the diffuser and the collimated probe beam (MEMS at rest) is 20$^{\circ }$. The triply-bounced information-carrying photons returning to the transceiver are separated from the probe by a fiber circulator with a minimum isolation ratio of 55 dB. The residual probe leaking through the circulator and the information-carrying photons are temporally separated thanks to the narrow optical gate. The information-carrying photons are then recombined with the pump via another DWDM and subsequently fiber-coupled into the NGSPD.

The NGSPD is composed of a commercial periodically poled lithium niobate nonlinear waveguide module and a silicon SPAD (${\sim }70\%$ efficiency at 780 nm). The signal photons are up-converted to sum-frequency photons in the waveguide, whose center phase-matching wavelength is 1559.8 nm; the internal conversion efficiency of the up-conversion waveguide is $121\%/(\mathrm{W}\cdot\mathrm{cm}^{2})$. The information-carrying signal photons can be efficiently up-converted into sum-frequency photons only if they are 1) temporally aligned with the pump pulse; 2) within the phase-matched wavelength range for the pump; and 3) in the fundamental TF mode of the pump. The impulse response of the NGSPD system is 10 ps, as shown in Fig. 2, independent of the timing jitter of the electrical signal. The residual Raman noise is filtered using a narrow-band thin film filter after the waveguide. The sum-frequency photons are then detected by the free-running silicon SPAD. A field-programmable gate array (FPGA) serves as the central processor, controlling the steering of the MEMS mirror and the ODL and collecting the signal photon counts from the SPAD.

The background noise of the NGSPD consists of the intrinsic dark count rate of the SPAD (100 Hz) and the Raman noise of the upconversion module (2000 Hz). Two Raman scattering processes predominantly generate noise photon counts in the sum-frequency band in the upconversion module: (i) pump photons Raman-scattering into the signal band (centered at 1554.1 nm) and then being upconverted with the strong pump (centered at 1565.5 nm) via SFG, and (ii) Raman scattering of the second harmonic light created by the pump. Operating the system at a pump peak power of about 0.7 W (220 $\mu$W average power), the Raman noise photon count rate is about 2000 Hz, giving a total dark count rate of 2100 Hz per voxel. As an EDFA is used to amplify the probe power, a tiny portion of the amplified spontaneous emission (ASE) power [30] is reflected by the circulator and enters the NGSPD. However, the ASE spans the full temporal domain while the NGSPD effectively detects only a narrow temporal window within each period (10 ps out of a 20000 ps period), so the background counts caused by the residual ASE are greatly suppressed. Summing these noise sources, this corresponds to a low noise probability of 4.2$\times 10^{-5}$ per pulse per delay point owing to the single detection mode of the NGSPD [31].
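The quoted per-pulse noise probability follows directly from the total dark count rate and the 50 MHz laser repetition rate; a quick check:

```python
dark_spad = 100.0   # Hz, SPAD intrinsic dark count rate
raman = 2000.0      # Hz, upconversion-module Raman noise rate
rep_rate = 50e6     # Hz, mode-locked laser repetition rate

# Counts per second divided by pulses per second gives counts per pulse.
noise_per_pulse = (dark_spad + raman) / rep_rate
print(f"noise probability per pulse: {noise_per_pulse:.1e}")  # 4.2e-05
```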

Besides the constant background noise of the NGSPD, our system rejects external noise photon counts far better than conventional single photon detection. Inserting a noise source (the amplified spontaneous emission (ASE) of another EDFA, filtered using the same filter as the probe pulse) with a spectral distribution identical to the signal, our NGSPD shows 36 dB higher noise rejection than a 1-ns-gated InGaAs detector [29]. The external noise counts can thus be neglected, and the background noise can be treated as a constant.

3. NLOS imaging of highly obscured object

To probe and image the targeted scene, the MEMS scanning mirror steers the probe laser beam to raster-scan 32$\times$32 points on the diffuser, recording the photon count as a function of the temporal delay of the pump at each scanning point. This results in a time-resolved 3-dimensional photon count array whose axes are $x, y$ (scanning coordinates on the diffuser) and $t$ (relative temporal delay of the pump).

The 3-dimensional photon count array is then processed to reconstruct the NLOS scene. Prior to image reconstruction, we pre-process the raw data by compensating the relative time-of-flight difference caused by the tilt angle of the diffuser, since the time-of-flight from the transceiver to different scanning points on the diffuser varies with the optical path (see appendix). Because the photon count fluctuation is much lower than the signal itself, the compensated time-resolved photon counting histogram $y(t)$ can be approximated as $y(t)=A(t)*x(t)+e$, where $A(t)$ is the impulse response, $e$ is the background noise level, and $x(t)$ is the reflection distribution of the hidden object. The time-resolved photon counting histogram at each scanning point is then filtered individually using the one-dimensional convex target function

$$f = \mathop{argmin}_{\textbf{x}} (\|\textbf{y} - \textbf{Ax} - \textbf{e}\|_{2} + \lambda \| \textbf{x} \|_{1})$$
with the CVX toolbox [32,33] for Matlab. In the target function above, $\textbf{y}$ is the time-resolved histogram measured at one scanning point (Fig. 3(c) shows an example), $\textbf{e}$ is the average background noise level, $\textbf{x}$ is the filtered time-resolved histogram (the target), and $\textbf{A}$ is the impulse response matrix of a single-point object, where the impulse response of the system is measured to be 10 ps FWHM. This optimization has a form similar to compressive sensing recovery, and it removes the constant background noise due to the intrinsic dark counts of the NGSPD. $\lambda \| \textbf {x} \|_{1}$ is an $l_{1}$ regularizer that prevents over-fitting of the processed data; $\lambda$ is set to a low value (0.1) to preserve the signal response and avoid overly sparsifying the target. Subsequently, the targeted scene is recovered from the processed data using the 3-dimensional reconstruction algorithm based on the light-cone transformation [14].
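The paper solves this target function with the CVX toolbox in Matlab. As an illustration only, the same $l_1$-regularized least-squares filtering (using the conventional squared data term) can be sketched in Python with a basic ISTA solver on a synthetic histogram; the bin size, reflector position, and noise level below are stand-ins, not the measured values.

```python
import numpy as np

def ista_deconvolve(y, A, e, lam=0.1, n_iter=500):
    """Minimize ||y - A x - e||_2^2 + lam * ||x||_1 by iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz bound for the gradient step
    x = np.zeros(A.shape[1])
    r = y - e                            # subtract the constant background
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - r) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # l1 prox
    return x

# Synthetic histogram on 1-ps bins: a 10-ps-FWHM Gaussian impulse response,
# a single reflector at bin 40, and a constant background level e.
t = np.arange(100)
h = np.exp(-4 * np.log(2) * (t - 50.0) ** 2 / 10.0 ** 2)
A = np.array([np.roll(h, k - 50) for k in range(100)]).T  # convolution matrix
x_true = np.zeros(100)
x_true[40] = 1.0
e = 0.05
y = A @ x_true + e

x_hat = ista_deconvolve(y, A, np.full(100, e))
print(int(np.argmax(x_hat)))  # peak recovered near bin 40
```

The recovered $\textbf{x}$ concentrates at the reflector's time bin with the constant background removed, mirroring what the CVX filtering step does per scanning point.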


Fig. 3. Imaging results of NLOS imaging. Imaging of a retroreflective arrow. (a) and (b) are the front views of the reconstructed NLOS image without and with the obscurant, with red lines indicating the profile of the arrow. Most of the arrow shape is reconstructed in the presence of the obscurant, except for the right arrow tip labeled with the dashed red circle. A photo of the arrow is at the bottom-left corner of (e). (c) is an example of the time histogram for the obscured object, and (d) is the side view of the NLOS reconstruction with the obscurant, which shows ps-level depth resolution of the surface. Note that few counts from the obscurant were received by the single-mode fiber transceiver, which explains the very weak response from the obscurant itself lying in front of the arrow. (e) renders a 3D point cloud of the reconstructed result (no obscurant) using MeshLab [34], labeled in millimeters.


A typical retroreflective [14] arrowhead is used as the imaging target, shown in the inset of Fig. 3(e). The arrowhead is positioned 12 cm in front of the diffuser, where its line-of-sight to the transceiver is blocked. We first perform the NLOS imaging as is and afterward insert the obscurant about 1 cm in front of the target. The obscurant removes a considerable fraction of the information-carrying photons from the target while inducing substantial back-scattered photons ahead of them, and is thus likely to conceal the target from non-gated single photon detection [14,17]. Using the NGSPD to negate the drawbacks of the obscurant, we are able to reconstruct the image of the NLOS arrowhead behind the obscuring aluminum mesh in close agreement with the arrowhead, as shown in Fig. 3(b); it retains most of the image features of the front view and 3D point cloud of the ground truth under the same illuminating probe power, shown in Fig. 3(a) and (e). This is due to the few-picosecond gate and timing resolution of nonlinear gating, as the NGSPD NLOS imaging system can distinguish the back-scattered photons from the diffuser and the obscurant despite using a coaxial transceiver. The other reason for this obscurant rejection is that the point spread function (PSF) of the SMF-coupled coaxial optical transceiver on the diffuser is equivalent to a spatial filter, which prevents the detector from being overwhelmed by the back-scattered photons from the obscurant. Even though the probe pulse diffusely illuminates the target and the obscurant, only the back-scattered photons falling into the transceiver's PSF on the diffuser are captured and detected in the time-resolved histogram. The presence of the obscurant slightly increases the background count while removing a considerable fraction of the detected photons, deteriorating the reconstruction quality relative to the ground truth: the tip of the arrow (labeled with the red dashed circle) is not manifested in Fig. 3(b).

Considering that the temporal resolution of the NGSPD is ${\Delta }t \approx 10$ ps, the spatial resolution of this NGSPD-based coaxial NLOS imaging system can be estimated as ${\Delta }w=\frac {c\sqrt {w^2+z^2}}{2w}{\Delta }t \approx 1.1$ cm [14], where $z$ is the distance from the diffuser to the object, $w$ is half of the spatial scanning range on the diffuser, and ${\Delta }t$ is the temporal resolution. As the size of the arrow is on the centimeter scale, it is remarkably well resolved in the reconstructed images except at the sharp tips of the arrow, whose feature size is well below 1 cm. The total acquisition time for one image is about 15 minutes at a 10 ms dwell time per delay point. For reconstructing Fig. 3(b), only about $4 \times 10^{-3}$ detected information-carrying NLOS photon counts per pulse per pixel are required at the peak of the time-resolved histograms, thanks to the very low noise of the NGSPD.
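Plugging numbers into the resolution estimate above reproduces the quoted centimeter-scale figure. Here $z = 12$ cm comes from the experiment, while the half scan range $w$ is an assumed value chosen for illustration (it is not stated explicitly in this section):

```python
import math

c = 2.998e8   # m/s, speed of light
dt = 10e-12   # s, NGSPD temporal resolution
z = 0.12      # m, diffuser-to-object distance (from the experiment)
w = 0.017     # m, assumed half scan range on the diffuser (illustrative)

# Delta_w = c * sqrt(w^2 + z^2) / (2 w) * Delta_t
dw = c * math.sqrt(w ** 2 + z ** 2) / (2 * w) * dt
print(f"estimated lateral resolution: {dw * 100:.2f} cm")
```

With these inputs the estimate comes out close to 1.1 cm, consistent with the arrow's sub-centimeter tips being the only unresolved features.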

4. NLOS position and orientation retrieval of obscured objects

Identifying the position and surface normal of an obscured NLOS target requires the capability to isolate and identify the information-carrying photons from the object rather than from the obscurant [5,9]. This can be achieved via the NGSPD by acquiring pristine, picosecond-resolved photon arrival time histograms.

In this experiment, we place two retroreflective bars, separated by 4 cm and both about 12 cm in front of the diffuser, with the obscurant in between, as shown in Fig. 4(a). We scan the probe on the diffuser along a single horizontal row of points and record the photon arrival time-resolved histogram. The NGSPD temporally resolves the back-scattered NLOS photons from different objects with small time-of-flight differences, which enables retrieving each bar's position. An example of the time-resolved histogram at one scanning point is shown in Fig. 4(c), where the NLOS photon counts from the two target bars are isolated from the obscurant and clearly distinguishable despite being separated by only about 60 picoseconds. To best assess the capability of the NGSPD in locating the obscured targets, we use two 5 mm wide bars whose width is smaller than the spatial resolution of the system. This width minimizes the "long tail" in the histogram attributed to late-arriving photons back-scattered off the object. With the narrow bar width, the first returning photon counting peaks can also be identified for estimating the nearest distance from the bars to the diffuser with minimal ambiguity [35]. Meanwhile, the simple coaxial single-transceiver setup uses the same scanning point on the diffuser for both illumination and photon capture, providing a simpler spherical geometry for the light path between the diffuser and the object rather than an ellipsoidal one [5]. Denoting $\textbf {r}_{\textbf {di}}$ the $i^{th}$ scanning point (where $d$ denotes the diffuser) and $\textbf {r}_{\textbf {oj}}$ the position of the $j^{th}$ object, the arrival time of the back-scattered photons at each scanning point is simply $t_{i}= \frac {2(\|\textbf {r}_{\textbf {oj}} -\textbf {r}_{\textbf {di}} \|_{2})}{c}$, the round-trip time-of-flight from the $i^{th}$ scanning point to the bar.


Fig. 4. NLOS target position retrieval. (a) Top-view sketch of the NLOS positioning setup through the obscurant, where the distance from the MEMS mirror to the diffuser is 90 cm. (b) Least-sum-square fitting result for the x-z coordinates of the two bars, where the color scale indicates the probability of where the bars stand. The x-z plane originates at the MEMS mirror. For clarity, the reconstruction is not at the same scale as the setup sketch. (c) The time-resolved measurement at one pixel shows different time-of-flight results for the two bars.


The measured arrival time of the first photon in the time-resolved histogram at the $i^{th}$ scanning point indicates the round-trip time-of-flight of the NLOS photons between the MEMS mirror and the object via this scanning point. It is first corrected to compensate the optical path difference from the transceiver to each scanning point on the diffuser (see appendix). The corrected time-of-flight $t_{ei}$ ($e$ denotes experiment) is then the true round-trip time-of-flight from the $i^{th}$ scanning point to the object, which is later used for retrieving the object position. The top view of the experimental setup is shown in Fig. 4(a), with the position of the object retrieved in the x-z plane. Assuming the object position to be $(x, z)$, we can simulate and map out the arrival time $t_{i}$ of the first returning photons from each position $(x,z)$ for every scanning point $\textbf {r}_{\textbf {di}}$. By matching the measured first-photon arrival time $t_{ei}$ against $t_{i}$, the position of the bar can be retrieved by a simple least-sum-square evaluation. The sum-square of the error over all $N$ scanning points is

$$err(x, z) = \sum_{i=1}^{N} \|t_{ei}-t_{i}(x, z)\|_{2}^{2}$$
for a given coordinate $(x,z)$ in the plane. In this evaluation, the ensemble of probable positions for the object $\textbf {r}_{\textbf {oj}}$ retrieved from one scanning point $\textbf {r}_{\textbf {di}}$ forms a spherical surface centered at $\textbf {r}_{\textbf {di}}$ with radius $\frac {c t_{ei}}{2}$. With $N$ scanning points, $N$ probability distribution spheres are defined. The $(x,z)$ point with the least sum-square distance to all the spheres, i.e., minimum $err(x,z)$, gives the most probable position of the object. This simple first-returning-photon geometry follows from the coaxial single-transceiver setup, in contrast to the separate-receiver case [36]. We use the joint probability density [5] of the least-sum-square to approximate the positions of the two bars. Since the NGSPD system has a Gaussian-like impulse response of 10 ps FWHM, the joint probability density is approximated in Gaussian form as
$$P(x,z) = \prod_{i=1}^{N} e^{-\frac{\|t_{ei}-t_{i}(x, z)\|_{2}^{2}}{2\sigma_{t}^{2}}} = e^{-\frac{err(x,z)}{2\sigma_{t}^{2}}},$$
where $\sigma _{t}$ is the standard deviation of the time-resolved measurement, approximated as FWHM$/2=5$ ps. The joint probability of the object position in the x-z plane is mapped in Fig. 4(b) at 0.5 mm $\times$ 0.5 mm resolution, where the highest probability reveals the exact locations of the bars. This NLOS position retrieval requires only time-of-flight information to reach millimeter resolution, thanks to the advantage of the NGSPD in negating undesirable photons. Naturally, the highly resolved photon counting histogram acquired via the NGSPD also allows distinguishing the surface normal [37] of the obscured bars, which is briefly introduced in Supplement 1.
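The least-sum-square retrieval of Eq. (2) can be sketched on synthetic data: simulate round-trip arrival times from a hypothetical bar position, then grid-search $(x,z)$ at 0.5 mm resolution for the minimum error. The scanning-point layout and bar position below are illustrative, not the experimental coordinates.

```python
import numpy as np

c = 2.998e8  # m/s

def toa(scan_pts, obj):
    """Round-trip time-of-flight from each scanning point to the object,
    t_i = 2 * ||r_o - r_di|| / c (spherical coaxial geometry)."""
    return 2 * np.linalg.norm(scan_pts - obj, axis=1) / c

# Hypothetical geometry on the x-z plane (meters): a horizontal row of
# scanning points on the diffuser at z = 0, hidden bar at (0.03, 0.12).
scan_pts = np.column_stack([np.linspace(-0.02, 0.02, 9), np.zeros(9)])
obj_true = np.array([0.03, 0.12])
t_meas = toa(scan_pts, obj_true)  # stands in for the corrected t_ei

# Least-sum-square search on a 0.5 mm grid, as in Eq. (2).
xs = np.arange(0.0, 0.06, 0.0005)
zs = np.arange(0.08, 0.16, 0.0005)
err = np.array([[np.sum((t_meas - toa(scan_pts, np.array([x, z]))) ** 2)
                 for z in zs] for x in xs])
ix, iz = np.unravel_index(np.argmin(err), err.shape)
print(xs[ix], zs[iz])  # close to (0.03, 0.12)
```

Since $P(x,z) = e^{-err(x,z)/2\sigma_t^2}$ is a monotone function of $err$, the Gaussian joint probability of Eq. (3) peaks at the same grid point as this minimum.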

5. NLOS acousto-optics sensing

With the ability to optically gate the information-carrying NLOS photons off the obscured NLOS target with few-picosecond resolution, the NGSPD provides a straightforward method of capturing the quickly diminishing informative photons in an NLOS scenario. This capability provides a new detection modality of NLOS optical sensing in complex environments. Compared with many existing NLOS detection methods, the NGSPD can directly retrieve the vibration information of the hidden object.

We perform proof-of-principle acousto-optic sensing of the obscured NLOS targets via non-interferometric single-photon counting vibrometry [38], based on sampling photon counts while gating the photons from the object. The single-photon counting based vibrometry captures the acoustic signals by continuously sampling the detected photon counts over a fixed dwell time using the FPGA (a 1 kHz sampling rate in this case). We then apply a short-time Fourier transform to the acquired time series of photon counts to obtain the spectrogram, which reveals the acoustic signals. The experimental setup is identical to Fig. 4(a), with the two bars excited separately at two different vibration frequencies using two cellphones. The two cellphones, playing sound waves at constant but different frequencies, actuate the two bars by simply leaning on their mounting bases. The probe beam is pointed at a fixed scanning point on the diffuser, where the time-resolved measurement captures the photon counting peaks originating from the two bars at different temporal positions. As the time-of-flight locations of the bars were identified in the previous section, the NGSPD detects back-scattered photons from a single bar by setting the pump at the corresponding delay. Thus, the acoustic vibration signal of that bar is captured via a time series of photon counting measurements with a preset dwell time, while photons from the other bar are temporally gated out. At this scanning point, the time-of-flight difference between the first-arriving peaks of the two bars is 60 ps, as observed in Fig. 4(c).
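A minimal sketch of the single-photon counting vibrometry: model the gated photon counts as a Poisson process whose rate is modulated by the bar's vibration, sample at 1 kHz, and read the actuation frequency off the Fourier transform of the count series. The mean count rate and modulation depth are assumed values for illustration.

```python
import numpy as np

fs = 1000.0                    # Hz, photon-count sampling rate (as in the paper)
t = np.arange(0, 1.0, 1 / fs)  # 1 s of samples
f_vib = 250.0                  # Hz, assumed actuation frequency

# Hypothetical model: mean photon count per dwell modulated by the vibration,
# plus Poisson shot noise.
rng = np.random.default_rng(0)
rate = 200 * (1 + 0.5 * np.sin(2 * np.pi * f_vib * t))
counts = rng.poisson(rate)

# Retrieve the vibration frequency from the count time series.
spec = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), 1 / fs)
print(freqs[np.argmax(spec)])  # peaks at the 250 Hz actuation frequency
```

A short-time Fourier transform, as used in the experiment, applies the same operation on sliding windows to produce the spectrogram.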

The vibration signals from the two bars are isolated, as demonstrated in Fig. 5. In each spectrogram, only one actuation frequency is manifested, which highlights another advantage of the NGSPD for targeted NLOS acousto-optic sensing with high selectivity and spatial resolution [28,39]. High-extinction isolation of undesirable photons is enabled by the picosecond temporal gating and the single-mode fiber transceiver, which captures very few photons other than those from the intended object. Note that one can observe noise peaks at 120 Hz, due to the power line frequency supplied to the ambient LED lighting, and at 335 Hz, due to the resonant frequency of the MEMS mirror, in the Fourier transform figures.


Fig. 5. Results of NLOS acoustic sensing. Acoustic information retrieved from the two bars at a sampling rate of 1 kHz. (a) and (b) show the Fourier transform and spectrogram of the 250 Hz acoustic signal from one bar. The second harmonic response of the actuation shows up at the bottom of (b). The unevenness of the frequency response is due to vibration-caused speckle perturbation of the back-scattered light. (c) and (d) give the 420 Hz signal actuated on the other bar.


6. Conclusion

Through nonlinear optical gating and single photon detection, we have demonstrated a novel approach that achieves picosecond single-photon time gating while rejecting orders-of-magnitude stronger background noise. It eliminates the otherwise detrimental detection pile-up effects [17] and allows coaxial NLOS measurement to provide direct time-of-flight information of hidden objects. As such, hidden NLOS scenes, even those additionally occluded, can be reliably reconstructed at centimeter spatial resolution. The same approach also enables non-interferometric NLOS acousto-optic sensing capable of locating hidden objects by recording their vibrational frequencies or acoustic signals [40]. These results highlight the prospect of hybrid or cross-modality NLOS imaging and sensing, applying far-reaching acoustic waves to excite objects around the corner and using NLOS single photon detection to read the acoustic response [28]. One major drawback of the current NGSPD approach is the need to temporally delay the gating pump pulse to retrieve photon arrival time information, which makes data acquisition time-consuming and limits the imaging depth. Several improvements can decrease the data acquisition time. For example, multi-wavelength phase matching enables up-converting two or more wavelength bands in one waveguide [41], reducing the acquisition time by a factor equal to the number of phase-matched bands. Alternatively, by using a synchronized pump pulse train with a higher repetition rate combined with a correlated time tagger [42] to acquire the macroscopic arrival time of the triply-bounced photons, the maximum imaging and sensing depth of the NGSPD system can be improved significantly. Compressive sensing can also decrease the amount of data needed to reconstruct an image with better spatial resolution [43], which may speed up the acquisition. Finally, by engineering the quasi-phase matching in the nonlinear waveguide, pump and probe pulses narrower than 1 ps FWHM can be used, which evidently increases the temporal resolution [44].

With the above improvements, this NGSPD system could perform NLOS imaging and sensing in realistic, complex environments, including those with obscured and partially occluded objects, without resorting to complex reconstruction models. Meanwhile, nonlinear gated single photon detection presents a new optical measurement modality for potential NLOS applications in imaging, sensing, and communications [45,46]. An interesting future direction for this NLOS imaging technique is to exploit pristine, picosecond-resolved photon arrival-time histograms to reconstruct NLOS spatial information from only a single illumination point, aided by machine learning [47,48], which is expected to significantly improve its functionality and imaging speed.

Appendix

In the current NLOS setup, the outgoing probe beam reaches the object via the MEMS mirror and the diffuser. The diffuser surface normal makes a $20^{\circ }$ angle with the direction of the probe beam when the MEMS mirror is at rest. Thus, the times-of-flight from the MEMS mirror to different scanning points on the diffuser differ and need to be corrected. The distance from the MEMS mirror to the $i^{th}$ scanning point on the diffuser can be expressed as $d_{i} = \frac {d}{\cos {\gamma } \cos {\beta } (1 - \tan {\beta }\tan {\alpha })}$, where $d$ is the distance between the MEMS mirror and the diffuser (with the probe beam at zero tilt angle when the MEMS mirror is at rest), $\alpha$ is the tilt angle between the diffuser normal and the outgoing probe beam, and $\beta$ and $\gamma$ are the yaw and pitch angles of the MEMS mirror at the $i^{th}$ scanning point. The distance difference between the central scanning point and the $i^{th}$ scanning point is then compensated by shifting the temporally resolved measurement by the time-of-flight difference $t_{i\_adj} = \frac {d_{i}-d_{center\_pixel}}{2c}$.
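The correction above can be sketched numerically. This is a minimal illustration under assumed values (the 90 cm MEMS-to-diffuser distance is taken from Fig. 4; the function names and the 3° example steering angle are hypothetical):

```python
import numpy as np

c = 3.0e8                    # speed of light, m/s
d = 0.90                     # MEMS-to-diffuser distance at rest, m (Fig. 4)
alpha = np.deg2rad(20.0)     # diffuser tilt relative to the probe beam

def mems_to_point(beta, gamma):
    """Distance d_i for MEMS yaw angle beta and pitch angle gamma (radians)."""
    return d / (np.cos(gamma) * np.cos(beta) * (1 - np.tan(beta) * np.tan(alpha)))

def time_shift(beta, gamma):
    """Time-of-flight adjustment relative to the center pixel, per the formula above."""
    d_center = mems_to_point(0.0, 0.0)   # beta = gamma = 0 at rest
    return (mems_to_point(beta, gamma) - d_center) / (2 * c)

# A pixel steered 3 degrees in yaw needs a shift of a few tens of picoseconds:
print(time_shift(np.deg2rad(3.0), 0.0))
```

Each pixel's time histogram is then shifted by its own `time_shift` value before reconstruction, so all pixels share a common time origin at the diffuser.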

Acknowledgments

This research was supported in part by the Earth Science Technology Office, NASA, through the Instrument Incubator Program.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. T. Maeda, G. Satat, T. Swedish, L. Sinha, and R. Raskar, “Recent advances in imaging around corners,” arXiv:1910.05613 (2019).

2. M. Isogawa, Y. Yuan, M. O’Toole, and K. M. Kitani, “Optical non-line-of-sight physics-based 3d human pose estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020).

3. B. Zubairu, Novel Approach of Spoofing Attack in VANET Location Verification for Non-Line-of-Sight (NLOS) (Springer Singapore, Singapore, 2018), pp. 45–59.

4. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

5. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016). [CrossRef]  

6. X. Liu, S. Bauer, and A. Velten, “Phasor field diffraction based reconstruction for fast non-line-of-sight imaging systems,” Nat. Commun. 11(1), 1645 (2020). [CrossRef]  

7. S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10117 (2017). [CrossRef]  

8. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

9. J. Brooks and D. Faccio, “A single-shot non-line-of-sight range-finder,” Sensors 19(21), 4820 (2019). [CrossRef]  

10. X. Lei, L. He, Y. Tan, K. X. Wang, X. Wang, Y. Du, S. Fan, and Z. Yu, “Direct object recognition without line-of-sight using optical coherence,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2019), pp. 11737–11746.

11. M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9(1), 3629 (2018). [CrossRef]  

12. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2d intensity images,” Sci. Rep. 6(1), 32491 (2016). [CrossRef]  

13. C. A. Metzler, F. Heide, P. Rangarajan, M. M. Balaji, A. Viswanath, A. Veeraraghavan, and R. G. Baraniuk, “Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging,” Optica 7(1), 63–71 (2020). [CrossRef]  

14. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

15. J. Iseringhausen and M. B. Hullin, “Non-line-of-sight reconstruction using efficient transient rendering,” ACM Trans. Graph. 39(1), 1–14 (2020). [CrossRef]  

16. G. Musarra, A. Lyons, E. Conca, Y. Altmann, F. Villa, F. Zappa, M. Padgett, and D. Faccio, “Non-line-of-sight three-dimensional imaging with a single-pixel camera,” Phys. Rev. Appl. 12(1), 011002 (2019). [CrossRef]  

17. F. Heide, S. Diamond, D. B. Lindell, and G. Wetzstein, “Sub-picosecond photon-efficient 3d imaging using single-photon sensors,” Sci. Rep. 8(1), 17726 (2018). [CrossRef]  

18. C. Wu, J. Liu, X. Huang, Z.-P. Li, C. Yu, J.-T. Ye, J. Zhang, Q. Zhang, X. Dou, V. K. Goyal, F. Xu, and J.-W. Pan, “Non–line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118(10), e2024468118 (2021). [CrossRef]  

19. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

20. C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imaging 4(3), 419–431 (2018). [CrossRef]  

21. A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35(2), 1–12 (2016). [CrossRef]  

22. F. Heide, M. O’Toole, K. Zang, D. B. Lindell, S. Diamond, and G. Wetzstein, “Non-line-of-sight imaging with partial occluders and surface normals,” ACM Trans. Graph. 38(3), 1–10 (2019). [CrossRef]  

23. F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26(8), 9945–9962 (2018). [CrossRef]  

24. J. Rapp, C. Saunders, J. Tachella, J. Murray-Bruce, Y. Altmann, J.-Y. Tourneret, S. McLaughlin, R. Dawson, F. N. Wong, and V. K. Goyal, “Seeing around corners with edge-resolved transient imaging,” Nat. Commun. 11(1), 5929 (2020). [CrossRef]  

25. A. Shahverdi, Y. M. Sua, I. Dickson, M. Garikapati, and Y.-P. Huang, “Mode selective up-conversion detection for lidar applications,” Opt. Express 26(12), 15914–15923 (2018). [CrossRef]  

26. A. Shahverdi, Y. M. Sua, L. Tumeh, and Y.-P. Huang, “Quantum parametric mode sorting: Beating the time-frequency filtering,” Sci. Rep. 7(1), 6495 (2017). [CrossRef]  

27. S. Maruca, P. Rehain, Y. M. Sua, S. Zhu, and Y. Huang, “Non-invasive single photon imaging through strongly scattering media,” Opt. Express 29(7), 9981–9990 (2021). [CrossRef]  

28. D. B. Lindell, G. Wetzstein, and V. Koltun, “Acoustic non-line-of-sight imaging,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2019), pp. 6780–6789.

29. P. Rehain, Y. M. Sua, S. Zhu, I. Dickson, B. Muthuswamy, J. Ramanathan, A. Shahverdi, and Y.-P. Huang, “Noise-tolerant single photon sensitive three-dimensional imager,” Nat. Commun. 11(1), 921 (2020). [CrossRef]  

30. Z.-P. Li, J.-T. Ye, X. Huang, P.-Y. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

31. Y. M. Sua, H. Fan, A. Shahverdi, J.-Y. Chen, and Y.-P. Huang, “Direct generation and detection of quantum correlated photons with 3.2 um wavelength spacing,” Sci. Rep. 7(1), 17494 (2017). [CrossRef]  

32. M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,” http://cvxr.com/cvx (2014).

33. M. Grant and S. Boyd, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, eds. (Springer-Verlag Limited, 2008), Lecture Notes in Control and Information Sciences, pp. 95–110. http://stanford.edu/boyd/graph_dcp.html.

34. P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, “MeshLab: an Open-Source Mesh Processing Tool,” in Eurographics Italian Chapter Conference, V. Scarano, R. D. Chiara, and U. Erra, eds. (The Eurographics Association, 2008).

35. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343(6166), 58–61 (2014). [CrossRef]  

36. C.-Y. Tsai, K. N. Kutulakos, S. G. Narasimhan, and A. C. Sankaranarayanan, “The geometry of first-returning photons for non-line-of-sight imaging,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2017), pp. 7216–7224.

37. S. I. Young, D. B. Lindell, B. Girod, D. Taubman, and G. Wetzstein, “Non-line-of-sight surface reconstruction using the directional light-cone transform,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020).

38. P. Rehain, J. Ramanathan, Y. M. Sua, S. Zhu, D. Tafone, and Y.-P. Huang, “Single-photon vibrometry,” Opt. Lett. 46(17), 4346–4349 (2021). [CrossRef]  

39. D. Doktofsky, M. Rosenfeld, and O. Katz, “Acousto optic imaging beyond the acoustic diffraction limit using speckle decorrelation,” Commun. Phys. 3(1), 5 (2020). [CrossRef]  

40. S. Sami, S. R. X. Tan, Y. Dai, N. Roy, and J. Han, “Lidarphone: Acoustic eavesdropping using a lidar sensor: Poster abstract,” in Proceedings of the 18th Conference on Embedded Networked Sensor Systems, (Association for Computing Machinery, New York, NY, USA, 2020), SenSys ’20, p. 701–702.

41. M. Chou, K. Parameswaran, M. M. Fejer, and I. Brener, “Multiple-channel wavelength conversion by use of engineered quasi-phase-matching structures in LiNbO3 waveguides,” Opt. Lett. 24(16), 1157–1159 (1999). [CrossRef]  

42. M. L. Manna, J.-H. Nam, S. A. Reza, and A. Velten, “Non-line-of-sight-imaging using dynamic relay surfaces,” Opt. Express 28(4), 5331–5339 (2020). [CrossRef]  

43. J.-T. Ye, X. Huang, Z.-P. Li, and F. Xu, “Compressed sensing for active non-line-of-sight imaging,” Opt. Express 29(2), 1749–1763 (2021). [CrossRef]  

44. B. Wang, M.-Y. Zheng, J.-J. Han, X. Huang, X.-P. Xie, F. Xu, Q. Zhang, and J.-W. Pan, “Non-line-of-sight imaging with picosecond temporal resolution,” Phys. Rev. Lett. 127(5), 053602 (2021). [CrossRef]  

45. Z. Cao, X. Zhang, G. Osnabrugge, J. Li, I. M. Vellekoop, and A. M. J. Koonen, “Reconfigurable beam system for non-line-of-sight free-space optical communication,” Light: Sci. Appl. 8(1), 69 (2019). [CrossRef]  

46. S. Sajeed and T. Jennewein, “Observing quantum coherence from photons scattered in free-space,” Light: Sci. Appl. 10(1), 121 (2021). [CrossRef]  

47. W. Chen, F. Wei, K. N. Kutulakos, S. Rusinkiewicz, and F. Heide, “Learned feature embeddings for non-line-of-sight imaging and recognition,” ACM Trans. Graph. 39(6), 1–18 (2020). [CrossRef]  

48. A. Turpin, G. Musarra, V. Kapitany, F. Tonolini, A. Lyons, I. Starshynov, F. Villa, E. Conca, F. Fioranelli, R. Murray-Smith, and D. Faccio, “Spatial images from temporal data,” Optica 7(8), 900–905 (2020). [CrossRef]  





Figures (5)

Fig. 1. A sketch of the NLOS system. The system transmits the probe laser and receives signal photons through the transceiver, imaging and sensing the hidden object obscured by the aluminum mesh. The red dots on the diffuser indicate the scanning points; a grid of such points is scanned for NLOS imaging. At each scanning point, the probe laser is redirected onto the hidden object, and the transceiver captures the back-scattered signal photons from the same scanning point used for illumination. MLL: mode-locked laser; WDM: dense-wavelength-division multiplexer, used as an optical filter just after the MLL and as a combiner before the nonlinear optical gated detector; ODL: optical delay line; FPC: fiber polarization controller; MEMS: micro-electromechanical system. The nonlinear gated single photon detector contains a quasi-phase-matched nonlinear waveguide module and a silicon single photon avalanche diode (SPAD).
Fig. 2. Pulse shapes. The pump and probe pulses are measured by frequency-resolved optical gating (FROG). The sum-frequency response is obtained by time-resolved measurement of the probe pulse attenuated by in-fiber attenuators.
Fig. 3. NLOS imaging results for a retroreflective arrow. (a) and (b) are front views of the reconstructed NLOS image without and with the obscurant, with red lines indicating the profile of the arrow. Most of the arrow shape is reconstructed in the presence of the obscurant, except for the right arrow tip marked by the dashed red circle. A photo of the arrow is shown in the bottom-left corner of (e). (c) is an example time histogram for the obscured object, and (d) is the side view of the NLOS reconstruction with the obscurant, which shows ps-level depth resolution of the surface. Note that few counts from the obscurant were received by the single-mode fiber transceiver, which explains the very weak response from the obscurant itself lying in front of the arrow. (e) renders a 3D point cloud of the reconstructed result (no obscurant) using MeshLab [34], with axes labeled in millimeters.
Fig. 4. NLOS target position retrieval. (a) Top view of the setup for NLOS positioning through the obscurant, where the distance from the MEMS mirror to the diffuser is 90 cm. (b) Least-sum-square fitting result for the x-z coordinates of the two bars, where the color scale indicates the probability of the bars' positions. The x-z plane originates at the MEMS mirror. For clarity, the reconstruction is not drawn to the same scale as the setup sketch. (c) The time-resolved measurement on one pixel shows different time-of-flight results for the two bars.

Equations (3)

Equations on this page are rendered with MathJax.

$$\mathbf{f} = \arg\min_{\mathbf{x}} \left( \| \mathbf{y} - \mathbf{A}\mathbf{x} \|_{2}^{2} + \lambda \| \mathbf{x} \|_{1} \right)$$
$$\mathrm{err}(x, z) = \sum_{i=1}^{N} \| t_{e_i} - t_{i}(x, z) \|_{2}^{2}$$
$$P(x, z) = \prod_{i=1}^{N} e^{-\frac{\| t_{e_i} - t_{i}(x, z) \|_{2}^{2}}{2\sigma_{t}^{2}}} = e^{-\frac{\mathrm{err}(x, z)}{2\sigma_{t}^{2}}},$$
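The position-retrieval equations above can be evaluated by a simple grid search: given per-pixel arrival times measured from N scanning points, score each candidate object position (x, z) by the summed squared time residual and the corresponding Gaussian likelihood. The sketch below is illustrative only; the scanning-point layout, the hidden-object position, and the 10 ps timing uncertainty are all assumed, not taken from the experiment.

```python
import numpy as np

c = 3.0e8
sigma_t = 10e-12            # assumed timing uncertainty, ~10 ps gate

# Hypothetical scanning-point x positions on the diffuser (z = 0 plane).
scan_x = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])
true_pos = np.array([0.02, 0.50])   # hidden object at x = 2 cm, z = 50 cm

def tof(x, z):
    """Round-trip scanning-point-to-object times t_i(x, z)."""
    return 2 * np.hypot(scan_x - x, z) / c

t_e = tof(*true_pos)        # noiseless synthetic measurements

# err(x, z) is the summed squared time residual; P(x, z) = exp(-err / 2σ²)
# peaks at the object position.
xs = np.linspace(-0.1, 0.1, 81)
zs = np.linspace(0.3, 0.7, 81)
err = np.array([[np.sum((t_e - tof(x, z)) ** 2) for x in xs] for z in zs])
P = np.exp(-err / (2 * sigma_t ** 2))
iz, ix = np.unravel_index(np.argmax(P), P.shape)
print(xs[ix], zs[iz])       # recovers approximately (0.02, 0.50)
```

With two hidden objects, as in Fig. 4, the likelihood map shows two distinct peaks, one per resolvable time-of-flight return.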