
Non-line-of-sight imaging enhanced with spatial multiplexing

Open Access

Abstract

Non-line-of-sight (NLOS) imaging provides a fascinating way to see through obstacles. As one of the dominant NLOS imaging approaches, transient NLOS imaging uses ultrafast illumination and detection to sense hidden objects. Because ultrafast array detectors still face challenges in manufacturing and cost, most existing transient NLOS imaging schemes use a point detector and therefore require a point-by-point scanning (PPS) process, resulting in relatively low detection efficiency and long imaging times. In this work, we apply a passive-mode single-pixel camera to implement spatial multiplexing detection (SMD) in NLOS imaging and achieve higher data acquisition efficiency. We analyze and demonstrate the superiority of SMD through both simulation and experiment. We also demonstrate an SMD scheme with a compressed sensing (CS) strategy, achieving a compression ratio as low as 18%. By utilizing SMD, we boost the detection efficiency by up to 5 times compared with the traditional PPS mode. We believe that this SMD modality is an important approach to promote the development of NLOS imaging technologies.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Non-line-of-sight (NLOS) imaging reveals the three-dimensional (3D) shape and visual appearance of objects outside the direct line of sight by analyzing light scattered from multiple surfaces. Over the last few years, NLOS imaging technologies have drawn wide attention on account of their applications in medicine, manufacturing, transportation, public safety, and fundamental science. To achieve NLOS imaging, various approaches have been proposed, such as speckle correlations [1], wavefront shaping [2], thermal imaging [3], acoustic imaging [4], occlusion-based imaging techniques [5,6] and transient imaging techniques [7–11]. Among these approaches, the transient imaging technique is the most popular owing to its high spatial resolution and detection sensitivity.

In a transient NLOS imaging system, an ultrashort pulsed laser is used to actively illuminate the target behind a relay surface with diffuse reflection. An ultrafast detector can effectively distinguish signal photons from noise photons caused by multiple background reflections. Once sufficient signals are collected, an inverse calculation method is used to reconstruct the shape of the object. To achieve reasonable spatial resolution, detectors used in a transient NLOS imaging system normally require high temporal resolution, e.g., picoseconds or even higher. For this reason, most transient NLOS imaging works employ point detectors rather than detector array cameras. Although novel ultrafast array detectors such as single-photon avalanche detector (SPAD) array cameras have been intensively developed and tested [12], their temporal resolution is still much lower than that of single-pixel SPADs. Other issues such as spatial non-uniformity also limit the imaging quality of SPAD arrays. Therefore, most existing transient NLOS imaging works are based on point detectors; two typical examples are the point SPAD and the streak camera in raster-scan mode. Between these two types of detectors, streak cameras can achieve higher temporal resolution, while the SPAD is preferable for its lower cost and easier operation.

Although a number of works have realized high-quality reconstruction and have taken many practical situations into account, data acquisition speed remains a challenge that obstructs the realization of real-time NLOS imaging. With a point detector, an NLOS imaging system needs to carry out an iterative spatial scanning process on the surface of the relay wall. Normally, this is achieved by scanning a laser beam point by point over the relay surface. During each scan iteration, the effective signal is at an extremely low level after several (typically three) diffuse reflections, so a relatively long detection duration is always required to gather sufficient signal photons. These two aspects together result in a relatively long acquisition time. In this work, to overcome this issue, we demonstrate an NLOS imaging scheme that uses a single-pixel camera (SPC) for spatial multiplexing detection (SMD) to increase data acquisition efficiency. Single-pixel imaging (SPI) is an alternative to traditional focal-plane detector array technologies. It uses a single-pixel detector and structured sampling (changing DMD patterns) for image retrieval [13,14]. In recent years, SPI has been intensively studied, inspired by both quantum imaging and CS [15–17]. Using SMD for detection enhancement in SPI has been studied as early as Hadamard optics [18].

In Ref. [19], the advantage of SMD is further explained: designed orthogonal patterns (multi-pixel structured scanning) can improve the sampling efficiency over point-by-point scanning (PPS) while avoiding pixel aliasing. This is especially useful under passive illumination, which fits the scenario of NLOS imaging perfectly. In NLOS imaging, a collimated pulsed laser beam illuminates a relay wall at a small spot (determined by the beam size). A small fraction of the reflected photons are incident on the object and reflected again onto the relay wall. After a third reflection, i.e., the second reflection on the relay wall, useful signal photons are incident onto a point detector through an imaging lens. To obtain the geometrical information of signal photons for image reconstruction, at any moment the detector only detects photons from a small spot on the relay wall, namely the conjugate area of the detector window with respect to the imaging lens. However, considering that photons reflected from the object illuminate the whole area of the relay wall, this point detection approach has low efficiency, as most of the useful signal photons are left undetected.

Here, we apply an SPI system and demonstrate the advantage of SMD in NLOS imaging. By introducing SMD, signals over the whole detection area of the relay wall can be collected simultaneously, leading to an improvement in detection efficiency. Through experiment, we show that, compared with PPS, designed orthogonal patterns are able to increase the data acquisition speed by a factor of five with a small array of only $16 \times 16$ detection points. Moreover, we also perform NLOS imaging based on compressive detection. Compressed sensing (CS), proposed in Ref. [20], is a powerful way to improve the efficiency of data collection by reducing the number of detection steps. In this work, compared with designed orthogonal sampling, CS further improves the data acquisition speed by 4 times with a $16 \times 16$ sampling array (or $16 \times 16$ detection points) in the case of sufficient detection time.

2. Methods

A schematic of the experimental setup is shown in Fig. 1. A laser is used to actively illuminate a fixed spot on a relay wall. Part of the laser photons are reflected onto the object, reflected again by the object, and then arrive at the relay wall for a third reflection. This third-reflection light, which contains useful information about the object, is finally detected by a detection system with known detection location information on the relay wall. The laser has a wavelength of 532 nm, an average power of 160 mW and a pulse duration of 5 ps, emitting pulses at a repetition frequency of 10 MHz. A square area of the relay wall with a size of $40 \times 40$ $\mathrm {cm}^{2}$, centered at the illuminating spot of the laser, is used to provide varying detection locations. In our work, information on the varying detection locations is obtained by spatial sampling using an SPC. As shown at the bottom left of Fig. 1, it is composed of a camera lens (14 $\mathrm {mm}$ focal length, $f/2.8$) that images the $40 \times 40$ $\mathrm {cm}^{2}$ square area onto a digital micro-mirror device (DMD), which has a refresh rate of up to 22 kHz. The SPC is placed 90 $\mathrm {cm}$ from the wall, imaging the sampling area onto an area of $576 \times 576$ pixels of the DMD chip. Each detection area on the relay wall is set to a size of 0.9 $\times$ 0.9 $\mathrm {cm}^{2}$, corresponding to $12 \times 12$ pixels of the DMD chip. In Fig. 1, green spots on the wall indicate the detected points under a certain coded pattern. Photons spatially encoded by the DMD are detected by a single-pixel single-photon avalanche detector (SPAD) (MPD, PD-100-CTC-FC) through a signal gathering system, which is composed of a 160-mm focal length lens and an objective (magnification factor 20). The single-photon detector therefore acts as a bucket detector; it has a 37 ps timing jitter and a detection area of $100 \times 100$ $\mathrm {\mu m}^{2}$.
Single-photon data are recorded by a time-correlated single photon counting (TCSPC) module, triggered by a reference beam that is split off by a beam splitter before the laser is sent to the wall. In this work, all histograms of photon counts versus time bins are set with a time bin duration of 3 ps and a depth of 20000 bins. The system bandwidth is measured to be about 72 ps, using the method mentioned in Ref. [9]. To carry out NLOS reconstruction, we need to know the photon histogram of each detection point. However, one photon histogram recorded by TCSPC is a combination of photon signals over different detection points on the relay wall. With appropriate detection structures, the photon histogram of each sampling point can be recovered from the TCSPC data by solving an inverse problem. Mathematically, the sampling process of SPI can be expressed as

$$Y=AX$$
where $A$ is an $N\times M$ matrix that denotes the sampling basis, $M$ is the number of imaging pixels, and $N$ represents the number of measurement steps. $X$ is an $M\times N_{t}$ matrix that represents the collected signal from the relay wall, where $N_{t}$ is the sampling depth of the photon histograms in time. $Y$ is an $N\times N_{t}$ matrix that represents the detection signals from the TCSPC. The sampling basis takes different forms according to different sampling strategies. For PPS, it is an identity matrix whose size is equal to the number of imaging pixels. For SMD in this work, $A$ is a Hadamard matrix for orthogonal detection, or a random matrix or a partial Hadamard matrix [21] for CS. For an orthogonal matrix, $N$ is equal to $M$, while for CS, $N$ is normally smaller than $M$. As spatial sampling is normally performed in 2D, in each sampling iteration one row of matrix $A$ is reshaped into a matrix for 2D structured sampling.
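The sampling model of Eq. (1) can be sketched numerically. The following Python snippet (with illustrative dimensions, not the experimental ones) contrasts the identity basis of PPS with an orthogonal Hadamard basis, whose property $A^{T}A = M I$ admits a direct inverse:

```python
import numpy as np
from scipy.linalg import hadamard

M = 16      # number of detection points (e.g. a 4x4 array)
N_t = 64    # number of time bins per histogram

rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(M, N_t)).astype(float)  # synthetic per-point histograms

# PPS: the sampling basis is the identity, one point per measurement
A_pps = np.eye(M)
Y_pps = A_pps @ X
assert np.allclose(Y_pps, X)

# SMD: a Hadamard basis mixes all points in every measurement
A_smd = hadamard(M)            # +-1 entries, rows mutually orthogonal
Y_smd = A_smd @ X              # N x N_t multiplexed histograms

# Orthogonality (A^T A = M*I) gives a direct inverse, cf. Eq. (2)
X_rec = A_smd.T @ Y_smd / M
assert np.allclose(X_rec, X)
```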

With a Hadamard sampling basis, the signal at each detection point on the relay wall, $X$, can be recovered by an inverse calculation, which can be expressed as

$$O=A^{T}Y$$
here $T$ denotes matrix transpose. $O$ is an $M\times N_t$ matrix that represents the detection-point-dependent photon histograms; the detection points are indexed row-wise, and the time bins column-wise. To illustrate the detection and recovery process of SMD NLOS imaging, a schematic is shown in Fig. 2. For straightforward illustration, we set the detection array on the relay wall to $2 \times 2$ pixels, and use a fourth-order Hadamard matrix as the sampling basis. Four photon histograms (Fig. 2(a)) are detected under different detection patterns. Each $2 \times 2$ pattern (Fig. 2(b)) is derived from the corresponding row of the Hadamard matrix. It should be noted that a Hadamard matrix contains $\pm 1$ elements. To achieve $\pm 1$ modulation, differential measurement is applied: the display of one Hadamard-derived pattern $P_i$ is realized by displaying two complementary (0, 1) patterns, and the corresponding signal is the difference of the two non-negative histograms. Therefore, to achieve $2\times 2$ sampling, we actually use 8 structured displays. Under each modulation pattern, a time-resolved photon count histogram is obtained, which is the combination of photon signals over the different detection points (Fig. 2(c)). The restoration process is represented by Eq. (2). This strategy is suitable for any reconstruction algorithm based on time-of-flight (ToF) and the ellipsoid model, such as f-k migration [10], convolutional approximations [22] and so on.
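The differential measurement described above can be sketched as follows. This example reproduces the $2\times 2$, fourth-order Hadamard configuration of Fig. 2 with synthetic histograms, realizing each $\pm 1$ row as two complementary binary masks:

```python
import numpy as np
from scipy.linalg import hadamard

# 2x2 detection array, fourth-order Hadamard basis, as in Fig. 2
M = 4
H = hadamard(M)                          # rows of +1/-1 entries
rng = np.random.default_rng(1)
X = rng.poisson(20.0, size=(M, 32)).astype(float)   # synthetic histograms

# The DMD displays only binary (0,1) masks, so each +-1 row P_i is
# realised by two complementary masks: P_i = P_plus - P_minus.
# 4 rows x 2 masks = 8 structured displays in total.
Y = np.empty_like(X)
for i, row in enumerate(H):
    p_plus = (row > 0).astype(float)     # 1 where the Hadamard entry is +1
    p_minus = (row < 0).astype(float)    # 1 where the Hadamard entry is -1
    Y[i] = p_plus @ X - p_minus @ X      # differential histogram

# Recover the point-wise histograms, cf. Eq. (2) (with a 1/M scale)
X_rec = H.T @ Y / M
assert np.allclose(X_rec, X)
```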


Fig. 1. Schematic of the experimental setup for NLOS imaging. A laser is used to actively illuminate a fixed spot on a relay wall. Part of the laser photons are reflected onto the object, reflected again by the object, and then arrive at the relay wall for a third reflection. A camera lens images a square area of the relay wall onto the DMD; photons spatially encoded by the DMD are detected by a single-pixel SPAD through a signal gathering system composed of a lens and a micro-objective. A picture of the modulating system is shown at the bottom left. Single-photon data are recorded by a TCSPC module, triggered by a reference beam that is split off by a beam splitter before the laser is sent to the wall.


Fig. 2. Principle of SMD NLOS imaging. For straightforward illustration, we set the detection array to $2 \times 2$ pixels, and use a fourth-order Hadamard matrix H as the sampling basis. Figure 2(a) shows the original signals at each detection point. Figure 2(b) shows the four measurement patterns, which are derived from the corresponding rows of the Hadamard matrix. Under each measurement pattern, the time-resolved photon count histogram shown in Fig. 2(c) is obtained, which is the combination of photon signals over the different detection points. Figure 2(d) shows the restored detection-point-dependent signals.


Having retrieved the signal of each detection point, we can retrieve the NLOS image under a certain geometrical model. In our work, as the illumination and detection systems are arranged in a non-coaxial modality, we adopt the ellipsoid model, and therefore a filtered back-projection algorithm [8,23,24] is used for NLOS reconstruction. For convenience, we assume that the detection locations $x'$, $y'$ are on the plane $z=0$, and the illumination spot is at $(0, 0, 0)$. The position of a detection point is therefore indexed as $(x', y', 0)$. The time-resolved, detection-point-dependent photon histogram is related to the shape and location of the hidden object as expressed by the following formulation,

$$o(x',y',t)=\iiint_{\Omega} \tfrac{1}{r_{l}^{2}r_{v}^{2}}\rho (x,y,z)\delta ({r_{l}+r_{v}-tc})dxdydz$$
This is the forward model for non-confocal NLOS systems. Here, $\rho$ denotes the unknown reflectance values of the hidden scene albedo, $c$ is the speed of light, and $x$, $y$, $z$ are the spatial coordinates of the hidden volume. $r_{l}=\sqrt {x^{2}+y^{2}+z^{2}}$ is the distance between an object voxel and the laser spot, and $r_{v}=\sqrt {(x-x')^{2}+(y-y')^{2}+z^{2}}$ is the distance between that portion of the object and the detection point. The Dirac delta function $\delta (\cdot )$ relates the time of flight $t$ to the distance function $r=r_{l}+r_{v}$, which defines the major axis of an ellipsoid whose foci are the illumination and detection spots on the relay wall. The $1/r^{2}$ terms encode the intensity decay with distance due to diffuse reflection from the wall and object. Having obtained the detection-point-dependent histograms, we can use a filtered back-projection algorithm for 3D NLOS reconstruction. In such a model, the photon histograms, together with their illumination and detection locations on the relay wall, contribute to reconstructing all possible ellipsoidal shells represented by $r_{l}+r_{v}=tc$. Each ellipsoidal shell provides a necessary condition for part of the object. By accumulating ellipsoidal shells, the shape and location of the NLOS object can be determined from their intersection. The reconstruction can be expressed as
$$\rho_{bp} (x,y,z)=\iiint_{\Omega} \tfrac{1}{r_{l}^{2}r_{v}^{2}}o(x',y',t)\delta ({r_{l}+r_{v}-tc})dx'dy'dt$$
Here $\rho _{bp}$ is the back-projected volumetric albedo.
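A minimal numerical sketch of Eqs. (3) and (4), assuming a single point scatterer and omitting the $1/r^{2}$ falloff and filtering for clarity, illustrates how the back-projected ellipsoidal shells intersect at the object position (all geometry here is illustrative, not the experimental configuration):

```python
import numpy as np

C = 3e10                     # speed of light, cm/s
BIN = 10e-12                 # 10 ps time bins (illustrative)

# Detection grid on the relay wall (z = 0); illumination spot at the origin
xs = np.linspace(-20, 20, 8)
det = np.array([(x, y) for x in xs for y in xs])   # (x', y') pairs, in cm

# Hidden scene: a single scatterer voxel
p_true = np.array([2.0, -3.0, 25.0])

# Forward model (Eq. 3): each detection point records a delta at
# t = (r_l + r_v)/c; the 1/r^2 falloff is dropped for clarity.
n_bins = 1200
o = np.zeros((len(det), n_bins))
r_l = np.linalg.norm(p_true)
for m, (xd, yd) in enumerate(det):
    r_v = np.sqrt((p_true[0]-xd)**2 + (p_true[1]-yd)**2 + p_true[2]**2)
    o[m, int(round((r_l + r_v) / C / BIN))] += 1.0

# Back-projection (Eq. 4): accumulate each histogram onto the
# ellipsoidal shell r_l + r_v = t*c over a voxel grid.
gx = np.linspace(-10, 10, 21)
gz = np.linspace(15, 35, 21)
X, Yv, Z = np.meshgrid(gx, gx, gz, indexing="ij")
rl = np.sqrt(X**2 + Yv**2 + Z**2)
rho = np.zeros_like(X)
for m, (xd, yd) in enumerate(det):
    rv = np.sqrt((X-xd)**2 + (Yv-yd)**2 + Z**2)
    t_idx = np.round((rl + rv) / C / BIN).astype(int)
    rho += o[m, np.clip(t_idx, 0, n_bins-1)]

# The brightest voxel should fall at the true scatterer position
i, j, k = np.unravel_index(np.argmax(rho), rho.shape)
print(gx[i], gx[j], gz[k])
```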

3. Advantage of spatial multiplexing detection NLOS imaging

The advantage of SMD in ordinary SPI schemes has been discussed in Ref. [19]. Briefly, in the case of passive illumination where the light flux is fixed, rather than spatially sampling the region of interest point by point, it is more efficient to perform structured detection over multiple points simultaneously. In particular, with orthogonal detection structures, this efficiency advantage can be maximized while avoiding pixel aliasing. As mentioned above, most existing NLOS imaging schemes are based on a point detector, which conforms to the single-pixel imaging modality. A proof-of-principle work applying SPI in the NLOS scenario was demonstrated earlier [25], yet the advantage of SMD was not discussed. In this section, we demonstrate the advantage of SMD in NLOS imaging by comparing two detection methods, the traditional PPS and orthogonal SMD. Although transient NLOS imaging uses active illumination, the signal light scattered by the hidden object is distributed over the whole intermediary surface. Detection of the scattered light therefore perfectly fits the criterion of passive-illumination single-pixel imaging. In this section we demonstrate the improvement of reconstruction quality by orthogonal Hadamard sampling; the effects of the partial Hadamard sampling matrix and the random matrix will be discussed together with CS in the next section.

3.1 Simulation of spatial multiplexing detection NLOS imaging

We first analyze the process of SMD NLOS imaging in simulation. Hidden objects, the letters U and S, are placed 30 $\mathrm {cm}$ away from the relay wall. The hidden scene is contained within a 3D volume of 30 $\mathrm {cm}$ $\times$ 30 $\mathrm {cm}$ $\times$ 15 $\mathrm {cm}$, which is divided into $100\times 100\times 50$ voxels in our reconstruction. The sampling basis is a Hadamard matrix for SMD or an identity matrix for PPS. Here the sampling array for PPS is set to $32 \times 32$ pixels, while for SMD mode three different array sizes are tested. All photon histograms are detected with a time bin width of 1 ps. The system time resolution is set to 50 ps. The iterative detection duration is set to 1 s for both PPS mode and SMD mode. In the simulation, only detection noise is considered, which is generated following a Poisson distribution. The signal-to-dark-counts ratio in one PPS detection is about 10$\%$; the detection noise is set to the same level among all imaging scenarios. Simulated NLOS results are shown in Fig. 3. All signals shown in Fig. 3 are from the same detection point. To quantify the image quality, the structural similarity index (SSIM) is adopted as the figure of merit. SSIM [26] is a full-reference metric that describes the statistical similarity between two images, which can be expressed as:

$$SSIM=\frac{(2 \mu _{x} \mu _{y}+C_{1})(2\sigma _{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma _{x}^{2}+\sigma _{y}^{2}+C_{2}) }$$
where $\mu _{x}$ and $\mu _{y}$ are the means of $x$ and $y$, $\sigma _{x}$ and $\sigma _{y}$ are their standard deviations, and $\sigma _{xy}$ is the covariance of $x$ and $y$. $C_{1}$ and $C_{2}$ are constants that stabilize the division when the denominator is weak: $C_{1} = (K_1L)^{2}$ and $C_{2} = (K_2L)^{2}$, where generally $K_1 = 0.01$, $K_2 = 0.03$, and $L$, the dynamic range of the pixel values, is generally taken as 255. Figure 3(a) shows the photon histogram of a selected detection point and the final reconstruction result from PPS. With a detection time of 1 s per pattern, the signal is submerged in the noise. Insufficient detection signal-to-noise ratio (SNR) therefore leads to the failure of image reconstruction. The detection-point-dependent signals and reconstruction results of SMD under different detection array sizes are shown in Figs. 3(b), 3(c) and 3(d). The SNR and image quality increase obviously with the number of detection points. In PPS, the total detection time is 1024 s, which is the same as that for the $32 \times 32$ sampling array in SMD mode, and much longer than for the other two detection arrays in SMD. However, all three tests in SMD mode lead to reasonable reconstructions. In particular, Fig. 3(b) uses $6.25\%$ of the detection time of Fig. 3(a) to achieve a reasonable image. This is because, in SMD mode, each sampling point is detected in all detection iterations. The effective detection duration of each detection point, $\tau$, is therefore the product of the iterative detection time $\tau _i$ and the number of detection steps $N$:
$$\tau=\tau_i\times N.$$
where $N$ is equal to the number of detection points $M$ for Hadamard sampling. In contrast, in PPS mode, one sampling point is only detected for one unit time, i.e., the duration of an iteration $\tau _i$. Therefore, within the same total detection time, SMD can significantly increase the effective detection time.
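The SSIM of Eq. (5) can be computed directly from global image statistics, as in the following sketch (a single-window version; library implementations typically use sliding windows):

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Global SSIM as in Eq. (5), computed over the whole image
    rather than in sliding windows."""
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # population variances
    cxy = ((x - mx) * (y - my)).mean()      # covariance
    return ((2*mx*my + c1) * (2*cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.default_rng(2).integers(0, 256, size=(64, 64))
assert np.isclose(ssim_global(img, img), 1.0)   # identical images score 1
assert ssim_global(img, 255 - img) < 1.0        # a degraded copy scores lower
```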


Fig. 3. Simulation results of SMD NLOS imaging. In the case that only detector noise is considered, the noise level of the PPS signal in each detection is the same as that of SMD. All demonstrated signals are from the same detection point. Figure 3(a) shows the signal and reconstruction result of PPS; the sampling array is $32 \times 32$. Figures 3(b), (c) and (d) show the detection-point-dependent signals and reconstruction results of SMD with sampling arrays of $8 \times 8$, $16 \times 16$ and $32 \times 32$, respectively.


3.2 Experiment of spatial multiplexing detection NLOS imaging

We further demonstrate the advantage of SMD in experiment. We verify the superiority of SMD by comparing the imaging quality of different sampling modes with the same iterative detection time, or the iterative detection time required to achieve the same imaging quality. Two hidden objects are adopted for the different experiments: two 12 $\mathrm {cm}$ long rectangles with an interval of 4.5 $\mathrm {cm}$, and a 12 $\mathrm {cm}$ $\times$ 10 $\mathrm {cm}$ letter T, both placed 15 $\mathrm {cm}$ away from the wall. We divide the object space, with an overall volume of $30 \times 30 \times 10$ $\mathrm {cm}^{3}$, into $150\times 150\times 100$ voxels, each voxel having a size of $0.2 \times 0.2 \times 0.2$ $\mathrm {cm}^{3}$. The reconstruction algorithm in this experiment is the filtered back-projection algorithm [8]. In the first experiment, we collect the signal scattered from the two rectangles using an $8 \times 8$ sampling array; the iteration detection time (the measurement time of each row vector of the Hadamard or identity matrix) varies from 0.3 s to 5 s. In the second experiment, the hidden object is the letter T, and we explore the effect of sampling size in SMD mode with sampling arrays of $8 \times 8$ and $16 \times 16$. In SMD mode, we apply orthogonal Hadamard patterns with differential sampling: for each Hadamard pattern, one binary mask and its complement are combined for $\pm 1$ modulation. In PPS mode, signals from just one detection point are collected for every pattern; differential sampling is also applied to PPS for a unified sampling time with SMD. The peak signal-to-noise ratio (PSNR) is employed to evaluate the imaging quality. PSNR is an important parameter in the objective evaluation of image quality: the larger the PSNR, the better the image quality.
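For reference, PSNR can be computed as follows (the standard definition; the 8-bit peak value of 255 is an illustrative assumption):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ref."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# A lightly corrupted image scores higher than a heavily corrupted one
ref = np.full((32, 32), 128.0)
noisy = ref + np.random.default_rng(3).normal(0, 5, ref.shape)
very_noisy = ref + np.random.default_rng(3).normal(0, 25, ref.shape)
assert psnr(ref, noisy) > psnr(ref, very_noisy)
```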

Figure 4 shows the experimental results for the two rectangles; both signals and reconstruction results are included. Figure 4(a) is the histogram acquired with one row of the Hadamard matrix, and Fig. 4(b) is the histogram of a certain detection point, solved using Eq. (2). Figure 4(c) is the PPS histogram obtained from the same detection point as Fig. 4(b). Apparently, the signal obtained from SMD has a higher SNR than that from PPS. We take the values from 15000 ps to 16000 ps in the histograms to estimate the noise, and use the signal peak as the signal value. The detection SNR of SMD is estimated to be 8 times higher than that of PPS mode. Reconstruction results are shown in Fig. 4(d); with the same detection time, the reconstruction of SMD has a higher PSNR. Moreover, the PSNR of SMD with 0.3 s iterative detection time is higher than that of PPS with 1 s iterative detection time; thus, the speed of data collection is improved by more than three times.
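The SNR estimate described above (signal peak over the noise in a signal-free window) can be sketched as follows, using a synthetic histogram with the experimental 3 ps bin width; the peak position and amplitudes are illustrative assumptions:

```python
import numpy as np

def histogram_snr(hist, noise_slice):
    """Estimate detection SNR of a transient histogram: signal peak
    above the background, divided by the noise standard deviation
    measured in a signal-free window."""
    noise = hist[noise_slice]
    return (hist.max() - noise.mean()) / noise.std()

# Synthetic histogram: Poisson dark counts plus a Gaussian signal peak
rng = np.random.default_rng(4)
t = np.arange(20000)                                   # 3 ps bins
hist = rng.poisson(2.0, t.size).astype(float)
hist += 80.0 * np.exp(-0.5 * ((t - 4000) / 30.0) ** 2) # peak near 12000 ps

# Bins 5000-5333 correspond to ~15000-16000 ps at 3 ps/bin
snr = histogram_snr(hist, slice(5000, 5334))
assert snr > 10
```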


Fig. 4. Experimental results of two rectangles; the iteration detection time of the demonstrated signals is 1 s. Figure 4(a) is the signal obtained from the projection of one row of the Hadamard matrix. Figure 4(b) is the signal of a certain detection point, which is one solution of the spatial multiplexing. Figure 4(c) is a signal obtained in PPS mode; Figs. 4(c) and 4(b) are signals from the same detection point. Figure 4(d) shows the reconstruction results of the two rectangles in both PPS mode and SMD mode, with the iteration detection time varying from 0.3 s to 5 s. Figure 4(d) shows the 3D shape of the object, including the projections onto the $x-y$ plane and the $x-z$ plane.


Figure 5 shows the detected signals and reconstruction results for the letter T. Here, we demonstrate the advantages of SMD by showing signals with the same iteration detection time and reconstruction results with the same PSNR. Figure 5(a) shows the detected signals of the different detection modes with an iteration detection time of 1 s, for sampling arrays of $8\times 8$ and $16 \times 16$. The SNR of the SMD signals increases with the number of detection points, whereas the PPS signals are almost unchanged.


Fig. 5. Experimental results for the letter T. Figure 5(a) shows the signals of a certain detection point on the relay wall with different numbers of detection points and in different detection modes; the iteration detection time is 1 s, and the detection arrays are $8\times 8$ and $16 \times 16$, respectively. Figure 5(b) shows reconstruction results with different iteration detection times and different numbers of detection points. We choose the iteration detection time at which the hidden object can just be reconstructed.


Figure 5(b) shows reconstruction results with different detection times and different sampling arrays. We compare the data acquisition speed by comparing the detection time of reconstruction results with the same PSNR. Compared with PPS mode, the speed of data collection is improved by more than three times with an $8\times 8$ sampling array and five times with a $16\times 16$ sampling array. According to the reconstruction results, SMD can achieve better results even with much less iteration detection time $\tau _i$. This is because the effective detection duration $\tau$ in SMD is improved by $64$ or $256$ times, and therefore the iterative detection duration $\tau _i$ needed to achieve a similar quality of reconstruction is reduced, whereas in PPS mode the effective detection duration $\tau$ is equal to the iterative detection duration $\tau _i$. It is noticeable that the improvement in imaging speed is less pronounced than the improvement in effective detection duration $\tau$. This indicates that our SPC system may introduce extra error, such as timing jitter, that limits the gain in detection efficiency.

4. Compressed sensing

An alternative way to conduct SMD is to employ compressed sensing (CS) for reconstruction in both the space domain and the time domain [27]. CS can reconstruct images under under-sampled conditions, i.e., $\frac {N}{M}<1$, where the compression ratio $\alpha$ is equal to $\frac {N}{M}$. According to CS theory, an unknown signal can be fully recovered from a small number of samples when, first, the signal has a sparse representation in some basis, and second, the sensing matrix is incoherent with respect to the representation basis [28]. In this work we choose a random binary matrix [29] and a partial Hadamard matrix [21] as sensing matrices to conduct CS-SMD NLOS imaging. In NLOS imaging, the signal on the wall is continuous and slowly changing, which means that it is compressible. Ref. [30] applied CS to NLOS imaging in PPS mode; with sufficient detection SNR, it reduces the number of PPS points while keeping a reasonable reconstruction quality. Here, we apply CS to NLOS imaging by SMD, which can not only achieve high-resolution reconstruction with fewer samples, but also improve the speed of data acquisition. Furthermore, different sensing matrices are demonstrated.
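The CS recovery step can be illustrated with a simple sketch. The experiments use the TV solver TVAL3; the snippet below substitutes a basic ISTA ($\ell_1$) solver with a partial Hadamard sensing matrix acting on a synthetic sparse signal, using an $8\times 8$-sized array ($M=64$) and a 75% sampling ratio for illustration:

```python
import numpy as np
from scipy.linalg import hadamard

M = 64                       # detection points (an 8x8 array)
N = 48                       # measurement steps -> 75% compression ratio
rng = np.random.default_rng(5)

# Synthetic sparse "scene": 4 nonzero entries
x_true = np.zeros(M)
x_true[rng.choice(M, 4, replace=False)] = rng.uniform(1.0, 3.0, 4)

# Partial Hadamard sensing matrix: N randomly chosen rows, normalized
H = hadamard(M)
rows = rng.choice(M, N, replace=False)
A = H[rows] / np.sqrt(M)     # rows are orthonormal, so A A^T = I
y = A @ x_true

# ISTA: iterative shrinkage-thresholding for
# min 0.5*||A x - y||^2 + lam*||x||_1
lam, step = 0.005, 1.0       # step = 1 since the top eigenvalue of A^T A is 1
x_k = np.zeros(M)
for _ in range(1000):
    g = x_k - step * (A.T @ (A @ x_k - y))                      # gradient step
    x_k = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold

rel_err = np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true)
print(rel_err)
```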

4.1 Simulation of CS-SMD NLOS imaging

We simulate the CS performance in SMD NLOS imaging, and analyze the minimum achievable compression ratio under different sizes of detection arrays. The hidden object is a letter T whose horizontal and vertical lengths are both 10 $\mathrm {cm}$, placed 30 $\mathrm {cm}$ away from the wall. The sampling array varies from $8 \times 8$ to $32 \times 32$. The detection area spans a 40 $\mathrm {cm}$ $\times$ 40 $\mathrm {cm}$ square. The hidden scene is within a 30 $\mathrm {cm}$ $\times$ 30 $\mathrm {cm}$ $\times$ 15 $\mathrm {cm}$ volume, divided into $100\times 100\times 50$ voxels. The iteration detection time is 1 s. All photon histograms are detected with a time bin width of 1 ps. The total system time resolution is set to 50 ps.

As discussed in the section above, the total effective detection time $\tau$ has an important influence on the NLOS reconstruction quality. To focus on how the compression ratio affects the reconstruction, we assume sufficient $\tau$, and therefore sufficient detection SNR, in our simulation. We employ the TVAL3 algorithm [31] to restore the detection-point-dependent photon histograms, and the filtered back-projection algorithm to reconstruct the hidden object. SSIM is used to evaluate the image quality. Simulation results are shown in Fig. 6. The results of CS reconstruction by random sampling and by the partial Hadamard matrix are shown in Figs. 6(a) and 6(b), respectively. Apart from the low compression ratio condition, reconstructions using the partial Hadamard matrix are better than those using random patterns. More importantly, our simulation results indicate that, even with sufficient detection SNR, the reduction of the compression ratio will affect the NLOS reconstruction. This is because CS reconstruction itself is an underdetermined problem. Under a low sampling ratio, errors are introduced into the reconstructed detection-point-dependent photon histograms, which then significantly affect the NLOS reconstruction. This is true especially when the sampling array is small: with a small sampling array, a low compression ratio means a limited number of sampling steps, and the error introduced into the restored photon histograms becomes significant. On the contrary, for a given object, a larger sampling array gives more room for sampling ratio compression.


Fig. 6. Simulation results of CS. Figure 6(a) shows reconstruction results with a random binary matrix as the sampling basis. Figure 6(b) shows reconstruction results with a partial Hadamard matrix as the sampling basis. The compression ratio varies from 12.5$\%$ to 50$\%$, and the sampling array varies from $8 \times 8$ to $32 \times 32$. We apply the SSIM to evaluate the image quality.


4.2 Experiment of CS-SMD NLOS imaging

We carry out an experimental demonstration of the performance of CS-SMD detection in NLOS imaging, and discuss the influence of the number of detection points on the compression ratio. The hidden object is a 12 $\mathrm {cm}$ $\times$ 10 $\mathrm {cm}$ letter T, placed 15 $\mathrm {cm}$ away from the wall. In reconstruction we discretize the space, with an overall volume of $30 \times 30 \times 10$ $\mathrm {cm}^{3}$, into $150\times 150\times 100$ voxels, each voxel having a size of $0.2 \times 0.2 \times 0.2$ $\mathrm {cm}^{3}$. The TVAL3 algorithm and the filtered back-projection algorithm are also used in the experiment. A sufficient iteration detection time of 3 s per pattern is used to obtain a good reconstruction result. PSNR is employed to evaluate the imaging quality. Reconstruction results with $8 \times 8$ and $16 \times 16$ sampling arrays are shown in Fig. 7. For both random patterns and partial Hadamard patterns, we regard 75$\%$ and 18$\%$ as the minimum compression ratios at which NLOS reconstruction is achieved for the $8 \times 8$ and $16 \times 16$ sampling arrays, respectively. Consistent with the simulation results, the experimental results using the partial Hadamard matrix are better than those using the random matrix, but the minimum compression ratio they can achieve is the same. This means that it takes the same total detection time to reach the minimum compression ratio for the different sampling arrays in the experiment. For comparison, with the $16 \times 16$ sampling array, the reconstruction result of orthogonal Hadamard sampling with 3 s iteration detection time is shown in Fig. 7(c); it has a reconstruction PSNR similar to that using the 25$\%$ partial Hadamard matrix. Compared to orthogonal Hadamard sampling, CS-SMD can therefore further improve the data acquisition speed by 4 times under sufficient detection SNR. This indicates that CS can effectively reduce the compression ratio for large sampling arrays, which will be preferred for certain NLOS schemes [8–10].

 figure: Fig. 7.

Fig. 7. Experimental results of CS with an iteration detection time of 3 s. Figures 7(a) and 7(b) are reconstruction results with the random matrix. Figures 7(d) and 7(e) are reconstruction results with the partial Hadamard matrix. Figures 7(a) and 7(d) show the experimental results for the letter T with an $8 \times 8$ sampling array; the minimum compression ratio is 75$\%$. Figures 7(b) and 7(e) show the experimental results with a $16 \times 16$ sampling array; the minimum compression ratio can reach 18$\%$. Figure 7(c) shows the reconstruction result of orthogonal Hadamard sampling with a $16 \times 16$ sampling array and an iteration detection time of 3 s.


5. Conclusion

In this work, we demonstrate an NLOS imaging modality with SMD. By detecting structured back-scattered light under a certain basis, SMD significantly increases the detection efficiency. In the experiment, this is achieved by placing an SPC at the collecting end of the single-photon detection. Point-wise photon histograms are computationally derived and then used in NLOS imaging. To demonstrate the proposed method, both simulation and experiment are conducted. With an orthogonal sampling basis, we show that SMD surpasses the traditional PPS mode in terms of detection efficiency and imaging speed. With a $16 \times 16$ sampling array, we demonstrate a 5 times boost of imaging speed using SMD compared with the PPS mode. We also apply compressed sensing sampling in NLOS imaging; with a $16 \times 16$ sampling array, a compression ratio of 18$\%$ is achieved. In both orthogonal Hadamard and CS modes, SMD significantly increases the imaging speed in the NLOS scenario. More importantly, this is achieved without sacrificing any other properties. Nevertheless, the boost of imaging speed is not as large as theoretically expected, indicating that there is still room to improve the performance of our SMD system, i.e., the SPC. In the future, effort should be devoted to increasing the collection efficiency of the SPC, aiming for higher SMD performance while keeping the temporal-resolution advantage of the single-pixel SPAD.
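The step of computationally deriving point-wise photon histograms from structured measurements can be sketched as follows. With a full orthogonal Hadamard basis, each measured histogram is a linear combination of the per-point histograms, and the inverse transform recovers them at every time bin. This is a minimal noiseless sketch with $\pm 1$ basis entries and sizes of our own choosing, not the authors' implementation (a real SPC modulates 0/1 patterns and measurements carry photon noise).

```python
import numpy as np

def recover_histograms(measurements, H):
    """Recover point-wise photon histograms from Hadamard-multiplexed
    measurements (illustrative sketch, not the authors' code).

    measurements : (n, T) array, one time histogram per pattern
    H            : (n, n) orthogonal Hadamard basis (entries +/-1)
    Returns an (n, T) array of per-point histograms."""
    n = H.shape[0]
    # For a Hadamard matrix, H @ H.T = n * I, so its inverse is H.T / n.
    return (H.T @ measurements) / n

# Tiny demonstration: a 4-point "scene" with 5 time bins each.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)
truth = np.random.default_rng(1).poisson(5.0, size=(4, 5)).astype(float)
meas = H @ truth                      # multiplexed detection
rec = recover_histograms(meas, H)
assert np.allclose(rec, truth)        # histograms recovered exactly
```

In the noiseless case the recovery is exact; the multiplexing gain in practice comes from each measurement collecting light from many points at once.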

Funding

Shandong Key Research and Development Program (2019GGX104002, 2020CXGC010104); Shandong University Inter-discipline Research Grant; Shandong Joint Funds of Natural Science (ZR2019LLZ003-1).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Bertolotti, E. Putten, C. Blum, A. Lagendijk, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

2. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]  

3. T. Maeda, Y. Wang, R. Raskar, and A. Kadambi, “Thermal non-line-of-sight imaging,” in 2019 IEEE International Conference on Computational Photography (ICCP), (2019), pp. 1–11.

4. D. B. Lindell, G. Wetzstein, and V. Koltun, “Acoustic non-line-of-sight imaging,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019).

5. C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565(7740), 472–475 (2019). [CrossRef]  

6. F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26(8), 9945–9962 (2018). [CrossRef]  

7. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. Huu Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual waveoptics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

8. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

9. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

10. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

11. C. Wu, J. Liu, X. Huang, Z. P. Li, and J. W. Pan, “Non-line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118, e2024468118 (2021). [CrossRef]  

12. M. Renna, J. H. Nam, M. Buttafava, F. Villa, A. Velten, and A. Tosi, “Fast-gated 16 × 1 SPAD array for non-line-of-sight imaging applications,” Instruments 4(2), 14 (2020). [CrossRef]  

13. N. J. A. Sloane and M. Harwit, “Masks for hadamard transform optics, and weighing designs,” Appl. Opt. 15(1), 107 (1976). [CrossRef]  

14. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35(1), 78–87 (2018). [CrossRef]  

15. Q. Guo, H. Chen, Z. Weng, M. Chen, S. Yang, and S. Xie, “Compressive sensing based high-speed time-stretch optical microscopy for two-dimensional image acquisition,” Opt. Express 23(23), 29639–29646 (2015). [CrossRef]  

16. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

17. G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998 (2017). [CrossRef]  

18. M. Harwit and N. J. A. Sloane, Hadamard Transform Optics (Academic Press, 1979).

19. S. Jiang, X. Li, Z. Zhang, W. Jiang, and B. Sun, “Scan efficiency of structured illumination in iterative single pixel imaging,” Opt. Express 27(16), 22499 (2019). [CrossRef]  

20. E. Candes, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math. 59(8), 1207–1223 (2006). [CrossRef]  

21. Y. Tsaig and D. L. Donoho, “Extensions of compressed sensing,” Signal processing 86(3), 549–571 (2006). [CrossRef]  

22. B. Ahn, A. Dave, A. Veeraraghavan, I. Gkioulekas, and A. C. Sankaranarayanan, “Convolutional approximations to the general non-line-of-sight imaging operator,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2019), pp. 7889–7899.

23. M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2018). [CrossRef]  

24. M. Laurenzis and A. Velten, “Feature selection and back-projection algorithms for nonline-of-sight laser–gated viewing,” J. Electron. Imaging 23(6), 063003 (2014). [CrossRef]  

25. G. Musarra, A. Lyons, E. Conca, Y. Altmann, and D. Faccio, “Non-line-of-sight three-dimensional imaging with a single-pixel camera,” Phys. Rev. Appl. 12(1), 011002 (2019). [CrossRef]  

26. C. M. Ward, J. Harguess, B. Crabb, and S. Parameswaran, “Image quality assessment for determining efficacy and limitations of super-resolution convolutional neural network (SRCNN),” in Applications of Digital Image Processing XL, vol. 10396 (International Society for Optics and Photonics, 2017), p. 1039605.

27. J. Zhao, J. Dai, B. Braverman, X.-C. Zhang, and R. W. Boyd, “Compressive ultrafast pulse measurement via time-domain single-pixel imaging,” Optica 8(9), 1176–1185 (2021). [CrossRef]  

28. R. M. Willett, R. F. Marcia, and J. Nichols, “Compressed sensing for practical optical imaging systems: a tutorial,” in Photonics Conference, (2012).

29. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

30. J.-T. Ye, X. Huang, Z.-P. Li, and F. Xu, “Compressed sensing for active non-line-of-sight imaging,” Opt. Express 29(2), 1749–1763 (2021). [CrossRef]  

31. C. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” Dissertations & Theses Gradworks (2011).

[Crossref]

Vittert, L. E.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Wang, Y.

T. Maeda, Y. Wang, R. Raskar, and A. Kadambi, “Thermal non-line-of-sight imaging,” in 2019 IEEE International Conference on Computational Photography (ICCP), (2019), pp. 1–11.

Ward, C. M.

C. M. Ward, J. Harguess, B. Crabb, and S. Parameswaran, “Image quality assessment for determining efficacy and limitations of super-resolution convolutional neural network (srcnn),” in Applications of Digital Image Processing XL, vol. 10396 (International Society for Optics and Photonics, 2017), p. 1039605.

Welsh, S.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Weng, Z.

Wetzstein, G.

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018).
[Crossref]

D. B. Lindell, G. Wetzstein, and V. Koltun, “Acoustic non-line-of-sight imaging,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019).

Willett, R. M.

R. M. Willett, R. F. Marcia, and J. Nichols, “Compressed sensing for practical optical imaging systems: a tutorial,” in Photonics Conference, (2012).

Willwacher, T.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref]

Wong, F. N. C.

Wornell, G. W.

Wu, C.

C. Wu, J. Liu, X. Huang, Z. P. Li, and J. W. Pan, “Non-line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118, e2024468118 (2021).
[Crossref]

Xie, S.

Xu, F.

Yang, S.

Ye, J.-T.

Zhang, X.-C.

Zhang, Z.

Zhao, J.

ACM Trans. Graph. (1)

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Appl. Opt. (1)

Comm. Pure Appl. Math. (1)

E. Candes, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

IEEE Trans. Inf. Theory (1)

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2018).
[Crossref]

Instruments (1)

M. Renna, J. H. Nam, M. Buttafava, F. Villa, A. Velten, and A. Tosi, “Fast-gated 16× 1 spad array for non-line-of-sight imaging applications,” Instruments 4(2), 14 (2020).
[Crossref]

J. Electron. Imaging (1)

M. Laurenzis and A. Velten, “Feature selection and back-projection algorithms for nonline-of-sight laser–gated viewing,” J. Electron. Imaging 23(6), 063003 (2014).
[Crossref]

J. Opt. Soc. Am. A (1)

Nat. Commun. (1)

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref]

Nat. Photonics (1)

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012).
[Crossref]

Nature (4)

C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565(7740), 472–475 (2019).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018).
[Crossref]

J. Bertolotti, E. Putten, C. Blum, A. Lagendijk, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. Huu Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual waveoptics,” Nature 572(7771), 620–623 (2019).
[Crossref]

Opt. Express (5)

Optica (1)

Phys. Rev. Appl. (1)

G. Musarra, A. Lyons, E. Conca, Y. Altmann, and D. Faccio, “Non-line-of-sight three-dimensional imaging with a single-pixel camera,” Phys. Rev. Appl. 12(1), 011002 (2019).
[Crossref]

Proc. Natl. Acad. Sci. (1)

C. Wu, J. Liu, X. Huang, Z. P. Li, and J. W. Pan, “Non-line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. 118, e2024468118 (2021).
[Crossref]

Science (1)

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Signal processing (1)

Y. Tsaig and D. L. Donoho, “Extensions of compressed sensing,” Signal processing 86(3), 549–571 (2006).
[Crossref]

Other (7)

B. Ahn, A. Dave, A. Veeraraghavan, I. Gkioulekas, and A. C. Sankaranarayanan, “Convolutional approximations to the general non-line-of-sight imaging operator,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2019), pp. 7889–7899.

C. M. Ward, J. Harguess, B. Crabb, and S. Parameswaran, “Image quality assessment for determining efficacy and limitations of super-resolution convolutional neural network (srcnn),” in Applications of Digital Image Processing XL, vol. 10396 (International Society for Optics and Photonics, 2017), p. 1039605.

R. M. Willett, R. F. Marcia, and J. Nichols, “Compressed sensing for practical optical imaging systems: a tutorial,” in Photonics Conference, (2012).

C. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” Dissertations & Theses Gradworks (2011).

M. Harwit and N. J. A. Sloane, >Hadamard Transform Optics (Academic Press, 1979).

T. Maeda, Y. Wang, R. Raskar, and A. Kadambi, “Thermal non-line-of-sight imaging,” in 2019 IEEE International Conference on Computational Photography (ICCP), (2019), pp. 1–11.

D. B. Lindell, G. Wetzstein, and V. Koltun, “Acoustic non-line-of-sight imaging,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019).

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (7)

Fig. 1. Schematic of the experimental setup for NLOS imaging. A laser actively illuminates a fixed spot on a relay wall. Part of the laser photons are reflected onto the object, are reflected again by the object, and then arrive at the relay wall for a third reflection. A camera lens images a square area of the relay wall onto the DMD; photons spatially encoded by the DMD are detected by a single-pixel SPAD through a signal-gathering system composed of a lens and a micro-objective. A photograph of the modulating system is shown at the bottom left. Single-photon data are recorded by a TCSPC module, triggered by a reference beam split from the laser by a beam splitter before the beam is sent to the wall.
Fig. 2. Principle of SMD NLOS imaging. For a straightforward illustration, we set the detection array to $2 \times 2$ pixels and use a fourth-order Hadamard matrix H as the sampling basis. Figure 2(a) shows the original signals of each detection point. Figure 2(b) shows the four measurement patterns, each derived from the corresponding row of the Hadamard matrix. Under each measurement pattern, the time-resolved photon-count histogram shown in Fig. 2(c) is obtained, which is a combination of the photon signals over the different detection points. Figure 2(d) shows the restored detection-point-dependent signals.
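The encode/decode cycle described in the Fig. 2 caption — measure with Hadamard rows ($Y = AX$), then demultiplex with the transpose ($O = A^{T}Y$) — can be sketched as follows. This is an illustrative sketch, not the paper's code; the array sizes, random seed, and variable names are assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n-th order Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n_pixels = 4                     # 2x2 detection points, flattened
n_bins = 16                      # time bins of each photon-count histogram
# Per-point time-resolved signals, as in Fig. 2(a) (Poisson photon counts):
X = rng.poisson(5.0, (n_pixels, n_bins)).astype(float)

H = hadamard(n_pixels)           # sampling basis A: one row per measurement pattern
Y = H @ X                        # multiplexed histograms, Y = A X, as in Fig. 2(c)
X_rec = (H.T @ Y) / n_pixels     # demultiplexing, O = A^T Y, since H^T H = n I

assert np.allclose(X_rec, X)    # each detection point's signal is exactly restored
```

In practice a DMD can only display binary patterns, so the $\pm 1$ entries of each Hadamard row are typically realized as a pair of complementary 0/1 patterns whose measurements are subtracted; the sketch above omits that detail.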
Fig. 3. Simulation results of SMD NLOS imaging. Only detector noise is considered, so the noise level of the PPS signal in each detection is the same as that of SMD. All demonstrated signals are from the same detection point. Figure 3(a) shows the signal and reconstruction result of PPS with a $32 \times 32$ sampling array. Figures 3(b), 3(c), and 3(d) show the detection-point-dependent signals and reconstruction results of SMD with sampling arrays of $8 \times 8$, $16 \times 16$, and $32 \times 32$, respectively.
Fig. 4. Experimental results of two rectangles; the iteration detection time of the demonstrated signals is 1 s. Figure 4(a) is the signal obtained from the projection of one row of the Hadamard matrix. Figure 4(b) is the signal of a certain detection point, which is one solution of the spatial demultiplexing. Figure 4(c) is a signal obtained in PPS mode; Figs. 4(b) and 4(c) are signals from the same detection point. Figure 4(d) shows the reconstruction results of the two rectangles in both PPS mode and SMD mode, with the iteration detection time varying from 0.3 s to 5 s. Figure 4(e) is the 3D shape of the object, which contains the projections onto the $x-y$ plane and the $x-z$ plane.
Fig. 5. Experimental results of the letter T. Figure 5(a) shows the signals of a certain detection point on the relay wall for different numbers of detection points ($8 \times 8$ and $16 \times 16$) and different detection modes; the iteration detection time is 1 s. Figure 5(b) shows reconstruction results with different iteration detection times and different numbers of detection points. We choose the iteration detection time at which the hidden object can just be reconstructed.
Fig. 6. Simulation results of CS. Figure 6(a) shows reconstruction results with a random binary matrix as the sampling basis. Figure 6(b) shows reconstruction results with a partial Hadamard matrix as the sampling basis. The compression ratio varies from 12.5$\%$ to 50$\%$, and the sampling array varies from $8 \times 8$ to $32 \times 32$. We apply the SSIM to evaluate the image quality.
Fig. 7. Experimental results of CS with an iteration detection time of 3 s. Figures 7(a) and 7(b) are reconstruction results with a random matrix. Figures 7(d) and 7(e) are reconstruction results with a partial Hadamard matrix. Figures 7(a) and 7(d) show the experimental results of the letter T with an $8 \times 8$ sampling array; the minimum compression ratio is 75$\%$. Figures 7(b) and 7(e) show the experimental results with a $16 \times 16$ sampling array; the minimum compression ratio can reach 18$\%$. Figure 7(c) shows the reconstruction results of orthogonal Hadamard sampling with a $16 \times 16$ sampling array and an iteration detection time of 3 s.

Equations (6)


$$Y = AX$$
$$O = A^{T} Y$$
$$o(x, y, t) = \int_{\Omega} \frac{1}{r_l^2 r_v^2}\, \rho(x, y, z)\, \delta(r_l + r_v - tc)\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}z$$
$$\rho_{bp}(x, y, z) = \int_{\Omega} \frac{1}{r_l^2 r_v^2}\, o(x, y, t)\, \delta(r_l + r_v - tc)\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}t$$
$$\mathrm{SSIM} = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$
$$\tau = \tau_i \times N.$$
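The SSIM used to score the reconstructions can be computed directly from its definition above. The sketch below applies the formula globally over the whole image, whereas library implementations (e.g. scikit-image) use local sliding windows; the stabilizer constants `C1` and `C2` are assumed values.

```python
import numpy as np

def ssim(x, y, C1=1e-4, C2=9e-4):
    """Single-window SSIM per the formula above; C1, C2 are small stabilizers."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()          # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x**2 + mu_y**2 + C1) * (var_x + var_y + C2)
    )

img = np.arange(16.0).reshape(4, 4)
print(ssim(img, img))   # identical images score 1.0
```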
