Optica Publishing Group

Single photon imaging based on a photon driven sparse sampling

Open Access

Abstract

A single photon three-dimensional (3D) imager can capture 3D profile details and see through obscuring objects with high sensitivity, making it promising for sensing and imaging applications. The key capabilities of such a 3D imager lie in its depth resolution and multi-return discrimination. For a conventional pulsed single photon lidar, these capabilities are limited by the transmitter bandwidth and the receiver bandwidth simultaneously. Here, a single photon imager is proposed and experimentally demonstrated to implement time-resolved and multi-return imaging. Time-to-frequency conversion is performed to achieve millimetric depth resolution. Experimental results show that the depth resolution is better than 4.5 mm, even though the time jitter of the SPAD reaches 1 ns and the time resolution of the TCSPC module reaches 10 ns. Furthermore, a photon driven sparse sampling mechanism allows multiple near surfaces to be discriminated, no longer limited by the receiver bandwidth. The simplicity of the system hardware enables low-cost and compact 3D imaging.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The single photon detection technique can detect extremely weak optical signals with single photon sensitivity, enabling 3D imaging under low flux of backscattered photons [1]. Owing to this excellent sensitivity, the measurable distance has been extended from tens of kilometers [2,3,4] to hundreds of kilometers [5,6,7], or a lower-power laser source can be used in long range applications [8,9]. Normally, a histogram is accumulated by repeatedly recording photon events against the emission time of the associated laser pulses [10,11]. By applying the statistical measurement technique of time-correlated single photon counting (TCSPC), the sensitivity of a single photon avalanche diode (SPAD) can be increased far beyond that of classical devices (e.g. a linear-mode APD or PIN diode), and the required photon flux is significantly lower [12,13].

In general, high resolution 3D imaging can capture surface profile details and distinguish the distance between two near surfaces by performing time-of-flight (ToF) measurement [14,15]. Nonetheless, for pulsed single photon lidar, the depth resolution is limited not only by the transmitter bandwidth (i.e. the pulse width of the laser source) but also by the receiver bandwidth (subject to the time jitter of the SPAD and the time resolution of the TCSPC module). Therefore, a sub-picosecond laser source combined with a low-timing-jitter SPAD and a high-time-resolution TCSPC module has been applied to improve the depth resolution [16]. For instance, few-photon imaging using a low-timing-jitter SPAD was demonstrated [17]. The time jitter of the optimized system was 44 ps full width at half maximum (FWHM), which not only provided a higher depth resolution but also improved the signal-to-noise ratio (SNR). Similarly, sub-picosecond photon-efficient 3D imaging was demonstrated [18]. Using a pulsed laser source with a FWHM wider than 50 ps, the imaging system was shown to provide excellent depth resolution. Recently, a QPMS-based 3D imager was reported with exceptional detection sensitivity and noise tolerance [1], where sub-picosecond timing resolution and quantum parametric mode sorting (QPMS) made it possible to perform 3D imaging with high resolution and sensitivity despite strong background noise. Besides, by combining single photon detection with computational techniques, such as image reconstruction algorithms [4,19,20], signal processing [21], and artificial intelligence assisted imaging [22], imaging capabilities have been enhanced under extreme conditions.

Excellent depth resolution enables the discrimination of objects at different depths, where multiple returns are backscattered from a partially-obscuring object along with a partially-scattering object and then collected by a single pixel. When the integration time is long enough to form a photon histogram, the full waveform can be reconstructed to implement multi-depth measurement [23–26]. Unfortunately, a distorting pileup effect [18] occurs when the ToF difference between the obscuring object and the scattering object is less than the dead time of the SPAD, because the SPAD cannot respond to any received photon during the dead time [27,28]. By applying artificial time-gating in the single photon system, a near obscuring object can be isolated from the far scattering object of interest, at the cost of losing depth information about the near obscuring object [29,30]. Thus, it remains challenging for single photon lidar to implement high resolution 3D imaging while independently discriminating multiple near surfaces. It is noted that a Geiger-mode avalanche photodiode array has been used for collecting continuous wave light after a pixel binning operation [31,32,33], even though there is a dead time following each triggered event. Besides, photon-counting detectors have been used in a frequency modulated continuous wave (FMCW) ranging system [34]. Unlike conventional balanced difference detection, maximum likelihood estimation was used to perform range reconstruction. Such a novel concept might enable the development of new sensing methods and technologies that enhance the capabilities of depth resolution and multi-depth measurement, and open single photon 3D imaging to new fields of application.

A single photon imager based on photon driven sparse sampling is proposed and experimentally demonstrated in our work, where a single-pixel SPAD is utilized to collect the intermediate frequency (IF) signal and range reconstruction is implemented by performing compressive sensing (CS). Based on the FMCW technique, the depth resolution is no longer limited by the receiver bandwidth. Therefore, millimetric depth resolution can be achieved without complex image reconstruction algorithms, even though the time jitter of the SPAD reaches 1 ns and the time resolution of the TCSPC module reaches 10 ns. Furthermore, the photon driven sparse sampling mechanism allows us to discriminate multiple near surfaces, no longer limited by the receiver bandwidth. These advantages enable our 3D imager to outperform pulsed single photon lidar in high depth resolution and multi-return imaging.

2. High resolution imaging with poor optical receiver

2.1 Single photon heterodyne

Time-to-frequency conversion based on the FMCW technique is performed to improve the depth resolution in this paper. Due to different propagation paths, there is a frequency difference between the returned light and the local oscillator (LO) light. When the returned light is mixed with the LO light at the optical coupler, an IF signal is produced and expressed as

$$N(t) = C_1\frac{N_S}{2} + C_2\frac{N_L}{2} + \sqrt{C_1 C_2 N_S N_L}\cos(f_{IF}t + \varphi_{IF}) + N_{Dark} + N_{Bkgr}$$
where $N(t)$ is the mean number of IF signal photons. $N_S$ and $N_L$ are the mean numbers of the returned photons and LO photons, respectively. $f_{IF}$ and $\varphi_{IF}$ are the frequency difference (intermediate frequency) and the phase difference between the returned light and the LO light, respectively. $N_{Dark}$ is the dark count of the SPAD and $N_{Bkgr}$ is the background count. $(C_1:C_2)$ is the splitting ratio of the optical coupler. Normally, balanced detection is used in a classical heterodyne system to improve the common mode rejection ratio (CMRR), where a 50:50 optical coupler is utilized to mix the returned light with the LO light. However, balanced detection is unsuitable for single photon heterodyne. On the one hand, when photons hit the photosensitive surface of the SPAD, a large number of photon events (a photon arrival sequence) are triggered and output in the form of digital pulses, which cannot provide the amplitude of the IF signal. On the other hand, the photon events in the photon arrival sequence are randomly distributed in the time domain. In this case, an extraordinary splitting ratio, given by $C_1:C_2$ = 99:1, is designated to obtain a higher heterodyne efficiency for single photon heterodyne. According to Eq. (1), the heterodyne efficiency $\eta$ is given by
$$\eta = \sqrt {({{C_1}{N_S}} )({{C_2}{N_L}} )}$$
To improve the heterodyne performance, the number of LO photons ($C_2 N_L$) arriving at the SPAD should be approximately equal to that of the returned photons ($C_1 N_S$) in single photon heterodyne [32]. In this case, the heterodyne efficiency can be approximated as $\eta = C_1 N_S$. When the splitting ratio is 99:1, most of the returned photons (99%) are mixed into Port-I and collected by the SPAD ($99\% \times N_S$), while a small fraction of them are mixed into Port-II and discarded. Although only a small fraction of the LO photons (1%) are mixed into Port-I and collected by the SPAD, the mean number of LO photons can be supplemented by adjusting the optical attenuator. Eventually, a higher heterodyne efficiency can be achieved in our imager.
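As a quick numerical check of Eq. (2), the sketch below compares the heterodyne efficiency of a conventional 50:50 coupler with the 99:1 coupler when the LO is attenuated so that $C_2 N_L$ matches $C_1 N_S$. The photon numbers are illustrative values only:

```python
import numpy as np

def heterodyne_efficiency(c1, n_s, n_l):
    """Beat-note amplitude sqrt((C1*NS)*(C2*NL)) from Eq. (2)."""
    c2 = 1.0 - c1
    return np.sqrt((c1 * n_s) * (c2 * n_l))

# 50:50 coupler: half of the scarce return photons are thrown away.
eta_5050 = heterodyne_efficiency(0.5, n_s=100, n_l=100)

# 99:1 coupler: keep 99% of the return; attenuate the LO so that
# C2*NL matches C1*NS, giving eta ~= C1*NS as in the text.
n_s = 100
n_l_matched = 0.99 * n_s / 0.01   # choose NL so that C2*NL = C1*NS
eta_991 = heterodyne_efficiency(0.99, n_s, n_l_matched)
```

With matched LO flux the 99:1 split nearly doubles the beat amplitude per return photon relative to a 50:50 split at the same return level.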

A linearly chirped waveform is applied to the tunable laser. During a modulation period, the intermediate frequency ${f_{IF}}$ is proportional to the round-trip travel time t. Meanwhile, the distance R between the imager and the object is determined by the round-trip travel time t. When the intermediate frequency ${f_{IF}}$ is given, the distance of the object ($R$) can be obtained.

$$R = \frac{{cT}}{{2B}}{f_{IF}}$$
where B is the modulation bandwidth, T is the modulation period, and c is the speed of light in vacuum. According to Eq. (3), the distance R is linearly proportional to the intermediate frequency $f_{IF}$ when the modulation bandwidth and the modulation period are given. In long range applications, a high intermediate frequency is produced, which would require a wideband receiver. Fortunately, the bandwidth of single photon heterodyne reaches up to 1 GHz [35], which is sufficient for most applications. The depth resolution dR is given by
$$dR = \frac{c}{{2B}}$$

According to Eq. (4), the depth resolution depends only on the modulation bandwidth of the laser source, which reaches up to 52 GHz in our system. Hence, a theoretical resolution of 2.9 mm can be derived.
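Equations (3) and (4) can be checked numerically. Only the 52 GHz bandwidth comes from the system described here; the sweep period and beat frequency below are illustrative assumptions:

```python
# Speed of light and the modulation bandwidth used in this system.
c = 2.99792458e8          # m/s
B = 52e9                  # modulation bandwidth, 52 GHz

# Depth resolution dR = c / (2B), Eq. (4): ~2.9 mm.
dR = c / (2 * B)

# Range from a measured IF frequency, Eq. (3): R = (c*T / 2B) * f_IF.
# T and f_IF below are assumed values for illustration only.
T = 1e-3                  # 1 ms sweep (assumed)
f_IF = 867e3              # 867 kHz beat (assumed)
R = c * T * f_IF / (2 * B)   # ~2.5 m for these assumed values
```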

2.2 Photon driven sparse sampling

The SPAD is unable to respond to any photon during the dead time, which limits the time interval between two adjacent photon events and restricts the maximum count rate. This phenomenon seems to limit the maximum sampling rate. However, the photon arrival sequence provides nature-based randomness [36,37]. Such stochastic behavior makes it possible to recover the spectrum of the IF signal from the photon arrival sequence, even though the dead time of the SPAD reaches tens of nanoseconds or longer. It should be noted that the IF signal is frequency-sparse and can be modeled as a sum of several cosine functions. The photon arrival sequence is a sum of delta functions, each corresponding to a photon event, which is given by

$$p = \sum\limits_{i=1}^{N_{PE}} \delta(n - n_i)\;,\;\;(1 \le n_i \le N_{PS})$$
where $\delta(\cdot)$ is the delta function, $n_i$ is the arrival time of the $i$th photon in the photon arrival sequence, $N_{PE}$ is the number of photon events, and $N_{PS}$ is the length of the photon arrival sequence. It should be noted that $N_{PE}$ is far less than $N_{PS}$ due to the stochastic behavior of the SPAD ($N_{PE} \ll N_{PS}$). Since the photon arrival sequence is output in the form of random pulses, it cannot provide the amplitude of the IF signal. Consequently, 1-bit compressive sensing (CS) [38,39] is applied to recover the spectrum of the IF signal, which is given by
$$\hat{x} = \arg\min_{x} \|x\|_1,\;\;\mathrm{s.t.}\;\;y = \frac{1}{2}[\mathrm{sign}(\Phi' x) + I]$$

The binary iterative hard thresholding (BIHT) method is used to reconstruct the sparse signal from the photon arrival sequence, which can be represented as follows.

$$x^{n+1} = \mathcal{H}_K\left\{ x^n + \Phi\left[ y - \frac{\mathrm{sign}(\Phi' x^n) + I}{2} \right] \right\}$$
where $x^n$ is the recovered spectrum, $y$ is the measurement vector, and $\Phi$ is an $N_{PE} \times N_{PS}$ random matrix consisting of $N_{PE}$ rows of the discrete Fourier transform matrix, whose dimension ($N_{PE} \times N_{PS}$) is determined by the length of the photon arrival sequence ($N_{PS}$) and the number of photon events ($N_{PE}$). $\Phi'$ is the conjugate matrix of $\Phi$. $\mathcal{H}_K\{\cdot\}$ is a hard thresholding operation that retains the $K$ largest components.
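A minimal real-valued sketch of the BIHT iteration in Eq. (7) is given below. For simplicity it uses a random Gaussian sensing matrix and a generic step size in place of the partial-DFT matrix and the scaling details of the actual system (both are assumptions); the structure of the update is the same: a gradient step toward sign consistency followed by hard thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)

def biht(Phi, y, K, iters=200, tau=None):
    """Binary iterative hard thresholding for 1-bit CS measurements
    y = sign(Phi @ x). Returns a unit-norm K-sparse estimate."""
    m, n = Phi.shape
    if tau is None:
        tau = 1.0 / m
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient step on the sign-consistency objective.
        a = x + tau * Phi.T @ (y - np.sign(Phi @ x))
        # Hard threshold H_K: keep the K largest-magnitude entries.
        idx = np.argsort(np.abs(a))[-K:]
        x = np.zeros(n)
        x[idx] = a[idx]
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x

# Synthetic check: recover the support of a 2-sparse "spectrum"
# from sign-only measurements (amplitude is lost, as in Eq. (6)).
n, m, K = 256, 400, 2
x_true = np.zeros(n)
x_true[[17, 90]] = [1.0, 0.8]
Phi = rng.standard_normal((m, n))
y = np.sign(Phi @ x_true)
x_hat = biht(Phi, y, K)
support = set(np.flatnonzero(x_hat))
```

Note that 1-bit measurements can only recover the spectrum up to an overall scale, which is why the estimate is normalized; for ranging only the peak locations (the intermediate frequencies) matter.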

The photon flux should be limited to ensure that the SPAD is not saturated; otherwise, the spectrum of the IF signal cannot be recovered from the photon arrival sequence. Therefore, the system should avoid operating at too high a count rate (no more than 10 MHz), so as to preserve the randomness of the photon arrival sequence. Meanwhile, it should also avoid operating at too low a count rate (no less than 1 MHz), so as to obtain a high SNR. Within these bounds, a higher count rate helps to shorten the photon arrival sequence, which improves the imaging speed.

As shown in Fig. 1, when the emitted light illuminates multiple objects at different depths, it is scattered by these objects successively due to the different propagation paths. The backscattered light is collected by the receiver, forming a multi-return light field. When the multi-return IF signal hits the photosensitive surface of the SPAD, a large number of photon events (a photon arrival sequence) are triggered, whose count rate (density) is modulated by the amplitude of the multi-return IF signal. In this sense, the SPAD acts simultaneously as a photon collector and a sparse sampler in the FMCW-based single photon 3D imager. On the one hand, it collects the multi-return IF signal and implements photoelectric conversion. On the other hand, it sparsely “samples” the multi-return IF signal and outputs a photon arrival sequence in the form of digital pulses. Even though the dead time effect prevents the SPAD from collecting any photons during the dead time, the stochastic behavior of the SPAD allows it to “sample” the multi-return IF signal randomly, regardless of whether the returned photons are scattered from a near object or a far object.
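The sampling mechanism can be illustrated with a toy simulation: an intensity proportional to a two-tone IF beat drives per-bin photon clicks, a dead time is imposed, and the spectrum of the resulting spike train still reveals both tones. The tone frequencies, count rate, and 50 ns dead time below are illustrative assumptions, not the measured system values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-return IF "signal": two beat tones (assumed frequencies).
T_bin = 10e-9                     # 10 ns counter resolution
n_bins = 2**16
t = np.arange(n_bins) * T_bin
intensity = 2.0 + np.cos(2*np.pi*1.0e6*t) + np.cos(2*np.pi*1.7e6*t)

# Photon clicks: Bernoulli trial per bin, rate scaled to a few Mcps,
# then a 50 ns SPAD dead time (5 bins) is enforced.
p_click = intensity / intensity.max() * (5e6 * T_bin)
clicks = rng.random(n_bins) < p_click
dead_bins = 5
events, last = [], -dead_bins
for i in np.flatnonzero(clicks):
    if i - last >= dead_bins:
        events.append(i)
        last = i
events = np.array(events)

# Even with dead time, the event density tracks the IF beat: the
# spike-train spectrum shows peaks near 1.0 and 1.7 MHz.
spikes = np.zeros(n_bins)
spikes[events] = 1.0
spec = np.abs(np.fft.rfft(spikes - spikes.mean()))
freqs = np.fft.rfftfreq(n_bins, T_bin)
```

The point of the sketch is that the dead time deletes individual clicks but does not synchronize the survivors to either return, so both beat frequencies remain recoverable from the sparse sequence.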


Fig. 1. Photon driven sparse sampling for multi-return IF signal. (a) Multi-return IF signal is produced by interference between the multi-return light and the LO light; (b) a photon arrival sequence is generated when the multi-return IF signal is collected by the SPAD, where the count rate (density) of the photon events is modulated by the amplitude of the multi-return IF signal; (c) the spectrum is recovered from the photon arrival sequence.


By performing spectrum recovery on the photon arrival sequence, the intermediate frequencies can be derived from the spectrum for 3D reconstruction, and the SNRs are obtained from the recovered spectrum.

$$SN{R_j} = 10\lg \left( {\frac{{{S_j}^2}}{{\sum\limits_{i = 1}^M {N_i^2} }}} \right)$$
where $SNR_j$ is the signal-to-noise ratio of the $j$th return scattered from the $j$th object, $S_j$ is the amplitude of the $j$th return in the recovered spectrum, $N_i$ is the amplitude of the noise at the frequency component $2i/(M \cdot T_{Bin})$, $M$ is the number of frequency components in the recovered spectrum, and $T_{Bin}$ is the time resolution of the FPGA-based counter. The $SNR_j$ depends on the distribution of the light spot across multiple objects. In particular, when the light spot is distributed uniformly over two objects at different depths, the SNR of one object should be approximately equal to that of the other.
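Eq. (8) amounts to dividing a signal peak's squared amplitude by the summed squared noise amplitudes in the recovered spectrum. A toy check with two equal-amplitude returns on a flat noise floor (all numbers illustrative) reproduces the equal-SNR behavior described above:

```python
import numpy as np

def return_snr(spectrum, signal_bins):
    """SNR per Eq. (8): 10*log10(S_j^2 / sum of squared noise
    amplitudes), treating all non-signal bins as noise."""
    spectrum = np.asarray(spectrum, dtype=float)
    noise = np.delete(spectrum, signal_bins)
    noise_power = np.sum(noise**2)
    return [10*np.log10(spectrum[j]**2 / noise_power) for j in signal_bins]

# Toy recovered spectrum: two returns of equal amplitude, mimicking
# a light spot split evenly across two surfaces at different depths.
spec = np.full(100, 0.1)     # flat noise floor (assumed)
spec[[20, 45]] = 5.0         # two equal-amplitude returns (assumed)
snr = return_snr(spec, [20, 45])
```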

3. Single photon imager setup

Our experimental setup is illustrated in Fig. 2. The illumination source is a distributed feedback (DFB) tunable laser diode (Eagleyard Photonics GmbH EYP-DFB series with a center wavelength of 780 nm). The laser diode is driven by a current controller to generate a frequency modulated continuous wave whose modulation bandwidth reaches up to 52 GHz. The FMCW emitted light is split into two parts by a 1×2 polarization-maintaining fiber beam splitter (99:1): the signal light (99%) and the LO light (1%). In the LO optical path, an optical attenuator is utilized to adjust the mean number of LO photons so that the SPAD is not saturated. The signal light is scattered by the object, and the returned light is collected by the optical transceiver. In the receiving optical path, a narrow bandpass filter (NBF) with a bandwidth of 1 nm is applied to remove most of the background noise from sunlight and other sources. Subsequently, the returned light is mixed with the LO light at a 2×2 polarization-maintaining fiber coupler ($C_1:C_2$). An intermediate frequency (IF) signal is thereby produced and collected by a COUNT-500N SPAD manufactured by Laser Components GmbH. Since single photon heterodyne can detect weak optical signals with extremely high sensitivity, a low emitted power (∼0.29 mW) and a small aperture (1 inch) are sufficient in our experimental system.

It should be noted that the time jitter of the COUNT-500N is relatively poor (∼1 ns) among available SPADs (e.g. time jitters of 35 ps, 40 ps, and 350 ps are provided by the PDM series, ID100 series and SPCM-AQRH series, respectively). Nevertheless, it is more than sufficient for our 3D imager to obtain profiles with high depth resolution. In addition, a TCSPC module operated in time-tagging mode is required to record the time tags of the photon events. Although off-the-shelf TCSPC products can provide excellent performance with a time resolution of 1 ps, a coarse time resolution (e.g. 10 ns or larger) is sufficient in our 3D imager because the time jitter of the SPAD reaches 1 ns and the maximum frequency of the IF signal is far less than 50 MHz. Hence, an FPGA-based counter, whose time resolution is designated as 10 ns, is utilized to record the time tags of the photon events. Although the time jitter of the SPAD and the time resolution of the FPGA-based counter are relatively poor, the depth resolution is not degraded. To increase the number of pixels and extend the region that can be imaged, multi-pixel scanning is applied to form a larger image. In particular, the transceiver is mounted on a fast-steering mechanism to produce images with 100×100 pixels.


Fig. 2. Experimental setup of the single photon 3D imager. Optical fiber devices, including the beam splitter, optical coupler and attenuator, are utilized to establish the experimental system. The simplicity of the system hardware enables compact 3D imaging.


4. Experimental results

4.1 Single photon imaging with millimetric depth resolution

An orthogonal cuboid is located in front of the transceiver. Such a geometric target is regular, and therefore a true profile can be obtained by measuring it with a Vernier caliper. A comparison can then be performed between the reconstructed profile and the true profile. The gray image in Fig. 3(a) is obtained from a commercial visible-band CCD camera. The 3D profile in Fig. 3(b) is reconstructed with the 99:1 optical coupler. Benefiting from the high sensitivity of single photon detection, the count rate of the photon arrival sequence reaches approximately 5 MHz when an average optical power of 0.29 mW is emitted from the tunable laser. The depth difference between Fig. 3(b) and the true depth is shown in Fig. 3(c), ranging from -10 mm to 10 mm. It is difficult to guarantee that the orthogonal cuboid is placed exactly parallel to the reference plane, which enlarges the difference between the reconstructed depth and the true depth.


Fig. 3. Time-resolved imaging on an orthogonal cuboid. (a) is the photograph obtained from a commercial CCD camera with visible-band; (b) is the 3D profile; (c) is the depth difference between (b) and the true depth; (d) is the cross sections corresponding to the lines in (a).


Several cross sections of the 3D profiles, corresponding to the chosen regions of interest in Fig. 3(a), are shown in Fig. 3(d). It can be seen from Fig. 3(d) that the gradient surfaces in the middle of the orthogonal cuboid agree well with the true depth. Furthermore, the depth resolution can be evaluated by comparing the reconstructed depth with the true depth. In particular, the true depth between two adjacent cross sections, measured on the orthogonal cuboid with a Vernier caliper, is approximately 4.5 mm. As shown in Fig. 3(d), every cross section can be distinguished from the others, and the reconstructed depth obtained from our 3D imager agrees well with the true depth, which indicates that the depth resolution of our 3D imager is better than 4.5 mm.

A flat block is used to explore the ranging accuracy. As shown in Fig. 4(a), the distance between surface I and surface II, measured with a Vernier caliper, is 4.5 mm. The reconstructed result of the block is shown in Fig. 4(b). It can be seen from the figure that the star surface at the center of the block can be clearly distinguished although it is merely 4.5 mm away from the surrounding surface (surface II). The other surfaces, including the square, heart-shaped and round surfaces, are more easily distinguished because their distances from the surrounding surface (surface II) reach up to a dozen millimeters.


Fig. 4. Time-resolved imaging on a flat target. (a) The photograph is obtained from a commercial CCD camera with visible-band; (b) is the depth image.


Two selected regions in Fig. 4(a), each corresponding to 10×10 pixels, are used to evaluate the ranging accuracy. The results are shown in Table 1, where $R_I$ and $R_{II}$ are the distances from surface I and surface II to the system, respectively. $\varDelta R_{Meas}$ and $\varDelta R_{True}$ are the measured distance and the true distance between surface I and surface II, respectively. Their mean values and root mean squares (RMS) are obtained by performing statistics over the 10×10 pixels. Apparently, the measured distance between the two surfaces agrees well with the true value. Finally, a ranging accuracy of 3.48 mm is evaluated for our system.


Table 1. Comparison of the distances between the experimental results and the true value

4.2 Multi-return imaging

Due to the nature-based randomness of the photon events, the SPAD acts as a sparse sampler to randomly “sample” the multi-return IF signal, regardless of whether the returned photons are scattered from a near object or a far object. Such an advantage enables our 3D imager to discriminate objects of interest in a complex scene with multiple surfaces, without applying a high-bandwidth receiver or any physical time-gating. Hence, our 3D imager outperforms pulsed single photon lidar in multi-return discrimination and shows superiority in seeing through obscuring objects.

Multi-return imaging is performed to see through two obscuring objects. As shown in Fig. 5(a), a geometric target, consisting of a cube at the bottom and a dodecahedron at the top, is located behind two iron wire meshes with different hole sizes. In particular, the hole shape of the first mesh is regular while that of the second is irregular, and the hole size of the first mesh is slightly larger than that of the second mesh: approximately 12×12 $\textrm{mm}^2$ and less than 4×4 $\textrm{mm}^2$, respectively. These objects are located approximately 2.5 m away from the transceiver. To highlight the superiority of multi-return imaging, the two obscuring objects and the geometric target are placed close to each other, spanning 150 mm in the depth dimension. Thus, the ToF difference between the nearest and farthest surfaces is less than 1 ns, comparable to the time jitter of the SPAD.


Fig. 5. Multi-return imaging through two obscuring objects. (a) The photograph is obtained from a CCD camera, showing the location of two iron wire meshes (obscuring objects) and a geometric target (scattering object); (b) the 3D profiles, including multiple objects, are reconstructed by performing multi-return imaging; (c)–(e) are the 3D profiles of the first mesh, the second mesh and the geometric target, respectively; (f) the cross section corresponding to the profile of interest marked with a dash-dot line in (e).


The 3D profile in Fig. 5(b) is reconstructed by performing spectrum recovery on the photon arrival sequences. Colors in the profile represent the depth between the transceiver and the objects, including the obscuring objects and the geometric target. It can be seen from the 3D profile that the obscuring objects and the geometric target are clearly discriminated, ranging from 2445 mm to 2600 mm. It should be noted that the ToF differences among the first mesh, the second mesh and the geometric target are all less than 1 ns (corresponding to a depth difference of 150 mm). Nevertheless, our 3D imager is still capable of detecting and distinguishing both the geometric target and each mesh, even though the time jitter of the SPAD reaches 1 ns and the time resolution of the FPGA-based counter reaches 10 ns. In addition, the capabilities in depth resolution and multi-return discrimination enable our 3D imager to distinguish the first mesh, the second mesh and the geometric target from each other in Fig. 5(c)–5(e), and to capture surface profile details in Fig. 5(f). Furthermore, the holes in the first mesh are distinguished more clearly than those in the second mesh due to the different hole sizes, as seen in Fig. 5(c) and (d). This is because, during multi-pixel scanning, the light spot can completely pass through the large holes (yielding no returned light) but only partially pass through the small holes (yielding returned light).

In contrast to pulsed single photon lidar, which detects multiple returns with a fixed ToF difference, our 3D imager detects multiple returns with a fixed frequency difference by performing time-to-frequency conversion. Such a conversion enables multi-return imaging independent of the receiver bandwidth. Therefore, the SNR is nearly equal for each return of the same intensity. Multi-return spectra are shown in Fig. 6(a)–6(c), corresponding to the two-return, three-return and four-return situations. Apparently, the spectral peak (SNR) is determined only by the intensity of the returned light, without being limited by the receiver bandwidth.


Fig. 6. Multi-return spectra through two obscuring objects. (a) The two-return signal is backscattered from the two meshes; (b) the three-return signal is backscattered from the two meshes and the cube at the bottom of the target; (c) the four-return signal is backscattered from the two meshes and the geometric target, where one part of the light spot illuminates the cube while another part illuminates the dodecahedron.


5. Discussion

An FMCW-based single photon 3D imager is proposed and experimentally demonstrated in our work, which is capable of capturing a scene with millimetric depth resolution using low-bandwidth and low-cost hardware. The multi-return imaging capability enables the imager to distinguish the distances between multiple near surfaces, no longer limited by the receiver bandwidth. Besides, single photon heterodyne provides extremely high sensitivity while allowing the local oscillator power to be reduced to as low as 1 pW, which is advantageous for flash coherent imaging.

Owing to the difficulty of providing electrical and photonic connections to every pixel, previous coherent systems have been restricted to fewer than 30 pixels [40–42]. Even with a large-scale coherent detector array consisting of 512 pixels [43], beam steering is required to sequentially illuminate the scene in small patches due to optical power limitations. Nevertheless, our FMCW-based single photon 3D imager can achieve single photon heterodyne with a single-pixel SPAD, without requiring strong local oscillator light. In addition, a low-cost SPAD and an FPGA-based counter are sufficient for the electrical and photonic connections of every pixel, without requiring complex integrated circuits. Although SPAD cameras still cannot operate in time-tagging mode currently, attractive advantages, including low optical power, low receiver bandwidth and low cost, make our FMCW-based single photon 3D imager especially suitable for flash coherent imaging, which will enable applications in robotics and autonomous navigation.

Funding

National Natural Science Foundation of China (61805249); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2019369).

Disclosures

The authors declare no competing financial interests.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. Rehain, Y. Sua, S. Zhu, I. Dickson, B. Muthuswamy, J. Ramanathan, A. Shahverdi, and Y. Huang, “Noise-tolerant single photon sensitive three-dimensional imager,” Nat. Commun. 11(1), 921 (2020). [CrossRef]  

2. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25(10), 11919–11931 (2017). [CrossRef]  

3. Z. Li, X. Huang, P. Jiang, Y. Hong, C. Yu, Y. Cao, J. Zhang, F. Xu, and J. Pan, “Super-resolution single-photon imaging at 8.2 kilometers,” Opt. Express 28(3), 4076–4087 (2020). [CrossRef]  

4. Z. Li, X. Huang, Y. Cao, B. Wang, Y. Li, W. Jin, C. Yu, J. Zhang, Q. Zhang, C. Peng, F. Xu, and J. Pan, “Single-photon computational 3D imaging at 45 km,” Photonics Res. 8(9), 1532–1540 (2020). [CrossRef]  

5. Z. Li, J. Ye, X. Huang, P. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C. Peng, F. Xu, and J. Pan, “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

6. X. Sun, J. Blair, J. L. Bufton, M. Faina, S. Dahl, P. Bérard, and R. J. Seymour, “Advanced silicon avalanche photodiodes on NASA’s Global Ecosystem Dynamics Investigation (GEDI) mission,” Proc. SPIE 11287, 39 (2020). [CrossRef]  

7. R. Dubayah, J. Blair, S. Goetz, L. Fatoyinbo, M. Hansen, S. Healey, M. Hofton, G. Hurtt, J. Kellner, S. Luthcke, J. Armston, H. Tang, L. Duncanson, S. Hancock, P. Jantz, S. Marselis, P. L. Patterson, W. Qi, and C. Silva, “The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth’s forests and topography,” Science of Remote Sensing 1, 100002 (2020). [CrossRef]  

8. B. Li, G. Deng, R. Zhang, Z. Ou, H. Zhou, Y. Ling, Y. Wang, Y. Wang, K. Qiu, H. Song, and Q. Zhou, “High dynamic range externally time-gated photon counting optical time-domain reflectometry,” J. Lightwave Technol. 37(23), 5899–5906 (2019). [CrossRef]  

9. B. Li, R. Zhang, Y. Wang, H. Li, L. You, Z. Ou, H. Zhou, Y. Ling, Y. Wang, G. Deng, Y. Wang, H. Song, K. Qiu, and Q. Zhou, “Dispersion independent long-haul photon counting optical time-domain reflectometry,” Opt. Lett. 45(9), 2640–2643 (2020). [CrossRef]  

10. Z. Li, E. Wu, C. Pang, B. Du, Y. Tao, H. Peng, H. Zeng, and G. Wu, “Multi-beam single-photon-counting three-dimensional imaging lidar,” Opt. Express 25(9), 10189–10195 (2017). [CrossRef]  

11. T. Zheng, G. Shen, Z. Li, L. Yang, H. Zhang, E. Wu, and G. Wu, “Frequency-multiplexing photon-counting multi-beam LiDAR,” Photonics Res. 7(12), 1381–1385 (2019). [CrossRef]  

12. M. Laurenzis, “Single photon range, intensity and photon flux imaging with kilohertz frame rate and high dynamic range,” Opt. Express 27(26), 38391–38403 (2019). [CrossRef]  

13. Z. Chen, B. Liu, and G. Guo, “Adaptive single photon detection under fluctuating background noise,” Opt. Express 28(20), 30199–30209 (2020). [CrossRef]  

14. G. Berkovic and E. Shafir, “Optical methods for distance and displacement measurements,” Adv. Opt. Photonics 4(4), 441–471 (2012). [CrossRef]  

15. G. S. Buller and A. M. Wallace, “Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,” IEEE J. Sel. Top. Quantum Electron. 13(4), 1006–1015 (2007). [CrossRef]  

16. B. Li, J. Bartos, Y. Xie, and S. Huang, “Time-magnified photon counting with 550-fs resolution,” Optica 8(8), 1109–1112 (2021). [CrossRef]  

17. H. Zhou, Y. He, L. You, S. Chen, W. Zhang, J. Wu, Z. Wang, and X. Xie, “Few-photon imaging at 1550 nm using a low-timing-jitter superconducting nanowire single-photon detector,” Opt. Express 23(11), 14603–14611 (2015). [CrossRef]  

18. F. Heide, S. Diamond, D. B. Lindell, and G. Wetzstein, “Sub-picosecond photon-efficient 3D imaging using single-photon sensors,” Sci. Rep. 8(1), 17726 (2018). [CrossRef]  

19. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

20. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7(1), 12046 (2016). [CrossRef]  

21. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019). [CrossRef]  

22. G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature 588(7836), 39–47 (2020). [CrossRef]  

23. M. Laurenzis, J. Klein, E. Bacher, and N. Metzger, “Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths,” Opt. Lett. 40(20), 4815–4818 (2015). [CrossRef]  

24. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-flight imaging,” Nat. Commun. 6(1), 7021 (2015). [CrossRef]  

25. D. Shin, F. Xu, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “Computational multi-depth single-photon imaging,” Opt. Express 24(3), 1873–1888 (2016). [CrossRef]  

26. C. Mallet and F. Bretar, “Full-waveform topographic lidar: State-of-the-art,” ISPRS J. Photogrammetry and Remote Sensing 64(1), 1–16 (2009). [CrossRef]  

27. P. F. McManamon, “Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng. 51(6), 060901 (2012). [CrossRef]  

28. P. Gatt, S. Johnson, and T. Nichols, “Dead-time effects on geiger-mode APD performance,” Proc. SPIE 6550, 65500I–1 (2007). [CrossRef]  

29. M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010–12016 (2016). [CrossRef]  

30. X. Ren, P. W. R. Connolly, A. Halimi, Y. Altmann, S. McLaughlin, I. Gyongy, R. K. Henderson, and G. S. Buller, “High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor,” Opt. Express 26(5), 5541–5557 (2018). [CrossRef]  

31. J. X. Luu and L. A. Jiang, “Saturation effects in heterodyne detection with Geiger-mode InGaAs avalanche photodiode detector arrays,” Appl. Opt. 45(16), 3798–3804 (2006). [CrossRef]  

32. L. A. Jiang and J. X. Luu, “Heterodyne detection with a weak local oscillator,” Appl. Opt. 47(10), 1486–1503 (2008). [CrossRef]  

33. M. Shcherbatenko, Y. Lobanov, A. Semenov, V. Kovalyuk, A. Korneev, R. Ozhegov, A. Kazakov, B. M. Voronov, and G. N. Goltsman, “Potential of a superconducting photon counter for heterodyne detection at the telecommunication wavelength,” Opt. Express 24(26), 30474–30484 (2016). [CrossRef]  

34. B. I. Erkmen, Z. W. Barber, and J. Dahl, “Maximum-likelihood estimation for frequency-modulated continuous-wave laser ranging using photon-counting detectors,” Appl. Opt. 52(10), 2008–2018 (2013). [CrossRef]  

35. Z. Chen, B. Liu, and Z. Li, “Wideband spectrum estimation for photon counting heterodyne,” IEEE Photonics Technol. Lett. 34(6), 313–316 (2022). [CrossRef]  

36. C. Tsai and Y. Liu, “Anti-interference single-photon LiDAR using stochastic pulse position modulation,” Opt. Lett. 45(2), 439–442 (2020). [CrossRef]  

37. B. Liu, Y. Yu, Z. Chen, and W. Han, “True random coded photon counting Lidar,” Opto-Electron. Adv. 3(2), 19004401 (2020). [CrossRef]  

38. L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-Bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Trans. Inf. Theory 59(4), 2082–2102 (2013). [CrossRef]  

39. Z. Chen, B. Liu, G. Guo, K. Hua, and W. Han, “Photon counting heterodyne with a single photon avalanche diode,” IEEE Photonics Technol. Lett. 33(17), 931–934 (2021). [CrossRef]  

40. F. Aflatouni, B. Abiri, A. Rekhi, and A. Hajimiri, “Nanophotonic coherent imager,” Opt. Express 23(4), 5117–5125 (2015). [CrossRef]  

41. A. Martin, D. Dodane, L. Leviandier, D. Dolfi, A. Naughton, P. O’Brien, T. Spuessens, R. Baets, G. Lepage, P. Verheyen, P. D. Heyn, P. Absil, P. Feneyrou, and J. Bourderionnet, “Photonic integrated circuit-based FMCW coherent LiDAR,” J. Lightwave Technol. 36(19), 4640–4645 (2018). [CrossRef]  

42. J. Riemensberger, A. Lukashchuk, M. Karpov, W. Weng, E. Lucas, J. Liu, and T. J. Kippenberg, “Massively parallel coherent laser ranging using a soliton microcomb,” Nature 581(7807), 164–170 (2020). [CrossRef]  

43. C. Rogers, A. Y. Piggott, D. J. Thomson, R. F. Wiser, I. E. Opris, S. A. Fortune, A. J. Compston, A. Gondarenko, F. Meng, X. Chen, G. T. Reed, and R. Nicolaescu, “A universal 3D imaging sensor on a silicon photonics platform,” Nature 590(7845), 256–261 (2021). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (6)

Fig. 1. Photon driven sparse sampling for a multi-return IF signal. (a) The multi-return IF signal is produced by interference between the multi-return light and the LO light; (b) a photon arrival sequence is generated when the multi-return IF signal is collected by the SPAD, where the count rate (density) of the photon events is modulated by the amplitude of the multi-return IF signal; (c) the spectrum is recovered from the photon arrival sequence.
Fig. 2. Experimental setup of the single photon 3D imager. Optical fiber devices, including the beam splitter, optical coupler, and attenuator, are used to build the experimental system. The simplicity of the system hardware enables compact 3D imaging.
Fig. 3. Time-resolved imaging of an orthogonal cuboid. (a) Photograph obtained from a commercial visible-band CCD camera; (b) 3D profile; (c) depth difference between (b) and the true depth; (d) cross sections corresponding to the lines in (a).
Fig. 4. Time-resolved imaging of a flat target. (a) Photograph obtained from a commercial visible-band CCD camera; (b) depth image.
Fig. 5. Multi-return imaging through two obscuring objects. (a) Photograph obtained from a CCD camera, showing the locations of two iron wire meshes (obscuring objects) and a geometry target (scattering object); (b) 3D profiles of the multiple objects, reconstructed by performing multi-return imaging; (c)–(e) 3D profiles of the first mesh, the second mesh, and the geometry target, respectively; (f) cross section corresponding to the profile of interest marked with the dash-dot line in (e).
Fig. 6. Multi-return spectra through two obscuring objects. (a) Two returns, backscattered from the two meshes; (b) three returns, backscattered from the two meshes and the cube at the bottom of the target; (c) four returns, backscattered from the two meshes and the geometry target, where one part of the light spot illuminates the cube and another part illuminates the dodecahedron.
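The mechanism described in the Fig. 1 caption can be illustrated numerically. The following is a toy simulation (our illustration, not the authors' code): the per-bin photon detection probability is modulated by a two-tone IF signal, and a periodogram of the resulting sparse binary arrival sequence recovers both IF tones. All rates, frequencies, and the time base are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1e6                      # time base of the arrival sequence, Hz (assumed)
T = 0.05                      # observation time, s (assumed)
n = np.arange(int(fs * T))
f1, f2 = 20e3, 33e3           # two IF tones = two returns (assumed)

# Count rate (photon detection probability per time bin) follows the
# multi-return IF amplitude, offset so it stays nonnegative.
rate = 0.05 * (2 + np.cos(2 * np.pi * f1 * n / fs)
                 + np.cos(2 * np.pi * f2 * n / fs)) / 4
events = rng.random(n.size) < rate          # sparse binary photon sequence

# Periodogram of the arrival sequence: the two IF tones reappear as peaks.
spec = np.abs(np.fft.rfft(events - events.mean()))
freqs = np.fft.rfftfreq(n.size, 1 / fs)
peaks = sorted(freqs[np.argsort(spec)[-2:]])
print(peaks)
```

With roughly 1250 photon events over 50 000 bins, the two spectral peaks stand well above the shot-noise floor, which is the point of the photon driven sampling scheme: the IF spectrum survives extreme temporal sparsity.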

Tables (1)

Table 1. Comparison of the distances between the experimental results and the true value

Equations (8)

$$N(t) = C_1 N_S^2 + C_2 N_L^2 + C_1 C_2 N_S N_L \cos(f_{IF} t + \varphi_{IF}) + N_{Dark} + N_{Bkgr}$$
$$\eta = (C_1 N_S)(C_2 N_L)$$
$$R = \frac{cT}{2B} f_{IF}$$
$$dR = \frac{c}{2B}$$
$$p = \sum_{i=1}^{N_{PE}} \delta(n - n_i), \quad 1 \le n_i \le N_{PS}$$
$$\hat{x} = \arg\min_{x} \|x\|_1, \ \mathrm{s.t.} \ y = \frac{1}{2}\left[\operatorname{sign}(\Phi x) + I\right]$$
$$x^{n+1} = H_K\left\{ x^{n} + \Phi^{T}\left[ y - \frac{\operatorname{sign}(\Phi x^{n}) + I}{2} \right] \right\}$$
$$SNR_j = 10\lg\left( \frac{S_j^2}{\sum_{i=1}^{M} N_i^2} \right)$$
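The 1-bit recovery step above (the $\ell_1$ problem and its $H_K$ iteration, in the style of binary iterative hard thresholding [38]) can be sketched as follows. This is a minimal numerical illustration under our own assumptions, not the authors' implementation: the problem sizes, step size, and random Gaussian sensing matrix are all illustrative, and the chirp parameters used for the range conversion at the end are chosen only so that $dR = c/(2B) \approx 4.5$ mm.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256          # spectrum length (assumed)
K = 2            # number of returns = nonzero spectral lines (assumed)
M = 512          # number of 1-bit photon measurements (assumed)

# Ground-truth K-sparse spectrum x: two IF lines -> two returns.
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.uniform(1.0, 2.0, K)

Phi = rng.standard_normal((M, N))        # random sensing matrix (assumed)
y = (np.sign(Phi @ x) + 1) / 2           # 1-bit measurements

def biht(y, Phi, K, mu=1.0, iters=200):
    """Binary iterative hard thresholding: gradient step + keep top-K."""
    M, N = Phi.shape
    xn = np.zeros(N)
    for _ in range(iters):
        residual = y - (np.sign(Phi @ xn) + 1) / 2   # sign disagreements
        xn = xn + (mu / M) * (Phi.T @ residual)      # gradient step
        xn[np.argsort(np.abs(xn))[:-K]] = 0.0        # hard threshold H_K
    # 1-bit measurements lose amplitude; only the direction is recovered.
    return xn / (np.linalg.norm(xn) + 1e-12)

x_hat = biht(y, Phi, K)
print("true support:", sorted(support),
      "recovered:", sorted(np.flatnonzero(x_hat)))

# Converting a recovered IF frequency to range, R = c*T*f_IF/(2B), with
# illustrative chirp parameters (B = 33 GHz, T = 1 ms -> dR ~ 4.5 mm).
c0, B_chirp, T_chirp = 3e8, 33e9, 1e-3
f_if = 1e6
R = c0 * T_chirp * f_if / (2 * B_chirp)
dR = c0 / (2 * B_chirp)
```

With M well above the K log(N/K) scaling required for 1-bit compressive sensing, the support (and hence the set of IF frequencies, i.e. the multiple returns) is recovered even though each measurement carries a single bit.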