
High-resolution multi-spectral snapshot 3D imaging with a SPAD array camera

Open Access

Abstract

Currently, mainstream light detection and ranging (LiDAR) systems usually involve a mechanical scanner, which enables large-scale, high-resolution and multi-spectral imaging but is difficult to assemble and enlarges the system. Furthermore, mechanical wear on the moving parts of the scanner shortens its lifetime. Here, we propose a high-resolution scan-less multi-spectral three-dimensional (3D) imaging system, which improves the resolution with a four-times increase in the pixel number and achieves multi-spectral imaging in a single snapshot. The system utilizes a specially designed multiple field-of-view (multi-FOV) system to separate four-wavelength echoes carrying depth and spectral reflectance information with predetermined temporal intervals, such that a single pixel of the SPAD array can sample four adjacent positions through the four channels' FOVs with subpixel offsets. The positions and reflectivity are thus mapped to wavelengths in different time-bins. Our results show that the system achieves high-resolution multi-spectral 3D imaging in a single exposure without any scanning component. This scheme is the first to realize scan-less single-exposure high-resolution multi-spectral imaging with a SPAD array sensor.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In recent years, time-correlated single photon counting (TCSPC) has been used in single-photon LiDAR [1-6], providing single-photon sensitivity and picosecond timing resolution for time-of-flight measurements [4,7-9]. In array-type single-photon counting LiDAR [10-12], each avalanche photodiode detector is integrated with a TCSPC module [13-15], which greatly reduces the 3D imaging time [7,16,17] and enables long-range depth imaging [12]. However, the circuitry occupies a significant portion of the space surrounding the SPAD chip, resulting in a very low proportion of avalanche multiplication area and thus a low native resolution [13,18,19]. Typically, a SPAD array sensor contains only a few thousand pixels, which severely limits practical applications. Many proposals have been put forward to increase the native image resolution of a SPAD camera. Scan-based processing, including both mechanical [5,10,20-24] and non-mechanical [16,25-27] scanning approaches, is a useful strategy for improving the image resolution. However, scanning suffers from several shortcomings: it requires a longer acquisition time, introduces motion artifacts, is prone to scanner damage, and leads to a larger, less stable LiDAR system. LiDAR designs with no moving parts are therefore desirable. A time-stretch inertia-free LiDAR was reported to realize high resolution with improved mechanical stability [28], but the scheme is restricted to a single-pixel detector, limiting the imaging speed. For a single SPAD detector, the spatial sampling rate on a target can be increased by illuminating multiple positions with multiple laser pulses that have fixed delays [28-30]. A SPAD array can sample more efficiently than a single SPAD detector and is the main development direction of scan-less imaging. At present, however, the native spatial resolution of a SPAD array is too low, and no inertia-free method exists to improve its spatial sampling rate for high-resolution imaging; increases in the spatial sampling rate of a SPAD array have so far been achieved only via scanning.

In addition, multi-spectral information is in high demand in the fields of target identification [9,31-35], remote sensing [36,37], and time-resolved fluorescence spectroscopy [38,39], among others. A large number of researchers are devoted to realizing multi-spectral imaging [32,35,40-52]. Yet almost all multi-spectral LiDAR systems require multiple detectors [32,42,43,48,49,52] or laser scanning [47,50]. Multi-detector methods lead to bulky and expensive systems. For the latter approach, simultaneous access to all spectral modes and fast acquisition of multi-spectral information are beyond reach because of the scanning. Recently, several scanning-free methods have been proposed [35,53]. Ximing Ren et al. demonstrated multi-spectral single-photon 3D imaging through multi-spectral illumination and demultiplexing [35]. A drawback of this approach is that each pixel samples only one spatial position, reducing the information content available in a single measurement. Yash D. Shah et al. achieved single-photon color image reconstruction [53] and simultaneous multi-spectral fluorescence imaging using a SPAD image sensor integrated with a mosaic filter array [40]. However, the mosaic filter array has adverse effects on reflectivity image quality: the filter array inherently necessitates multi-spectral reflectivity interpolation, and its broad spectral range implies strong background noise when used outdoors.

In this work, we propose and experimentally demonstrate a scheme for high-resolution snapshot multi-spectral LiDAR using a SPAD array detector. Therein the target is illuminated with a train of different-wavelength pulses delayed with predetermined temporal intervals. We specially designed a multi-FOV system to split the spectral-temporal echo signals into distinct channels where subpixel shifts among the corresponding FOVs are introduced. Then the imaging lens collects the echo signals from these channels and directs them to the SPAD array. From the echo signals both spatial and spectral information of the target can be extracted. The depth information is encoded in individual echo pulse’s temporal delay, and the position and spectral reflectivity are mapped to wavelength-resolved channels recorded in different time-bins. In the experiments, we demonstrate multi-spectral high-resolution imaging with a resolution of 64 × 64 using a 32 × 32 SPAD array in a snapshot.

2. Scheme and experimental set-up

Before elaborating on the concrete setup and experimental procedures, we first introduce the basic idea of the multi-FOV technique with a toy scheme. As shown in Fig. 1(a), the toy scheme includes one pixel of the SPAD array and a two-channel multi-FOV system, placed in front of the objective lens to split and then recombine the laser beams and thereby obtain two FOVs. The long-pass filter (LP) reflects the shorter-wavelength light pulse into channel 1 (C1) and transmits the longer-wavelength pulse into channel 2 (C2). The two pulses are also temporally separated by a known interval. The mirrors M1 and M2 then redirect the two pulses, and the short-pass filter (SP) finally merges the light pulses from the two channels. The two channels are adjusted such that their FOVs differ by half a pixel in both the near field and the far field. In this manner, the single pixel is able to sample two adjacent positions in a scene, as illustrated in Fig. 1(b). In addition, the time delays and the correlated spectra of the pulses carry the depth and multi-spectral information of the target, respectively. According to the sampling theorem, increasing the spatial sampling rate enables the system to capture higher spatial frequencies; the multi-FOV system therefore improves the resolution of the imaging system.
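
To make this concrete, below is a minimal one-dimensional numerical sketch (illustrative only, not the authors' code) of how two half-pixel-shifted samplings interleave into a trace with twice the sampling rate; the box-average pixel model and the test scene are assumptions for illustration.

import numpy as np

def sample(scene, n_pixels, offset=0.0):
    """Box-average `scene` into n_pixels bins shifted by `offset`
    (in pixel units) -- a crude model of one channel's FOV."""
    x = np.linspace(0, 1, scene.size, endpoint=False)
    idx = np.floor(x * n_pixels - offset) % n_pixels
    return np.array([scene[idx == k].mean() for k in range(n_pixels)])

# A test scene with detail beyond the Nyquist limit of 32 pixels.
scene = 0.5 + 0.5 * np.sin(2 * np.pi * 24 * np.linspace(0, 1, 4096))
c1 = sample(scene, 32, offset=0.0)   # channel 1 FOV
c2 = sample(scene, 32, offset=0.5)   # channel 2 FOV, half-pixel shift
high_res = np.empty(64)
high_res[0::2], high_res[1::2] = c1, c2   # interleave -> 64 samples

The 24-cycle detail aliases in either 32-sample channel alone (Nyquist limit of 16 cycles), but is resolved by the 64-sample interleaved trace.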


Fig. 1. Schematic diagram of the proposed high-resolution snapshot multi-spectral method. (a) The FOV of the objective lens (L) is extended to two regions by a multi-FOV system. The FOVs of channel 1 (C1) and channel 2 (C2) are adjusted with a long-pass filter (LP) and mirror 1 (M1), and a short-pass filter (SP) and mirror 2 (M2), respectively. (b) Comparison of the traditional (leftmost) and the proposed imaging method. In the traditional method, one pixel corresponds to one FOV. In our method, each pixel can have multiple wavelength-resolved FOVs, enabling multi-spectral high-resolution imaging with a snapshot.


The working principle of our proposal is demonstrated concretely in the laboratory with the experimental setup in Fig. 2(a); a summary of the system parameters is provided in Table 1. For the light source, we use a supercontinuum laser (SC-Pro, YSL Anyang laser) with a 20 MHz repetition rate and a 150 ps pulse width; the laser also emits an electrical stop pulse for the TCSPC. The laser output is divided into four pulses of different wavelengths by filters and coupled into optical fibers of different lengths to generate time delays. The central wavelengths of the illumination light are 532 nm, 633 nm, 690 nm and 740 nm, with a spectral linewidth of 10 nm. The temporal intervals between pulses are 4-8 ns, cf. Fig. 3(i), which ensures the individual pulses are unambiguously distinguishable in the TCSPC records. The laser pulses exit the fibers and provide flood illumination for the scene of interest. The light pulses backscattered off the target are sequentially split into four receiving channels by the multi-FOV system, recombined at the photographic lens (focal length 35 mm; MVL35M1, Navitar), and finally received by the photosensitive region of the SPAD array detector PF32 (Photon Force). The PF32 consists of 32 × 32 fully independent Si-SPAD pixels, each with a time-to-digital converter (TDC) of 56 ps time resolution; the pixel pitch is 50 µm, and each active area is 6.95 µm in diameter. Figure 2(b) schematically shows that each pixel of the PF32 samples four scene positions, giving a four-times increase in the pixel number of the image compared with conventional schemes. The timing starts when a pixel detects a photon and stops when it receives a synchronization signal from the SC-Pro laser; the depth recorded by the PF32 is therefore relative to a fixed reference point.

The multi-FOV system consists of six filters (TSP01-790, TSP01-704, TLP01-790, TLP01-628, TSP01-887, TLP01-704; Semrock) and two mirrors, and is designed to send light of the four wavelengths into separate channels. The four channels have non-overlapping passbands that cover the wavelengths of the respective fed-in light; the passbands and pulse wavelengths are indicated in Fig. 2(c). The dashed lines in Fig. 2(c) represent the channels' transmission spectra, obtained by measuring the transmission of the SC-Pro laser light with a spectrometer (Ocean Optics). Note that the passband of a channel can be customized by choosing different combinations of filters; the combination shown in Fig. 2(a) realizes the passbands used in the current experiments. We place the filters and mirrors of the multi-FOV system on two-adjuster mounts, with a one-dimensional displacement stage under the mounts of F1, F4, F5, M1 and M2. By adjusting the mounts and displacement stage, we make the FOV of each channel differ by half a pixel from that of the adjacent channel in both the near and far fields. To guarantee good imaging quality, the four channels' FOVs on the scene need to be distributed uniformly and precisely. We achieve the required registration accuracy with a few maneuvers. We first use a lens of longer focal length (e.g. 150 mm) to image a point target on an industrial camera with a smaller pixel pitch (e.g. 5.2 µm). According to the Gaussian thin-lens formula, when the separations between the resulting images from the four channels are tuned to 22.6 ± 0.5 industrial-camera pixels by adjusting the filters and mirrors, a uniform FOV distribution is achieved after changing back to the PF32 and the 35 mm lens.
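
To illustrate the registration criterion, the sketch below computes the expected channel-to-channel image separation on the alignment camera from the thin-lens geometry. The component values here (150 mm alignment lens, 5.2 µm camera pitch) are the example values from the text, taken as assumptions; the quoted 22.6 ± 0.5 pixel target follows from the exact parameters of the actual components used.

# Back-of-envelope check (assumed example values, not calibration code):
# a half-pixel FOV shift on the SPAD array corresponds to an angular
# offset theta = (p_spad / 2) / f_spad, which the longer alignment lens
# magnifies to a displacement of theta * f_cam on the industrial camera.
p_spad = 50e-6    # PF32 pixel pitch [m]
f_spad = 35e-3    # imaging lens focal length [m]
f_cam  = 150e-3   # alignment lens focal length [m] (assumed 150 mm)
p_cam  = 5.2e-6   # industrial camera pixel pitch [m] (assumed)

theta = (p_spad / 2) / f_spad            # angular offset between channels
shift_pixels = theta * f_cam / p_cam     # target separation on the camera
print(f"channel-to-channel separation: {shift_pixels:.1f} camera pixels")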


Fig. 2. (a) Experimental setup of the proposed high-resolution snapshot multi-spectral system. Multi-spectral pulses from the laser are divided into four pulses of central wavelengths λ1 = 740 nm, λ2 = 532 nm, λ3 = 690 nm, λ4 = 633 nm, and separated with 4-8 ns intervals. The target is illuminated with flood illumination. Subpixel shifts exist between the FOVs of different channels (C1, C2, C3, C4), and the returned echo signals record the 3D information of the target. Position and reflectivity information is mapped to wavelengths in different time-bins. F, filter: F1, TSP01-790; F2, TSP01-704; F3, TLP01-790; F4, TLP01-628; F5, TSP01-887; F6, TLP01-704 (Semrock). M1, M2, mirrors; L, objective lens. (b) FOVs of the traditional and our imaging methods. (c) The solid lines depict the reflectance (R) and transmittance (T) of the filters. The dashed lines are the spectra of the SC-Pro laser light passing through the channels, measured by a spectrometer (Ocean Optics).



Fig. 3. Imaging of a multi-color target by the proposed method. (a) The target. (b) 2D pseudo-color estimates. (c) Reconstructed 3D pseudo-color image. (d)-(g) The reflectivity imaging results from four individual channels. (h) The reflectance spectra of the target at the dashed cross-section lines of (d)-(g). (i) The histogram of echo photons of four wavelengths measured by one pixel. The black line denotes the reflectance spectrum of a reference diffuse reflector with 89% reflectance.



Table 1. Summary of main parameters

3. Reconstruction procedure and results

In this section, we describe the procedures for reconstructing images, including the determination of the channels' spectral ranges and how the reflectivity and 3D images are obtained. We then present the reconstruction results of our experiment. The spectral ranges of the four channels in our setup are determined by the spectral properties and arrangement of the filters. Taking channel 1 as an example, for the given arrangement of filters in Fig. 2(a), the spectral range is obtained with the help of a Boolean function of wavelength, $p(\lambda) = r_{\mathrm{F}_1}(\lambda)\, t_{\mathrm{F}_3}(\lambda)\, t_{\mathrm{F}_5}(\lambda)\, t_{\mathrm{F}_6}(\lambda)$. The wavelength is iterated from 500 nm to 800 nm in steps of 1 nm. The Boolean functions $r(\lambda)$ and $t(\lambda)$ are defined as:

$$r(\lambda ) = \begin{cases} 0, & 0 < R_\lambda < 0.1\% \\ 1, & 0.1\% < R_\lambda < 100\% \end{cases}$$
$$t(\lambda ) = \begin{cases} 0, & 0 < T_\lambda < 0.1\% \\ 1, & 0.1\% < T_\lambda < 100\% \end{cases}$$
where $R_\lambda$ and $T_\lambda$ are the reflectance and transmittance of a filter at wavelength $\lambda$. The spectral range of channel 1 is thereby defined as the wavelength range where the Boolean function $p(\lambda)$ equals 1. In our experiment, with the filter arrangement shown in Fig. 2(a), the resulting passbands are 734-783 nm for channel 1, 500-582 nm for channel 2, 653-702 nm for channel 3 and 606-652 nm for channel 4. We note that the spectral ranges of the channels can be redesigned as needed by following the procedure described above.
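
This bookkeeping is simple enough to express directly. The sketch below (illustrative, not the authors' code) reproduces the procedure for channel 1, with the filter reflectance and transmittance curves R and T assumed to be loaded beforehand from the Semrock datasheets.

# Passband determination for channel 1, following the Boolean product
# p(lambda) = r_F1 * t_F3 * t_F5 * t_F6 described in the text.
# R[f][lam] and T[f][lam] are assumed per-filter spectral curves.
def boole(x, threshold=0.001):          # Eqs. (1)/(2): 1 if above 0.1%
    return 1 if x > threshold else 0

def channel1_passband(R, T, wavelengths=range(500, 801)):  # 1 nm steps
    band = []
    for lam in wavelengths:
        p = (boole(R["F1"][lam]) * boole(T["F3"][lam])
             * boole(T["F5"][lam]) * boole(T["F6"][lam]))
        if p:
            band.append(lam)
    return (min(band), max(band)) if band else None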

In the experiment, the laser pulses passing through the four channels are received by the SPAD array, and their waveforms are shown in Fig. 3(i). The accumulated TCSPC histogram in Fig. 3(i) exhibits four peaks, each corresponding to the wavelength of the laser pulses from one channel. The histogram contains both the spectral and depth information of the target to be imaged. Since the histogram clusters are only several hundred ps wide, we can selectively process the photons registered in a small range of time-bins for each channel; specifically, 27 time-bins are used for each cluster, as shown in Fig. 3(i). We retrieve the reflectance of the target surface from the intensity of the echo signals. Notably, the inhomogeneity of the illumination on the target and the variance in pixel sensitivity of the SPAD array must be accounted for to accurately extract the pixel-resolved reflectance. In practice, these two factors are calibrated out using a reference measurement, in which the same illumination is shone on a diffuse reflector with a known reflectance of about 89%. The histogram resulting from the signals reflected by the diffuse reflector is shown in Fig. 3(i). For each pixel, the total photon counts $C$ of the histogram have contributions from the four channels indexed by $\lambda$, $C = \sum_\lambda C_\lambda$. The counts from one channel, $C_\lambda$, in turn depend on the spectral sensitivity $\alpha_\lambda$ of the pixel, the spectral reflectance $R_\lambda$ of the target, the light intensity $I_\lambda$, and the integration time $t = n/f$:

$$C_\lambda = \int_0^t \alpha_\lambda R_\lambda I_\lambda \, d\tau = \frac{\alpha_\lambda R_\lambda I_\lambda \, n}{f}$$
where $n$ is the number of frames captured by the SPAD array and $f$ is the SPAD frame rate. For the reference diffuse reflector, we record $n_r$ frames at a frame rate $f_r$ and obtain the photon counts $C_{\lambda r}$. According to Eq. (3), $\alpha_\lambda I_\lambda = \frac{C_{\lambda r} f_r}{R_{\lambda r} n_r}$. We then carry out the measurement on the target to be imaged and obtain the echo signal counts $C_{\lambda 0}$. Finally, the target reflectance is calculated as
$$R_{\lambda 0} = \frac{R_{\lambda r} C_{\lambda 0}}{C_{\lambda r}}\left(\frac{n_r f_0}{n_0 f_r}\right)$$
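
As a concrete illustration, here is a compact sketch of this calibration (not the authors' code; array shapes and names are illustrative). The reference measurement on the ~89% diffuse reflector absorbs both the unknown per-pixel sensitivity $\alpha_\lambda$ and the illumination $I_\lambda$, so neither needs to be known explicitly.

import numpy as np

def target_reflectance(C_target, n_target, f_target,
                       C_ref, n_ref, f_ref, R_ref=0.89):
    """Per-pixel, per-channel reflectance estimate following Eq. (4).
    C_* are photon counts gated to one channel's histogram cluster."""
    C_target = np.asarray(C_target, dtype=float)
    C_ref = np.asarray(C_ref, dtype=float)
    return R_ref * (C_target / C_ref) * (n_ref * f_target) / (n_target * f_ref)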

The depth information is obtained indirectly from the relative temporal intervals among the histogram clusters received by the SPAD array. The planar diffuse reflector used above serves here to provide the reference temporal intervals. Specifically, we take the weighted mean of each histogram cluster to obtain the pulses' times of flight (TOFs) from the diffuse reflector and from the target. The inherent temporal intervals among the different-wavelength pulses emitted from the fibers are first obtained from the TOFs measured with the flat diffuse reflector. The 3D information of the target is then obtained by subtracting these inherent intervals from the temporal intervals measured with the target.
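
A minimal sketch (not the authors' code) of this gating and TOF extraction follows; the 27-bin gate and 56 ps bin width are taken from the text, while the location of each cluster peak is assumed to be known beforehand.

import numpy as np

C_LIGHT = 299_792_458.0    # speed of light [m/s]
BIN = 56e-12               # TDC bin width of the PF32 [s]

def cluster_tof(hist, peak_bin, half_width=13):
    """Weighted-mean arrival time of one channel's histogram cluster;
    half_width=13 gates the 27 time-bins used around each peak."""
    bins = np.arange(peak_bin - half_width, peak_bin + half_width + 1)
    counts = hist[bins]
    return BIN * np.sum(bins * counts) / np.sum(counts)

def relative_depth(tof_target, tof_reference):
    """Depth of the target relative to the flat reference reflector,
    after subtracting the fibres' inherent inter-pulse delays."""
    return 0.5 * C_LIGHT * (tof_target - tof_reference)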

Using the above procedures to process the SPAD data, we are now able to demonstrate multi-FOV high-resolution multi-spectral 3D imaging. During data processing we also employ median filtering to mitigate the effects of hot pixels, which have >2 kHz dark-count rates and account for about 10% of the pixels on our PF32. To maximally retain the original images, we apply 3 × 3 median filtering only to the hot pixels (see the sketch after this paragraph). Our experiments operate under low-flux conditions: 500,000 frames of data, at 20 µs per frame, are used to reconstruct the images in order to suppress the quality degradation due to Poisson noise. Note that this 10 s total exposure is deliberately conservative to secure a large photon number; an exposure one order of magnitude shorter would not noticeably affect the image quality. When an even higher acquisition rate is demanded in certain applications, advanced algorithms [20] can shorten the acquisition time to, e.g., the 50 ms level without sacrificing image quality. Two instances of 2D imaging are presented first. To inspect the imaging performance of our LiDAR system for targets of different colors, we use a multi-color target consisting of three clay cylinders in red, green and yellow, as shown in Fig. 3(a). The intensity images of the target from the four individual channels are exhibited in Fig. 3(d)-3(g).
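
The selective hot-pixel treatment referenced above can be sketched as follows (an illustrative implementation, not the authors' code; the per-pixel dark-count-rate map is assumed to be measured beforehand).

import numpy as np
from scipy.ndimage import median_filter

def repair_hot_pixels(img, dark_count_rate, threshold=2e3):
    """Replace only the hot pixels (>2 kHz dark counts, ~10% of the
    PF32) by the 3x3 median of their neighbourhood, leaving all
    other pixels untouched."""
    hot = dark_count_rate > threshold          # per-pixel DCR map [Hz]
    filtered = median_filter(img, size=3)
    out = img.copy()
    out[hot] = filtered[hot]
    return out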

More quantitatively, we extract the intensity of each pixel along the dashed cross-section lines in Fig. 3(d)-3(g) and plot the reflectance at the different illumination wavelengths as functions of pixel index in Fig. 3(h). The traces show that the yellow part of the target strongly reflects both the red (633 nm, 690 nm, 740 nm) and green (532 nm) light, while the green and red parts only reflect green and red light, respectively. The results are in accordance with the spectral composition of these colors. Combining the data from multiple channels, we further present in Fig. 3(b) and 3(c) a 2D pseudo-color image and a multi-channel colored 3D image, respectively. It is worth noting that the apparent difference in color between the pseudo-color images and Fig. 3(a) arises because our system uses wavelengths different from those of RGB cameras (B: 438 nm, G: 546 nm, and R: 700 nm). We obtain a set of RGB values from three channels: we set $I_R = I_{C1}$, $I_G = I_{C2}$, $I_B = I_{C4}$, where $I$ is the intensity measured by the respective channel of the SPAD array. The images vividly illustrate the multi-spectral information in the reflection from the target.
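
A sketch of this channel-to-RGB mapping is given below; the text specifies only the channel assignment, so the per-channel normalization here is our assumption for display purposes.

import numpy as np

def pseudo_color(I_c1, I_c2, I_c4):
    """Map channel intensities to RGB as in the text: R <- C1 (740 nm),
    G <- C2 (532 nm), B <- C4 (633 nm). Each channel is normalized
    independently (assumed), so colours differ from an RGB camera's."""
    rgb = np.stack([I_c1, I_c2, I_c4], axis=-1).astype(float)
    rgb /= rgb.max(axis=(0, 1), keepdims=True)   # per-channel normalization
    return rgb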

Next, we consider a set of two targets, shown in Fig. 4(a) and 4(g), to demonstrate the high-resolution imaging of our LiDAR system. The patterns are printed on paper in black and white. The widths of the rectangular bars are 5 mm, 4 mm, 3 mm and 2 mm, and the widest part of a sector in the star pattern is 2 mm. We record the 2D intensity images from the four individual channels, shown in Fig. 4(c)-4(f) and 4(i)-4(l). Given the native resolution of the PF32 SPAD array, these single-FOV images exhibit low quality. We then combine the intensity data from the four channels to form the multi-channel intensity images displayed in Fig. 4(b) and 4(h). In comparison, our multi-FOV imaging strategy markedly improves the resolution of the images.
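
A minimal sketch of this four-channel combination into a 64 × 64 image is shown below; the assignment of the four channels to the 2 × 2 sub-pixel grid is an assumption here, as in practice it follows from the calibrated FOV geometry.

import numpy as np

def interleave4(c1, c2, c3, c4):
    """Combine four 32x32 channel images, each offset by half a pixel
    from its neighbour, into one 64x64 image."""
    h, w = c1.shape                     # 32 x 32 native resolution
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = c1                # (0,   0)   sub-pixel position
    out[0::2, 1::2] = c2                # (0,   1/2)
    out[1::2, 0::2] = c3                # (1/2, 0)
    out[1::2, 1::2] = c4                # (1/2, 1/2)
    return out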


Fig. 4. Reflectivity reconstruction of two printed patterns (a) and (g). (b) and (h) The reflectivity estimates with a resolution of 64 × 64 by the proposed method. Our method produces the high-resolution images (b) and (h) by combining multiple native-resolution images, (c)-(f) and (i)-(l) respectively, of the same scenes, surpassing the native resolution limit of the camera sensor.


In the last example we demonstrate the 3D high-resolution capability of our LiDAR system by imaging the target shown in Fig. 5(a). The cartoon-character target is about 6 cm tall and 4 cm wide, and consists of parts of different colors. When data from single channels are used to generate images, as displayed in Fig. 5(d)-5(g), the spectral response of the target can still be seen, but the details of the cartoon character can hardly be distinguished; parts such as the hands, scarf and nose are blurred. For comparison, we combine the data from all the channels and generate the images shown in Fig. 5(b) and 5(c). Besides the multi-spectral information, the image generated with our system resolves the details of the target and clearly manifests the enhanced resolution. We remark in passing that the slight non-uniformity in the background of the images is due to integration nonlinearity caused by the independent timing of each pixel of the SPAD array; it has a negligible effect on the image quality.


Fig. 5. Reflectivity reconstruction of a 3D structure. For 3D visualization, the reflectivity estimates are overlaid on depth maps. (a) The 3D target. (b) Front view of the reflectance overlaid 3D image by the proposed method. (c) A rotated view of the image. (d)-(g) The images of the target obtained through four single channels.


4. Discussion and conclusion

In this work, a high-resolution inertia-free multi-spectral LiDAR system has been proposed. We have demonstrated that the system achieves resolution enhancement with a four-times increase in pixel number and imaging in four spectral bands. Specifically, a multi-color target is used to illustrate multi-spectral imaging, and 2D patterns and a 3D cartoon figure are imaged to demonstrate the resolution improvement. The proposed LiDAR rests on a newly introduced multi-FOV system, which provides the spectral channels and, more importantly, spatially separates multiple FOVs to enhance the imaging resolution. This study utilizes a conventional SPAD array with a low fill factor; SPAD arrays using the latest 3D-stacking technologies have been reported to achieve high fill factors [54]. We note that, owing to the sub-pixel displacement among distinct channels, our scheme increases the resolution even for a SPAD array with a fill factor approaching 100%.

To our knowledge, our work presents the first snapshot-based high-resolution, multi-spectral 3D imaging scheme. High-resolution [20,23,55-57], multi-spectral [35,42,52,53], and inertia-free imaging [5] have been reported separately in the literature. Compared to previous studies, our approach combines all three functions in a simpler and more robust system.

Apart from the above merits, our scheme has advantages in the following aspects. First, it is very flexible: based on the same working mechanism, imaging in more spectral bands and at higher lateral resolution is feasible by using more filters, and if multi-spectral information is not needed, the multi-FOV system can be composed of beam-splitting prisms in place of filters. Second, since the image registration is performed before the installation of the system and only a single exposure is required to obtain a large-scale multi-spectral image, our work greatly reduces the computation required for SPAD array imaging. Third, our optical system has very limited impact on the maximum unambiguous range of the LiDAR system; the relevant relations are summarized after this paragraph. In our experiment, the largest delay of the light pulses is 330 time bins, which may at worst result in a 32% reduction in the system's maximum unambiguous range. Thanks to the spatial correlation of the scene and the way the four sub-pixels in a single pixel receive information from adjacent locations, we are able to set very small pulse delays to distinguish the depths corresponding to the sub-pixel FOVs, so the impact of the multi-FOV system on the maximum unambiguous range can be made very limited. Last but not least, in our scheme a significant percentage of the background noise can be filtered out by specially designing the filters in the multi-FOV system or by adding a narrowband filter in each channel. This point is particularly critical for active multi-spectral imaging in outdoor applications with strong ambient illumination, for example remote mapping of foliage and agricultural regions. In summary, the proposed scheme has great potential for autonomous vehicles, target recognition, and simultaneous measurement of fluorescence lifetimes at multiple wavelengths.
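
For reference, the unambiguous-range bookkeeping behind the third point above is as follows, where $f_{\mathrm{rep}}$ is the 20 MHz repetition rate and $\tau_{\max}$ the largest inter-pulse delay; the quoted 32% figure follows from the actual delays of the system.

$$R_{\max} = \frac{c}{2 f_{\mathrm{rep}}} = \frac{3\times10^{8}\ \mathrm{m/s}}{2\times 20\ \mathrm{MHz}} = 7.5\ \mathrm{m}, \qquad \Delta R = \frac{c\,\tau_{\max}}{2}, \qquad \frac{\Delta R}{R_{\max}} = f_{\mathrm{rep}}\,\tau_{\max}$$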

Acknowledgments

We thank Jianwei Tang and Shupei Lin for their helpful discussions.

Disclosures

The authors declare that there are no conflicts of interest related to this article. F.H.Q. conceived the scheme, designed and performed the experiment, and analyzed the data. F.H.Q. and P.Z. wrote the manuscript.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z.-P. Li, J.-T. Ye, X. Huang, P.-Y. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

2. R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics 3(12), 696–705 (2009). [CrossRef]  

3. R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher, and G. S. Buller, “Three-dimensional single-photon imaging through obscurants,” Opt. Express 27(4), 4590–4611 (2019). [CrossRef]  

4. G. Buller and A. Wallace, “Ranging and Three-Dimensional Imaging Using Time-Correlated Single-Photon Counting and Point-by-Point Acquisition,” IEEE J. Sel. Top. Quantum Electron. 13(4), 1006–1015 (2007). [CrossRef]  

5. A. Mahjoubfar, D. V. Churkin, S. Barland, N. Broderick, S. K. Turitsyn, and B. Jalali, “Time stretch and its applications,” Nat. Photonics 11(6), 341–351 (2017). [CrossRef]  

6. Z.-P. Li, X. Huang, Y. Cao, B. Wang, Y.-H. Li, W. Jin, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, “Single-photon computational 3D imaging at 45 km,” Photonics Res. 8(9), 1532 (2020). [CrossRef]  

7. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-Photon Imaging,” Science 343(6166), 58–61 (2014). [CrossRef]  

8. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019). [CrossRef]  

9. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25(10), 11919–11931 (2017). [CrossRef]  

10. R. M. Marino and W. R. Davis, Jr., “Jigsaw: a foliage-penetrating 3D imaging laser radar system,” Lincoln Laboratory Journal 15(1), 23–36 (2005).

11. M. A. Albota, B. F. Aull, D. G. Fouche, R. M. Heinrichs, D. G. Kocher, R. M. Marino, J. G. Mooney, N. R. Newbury, M. E. O’Brien, B. E. Player, B. C. Willard, and J. J. Zayhowski, “Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays,” Lincoln Laboratory Journal 13, 351–370 (2002).

12. S. Chan, A. Halimi, F. Zhu, I. Gyongy, R. K. Henderson, R. Bowman, S. McLaughlin, G. S. Buller, and J. Leach, “Long-range depth imaging using a single-photon detector array and non-local data fusion,” Sci. Rep. 9(1), 8075 (2019). [CrossRef]  

13. C. Niclass, C. Favi, T. Kluter, M. Gersbach, and E. Charbon, “A 128×128 Single-Photon Image Sensor With Column-Level 10-Bit Time-to-Digital Converter Array,” IEEE J. Solid-State Circuits 43(12), 2977–2989 (2008). [CrossRef]  

14. A. T. Erdogan, R. Walker, N. Finlayson, N. Krstajic, G. O. S. Williams, and R. K. Henderson, “A 16.5 giga events/s 1024 × 8 SPAD line sensor with per-pixel zoomable 50ps-6.4 ns/bin histogramming TDC,” in 2017 Symposium on VLSI Circuits (2017), pp. C292–C293.

15. J. Richardson, R. Walker, L. Grant, D. Stoppa, F. Borghetti, E. Charbon, M. Gersbach, and R. K. Henderson, “A 32×32 50ps resolution 10 bit time to digital converter array in 130 nm CMOS for time correlated imaging,” in 2009 IEEE Custom Integrated Circuits Conference (2009), pp. 77–80.

16. I. Gyongy, S. W. Hutchings, A. Halimi, M. Tyler, S. Chan, F. Zhu, S. McLaughlin, R. K. Henderson, and J. Leach, “High-speed 3D sensing via hybrid-mode imaging and guided upsampling,” Optica 7(10), 1253–1260 (2020). [CrossRef]  

17. J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, “Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers,” Nat. Commun. 10(1), 4984 (2019). [CrossRef]  

18. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. Dalla Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, “CMOS Imager With 1024 SPADs and TDCs for Single-Photon Timing and 3-D Time-of-Flight,” IEEE J. Sel. Top. Quantum Electron. 20(6), 364–373 (2014). [CrossRef]  

19. B. Aull, E. Duerr, J. Frechette, K. A. McIntosh, D. Schuette, V. Suntharalingam, and R. Younger, “Large-format image sensors based on custom Geiger-mode avalanche photodiode arrays,” in SPIE Nanoscience + Engineering (Proc. SPIE, 2018).

20. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7(1), 12046 (2016). [CrossRef]  

21. E. Wade, A. McCarthy, R. Tobin, A. Halimi, J. Garcia-Armenta, and G. Buller, “Micro-scanning of a focal plane detector array in a single-photon LiDAR system for improved depth and intensity image reconstruction,” in SPIE Security + Defence (SPIE, 2022), Vol. 12274.

22. R. Xue, Y. Kang, T. Zhang, L. Li, and W. Zhao, “Sub-Pixel Scanning High-Resolution Panoramic 3D Imaging Based on a SPAD Array,” IEEE Photonics J. 13(4), 1–6 (2021). [CrossRef]  

23. Z.-P. Li, X. Huang, P.-Y. Jiang, Y. Hong, C. Yu, Y. Cao, J. Zhang, F. Xu, and J.-W. Pan, “Super-resolution single-photon imaging at 8.2 kilometers,” Opt. Express 28(3), 4076–4087 (2020). [CrossRef]

24. Z. Li, E. Wu, C. Pang, B. Du, Y. Tao, H. Peng, H. Zeng, and G. Wu, “Multi-beam single-photon-counting three-dimensional imaging lidar,” Opt. Express 25(9), 10189–10195 (2017). [CrossRef]  

25. T. Zheng, G. Shen, Z. Li, L. Yang, H. Zhang, E. Wu, and G. Wu, “Frequency-multiplexing photon-counting multi-beam LiDAR,” Photonics Res. 7(12), 1381 (2019). [CrossRef]  

26. J. J. Degnan, “Scanning, Multibeam, Single Photon Lidars for Rapid, Large Scale, High Resolution, Topographic and Bathymetric Mapping,” Remote Sens. 8(11), 958 (2016). [CrossRef]  

27. X. S. Yao, X. Liu, and P. Hao, “Scan-less 3D optical sensing/Lidar scheme enabled by wavelength division demultiplexing and position-to-angle conversion of a lens,” Opt. Express 28(24), 35884–35897 (2020). [CrossRef]  

28. Y. Jiang, S. Karpf, and B. Jalali, “Time-stretch LiDAR as a spectrally scanned time-of-flight ranging camera,” Nat. Photonics 14(1), 14–18 (2020). [CrossRef]  

29. Z. Zang, Z. Li, Y. Luo, Y. Han, H. Li, X. Liu, and H. Y. Fu, “Ultrafast parallel single-pixel LiDAR with all-optical spectro-temporal encoding,” APL Photonics 7(4), 046102 (2022). [CrossRef]  

30. D. Wu, T. Zheng, L. Wang, X. Chen, L. Yang, Z. Li, and G. Wu, “Multi-beam single-photon LiDAR with hybrid multiplexing in wavelength and time,” Opt. Laser Technol. 145, 107477 (2022). [CrossRef]  

31. Y. Altmann, A. Maccarone, A. McCarthy, S. McLaughlin, and G. S. Buller, “Spectral classification of sparse photon depth images,” Opt. Express 26(5), 5514–5530 (2018). [CrossRef]  

32. G. S. Buller, R. D. Harkins, A. McCarthy, P. A. Hiskett, G. R. MacKinnon, G. R. Smith, R. Sung, A. M. Wallace, R. A. Lamb, K. D. Ridley, and J. G. Rarity, “Multiple wavelength time-of-flight sensor based on time-correlated single-photon counting,” Rev. Sci. Instrum. 76(8), 083112 (2005). [CrossRef]  

33. E. Puttonen, J. Suomalainen, T. Hakala, E. Räikkönen, H. Kaartinen, S. Kaasalainen, and P. Litkey, “Tree species classification from fused active hyperspectral reflectance and LIDAR measurements,” For. Ecol. Manage. 260(10), 1843–1852 (2010). [CrossRef]  

34. T.-A. Teo and H.-M. Wu, “Analysis of land cover classification using multi-wavelength LiDAR system,” Appl. Sci. 7(7), 663 (2017). [CrossRef]  

35. X. Ren, Y. Altmann, R. Tobin, A. McCarthy, S. McLaughlin, and G. S. Buller, “Wavelength-time coding for multispectral 3D imaging using single-photon LiDAR,” Opt. Express 26(23), 30146–30161 (2018). [CrossRef]  

36. A. Romero, C. Gatta, and G. Camps-Valls, “Unsupervised Deep Feature Extraction for Remote Sensing Image Classification,” IEEE Trans. Geosci. Remote Sensing 54(3), 1349–1362 (2016). [CrossRef]  

37. L. Matikainen, K. Karila, P. Litkey, E. Ahokas, and J. Hyyppä, “Combining single photon and multispectral airborne laser scanning for land cover classification,” ISPRS Journal of Photogrammetry and Remote Sensing 164, 200–216 (2020). [CrossRef]  

38. H. Xie, J. Bec, J. Liu, Y. Sun, M. Lam, D. R. Yankelevich, and L. Marcu, “Multispectral scanning time-resolved fluorescence spectroscopy (TRFS) technique for intravascular diagnosis,” Biomed. Opt. Express 3(7), 1521–1533 (2012). [CrossRef]  

39. L. Marcu, “Fluorescence Lifetime Techniques in Medical Applications,” Ann. Biomed. Eng. 40(2), 304–331 (2012). [CrossRef]  

40. P. W. R. Connolly, J. Valli, Y. D. Shah, Y. Altmann, J. Grant, C. Accarino, C. Rickman, D. R. S. Cumming, and G. S. Buller, “Simultaneous multi-spectral, single-photon fluorescence imaging using a plasmonic colour filter array,” J. Biophotonics 14, e202000505 (2021). [CrossRef]  

41. H. K. Chandrasekharan, F. Izdebski, I. Gris-Sánchez, N. Krstajić, R. Walker, H. L. Bridle, P. A. Dalgarno, W. N. MacPherson, R. K. Henderson, T. A. Birks, and R. R. Thomson, “Multiplexed single-mode wavelength-to-time mapping of multimode light,” Nat. Commun. 8(1), 14080 (2017). [CrossRef]  

42. G. Wei, S. Shalei, Z. Bo, S. Shuo, L. Faquan, and C. Xuewu, “Multi-wavelength canopy LiDAR for remote sensing of vegetation: Design and system performance,” ISPRS Journal of Photogrammetry and Remote Sensing 69, 1–9 (2012). [CrossRef]  

43. A. M. Wallace, A. McCarthy, C. J. Nichol, X. Ren, S. Morak, D. Martinez-Ramirez, I. H. Woodhouse, and G. S. Buller, “Design and Evaluation of Multispectral LiDAR for the Recovery of Arboreal Parameters,” IEEE Trans. Geosci. Remote Sensing 52(8), 4942–4954 (2014). [CrossRef]  

44. N. Li, C. P. Ho, I. T. Wang, P. Pitchappa, Y. H. Fu, Y. Zhu, and L. Y. T. Lee, “Spectral imaging and spectral LIDAR systems: moving toward compact nanophotonics-based sensing,” Nanophotonics (2021).

45. J. Tachella, Y. Altmann, M. Márquez, H. Arguello-Fuentes, J. Y. Tourneret, and S. McLaughlin, “Bayesian 3D Reconstruction of Subsampled Multispectral Single-Photon Lidar Signals,” IEEE Trans. Comput. Imaging 6, 208–220 (2020). [CrossRef]  

46. Y. Altmann, R. Tobin, A. Maccarone, X. Ren, A. McCarthy, G. S. Buller, and S. McLaughlin, “Bayesian restoration of reflectivity and range profiles from subsampled single-photon multispectral Lidar data,” in 2017 25th European Signal Processing Conference (EUSIPCO) (2017), pp. 1410–1414.

47. B. Chen, S. Shi, J. Sun, W. Gong, J. Yang, L. Du, K. Guo, B. Wang, and B. Chen, “Hyperspectral lidar point cloud segmentation based on geometric and spectral information,” Opt. Express 27(17), 24043–24059 (2019). [CrossRef]  

48. W. Li, Z. Niu, G. Sun, S. Gao, and M. Wu, “Deriving backscatter reflective factors from 32-channel full-waveform LiDAR data for the estimation of leaf biochemical contents,” Opt. Express 24(5), 4771–4785 (2016). [CrossRef]  

49. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]  

50. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119–7127 (2012). [CrossRef]  

51. A. O. C. Davis, P. M. Saulnier, M. Karpiński, and B. J. Smith, “Pulsed single-photon spectrometer by frequency-to-time mapping using chirped fiber Bragg gratings,” Opt. Express 25(11), 12804–12811 (2017). [CrossRef]  

52. S. Song, B. Wang, W. Gong, Z. Chen, X. Lin, J. Sun, and S. Shi, “A new waveform decomposition method for multispectral LiDAR,” ISPRS Journal of Photogrammetry and Remote Sensing 149, 40–49 (2019). [CrossRef]  

53. Y. D. Shah, P. W. R. Connolly, J. P. Grant, D. Hao, C. Accarino, X. Ren, M. Kenney, V. Annese, K. G. Rew, Z. M. Greener, Y. Altmann, D. Faccio, G. S. Buller, and D. R. S. Cumming, “Ultralow-light-level color image reconstruction using high-efficiency plasmonic metasurface mosaic filters,” Optica 7(6), 632–639 (2020). [CrossRef]  

54. O. Kumagai, J. Ohmachi, M. Matsumura, et al., “7.3 A 189×600 Back-Illuminated Stacked SPAD Direct Time-of-Flight Depth Sensor for Automotive LiDAR Systems,” in 2021 IEEE International Solid-State Circuits Conference (ISSCC) (2021), pp. 110–112.

55. Y. Kang, R. Xue, X. Wang, T. Zhang, F. Meng, L. Li, and W. Zhao, “High-resolution depth imaging with a small-scale SPAD array based on the temporal-spatial filter and intensity image guidance,” Opt. Express 30(19), 33994–34011 (2022). [CrossRef]  

56. A. Ruget, S. McLaughlin, R. K. Henderson, I. Gyongy, A. Halimi, and J. Leach, “Robust super-resolution depth imaging via a multi-feature fusion deep network,” Opt. Express 29(8), 11917–11937 (2021). [CrossRef]  

57. C. Callenberg, A. Lyons, D. D. Brok, A. Fatima, A. Turpin, V. Zickus, L. Machesky, J. Whitelaw, D. Faccio, and M. B. Hullin, “Super-resolution time-resolved imaging using computational sensor fusion,” Sci. Rep. 11(1), 1689 (2021). [CrossRef]  
