
3D LIDAR imaging using Ge-on-Si single-photon avalanche diode detectors

Open Access

Abstract

We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single-photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimations that sub-milliwatt average power levels would be required for kilometer-range depth measurements.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Light detection and ranging (LIDAR) is a common technique used for distance measurements in a variety of applications [1–3]. More recently, time-correlated single-photon counting (TCSPC) has emerged as a candidate detection technique for LIDAR due to its high sensitivity and improved surface-to-surface resolution. This has been shown in a variety of challenging application areas, including kilometer-range depth imaging [4–6], multispectral depth imaging [7], imaging through obscurants [8] and underwater depth imaging [9,10]. The TCSPC technique is based on the measurement of the time difference between a reference signal synchronized with the pulsed optical source and a photon detection recorded by a single-photon detector [11]. This timing difference is recorded over many pulses and typically aggregated into a timing histogram. The approach is commonly used in photon-starved applications such as fluorescence spectroscopy [12] and quantum communications [13,14]. It is particularly suitable for time-of-flight (ToF) ranging and imaging, where picosecond timing resolution is required and a very low average photon return is expected.

LIDAR systems operating in the short-wavelength infrared (SWIR) region, at wavelengths between 1.4 µm and 3 µm, can have several advantages compared with LIDAR using near-infrared (NIR) illumination. Firstly, the use of an operating wavelength outside the retinal hazard region (400 nm to 1400 nm), such as 1450 nm, means that higher optical power levels can be used whilst still being eye-safe [15], enabling significantly improved LIDAR performance [16]. Secondly, operation in selected SWIR spectral bands is less affected by atmospheric attenuation than operation in the visible and NIR regions [17]. Finally, the level of solar radiation, which is typically the main contribution to undesirable background noise, is considerably lower in SWIR systems compared with those operating at shorter wavelengths [18,19].

Previous demonstrations of single-photon depth imaging and LIDAR at wavelengths shorter than 1 µm have used a variety of silicon-based SPAD sensors [20–23] and CMOS Si SPAD detector arrays [24–26]. The development of a highly efficient semiconductor single-photon detector for SWIR wavelengths, however, remains a significant challenge. At present, InGaAs/InP SPAD detectors are commonly used in experiments involving single-photon detection at wavelengths up to 1.6 µm [4,27,28]. These SPADs, however, have limited maximum count rates due to the deleterious effects of detector afterpulsing [29], where carriers trapped during the avalanche current are emitted after the detection event, triggering further avalanches. Afterpulsing restricts the user to relatively low count rates, since the detector must be held off for a pre-determined duration after every event to allow the traps to empty before the SPAD is returned to the quiescent state, ready for further optical detection. In addition, arrays of InGaAs/InP SPAD detectors are incompatible with Si CMOS processing and often prove expensive for mass-market applications. Another common alternative for single-photon detection in the SWIR region is superconducting nanowire single-photon detectors (SNSPDs) [30,31]. These detectors have been successfully used for single-photon LIDAR experiments [32,33]; however, SNSPDs operate at cryogenic temperatures, typically below 4 K [31,34], presenting a major practical disadvantage for system operation on mobile platforms, for example.

Here we report a ToF TCSPC LIDAR imaging system based on an individual planar geometry Ge-on-Si SPAD detector. These SPAD detectors are separate absorption, charge and multiplication structures that employ Ge as the absorption layer and Si as the carrier multiplication layer. The use of a Ge absorber allows the operational wavelength to be extended beyond that possible in an all-silicon detector: at room temperature, Ge absorbs illumination at wavelengths of up to 1.6 µm. A thin Si charge sheet between the Si multiplication region and the Ge absorption layer controls the electric field profile, so that a low electric field is maintained in the Ge absorber, sufficient for electrons to drift into the Si multiplication region, whilst the high electric field necessary to operate the device above avalanche breakdown is confined to the Si multiplication layer. The charge sheet ensures that the electric field in the Ge layer is sufficiently low to inhibit impact ionization and tunneling in this layer, which would have a detrimental effect on device performance. Early examples of SPAD detectors that used the separate absorption and multiplication approach in Group IV semiconductors utilized strained SiGe/Si superlattice absorber layers [35]; however, these devices lacked sufficient optical absorption in the SWIR region to demonstrate efficient single-photon detection. Over the past decade there has been a significant expansion of activity as thick (i.e. > 1 µm) Ge has been grown epitaxially on Si. Several Ge-on-Si SPAD design structures have been demonstrated, including normal incidence mesa geometry Ge-on-Si SPADs [36,37] and waveguide Ge-on-Si SPADs [38]. These SPADs demonstrated the potential for single-photon detection in the SWIR region, with single-photon detection efficiencies (SPDEs) of up to 5.27% for 1310 nm wavelength illumination when operated at 100 K [36]. These devices, however, also exhibited prohibitively high dark count rates, making them unsuitable for practical demonstrations of LIDAR and other applications using single-photon detection.

More recently, the use of planar geometry devices [39] yielded a significant step-change improvement in performance. Vines et al. [39] reported normal incidence planar geometry Ge-on-Si SPADs with a 38% SPDE at 125 K at a wavelength of 1310 nm and a noise-equivalent power (NEP) of 2 × 10⁻¹⁶ W Hz⁻¹/². In addition, these devices clearly demonstrated lower levels of afterpulsing than InGaAs/InP SPAD detectors operated under nominally identical conditions. The high SPDEs of Ge-on-Si SPADs and their reduced afterpulsing compared to InGaAs/InP SPADs provide the potential for significantly higher count rate operation and, consequently, reduced data acquisition times. Planar Ge-on-Si SPADs are compatible with Si CMOS processing, potentially leading to the development of inexpensive, highly efficient Ge-on-Si SPAD detector arrays. Here we report a successful demonstration of LIDAR 3D imaging using an individual planar Ge-on-Si SPAD operating at a wavelength of 1450 nm. Based on these results, we modelled the behavior of the Ge-on-Si SPAD LIDAR system under more challenging conditions to ascertain its likely practicality over longer ranges.

2. Imaging system description

A Ge-on-Si SPAD was used as a detector in a series of laboratory-based LIDAR measurements to establish the suitability of this detector in future longer range LIDAR measurements. The detector was situated inside an Oxford Instruments liquid nitrogen cryostat and was operated at a temperature of 100 K. These 100 µm diameter detectors had been previously shown to operate at temperatures in the range of 78 K to 175 K [39], and smaller versions (26 µm diameter) have been operated at temperatures up to 200 K. The SPAD used in these experiments exhibited a dark count rate of 4.7 Mcounts/s with a single-photon detection efficiency of 10% at an excess bias of 1.5%. The SPAD detector was operated in an electrically gated mode where it was activated for a duration of 50 ns around the expected time of photon arrival.

A schematic of the LIDAR imaging set-up is shown in Fig. 1. A supercontinuum laser source (SuperK EXTREME EXW-6, NKT Photonics) was used in conjunction with an acousto-optical tunable filter to select the illumination wavelength. The low afterpulsing rate of the Ge-on-Si SPAD allowed a relatively high laser repetition rate of 104 kHz to be used, and ToF data for each pixel were collected over many laser pulses. A fiber-based JGR Optics AO5 optical attenuator was used to vary the laser power incident on the sample. After the attenuator, the light was collimated for free-space operation with a reflective collimator package. The incident light was focused onto the target by a lens (L1 in Fig. 1) with a clear aperture of approximately 23 mm and an effective focal length of 400 mm. A monostatic configuration was used, hence scattered light from the target was collected by lens L1 and collimated into the common transmit/receive channel. By using a beamsplitter in the common channel, a fraction of the scattered light was focused onto the SPAD detector via lens L2.


Fig. 1. A schematic diagram representing the experimental set-up of the monostatic transceiver. The target was illuminated using a spectrally tunable supercontinuum laser system. The target was mounted on computer controlled motorized translation stages, which allowed it to be raster scanned relative to the stationary illuminating beam. Light reflected from the beam splitter was then focused via lens L2 onto the 100 µm diameter planar Ge-on-Si SPAD. BS denotes a beam-splitter. The detector was located inside a cryostat and operated at a temperature of 100 K.


The TCSPC timing module recorded the time elapsed between the electrical trigger emitted in synchronization with the outgoing laser pulse and the events recorded by the SPAD detector. This time-tagged detection event information was then transferred to a computer via a USB connection. By averaging over many laser pulses, it is possible to obtain a statistically accurate timing histogram containing time-of-flight information, even in the presence of background events originating from detector dark counts, from detector afterpulsing events, and photon events from (un-correlated) ambient light sources. The photon return time-of-flight to the target provides an estimate of the target depth, and the rate of photon returns from the target provides an estimate of the target reflectivity.
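To make the histogramming step concrete, the short Python sketch below aggregates time-tagged photon events into a fixed-bin TCSPC histogram over one 50 ns gate. This is a minimal illustration using numpy; the array names, bin width, and the synthetic event times are illustrative assumptions rather than data from this system.

```python
import numpy as np

def build_tcspc_histogram(arrival_times_ps, bin_width_ps, gate_length_ps):
    """Aggregate time-tagged photon events (in picoseconds, measured relative
    to the laser trigger) into a TCSPC timing histogram over one gate period."""
    n_bins = int(np.ceil(gate_length_ps / bin_width_ps))
    edges = np.arange(n_bins + 1) * bin_width_ps
    counts, _ = np.histogram(arrival_times_ps, bins=edges)
    return counts, edges

# Synthetic example: a return peak near 2.7 ns plus uniformly distributed dark counts.
rng = np.random.default_rng(0)
signal = rng.normal(loc=2700.0, scale=150.0, size=500)    # photon returns (ps)
background = rng.uniform(0.0, 50000.0, size=2000)         # dark/ambient events (ps)
counts, edges = build_tcspc_histogram(np.concatenate([signal, background]),
                                      bin_width_ps=50.0, gate_length_ps=50000.0)
```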

All measurements were performed at a stand-off distance of 0.4 meters. The target was mounted on computer-controlled translation stages to allow an X - Y raster scan of the target across the stationary incident beam. The movements of the translation stages were synchronized with the photon return data stream in order to assign the appropriate timing information to each target position. These measurements were performed in a dark laboratory environment to avoid ambient contributions to the detection background. The maximum average laser power used for the measurements was only 912 pW.

3. Image reconstruction

LIDAR used for automotive applications requires rapid data acquisition and processing times, typically on the order of milliseconds overall. The requirement for rapid image acquisition gives a clear incentive to reduce the number of recorded photon events necessary for image reconstruction. Similarly, increasing the average laser power can also reduce the necessary image acquisition time. LIDAR applications, however, are often restricted by laser eye-safety thresholds, which place an upper limit on the laser power used [16]. The development of bespoke image processing algorithms for single-photon data is a clear path to reduced acquisition times whilst maintaining laser powers consistent with eye-safety thresholds. Different approaches have previously been demonstrated for depth and intensity image restoration from sparse and noisy single-photon data [40–45], for example in long-range imaging [5] and underwater imaging [10]. In this section, we present depth profiles and intensity images generated using three reconstruction algorithms: a simple pixel-wise cross-correlation approach, the Restoration of Depth and Intensity using Total Variation (RDI-TV) algorithm [40], and the Manifold Point Process (ManiPoP) algorithm [46]. Statistical image processing techniques such as RDI-TV and ManiPoP use spatial correlations in the image to allow reconstruction when only partial information of the scene is available.

In these measurements, timing histograms containing ToF information were constructed for each pixel. Depth and intensity information were estimated from these histograms using a cross-correlation method previously described in Refs. [8,9,28]. For each pixel location, a cross-correlation, c, between the measured timing histogram, h, and an instrumental response, g (a reference histogram acquired during calibration), was performed:

$$c_\tau = \sum_{j = 1}^{N} h_{\tau + j} \times g_j , \tag{1}$$
where N is the number of timing bins in the instrumental response and the histogram, $j$ represents the indexing of the time bins, and $\tau$ represents the lag time. The time-of-flight, and hence target depth, is then defined as the time lag for which the cross-correlation is maximised. The intensity information can be deduced from the number of photons in the cross-correlation peak. This operation is repeated in turn for each pixel, eventually leading to a three-dimensional image, composed of X and Y positions of the pixels and depth information of the target. The instrumental response function, g, was obtained from a histogram of a 60 s long measurement from an individual pixel when a flat scattering surface was located in the plane of the target.
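As an illustration of this estimator, a minimal per-pixel Python/numpy implementation of Eq. (1) might look as follows. The function and variable names are hypothetical, and the intensity value shown is a simple proxy for the number of photons under the cross-correlation peak rather than the authors' exact estimator.

```python
import numpy as np

C_MM_PER_PS = 0.15  # c/2 in mm per picosecond: converts round-trip time to one-way depth

def estimate_depth_intensity(histogram, irf, bin_width_ps):
    """Pixel-wise cross-correlation of a timing histogram with the instrumental
    response function (IRF), following Eq. (1)."""
    histogram = np.asarray(histogram, dtype=float)
    irf = np.asarray(irf, dtype=float)
    # Full cross-correlation over all lags; output index k corresponds to lag k - (len(irf) - 1)
    xcorr = np.correlate(histogram, irf, mode="full")
    lag_bins = int(np.argmax(xcorr)) - (len(irf) - 1)   # lag maximising the correlation
    tof_ps = lag_bins * bin_width_ps                    # time-of-flight estimate
    depth_mm = tof_ps * C_MM_PER_PS                     # target depth estimate
    # Rough photon-number estimate: peak correlation normalised by the IRF "energy"
    intensity = xcorr.max() * irf.sum() / np.sum(irf ** 2)
    return depth_mm, intensity
```

Repeating this operation over every pixel of the raster scan yields the X, Y and depth point cloud together with the corresponding intensity image.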

This section presents the results of the depth and intensity estimations reconstructed using the pixel-wise cross-correlation algorithm. Several model vehicles were used as targets for this experiment. A model of a double decker bus and of a Mini Cooper car were selected to demonstrate the high-resolution depth imaging capabilities of the system. Figures 2(a) and 2(b) show photographs of the targets taken with a visible camera. The depth (shown in Fig. 2(c) and 2(d)) and intensity (shown in Fig. 2(e) and 2(f)) profiles of the targets were obtained using the cross-correlation method. For each of the depth and intensity profiles the inter-pixel spacing was 1 mm, with a pixel format of 123 × 72 (X × Y) for the bus scan and 100 × 70 for the car. The model dimensions were 110 × 38 × 60 mm (L × W × H) and 95 × 60 × 45 mm for the bus and the car, respectively. The measurements were performed using a 300 ms per pixel acquisition time, corresponding to a total photon acquisition time of 44.28 minutes for the bus and 35 minutes for the car. An illumination wavelength of 1450 nm was used in these measurements, with the maximum average laser power directed at the target being 813 pW for the bus and 912 pW for the car.


Fig. 2. The depth and intensity profile measurements reconstructed using the pixel-wise cross–correlation approach. The images (a) and (b) are close-up visible photographs of the targets: a double decker bus model (110 × 60 × 38 mm) and a Mini Cooper car model (95 × 45 × 60 mm). The depth reconstructions are shown in (c) and (d), and the intensity reconstructions are shown in (e) and (f). The measurements were performed in a dark laboratory environment at a stand–off distance of 0.4 m. The scanned scene consisted of the target mounted in front of a white cardboard backplane with a maximum front-to-back separation of approximately 100 mm. To improve the presentation clarity, an arbitrary zero depth position was used for the scale on depth profiles (c) and (d). The measurement parameters are described in the main text.


In both cases, the low-signature parts of the target (e.g. the wheels) are poorly resolved. Features inside the windows of the bus target are evident, however, especially in the intensity profile. In addition, Figs. 2(e) and 2(f) demonstrate that even small features on the hood of the Mini Cooper car are recognisable, and other fine details can be reconstructed.

We also reconstructed images of these targets recorded using shorter per-pixel acquisition times, from 30 ms down to 0.5 ms. Figure 3 illustrates the depth and intensity profiles reconstructed from data acquired at per-pixel acquisition times of (a) 30 ms, (b) 10 ms, (c) 3 ms, (d) 1 ms, and (e) 0.5 ms. For a pixel acquisition time of 0.5 ms, the average number of photons per pixel over the entire scene was 1.4. As expected, the quality of the image reconstruction degrades as the acquisition time is reduced, with the target becoming difficult to discern in isolation at the 0.5 ms per-pixel acquisition time.


Fig. 3. Depth and intensity profiles reconstructed using the pixel-wise cross correlation technique using data acquired with varying per-pixel acquisition times: (a) 30 ms; (b) 10 ms; (c) 3 ms; (d) 1 ms; and (e) 0.5 ms. The scene was scanned at a wavelength of 1450 nm in dark laboratory conditions. The image format used was 100 × 70 pixels, which covered an area of approximately 100 × 70 mm at a range of 0.4 m. In order to improve the presentation clarity, an arbitrary zero depth position was used for the scale on the depth profiles.


Several algorithms have been developed to restore depth and intensity images in extreme cases, such as photon-starved regimes; these include RDI-TV [40], ManiPoP [46], UA [44], and NR3D [45]. In this paper, we highlight the benefit that the RDI-TV [40] and ManiPoP [46] algorithms provide in reducing the acquisition time. Both algorithms account for the data statistics and the spatial correlations between neighbouring pixels to reconstruct depth and intensity images. To reduce the computational cost, RDI-TV makes assumptions that are valid in this work, namely: the background counts are negligible since the measurements were made in dark laboratory conditions; the depth of the object lies inside the observation window; and the system temporal response is narrower than the observation window. More details on the algorithm are provided in Halimi et al. [40], and in [5,8,28] where the algorithm is applied to single-photon data.
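The full RDI-TV algorithm is described in [40]; purely as an illustration of the underlying idea of total-variation regularization across neighbouring pixels, the sketch below applies an off-the-shelf TV denoiser from scikit-image to pixel-wise depth and intensity estimates. This is a simplified stand-in under an assumed regularization weight, not the authors' implementation, and it ignores the Poisson data statistics that RDI-TV models explicitly.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_smooth(depth_map, intensity_map, weight=0.1):
    """Apply total-variation regularization to pixel-wise depth and intensity
    estimates: spatial smoothness is enforced while edges are preserved.
    A crude stand-in for RDI-TV, operating directly on the estimated images."""
    depth_tv = denoise_tv_chambolle(np.asarray(depth_map, float), weight=weight)
    intensity_tv = denoise_tv_chambolle(np.asarray(intensity_map, float), weight=weight)
    return depth_tv, intensity_tv

# Example usage on the pixel-wise estimates from the cross-correlation step:
# depth_tv, intensity_tv = tv_smooth(depth_map, intensity_map, weight=0.05)
```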

The ManiPoP algorithm [46] is more general since it accounts for the presence of background noise and of multiple peaks per pixel. This algorithm restores a point cloud by considering a Bayesian formulation that combines a Poisson-based likelihood term with statistical prior distributions. More precisely, a marked point process is used to restore the point cloud, and a reversible jump Markov chain Monte Carlo (RJ-MCMC) approach is then applied to sample the resulting posterior distribution [46].

The benefit of these advanced computational methods in reducing acquisition time is highlighted by considering a scenario where only partial information is acquired. This is obtained by scanning random parts of the scene, i.e. randomly selected pixels ranging from 75% down to 25% of the full number of pixels in the scene, as emulated in the sketch below. Figures 4, 5, and 6 compare the depth and intensity profiles restored using the cross-correlation, RDI-TV and ManiPoP algorithms with 25%, 50% and 75% of the pixels randomly removed from the entire 100 × 70 scene. Figure 4 demonstrates that, in the scenario where 75% of the scene was randomly scanned, both advanced techniques work well in filling the missing pixels. The intensity profiles are restored particularly well by RDI-TV.
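This partial-scan scenario can be emulated offline by randomly masking a chosen fraction of the pixels before reconstruction, as in the short numpy sketch below; the array shape and the masking step are illustrative assumptions rather than the authors' processing chain.

```python
import numpy as np

def random_pixel_mask(shape, keep_fraction, seed=0):
    """Boolean mask selecting a random subset of pixels to emulate a partial scan."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < keep_fraction

# Keep 25% of a 100 x 70 (X x Y) scan; unscanned pixels are flagged as missing.
mask = random_pixel_mask((70, 100), keep_fraction=0.25)
# depth_subsampled = np.where(mask, depth_map, np.nan)
```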


Fig. 4. Depth and intensity images reconstructed using the cross-correlation technique (left), the RDI-TV algorithm (middle), and the ManiPoP algorithm (right) with 25% of the pixels removed. The image contained 100 × 70 pixels prior to the randomly selected removal of 25% of the pixels. A 10 ms per pixel acquisition time was used in these measurements.



Fig. 5. Depth and intensity images reconstructed using the cross-correlation technique (left), the RDI-TV algorithm (middle), and the ManiPoP algorithm (right) with 50% of the pixels removed. The image contained 100 × 70 pixels prior to the randomly selected removal of 50% of the pixels. A 10 ms per pixel acquisition time was used in these measurements.



Fig. 6. Depth and intensity images reconstructed using the cross-correlation technique (left), the RDI-TV algorithm (middle), and the ManiPoP algorithm (right) with 75% of the pixels removed. The image contained 100 × 70 pixels prior to the randomly selected removal of 75% of the pixels. A 10 ms per pixel acquisition time was used in these measurements.


As shown in Fig. 5, the identification of details with the cross-correlation approach degrades significantly with 50% of the data missing. The RDI-TV and ManiPoP algorithms, however, restore the missing pixels by exploiting depth relationships between neighbouring pixels, improving the visual identification of image details. Figure 6 demonstrates that with only 25% of the pixels, the advanced algorithms provide a good-quality reconstructed image in which the target and important details such as wheels and windows can be easily identified. Note, however, that ManiPoP operates on a point cloud, which results in some gaps in the 2D representations of the depth and reflectivity images (i.e., the black points in Fig. 6).

4. LIDAR performance estimations for future ranging scenarios

Although the results presented in the previous sections demonstrate LIDAR imaging using a planar geometry Ge–on–Si SPAD detector at short range, further theoretical estimations were made to assess the potential of using these detectors in long range LIDAR, consistent with the requirements of, for example, automotive LIDAR. In this section, we will consider a LIDAR model based on the photon-counting version of the LIDAR equation [47,9], and present estimations of the operating conditions necessary for reliable depth imaging at longer distances and in the presence of attenuating media.

The LIDAR equation model, tailored for photon-counting systems, that was developed in our previous work [9] included several operational parameters to estimate the performance of the system in different conditions. Initially, the model provided an estimate of the number of photon events recorded in the timing bin corresponding to the peak of the photon return histogram, np, as:

$$n_p = \frac{E_{Pulse}\, F\, \lambda}{hc}\, t\, \frac{A_{Lens}\, \rho}{2\pi R^2}\, e^{-2\alpha R}\, C_{in}\, C_{det}\, \eta \tag{2}$$
where the main parameters are listed in Table 1, h is Planck's constant, and c is the speed of light in vacuum. Here, we assume a laser source with a pulse energy, EPulse, that is independent of the repetition rate, F.


Table 1. Description of the parameters in Eq. (2).

Secondly, the model estimated the average number of background counts per bin, nb, as

$$n_b = t\, DCR\, \tau_b\, F \tag{3}$$
where DCR is the dark count rate of the detector at a given operating temperature, and τb is the timing bin size. From these estimates, the signal-to-noise ratio (SNR) can be obtained as [48]:
$$SNR = \frac{n_p}{\sqrt{n_p + n_b}}. \tag{4}$$
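A compact Python implementation of this forward model (Eqs. (2)–(4)) is sketched below; the function follows the equations directly, and all parameter values are left as arguments since the values used for the estimates in this section are given in the following paragraphs.

```python
import numpy as np

H = 6.626e-34  # Planck's constant (J s)
C = 2.998e8    # speed of light in vacuum (m/s)

def photon_return_model(E_pulse, F, wavelength, t, A_lens, rho, R,
                        alpha, C_in, C_det, eta, dcr, tau_b):
    """Photon-counting LIDAR forward model following Eqs. (2)-(4):
    n_p - counts in the peak timing bin of the return histogram,
    n_b - average background counts per bin,
    snr - resulting signal-to-noise ratio."""
    n_p = (E_pulse * F * wavelength / (H * C)) * t \
          * (A_lens * rho / (2.0 * np.pi * R**2)) \
          * np.exp(-2.0 * alpha * R) * C_in * C_det * eta   # Eq. (2)
    n_b = t * dcr * tau_b * F                                # Eq. (3)
    snr = n_p / np.sqrt(n_p + n_b)                           # Eq. (4)
    return n_p, n_b, snr
```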
In this work, Eqs. (2)–(4) were used to estimate the average optical laser power required for successful imaging at long distances. To predict the performance of the system at higher operating temperatures in a LIDAR application, we used data from the single-photon characterization of a 26 µm diameter Ge-on-Si SPAD, similar to those reported in [39]. This detector exhibited an SPDE of 15% at a wavelength of 1310 nm and an SPDE of 9% at a wavelength of 1450 nm, with a DCR of 2.6 kcounts/s at 2.5% excess bias and an operating temperature of 125 K. The attenuation of light in air at both 1310 nm and 1450 nm is regarded as negligible in this analysis, and the spot size was assumed to be smaller than the detection window. From previous work [9], an empirically established minimum SNR of 1.4 was required to reliably obtain a depth estimate from a single-pixel measurement. Lower SNR values can be used effectively in conjunction with computational imaging techniques that utilize spatial correlations over a number of adjacent pixels. The minimum value of np was obtained from Eq. (4) using an SNR of 1.4. The average laser power required for successful imaging at much longer distances was then estimated as:
$$P_{out} = \frac{hc}{\lambda}\, \frac{2\pi R^2\, n_p}{A_{Lens}\, \rho\, t\, \eta\, C_{in}\, C_{det}}. \tag{5}$$
For the LIDAR calculations, the repetition rate of the illuminating light was 100 kHz, and the internal loss of the system was assigned a conservatively high estimate of 10 dB. The reflectivity of the target was considered Lambertian in nature, with the overall target back-scatter equal to 10% of the incident radiation.
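Under the operating parameters stated above (100 kHz repetition rate, a combined 10 dB system loss, 10% Lambertian back-scatter, the 23 mm collection aperture, an SPDE of 9% at 1450 nm and a DCR of 2.6 kcounts/s), the Eq. (5) estimate can be sketched as below. The minimum np is obtained by solving Eq. (4) for an SNR of 1.4; the 50 ps timing bin width is an assumed value, so the numbers produced are indicative only.

```python
import numpy as np

H, C = 6.626e-34, 2.998e8   # Planck's constant (J s), speed of light (m/s)

def required_average_power(R, wavelength, eta, t=10e-3, snr_min=1.4,
                           lens_diameter=23e-3, rho=0.1, system_loss_db=10.0,
                           F=100e3, dcr=2.6e3, tau_b=50e-12):
    """Average laser power needed for a single-pixel depth estimate at range R,
    following Eq. (5) with negligible atmospheric attenuation (alpha = 0)."""
    A_lens = np.pi * (lens_diameter / 2.0) ** 2
    C_sys = 10.0 ** (-system_loss_db / 10.0)       # combined C_in * C_det
    n_b = t * dcr * tau_b * F                       # Eq. (3)
    # Minimum n_p that satisfies Eq. (4) for SNR = snr_min:
    n_p = 0.5 * (snr_min**2 + np.sqrt(snr_min**4 + 4.0 * snr_min**2 * n_b))
    # Eq. (5): required average power (pulse energy x repetition rate)
    return (H * C / wavelength) * (2.0 * np.pi * R**2 * n_p) \
           / (A_lens * rho * t * eta * C_sys)

# e.g. 1 km stand-off at 1450 nm (SPDE 9%) with a 10 ms per-pixel acquisition time
print(required_average_power(R=1000.0, wavelength=1450e-9, eta=0.09))
```

With these assumptions the estimate falls in the sub-milliwatt range at 1 km, in line with the power levels discussed in this section.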

Figure 7 shows estimations of the laser power required for imaging in free space at target stand-off distances of up to 1 km using different acquisition times (1, 3, 10 and 30 ms). It is clear from Fig. 7 that lower average optical powers are sufficient for reliable depth imaging when longer acquisition times are employed. Whilst longer acquisition times may be acceptable in some applications, such as certain environmental monitoring applications, for rapid imaging applications the shorter acquisition times necessitate higher optical powers. We predict that a LIDAR system incorporating Ge-on-Si SPADs would be able to form a depth image of an object at a distance of 1 km while remaining eye-safe, using per-pixel acquisition times as low as 1 ms at a wavelength of 1310 nm or 3 ms at a wavelength of 1450 nm. The longer acquisition times required at 1450 nm reflect the reduced detection efficiency at longer wavelengths, caused by the decrease in the absorption coefficient of Ge [39]. These acquisition times could be further reduced by improving the SPDE through an increase in the thickness of the Ge absorber in the SPAD devices [39]. Ge-on-Si SPADs have a clear advantage in that there is no energy barrier between the Ge absorber and the Si avalanche region for the photogenerated electrons [49], whereas in InGaAs/InP SPAD detectors the photogenerated holes must traverse a significant energy barrier between the InGaAs absorber and the InP multiplication region. With an optimized device design, this advantage could result in significantly increased SPDE for Ge-on-Si SPAD detectors. These results are modelled using data for current generation detectors, which require cryogenic cooling; further detector development is required to achieve sufficiently low dark count rates within the operating range of Peltier cooling. To achieve such higher temperature operation, performance improvements will be required through reduced device volumes, optimized material, and advanced device geometries, as described in Vines et al. [39].


Fig. 7. The average laser power required to image a target at different stand-off distances from 100 m to 1 km using 1310 nm (a) or 1450 nm (b) wavelength illuminating light. The estimation considers different acquisition times per pixel: 1 ms (magenta triangles), 3 ms (blue triangles), 10 ms (red circles) and 30 ms (black squares). The estimate is based on a collecting lens of 23 mm diameter and a 26 µm diameter planar Ge-on-Si SPAD operated at a temperature of 125 K under an excess bias of 2.5% above avalanche breakdown. The repetition rate of the pulsed illumination laser is 100 kHz.


Finally, the effect of a weakly attenuating medium was included in the model using different values of the attenuation coefficient, a target stand-off distance of 300 meters from the transceiver, and a 10 ms acquisition time per pixel. Figure 8 shows the average optical power versus the number of attenuation lengths, where one attenuation length corresponds to a reduction by a factor of 1/e in the optical power as the light travels through the attenuating medium. The results of the simulation show that less than 1 mW of average optical power would be required for successful single-pixel imaging through a weakly attenuating medium, with the required power rising to mW levels at higher attenuations. As described by Tobin et al. [8], improved results are possible in imaging applications by use of advanced computational imaging approaches.
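The attenuating medium enters through the e^(-2αR) term of Eq. (2), so the free-space power requirement simply scales by exp(2·AL), where AL = αR is the number of one-way attenuation lengths. A minimal sketch of this scaling follows; the free-space starting value is an assumed placeholder, not a result from Fig. 8.

```python
import numpy as np

def scale_for_attenuation(P_free_space, attenuation_lengths):
    """Round-trip scaling of the required power through a homogeneous
    attenuating medium: the exp(-2*alpha*R) factor in Eq. (2), inverted."""
    return P_free_space * np.exp(2.0 * attenuation_lengths)

# Assumed free-space requirement (watts) at 300 m, scaled for 0-3 attenuation lengths
P0 = 4e-5
for AL in range(4):
    print(AL, scale_for_attenuation(P0, AL))
```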


Fig. 8. The average laser power required to image a target at a stand-off distance of 300 meters for different attenuation lengths between the system and the target using operating wavelengths of 1310 nm (black squares) and 1450 nm (red circles) and 10 ms per pixel acquisition time. The SPAD operating conditions are the same as used in Fig. 7.


5. Conclusions

A laboratory-based LIDAR system incorporating an individual planar geometry Ge-on-Si SPAD detector has been demonstrated over short range, and high-quality depth and intensity profiles have been reconstructed. Whilst at an early stage of development, Ge-on-Si SPAD detector technology offers high single-photon detection efficiency, relatively low dark count rates and picosecond temporal response [39], which are key attributes for long-range single-photon LIDAR applications. These detectors possess the advantage of compatibility with more mature Si technology while extending the wavelength of operation beyond the sensitivity of Si SPADs, potentially up to 1550 nm. LIDAR systems operating in the SWIR benefit from lower solar background noise and reduced atmospheric attenuation compared to systems operating in the NIR. More importantly, significantly higher optical power levels can be used in the SWIR region than in the NIR region whilst remaining below laser eye-safety thresholds.

We acquired high-resolution 3D images of various targets using the TCSPC technique in a dark laboratory environment. Millimeter-resolution 3D depth profiles were achieved at a stand-off distance of 0.4 m using per-pixel acquisition times of milliseconds duration. All measurements were performed using pW average optical power levels. We demonstrated the potential for a reduction in the total acquisition time by obtaining only partial information about the target and reconstructing the depth and intensity profiles using the RDI-TV and ManiPoP algorithms. These algorithms allow good image reconstruction even when large fractions of the data are missing or corrupted; in this case, we demonstrated target reconstruction with 75% of the data removed. The use of computational imaging algorithms that utilize spatial correlations offers the capability of full image reconstruction from partial scan coverage and reduced overall photon returns, although this can often result in a trade-off with reduced image spatial resolution and prohibitively long data processing times. Recent results in real-time, video-rate 3D reconstruction of complex target scenes from single-photon data point towards significant reductions in the computational cost of some classes of these algorithms [50].

By extrapolating the laboratory performance using the LIDAR model, such a system would require sub-mW average laser powers at wavelengths of 1310 nm and 1450 nm to reliably register a depth measurement at 1 km range. These results were obtained using a prototype individual Ge-on-Si SPAD; however, the development of Ge-on-Si SPAD detector arrays could enable more rapid data acquisition over the full optical field, avoiding the need for optical scanning. This provides clear motivation for the use of Ge-on-Si SPAD detector arrays for eye-safe, full-field depth imaging at long range at video frame rates. The same LIDAR model was used to estimate the laser power required to image a target through an attenuating medium. The results presented in this paper show the potential for a new low-cost LIDAR system for single-photon sensing and 3D imaging in the eye-safe SWIR region. Various advanced applications operating in this wavelength region, such as automotive and autonomous vehicles, security, and environmental monitoring, would benefit from a LIDAR system that employs future generations of planar Ge-on-Si SPAD detectors.

Funding

United Kingdom Engineering and Physical Sciences Research Council (EP/N003225/1, EP/K015338/1, EP/L024020/1, EP/M01326X/1, EP/N003446/1); Royal Academy of Engineering (RF/201718/17128, RF/201819/18/187).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

The data associated with this work can be downloaded from the Heriot-Watt data archive at https://doi.org/10.17861/f4f42c63-c6f6-4986-a69b-9e443f945fe9

References

1. C. Mallet and F. Bretar, “Full-waveform topographic lidar: State-of-the-art,” ISPRS J. Photogramm. 64(1), 1–16 (2009). [CrossRef]  

2. M.-C. Amann, T. M. Bosch, M. Lescure, R. A. Myllylae, and M. Rioux, “Laser ranging: a critical review of unusual techniques for distance measurement,” Opt. Eng. 40(1), 10–20 (2001). [CrossRef]  

3. B. Schwarz, “Mapping the world in 3D,” Nat. Photonics 4(7), 429–430 (2010). [CrossRef]  

4. A. McCarthy, X. Ren, A. Della Frera, N. R. Gemmell, N. J. Krichel, C. Scarcella, A. Ruggeri, A. Tosi, and G. S. Buller, “Kilometer-range depth imaging at 1550 nm wavelength using an InGaAs/InP single-photon avalanche diode detector,” Opt. Express 21(19), 22098–22113 (2013). [CrossRef]  

5. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25(10), 11919–11931 (2017). [CrossRef]  

6. Z. Li, E. Wu, C. Pang, B. Du, Y. Tao, H. Peng, H. Zeng, and G. Wu, “Multi-beam single-photon-counting three-dimensional imaging lidar,” Opt. Express 25(9), 10189–10195 (2017). [CrossRef]  

7. A. M. Wallace, A. McCarthy, C. J. Nichol, X. Ren, S. Morak, D. Martinez-Ramirez, I. H. Woodhouse, and G. S. Buller, “Design and evaluation of multispectral lidar for the recovery of arboreal parameters,” IEEE Trans. Geosci. Remote Sensing 52(8), 4942–4954 (2014). [CrossRef]  

8. R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher, and G. S. Buller, “Three-dimensional single-photon imaging through obscurants,” Opt. Express 27(4), 4590–4611 (2019). [CrossRef]  

9. A. Maccarone, A. McCarthy, X. Ren, R. E. Warburton, A. M. Wallace, J. Moffat, Y. Petillot, and G. S. Buller, “Underwater depth imaging using time-correlated single-photon counting,” Opt. Express 23(26), 33911–33926 (2015). [CrossRef]  

10. A. Halimi, A. Maccarone, A. McCarthy, S. McLaughlin, and G. S. Buller, “Object depth profile and reflectivity restoration from sparse single-photon data acquired in underwater environments,” IEEE Trans. Comput. Imaging 3(3), 472–484 (2017). [CrossRef]  

11. G. S. Buller and A. Wallace, “Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,” IEEE J. Sel. Top. Quantum Electron. 13(4), 1006–1015 (2007). [CrossRef]  

12. N. R. Gemmell, A. McCarthy, B. Liu, M. G. Tanner, S. D. Dorenbos, V. Zwiller, M. S. Patterson, G. S. Buller, B. C. Wilson, and R. H. Hadfield, “Singlet oxygen luminescence detection with a fiber-coupled superconducting nanowire single-photon detector,” Opt. Express 21(4), 5005–5013 (2013). [CrossRef]  

13. H.-K. Lo, M. Curty, and K. Tamaki, “Secure quantum key distribution,” Nat. Photonics 8(8), 595–604 (2014). [CrossRef]  

14. P. J. Clarke, R. J. Collins, V. Dunjko, E. Andersson, J. Jeffers, and G. S. Buller, “Experimental demonstration of quantum digital signatures using phase-encoded coherent states of light,” Nat. Commun. 3(1), 1174–1182 (2012). [CrossRef]  

15. R. Henderson and K. Schulmeister, Laser Safety, (Institute of Physics Publishing, 2004).

16. International Standard, Safety of laser products-Part 1: Equipment classification and requirements. IEC 60825-1, (International Electrotechnical Commission, 2007).

17. L. S. Rothman, D. Jacquemart, A. Barbe, D. C. Benner, M. Birk, L. Brown, M. Carleer, C. Chackerian Jr, K. Chance, L. Coudert, V. Dana, V. M. Devi, J.-M. Flaud, R. R. Gamache, A. Goldman, J.-M. Hartmann, K. W. Jucks, A. G. Maki, J.-Y. Mandin, S. T. Massie, J. Orphal, A. Perrin, C. P. Rinsland, M. A. H. Smith, J. Tennyson, R. N. Tolchenov, R. A. Toth, J. Vander Auwera, P. Varanasi, and G. Wagner, “The HITRAN 2004 molecular spectroscopic database,” J. Quant. Spectrosc. Radiat. Transfer 96(2), 139–204 (2005). [CrossRef]  

18. H. Willebrand and B. S. Ghuman, Free space optics: enabling optical connectivity in today's networks. (SAMS publishing, 2002).

19. R. E. Bird, R. L. Hulstrom, and L. Lewis, “Terrestrial solar spectral data sets,” Sol. Energy 30(6), 563–573 (1983). [CrossRef]  

20. M. A. Albota, B. F. Aull, D. G. Fouche, R. M. Heinrichs, D. G. Kocher, R. M. Marino, J. G. Mooney, N. R. Newbury, M. E. O’Brien, and B. E. Player, “Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays,” L. Lab. J. 13(2), 351–370 (2002).

21. B. F. Aull, A. H. Loomis, D. J. Young, R. M. Heinrichs, B. J. Felton, P. J. Daniels, and D. J. Landers, “Geiger-mode avalanche photodiodes for three-dimensional imaging,” L. Lab. J. 13(2), 335–349 (2002).

22. A. McCarthy, R. J. Collins, N. J. Krichel, V. Fernández, A. M. Wallace, and G. S. Buller, “Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting,” Appl. Opt. 48(32), 6241–6251 (2009). [CrossRef]  

23. P. A. Hiskett, C. S. Parry, A. McCarthy, and G. S. Buller, “A photon-counting time-of-flight ranging technique developed for the avoidance of range ambiguity at gigahertz clock rates,” Opt. Express 16(18), 13685–13698 (2008). [CrossRef]  

24. C. Niclass, A. Rochas, P.-A. Besse, and E. Charbon, “Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes,” IEEE J. Solid-State Circuits 40(9), 1847–1854 (2005). [CrossRef]  

25. D. Stoppa, L. Pancheri, M. Scandiuzzo, L. Gonzo, G.-F. Dalla Betta, and A. Simoni, “A CMOS 3-D imager based on single photon avalanche diode,” IEEE Trans. Circuits Syst. I 54(1), 4–12 (2007). [CrossRef]  

26. C. Niclass, M. Soga, H. Matsubara, S. Kato, and M. Kagami, “A 100-m Range 10-Frame/s 340 × 96-Pixel Time-of-Flight Depth Sensor in 0.18-µm CMOS,” IEEE J. Solid-State Circuits 48(2), 559–572 (2013). [CrossRef]  

27. M. Entwistle, M. A. Itzler, J. Chen, M. Owens, K. Patel, X. Jiang, K. Slomkowski, and S. Rangwala, “Geiger-mode APD camera system for single-photon 3D LADAR imaging,” Proc. SPIE 8375, 83750D (2012). [CrossRef]  

28. R. Tobin, A. Halimi, A. McCarthy, X. Ren, K. J. McEwan, S. McLaughlin, and G. S. Buller, “Long-range depth profiling of camouflaged targets using single-photon detection,” Opt. Eng. 57(3), 031303 (2017). [CrossRef]  

29. X. Jiang, M. Itzler, K. O’Donnell, M. Entwistle, M. Owens, K. Slomkowski, and S. Rangwala, “InP-based single-photon detectors and geiger-mode APD arrays for quantum communications applications,” IEEE J. Sel. Top. Quantum Electron. 21(3), 5–16 (2015). [CrossRef]  

30. G. S. Buller and R. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21(1), 012002 (2010). [CrossRef]  

31. M. G. Tanner, C. Natarajan, V. Pottapenjara, J. O’Connor, R. Warburton, R. Hadfield, B. Baek, S. Nam, S. Dorenbos, and E. B. Ureña, “Enhanced telecom wavelength single-photon detection with NbTiN superconducting nanowires on oxidized silicon,” Appl. Phys. Lett. 96(22), 221109 (2010). [CrossRef]  

32. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21(7), 8904–8915 (2013). [CrossRef]  

33. R. E. Warburton, A. McCarthy, A. M. Wallace, S. Hernandez-Marin, R. H. Hadfield, S. W. Nam, and G. S. Buller, “Subcentimeter depth resolution using a single-photon counting time-of-flight laser ranging system at 1550 nm wavelength,” Opt. Lett. 32(15), 2266–2268 (2007). [CrossRef]  

34. C. M. Natarajan, M. G. Tanner, and R. H. Hadfield, “Superconducting nanowire single-photon detectors: physics and applications,” Supercond. Sci. Technol. 25(6), 063001 (2012). [CrossRef]  

35. A. Y. Loudon, P. A. Hiskett, G. S. Buller, R. T. Carline, D. C. Herbert, W. Leong, and J. G. Rarity, “Enhancement of the infrared detection efficiency of silicon photon-counting avalanche photodiodes by use of silicon germanium absorbing layers,” Opt. Lett. 27(4), 219–221 (2002). [CrossRef]  

36. R. E. Warburton, G. Intermite, M. Myronov, P. Allred, D. R. Leadley, K. Gallacher, D. J. Paul, N. J. Pilgrim, L. J. M. Lever, Z. Ikonic, R. W. Kelsall, E. Huante-Ceron, A. P. Knights, and G. S. Buller, “Ge-on-Si Single-Photon Avalanche Diode Detectors: Design, Modeling, Fabrication, and Characterization at Wavelengths 1310 and 1550 nm,” IEEE Trans. Electron Devices 60(11), 3807–3813 (2013). [CrossRef]  

37. Z. Lu, Y. Kang, C. Hu, Q. Zhou, H.-D. Liu, and J. C. Campbell, “Geiger-Mode Operation of Ge-on-Si Avalanche Photodiodes,” IEEE J. Quantum Electron. 47(5), 731–735 (2011). [CrossRef]  

38. N. J. Martinez, M. Gehl, C. T. Derose, A. L. Starbuck, A. T. Pomerene, A. L. Lentine, D. C. Trotter, and P. S. Davids, “Single photon detection in a waveguide-coupled Ge-on-Si lateral avalanche photodiode,” Opt. Express 25(14), 16130–16139 (2017). [CrossRef]  

39. P. Vines, K. Kuzmenko, J. Kirdoda, D. C. Dumas, M. M. Mirza, R. W. Millar, D. J. Paul, and G. S. Buller, “High performance planar Ge-on-Si single-photon avalanche detectors,” Nat. Commun. 10(1), 1086 (2019). [CrossRef]  

40. A. Halimi, Y. Altmann, A. McCarthy, X. Ren, R. Tobin, G. S. Buller, and S. McLaughlin, “Restoration of intensity and depth images constructed using sparse single-photon data,” in Proceedings of 24th European Signal Processing Conference, (EUSIPCO2016).

41. Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, and S. McLaughlin, “Lidar waveform-based analysis of depth images constructed using sparse single-photon data,” IEEE Trans. on Image Process. 25(5), 1935–1946 (2016). [CrossRef]  

42. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7(1), 12046 (2016). [CrossRef]  

43. D. Shin, F. Xu, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “Computational multi-depth single-photon imaging,” Opt. Express 24(3), 1873–1888 (2016). [CrossRef]  

44. J. Rapp and V. K. Goyal, “A few photons among many: Unmixing signal and noise for photon-efficient active imaging,” IEEE Trans. Comput. Imaging 3(3), 445–459 (2017). [CrossRef]  

45. A. Halimi, R. Tobin, A. McCarthy, J. Bioucas-Dias, S. McLaughlin, and G. S. Buller, “Restoration of Multidimensional Sparse Single-Photon 3D-LiDAR Images,” IEEE Trans. Comput. Imaging, in press (2019).

46. J. Tachella, Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, “Bayesian 3d reconstruction of complex scenes from single-photon lidar data,” SIAM J. Imaging Sci. 12(1), 521–550 (2019). [CrossRef]  

47. R. Richmond and S. Cain, Direct-Detection Ladar System (SPIE Publications, 2009).

48. N. J. Krichel, A. McCarthy, I. Rech, M. Ghioni, A. Gulinatti, and G. S. Buller, “Cumulative data acquisition in comparative photon-counting three-dimensional imaging,” J. Mod. Opt. 58(3-4), 244–256 (2011). [CrossRef]  

49. J. Kirdoda, L. Ferre Llin, K. Kuzmenko, P. Vines, Z. Greener, D. C. S. Dumas, R. W. Millar, M. M. Mirza, G. S. Buller, and D. J. Paul, “High efficiency planar Ge-on-Si single-photon avalanche diode detectors,” in Proceedings of Conference on Lasers and Electro-Optics ff1A.4 (2019).

50. J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, “Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers,” Nat. Commun. 10(1), 4984 (2019). [CrossRef]  
