Kilometer-range depth imaging at 1550 nm wavelength using an InGaAs/InP single-photon avalanche diode detector

Open Access

Abstract

We have used an InGaAs/InP single-photon avalanche diode detector module in conjunction with a time-of-flight depth imager operating at a wavelength of 1550 nm, to acquire centimeter resolution depth images of low signature objects at stand-off distances of up to one kilometer. The scenes of interest were scanned by the transceiver system using pulsed laser illumination with an average optical power of less than 600 µW and per-pixel acquisition times of between 0.5 ms and 20 ms. The fiber-pigtailed InGaAs/InP detector was Peltier-cooled and operated at a temperature of 230 K. This detector was used in electrically gated mode with a single-photon detection efficiency of about 26% at a dark count rate of 16 kilocounts per second. The system’s overall instrumental temporal response was 144 ps full width at half maximum. Measurements made in daylight on a number of target types at ranges of 325 m, 910 m, and 4.5 km are presented, along with an analysis of the depth resolution achieved.

© 2013 Optical Society of America

1. Introduction

Time-of-flight (ToF) light detection and ranging (LiDAR) systems have been used for a variety of remote sensing applications including terrain mapping, environmental monitoring, and security [1–3]. A new class of LiDAR system based on time-correlated single-photon counting (TCSPC) techniques [4, 5] has emerged in recent years. In comparison with conventional ToF LiDAR systems [6], ToF single-photon depth imagers can offer shot-noise-limited detection, as well as excellent surface-to-surface resolution. There are a number of detector choices for single-photon LiDAR systems operating at wavelengths shorter than 1000 nm, for example: micro-channel plate (MCP) detectors [7], photomultiplier tubes (PMTs) [8], or silicon-based single-photon avalanche diodes (SPADs) [9–12]. Highly optimized individual SPADs have been used in a number of single-photon counting systems, but significant progress has also been made on the development of CMOS Si SPAD arrays and their use in ToF depth imaging systems [13–15].

Single-photon imagers working in the short-wavelength infrared range (1400 to 3000 nm) have the advantage of being less affected by solar background noise [16] and less attenuated by the atmosphere [17], as well as remaining eye-safe at significantly higher power levels than wavelengths in the retinal hazard region of the spectrum which extends from 400 to 1400 nm [18]. However, semiconductor-based single-photon detectors operating in this spectral region generally have had issues with significantly increased dark count rates [19]. There have been reports of measurements made using SPAD arrays fabricated from InGaAsP/InP and InGaAs/InP operating at 1064 nm [20] and 1550 nm wavelength [21] respectively, as well as with arrays of Sb-containing devices [22], but kilometer-range depth imaging with such devices has not been reported in the literature. Recently, superconducting nanowire single photon detectors (SNSPDs) have demonstrated promising results for infrared single-photon detection [19, 23], and have been successfully used for ranging [24] and depth imaging at long distances [25]. SNSPDs require cooling to temperatures typically less than 4 K [23, 26], and whilst miniaturized closed-cycle cooling has made this considerably more practical in recent years, the requirement for such low temperatures remains a major disadvantage in depth imaging applications that require compact transceivers.

In this paper, we describe a scanning single-photon ToF depth imager that incorporated an individual, highly optimized, Peltier-cooled InGaAs/InP SPAD [27–33] operated in “gated mode”. The fiber-pigtailed InGaAs/InP SPAD detector was cooled to a temperature of 230 K and, with the excess bias set at 5 V, had a dark count rate of around 16 kilocounts per second (kcps) and a single-photon detection efficiency of about 26% at the selected illumination wavelength of 1550 nm [34, 35]. We describe the use of this InGaAs/InP SPAD detector to perform depth imaging at a wavelength of 1550 nm in bright daylight. Three-dimensional images with sub-centimeter depth uncertainties were achieved at stand-off distances of 325, 910 and 4500 meters. To the best of our knowledge, this is the first report of a kilometer-range scanning depth imager, using an illumination wavelength in the 1550 nm region of the spectrum, to employ an individual semiconductor single-photon detector. The work reported here has successfully demonstrated long-range ToF depth imaging in daylight using a compact, low power consumption detector module in conjunction with an eye-safe, pulsed 1550 nm wavelength laser source.

2. Description of the 1550 nm wavelength single-photon imager

A schematic of the system set-up is shown in Fig. 1. The 1550 nm wavelength pulsed laser illumination was provided by a supercontinuum laser source (SuperK EXTREME EXW-6, NKT Photonics, Denmark). For the measurements reported here, the repetition frequency of the laser was set to 40 MHz and the illumination beam had an average optical power of less than 600 µW on exiting the transceiver unit, corresponding to approximately 15 pJ per pulse. The pulse width of the laser was less than 50 ps. The supercontinuum laser system provided flexibility for exploring the performance parameters of the depth imaging system but the timing jitter of the system and the optical power levels used for the measurements presented in this paper are fully consistent with using, for example, a relatively inexpensive gain-switched picosecond diode laser as the source. In these experiments, the wavelength selection from the broadband laser output was achieved by using a series of optical filters: a 30 nm wide bandpass filter (BPF1) with a central wavelength of 1550 nm; a longpass filter (LPF1) with a cut-on wavelength of 1500 nm; and a shortpass filter (SPF) with a cut-off wavelength of 1845 nm. This spectrally selected output was delivered to the transceiver unit through a polarization-maintaining fiber (PMF). A linear polarizer (LP) and a half-wave plate (HWP) were used to orientate the polarization in order to optimize coupling into the fiber, and transmission through the transceiver.
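
As a quick consistency check, the quoted pulse energy follows directly from the average power and the repetition frequency. The short Python sketch below reproduces this arithmetic using the values quoted above; the rounding is ours.

```python
# Sanity check of the quoted illumination parameters (values taken from the text).
rep_rate_hz = 40e6        # laser repetition frequency
avg_power_w = 600e-6      # upper bound on the average optical power leaving the transceiver

pulse_energy_j = avg_power_w / rep_rate_hz
print(f"Energy per pulse: {pulse_energy_j * 1e12:.0f} pJ")  # -> 15 pJ, as stated
```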

Fig. 1 Schematic of the layout of the 1550 nm single-photon depth imaging system which comprises a supercontinuum laser source, an InGaAs/InP SPAD detector, a TCSPC module, and a custom transceiver. Optical components include: fiber collimation packages (FC1, FC2, FCT, FCR); polarizing beam splitter (PBS); galvanometer scan mirrors (GM1, GM2); relay lenses (RL1, RL2, RL3); objective lens (OL); longpass filters (LPF1, LPF2); bandpass filters (BPF1, BPF2); shortpass filter (SPF); linear polarizer (LP); half wave plate (HWP); polarization-maintaining fiber (PMF); single mode fiber (SMF). Other abbreviations used: nuclear instrumentation module (NIM); single-photon avalanche diode (SPAD); time correlated single-photon counting (TCSPC).

The custom-built scanning transceiver system used for our previous work at shorter wavelengths (λ ~850 nm) was employed – reference [11] provides a more complete description, and a photograph, of the transceiver unit, which was designed to allow flexible reconfiguration. With the exception of the spectral filters, the near infrared optical components and the 500 mm focal length objective lens (OL) were the same as those previously used for depth imaging with an SNSPD system at 1560 nm [25]. The optical receive channel of the transceiver included a longpass filter (LPF2) with a cut-on wavelength of 1500 nm, and a 30 nm wide bandpass filter (BPF2) with a central wavelength of 1550 nm, in order to reduce the amount of solar background in the collected scattered return signal. An armored, 10 µm diameter core, single mode fiber (SMF) was used to deliver the spectrally filtered return signal to the fiber-coupled detection head of the InGaAs/InP SPAD detector. The optical components and optomechanics of this transceiver assembly have shown good long-term mechanical stability, with the assembly maintaining its optical alignment over the course of month-long field trials. These trials were conducted from our roof laboratory facility, where the temperature can vary by about 10 °C over the course of a day.

The optical layout of our transceiver system is monostatic: i.e. the transmit and receive channels are co-axial and therefore a number of optical components, including the XY scanning mirrors, are common to both channels as can be seen in Fig. 1. This has a number of advantages particularly in terms of the spatial alignment of the transmit and receive channels since the optical system does not need to be adjusted when the target stand-off distance is changed. This is in contrast to bistatic systems where the transmit and receive channels are physically separate entities, not sharing the same optical components. However, when a monostatic transceiver arrangement is used in conjunction with a sensitive detector such as a single-photon counting system, optical back reflections from the components can be significant – this is discussed further in section 4.

A synchronous electrical trigger mechanism between the laser source, the detector, and the acquisition module was needed for accurate time correlation. This was provided by the laser source as a nuclear instrumentation module (NIM) pulse signal with a frequency of 40 MHz for all our measurements. This signal was split in two by a power splitter - one part was used to trigger the control unit of the InGaAs/InP SPAD detector, and the other was down-divided to 2.5 MHz to accommodate the maximum count rate limit of the start trigger input of the TCSPC module (PicoHarp 300, PicoQuant GmbH, Germany). This therefore provided 16 optical pulses per histogram width, and all returned photons were efficiently recorded within this time window subject to the usual statistical rules governing time-correlated single-photon counting, as described in section 3 below and reference [4]. The output of the InGaAs/InP SPAD detector provided the stop trigger of the TCSPC module which was configured with a 16 ps time bin width. The TCSPC module, the scanning galvanometer mirrors (GM1 and GM2), and the InGaAs/InP SPAD detector, were controlled via custom-designed software. Overall, the system design reduces background noise counts via a combination of spectral, spatial, and time-gated filtering approaches. The spectral filtering is achieved by the two optical filters used in the receive channel of the system (see Fig. 1), the spatial filtering is performed by the 10 µm diameter core of the fiber connected to the detector, and the time-gating is inherent to the TCSPC technique.
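
The trigger arithmetic described above can be summarized in a few lines. The sketch below is illustrative only (not the authors' acquisition code) and uses the parameters quoted in the text.

```python
# Relationship between the laser clock, the down-divided start trigger, and the histogram.
laser_rate_hz = 40e6        # laser (and detector gate) repetition frequency
start_rate_hz = 2.5e6       # down-divided start trigger supplied to the TCSPC module
bin_width_s = 16e-12        # TCSPC histogram bin width

pulses_per_start = laser_rate_hz / start_rate_hz       # 16 optical pulses per start trigger
histogram_span_s = 1.0 / start_rate_hz                 # 400 ns histogram width
bins_per_histogram = histogram_span_s / bin_width_s    # 25,000 bins of 16 ps
laser_period_s = 1.0 / laser_rate_hz                   # 25 ns between successive pulses
print(pulses_per_start, histogram_span_s, bins_per_histogram, laser_period_s)
```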

3. Single photon detection by the gated-mode InGaAs/InP SPAD detector

The fiber-pigtailed InGaAs/InP SPAD detector had an active area diameter of 25 µm and was packaged within a detection module which is described more fully in references [34, 35]. The module used custom control software for set-up and re-configurability. The detector was cooled to 230 K by means of an in-package thermo-electric cooler and used an electrical gating approach to switch the detector above avalanche breakdown, into the Geiger mode, at pre-programmed time intervals for a gate-on time of 7 ns, as shown in Fig. 2. The excess bias was set to 5 V and the resulting photon detection efficiency was about 26%, with a dark count rate of 16 kcps. InGaAs/InP SPAD detectors suffer from afterpulsing, an effect where charge carriers are trapped during the avalanche process and are subsequently released, causing further avalanche events which increase the background counts. To reduce the contribution of afterpulsing, the detector was biased below avalanche breakdown for a pre-programmed hold-off time to allow the trap states to empty before the detector was re-activated, thus lowering the afterpulsing probability to negligible levels. Typically, this hold-off time was greater than 10 µs, which was well in excess of the gating period, meaning that a number of detection gates were skipped after each event. As is usual in the time-correlated single-photon counting technique, it is important that the rate of detection events is kept low compared to the excitation rate (typically below 5%) to avoid the effects of “pulse pile-up” [4], and this rate must include the effect of skipped detection gates.
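
To illustrate the effect of the hold-off time, the back-of-the-envelope sketch below (our own estimate, not a figure given in the paper) shows how many 25 ns gate periods are skipped after each avalanche event and the resulting dead-time-limited bound on the detected count rate.

```python
# Rough effect of the hold-off dead time on the gated detector (illustrative estimate).
gate_period_s = 25e-9     # one gating opportunity per 40 MHz clock cycle
hold_off_s = 10e-6        # typical hold-off applied after each avalanche event

gates_skipped = hold_off_s / gate_period_s   # ~400 gate periods ignored per detection event
max_rate_cps = 1.0 / hold_off_s              # dead-time-limited count rate, ~100 kcps
print(f"~{gates_skipped:.0f} gates skipped per event; rate capped near {max_rate_cps/1e3:.0f} kcps")
```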

Fig. 2 Timing diagram for the gated mode operation used with the SPAD detector. A 40 MHz synchronous clock signal was supplied to the detector module. Using this clock as a trigger, the detector was gated on for a pre-determined period, T_ON, then switched off until the next clock trigger. If an avalanche event is triggered within the detector (e.g. by an incident photon) during the gate then an output pulse will be registered, as shown at time (a) in the figure. The detector will be rapidly quenched and remain off for a set hold-off time, T_HO, to reduce the probability of afterpulsing. Any clock triggers (and thus incident photons) will be ignored during this period, e.g. at time (b). Once the hold-off duration is complete, the gate is once again ready to be triggered by the clock. Any photons arriving outside the gate window will be ignored (c), but any photons arriving within the gate window, for example (d), will be detected.

This gated detection scheme guarantees a flat and linear temporal response of the detection system to photon events, allowing low timing jitter to be achieved – approximately 140 ps full width at half maximum (FWHM) in the measurements shown below – which permits centimeter resolution depth profiling. The use of a programmable electrical gate means that surfaces with a depth extent corresponding to half the gate width can be profiled, and this gate duration can be adjusted as necessary for a specific target type. Another major consideration in a monostatic optical system is that the use of a gated detector can eliminate potentially serious issues with back-reflections from optical components, as described in section 4 below. Of course, the electrical gate must be positioned in time around the return from the target for most efficient detection. This can be achieved by an initial time-scan of the start-stop time difference (over only 25 ns at the 40 MHz laser repetition rate used in these measurements), as in the burst illumination depth imaging approach [36], or by the use of pseudo-random pulse trains, previously used in single-photon depth imaging [12], to establish the unambiguous range prior to depth profiling. A short detector gate gives the least afterpulsing for a given target scenario, and hence better signal-to-noise can be achieved; however, longer gate durations can be used to locate a target. When using a longer duration gate it is possible to increase the hold-off time to mitigate the effects of increased afterpulsing, so that a combination of a longer gate and a longer hold-off time preserves the signal-to-noise at the expense of a longer acquisition time. Another reported detector gating approach is sine-wave gating of an InGaAs/InP SPAD [37], used in free-running mode in order to avoid the need to establish the range before profiling [38]; however, the resulting additional jitter reduces the depth resolution significantly and is likely to be accompanied by a lower average detection efficiency.
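
The timing quantities above translate into depth through the round-trip relation d = c·t/2. The minimal sketch below, using the gate width and jitter values quoted in the text, gives the corresponding depth scales; the sub-centimeter uncertainties reported later arise from cross-correlating over many photon events rather than from a single-event measurement.

```python
# Convert the quoted timing figures into depth via d = c * t / 2 (round trip).
c = 299_792_458.0          # speed of light in vacuum, m/s
gate_width_s = 7e-9        # programmable gate-on time
jitter_fwhm_s = 144e-12    # overall instrumental response FWHM

depth_extent_m = c * gate_width_s / 2    # ~1.05 m of depth covered within one gate
jitter_depth_m = c * jitter_fwhm_s / 2   # ~2.2 cm single-event timing spread
print(depth_extent_m, jitter_depth_m)
```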

4. Depth profile retrieval

During a depth measurement scan, the detected return photon events were time-tagged by the TCSPC module and transferred to the control computer, where software constructed each pixel’s histogram in memory. These histograms, with 16 ps wide bins, were analyzed using a cross-correlation method [25, 39] in order to determine the depth measurement for each pixel. This method calculates the cross-correlation, C, between an acquired histogram of return photon counts versus time, H, and a normalized system instrumental response function, R, such that C = F⁻¹[F(H) · F*(R)], where F denotes the fast Fourier transform, F* its complex conjugate, and · denotes element-wise multiplication. The time position corresponding to the highest cross-correlation can then be found for each pixel. Once the relevant time positions (or bin numbers) have been identified for each pixel, the spatial (i.e. X and Y) scan information is combined with the time-correlated depth measurement (i.e. Z) to reconstruct a depth profile of the scanned scene.
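
A minimal Python sketch of this per-pixel processing step is shown below. It assumes the histogram H and the instrumental response R are sampled on the same 16 ps bins; the function name and the time-zero referencing are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def depth_from_histogram(hist, irf, bin_width_s=16e-12, c=299_792_458.0):
    """Estimate the return-peak time (and a depth value) for one pixel's TCSPC histogram.

    hist : photon counts per bin for the pixel (H in the text).
    irf  : normalized instrumental response sampled on the same bins (R in the text).
    """
    n = len(hist)
    # C = F^-1[ F(H) . F*(R) ] : cross-correlation evaluated with FFTs
    C = np.fft.ifft(np.fft.fft(hist, n) * np.conj(np.fft.fft(irf, n))).real
    peak_bin = int(np.argmax(C))          # bin of maximum cross-correlation
    peak_time_s = peak_bin * bin_width_s  # time position of the return peak
    depth_m = c * peak_time_s / 2.0       # round-trip time converted to distance
    return peak_time_s, depth_m
```

In practice any fixed instrumental delay (the time origin of the instrumental response) would need to be calibrated out before converting the peak position to absolute range.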

As was mentioned in section 2, optical back reflections can be a significant issue in a monostatic transceiver arrangement. This was evident previously when using the free-running SNSPD system [25], where we found that the return histogram included the internal back reflections caused by the laser illumination as it left the transceiver. The contribution of the back reflections can be seen in the example of a processed histogram shown in Fig. 3(a). The sections of the histogram containing the peaks caused by the back reflections were easily removed during the software analysis. However, during data acquisition these back-reflections add to the overall count rate of the detector, increasing the likelihood of pulse pile-up effects appearing in the histograms. The use of a gated detector meant that such back-reflections could easily be avoided in the return histogram, as can clearly be seen in Fig. 3(b).

Fig. 3 The plot in (a), after [25], shows an example of a processed histogram for a single pixel obtained from the measurements made using a free-running SNSPD – peaks corresponding to the optical back reflections from the transceiver components are present. The plot in (b) is from a pixel measurement using the gated InGaAs/InP detector module and shows the result of a cross-correlation between the normalized instrumental timing response (see inset) and the original timing histogram of return photon counts within one period of the laser pulse train. The inset shows the instrumental response function which had a 144 ps full width at half maximum timing jitter. Most of the non-zero cross-correlation is within a gated window of only 7 ns width, and the cross-correlation peak corresponds to the time position of the return signal peak (i.e. 10.34 ns in the example).

5. Results and discussion

We performed a series of field trials in bright daylight to investigate the acquisition of depth images from scenes containing a variety of typical non-cooperative materials (e.g. wood, woven fabrics, leather, and plastic). The measurements were made at stand-off distances of 325 meters, 910 meters and 4500 meters - these distances correspond to suitable locations to which we had access and a clear line of sight from our roof laboratory facility. A sheet of hardboard was used as a backplane for the measurements made at 325 m and 910 m. This acted as a convenient and definite “backstop” which meant that the acquired data was clearly discernible and not obscured by data from other less clearly defined background objects in the field of view of the scan. It also enabled the XY extents of the scans to be easily measured. Due to variations in the weather conditions, and the angle of the Sun relative to the optic axis of the transceiver’s objective lens, the calculated normalized background noise count rate, measured at different times over the course of the field trials, varied between approximately 30 kcps and 100 kcps.

At a range of 325 meters, we scanned two scenes, each containing a human model, as shown in Figs. 4(a) and 4(d). The data was acquired with a normalized background noise count rate of around 90 kcps, and the depth images obtained for per-pixel acquisition times of 5, 2, 1, and 0.5 ms are shown in Fig. 4 – two different viewpoints of the raw plotted depth data are shown for each acquisition time in order to convey the XY spatial resolution of the system and the level of depth detail that was realized. It is evident from these depth images that some of the pixels coinciding with the areas of exposed skin on both human models did not give sufficient returns, at these per-pixel acquisition times, to obtain reliable depth measurements. Aside from skin, these results at an illumination wavelength of 1550 nm show that accurate sub-centimeter depth measurements can be made for the other materials in the scene using per-pixel acquisition times in the region of 2 to 5 ms. The results obtained for per-pixel acquisition times as short as 1 and 0.5 ms contain depth measurements for the vast majority of the pixels, enabling the overall target outline and depth profile to be easily discerned.

Fig. 4 Depth profile measurements acquired in bright daylight of human models at a stand-off distance of 325 meters. The images in columns (a) and (d) are close-up photographs of the two scenes that were scanned. Both scenes are shown from two different viewpoints and consisted of a human standing in front of a hardboard backplane with a maximum front-to-back surface separation of approximately 400 mm. The depth scans covered an area of approximately 800 × 1000 mm using 60 × 75 pixels, resulting in a pixel-to-pixel spacing of approximately 13 mm in both X and Y. Plots of the depth data obtained for per-pixel acquisition times of 5, 2, 1, and 0.5 ms are shown in columns (b), (c), (e) and (f) – each of these columns shows two different viewpoints of the surface plot constructed from the data obtained using the specified per-pixel acquisition time, and the color shading is used to map depth. A per-pixel acquisition time of 1 ms equated to a total scan time of 4.5 s for these scenes. The normalized background noise count rate was calculated to be approximately 93 kcps and 91 kcps for scenes (a) and (d) respectively.

The data acquired from three different scenes in daylight at a stand-off distance of 325 m and using an acquisition time of 5 ms per pixel is compared in Fig. 5. The scenes consisted of a life-sized mannequin, and the two human models shown in Fig. 4. In the bottom row of Fig. 5, two different viewpoints are shown of the greyscale color-mapped surface plot of the depth data for each of the three scenes. The top row of Fig. 5 includes a face-on view of the surface plot for each scene where the color is used to map the number of detected return photons for each individual pixel according to the color bar on the right hand side of the figure. It is immediately evident that the photon return from human skin is much less than, for example, the exposed surfaces of the mannequin and indeed all of the other materials in the scenes. The integrated number of photons is, of course, also dependent on the nature of the target: the scatter signature and the angle of incidence of the illuminating beam, for the different parts of the scene, contribute to the varying integrated photon returns. For the results shown in Fig. 5, which were all obtained under similar weather conditions within the space of two hours, depth measurements are reliably made at integrated photon numbers greater than approximately 15 per pixel. The integrated photon returns for discrete points of the mannequin scene shown in Fig. 5(a1) are plotted in Fig. 6 as a function of the acquisition time so as to provide an approximate indication of the variation between the different material types and indeed the variation between two different locations on the same material i.e. the mannequin’s jacket.

Fig. 5 Comparison between the results obtained from depth profile scans of a life-sized mannequin and two different human models, at a range of 325 m using a per-pixel acquisition time of 5 ms. The photographs in (a1), (b1), and (c1) are close-up images of the three different scenes that were scanned in similar daylight conditions. The bottom row shows two viewpoints of the plotted depth data that was acquired for each of the three scenes, with the white to black color shading being used to map depth. The plots shown in (a2), (b2), and (c2) use color to map the calculated number of detected return photons in the histogram peak, for each individual pixel, according to the color bar on the right. A distinct variation in the number of detected return photons from the different types of clothing materials can be seen from these intensity images e.g. the knitted cardigan on the female in (b2) has a slightly higher return in comparison to the mannequin’s rain jacket in (a2). Both of these materials appear to have a significantly higher return than the leather jacket worn by the male in (c2). The plots of the human models in (b2) and (c2) highlight the relatively low returns from skin.

Fig. 6 Material-dependent integrated photon number versus acquisition time per-pixel. The data for the graph shown in (a) was extracted from depth profile scans acquired in daylight of a life-sized mannequin at a stand-off distance of 325 m and using acquisition times of 0.5, 1, 2, 5, 10, and 20 ms per-pixel. Note that the integrated photon number is a nine-pixel-based average using a single pixel and its eight neighbors. The dashed lines in the graph, linking the set of data points for each material, are included as a guide for the reader. The pixel locations corresponding to the plotted data are indicated in the photograph shown in (b). Sets of data points for two different locations on the jacket material are included in the graph – these illustrate how the more pronounced creasing of the material at the lower location results in a significantly lower photon number due to the illuminating beam striking the material at glancing angles. This is similar to what happens at edges, e.g. points on the mannequin outline (see the integrated photon return plots in Fig. 5).

The results shown in Fig. 4 and Fig. 5, acquired with an illumination wavelength of approximately 1550 nm and an average output power of less than 600 µW, show sub-centimeter resolution depth images using per-pixel acquisition times that are comparable with the results we obtained previously using a free-running SNSPD system with 1560 nm wavelength illumination [25]. The acquisition times are, however, significantly lower than those achieved in our earlier work at an illumination wavelength of ~850 nm [11, 39]. That work was carried out under similar daylight conditions and at the same stand-off distance of 325 m, albeit using a much lower average output illumination power of 50 µW. Nevertheless, on comparing the measurements carried out at the wavelengths of 850 nm and 1550 nm, the results indicate that the difference between the integrated number of photon returns for a pixel on human skin and a pixel on other materials, such as textiles and concrete, is not as pronounced at an illumination wavelength of ~850 nm as it is at λ ~1550 nm. This is consistent with the findings of reference [40], which investigated the optical properties of human skin and subcutaneous tissue at these wavelengths.

Having achieved good quality depth images at 325 meters using per-pixel acquisition times of 1.0 ms, we proceeded to scan a moving target – a ~200 mm diameter soccer ball, suspended by a ~1.2 m long string, and set swinging with a conical pendulum motion in front of a hardboard backplane i.e. the ball swung in a horizontal circle. A depth movie of four seconds duration was acquired using a 1 ms per-pixel dwell time. The movie was recorded in daylight at 10 frames per second, with 10 × 10 pixels per frame (see Media 1). Ten consecutive frames of the depth movie are shown in Fig. 7 - two different views of the data acquired for each frame are shown. In frames 24 to 28 we can see the ball moving from left to right as well as the distance from the backplane to the front surface of the ball increasing by approximately 100 mm. No significant left-right motion of the ball is visible between frames 28 to 30 as it “turns the corner” but the distance from the backplane to the front surface of the ball continues to increase by approximately 60 mm. In frames 31 to 33, we can see the ball moving from right to left with the distance from the backplane still increasing to the point that it is approximately 200 mm further away from the backplane in frame 33 than in frame 24. In contrast to a 2D movie, the variation in depth of the swinging motion is obvious and quantifiable from this ToF depth data.

Fig. 7 Time-of-flight depth profile movie of a swinging soccer ball, ~200 mm diameter, recorded in daylight from a standoff distance of 325 meters (Media 1). The movie recorded the ball swinging in a conical pendulum motion - the four-second, 10 × 10 pixel movie with 10 frames per second was acquired using an acquisition time of 1 ms per pixel. Two different views of the data from ten consecutive frames (numbers 24 to 33) are shown. The images shown in rows (a) and (c) are face-on views as seen from the direction of the transceiver, and the corresponding top-down depth view of the data for each of the frames is shown in rows (b) and (d). The same color scheme is used to map depth in all of the plots.

Depth profiling at a longer stand-off distance of 910 m was performed in daylight with a corresponding normalized background noise count rate of approximately 32 kcps. Scanned depth images of the life-sized mannequin with per-pixel acquisition times of 20, 10, 5, and 2 ms are shown in Fig. 8. As a result of the increase in stand-off distance, while still maintaining the same system parameters, longer per-pixel acquisition times were required at this distance in order to obtain images with a corresponding level of depth detail to those acquired at 325 m. Even so, for scans with per-pixel acquisition times as low as 5 ms, it is possible to clearly discern the outline of the mannequin, and the vast majority of the depth profile. The clear advantage of depth imaging compared with low-photon-number imaging is evident in the photon-starved regime represented by the low per-pixel acquisition times. A number of the pixels that had achieved a reliable depth value in the image acquired with a 2 ms per-pixel dwell time, as shown in Fig. 8(e), were analyzed. The data from these pixels indicated that a maximum of 6 return photons had been detected for individual pixels on the hardboard backplane, and a maximum of 4 return photons from individual pixels on the mannequin.

Fig. 8 Depth profile measurements made in daylight of a life-size mannequin from a standoff distance of 910 meters. The close-up photographs in column (a) are different viewpoints of the scene that was scanned, which consisted of the mannequin against a hardboard backplane. Each depth scan covered an area of approximately 800 × 2000 mm using 30 × 80 pixels, resulting in a pixel-to-pixel spacing of approximately 25 mm in X and Y. Surface plots of the raw depth data obtained for per-pixel acquisition times of 20, 10, 5, and 2 ms are shown in columns (b) to (e) - each of these columns shows two different views of the same data obtained with the specified per-pixel acquisition time, and a white to black color shading is used to map depth. A per-pixel acquisition time of 10 ms equated to a total scan time of 24 s for this scene.

We also carried out depth profiling measurements of a scene at a stand-off distance of 4.5 kilometers. At this stand-off distance we expect much lower returns than those obtained for the 325 m and 910 m measurements when using the same system parameters, and therefore used cooperative targets, i.e. retro-reflective material attached to flat boards. These measurements were carried out in bright daylight, which corresponded to a normalized background noise count rate of approximately 78 kcps. Figure 9 shows the depth images of these cooperative targets measured using three different per-pixel acquisition times: 2 s, 500 ms, and 100 ms. Even at the relatively short per-pixel acquisition time of 100 ms, the flat panels and the separations between them are clearly resolved. In these measurements, the separation between adjacent planes was approximately 270 mm and the spot size at the target was approximately 300 mm in diameter.
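
The quoted ~300 mm footprint at 4.5 km implies a full-angle beam divergence of roughly 70 µrad; the one-line estimate below is our own back-of-the-envelope check rather than a figure given in the paper.

```python
# Approximate full-angle divergence implied by the quoted beam footprint.
spot_diameter_m = 0.3     # ~300 mm spot at the target
stand_off_m = 4500.0      # range to the target

divergence_rad = spot_diameter_m / stand_off_m
print(f"{divergence_rad * 1e6:.0f} urad full angle")  # ~67 urad
```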

Fig. 9 Depth images acquired in daylight at a stand-off distance of 4500 meters of a scene containing cooperative targets. The photographs in column (a) show two different viewpoints of the scene that was scanned. The scene consisted of two stacked 12.5 mm thick plywood panels (resulting in an area measuring 900 mm tall and 1220 mm wide) covered in white retro-reflecting material. Approximately 275 mm in front of this, on the ground, was a 400 mm tall red retro-reflective roadside warning triangle. A 400 mm wide plywood panel, with a set of five red retro-reflecting triangles, was placed centrally about 300 mm behind the white boards. Each depth scan covered an area of approximately 1500 mm tall by 1220 mm wide using 15 × 12 pixels, resulting in a pixel-to-pixel spacing of approximately 100 mm in X and Y. Surface plots of the depth data obtained for per-pixel acquisition times of 2, 0.5, and 0.1 s are shown in columns (b), (c) and (d) respectively - each of these columns shows two different views of the same data obtained with the specified per-pixel acquisition time, and the color shading is used to map depth.

In order to quantitatively evaluate the depth uncertainty at the three stand-off distances of 325 m, 910 m and 4500 m, we scanned a 12.5 mm thick, 900 × 1220 mm flat plywood panel at near normal incidence. One side of this plywood panel was covered in white retro-reflective material (T6500 HIP by Avery Dennison); the other side was uncovered. The plain plywood surface of the panel was scanned at the stand-off distances of 325 m and 910 m, and the retro-reflective side was scanned at 910 m and 4500 m. In each case, we measured a 2D array of pixels and fitted these depth measurements to a flat plane. The residual in depth, expressed as one standard deviation, was recorded for the three stand-off distances and various per-pixel acquisition times, to produce the results shown in Fig. 10. As is shown in this plot, a per-pixel acquisition time of 20 ms yielded depth uncertainties of approximately 1.2 mm and 7 mm for the scans of the plywood surface of the panel at stand-off distances of 325 m and 910 m respectively. Scans of the retro-reflective surface of the panel at stand-off distances of 910 m and 4500 m at the same per-pixel acquisition time of 20 ms achieved depth residuals of approximately 2 mm and 4 mm respectively. These calculations are based on approximately 3600 pixels for the 325 m range, and approximately 300 pixels for the 910 m and 4500 m ranges.
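
The depth-uncertainty metric used here can be reproduced with a straightforward least-squares plane fit. The sketch below is a minimal illustration under that assumption (it is not the authors' analysis code): it fits z = ax + by + c to the per-pixel depths and returns one standard deviation of the residuals.

```python
import numpy as np

def plane_residual_std(x, y, z):
    """Fit a flat plane z = a*x + b*y + c to the scanned pixel depths and return
    the standard deviation of the depth residuals (the quantity plotted in Fig. 10)."""
    A = np.column_stack([x, y, np.ones_like(x)])    # design matrix for the plane fit
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares plane parameters
    residuals = z - A @ coeffs                      # per-pixel deviation from the fitted plane
    return residuals.std()                          # one standard deviation of the residuals
```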

Fig. 10 Depth uncertainty measurements made using the gated-mode InGaAs/InP SPAD detector (the filled marker points) and measurements made previously using a free-running SNSPD (unfilled marker points). The measurements were acquired by scanning a 900 × 1220 × 12.5 mm, flat plywood panel at stand-off distances of 325 meters, 910 meters, and 4500 meters. The panel was scanned at near normal incidence - one side of the plywood panel was covered in white retro-reflective material, and the other side was uncovered. The graph shows the depth residuals, expressed as one standard deviation from the mean, for scans made using various per-pixel acquisition times on the plywood surface (325 m and 910 m) and the retro-reflective surface (910 m and 4500 m). The residuals were calculated using approximately 3600 pixels for the scans at 325 m, and approximately 300 pixels for the scans at both 910 m and 4500 m.

It is difficult to make a direct comparison between the performance of the depth imaging system presented in this paper and the system using the free-running SNSPD we reported previously [25]. The SNSPD used in that work had an efficiency of 18% at a 1 kcps dark count rate, and the overall system jitter was ~100 ps. The depth images were acquired by illuminating the scene with an average optical output power of less than 250 μW, provided by a mode-locked fiber laser with a central wavelength of 1560 nm, a pulse repetition rate of 50 MHz, and a pulse width of <1 ps. As well as using different detectors, laser sources, and optical filters, the two sets of field trials were carried out in slightly different weather conditions. Nonetheless, the SNSPD system exhibits slightly better performance based on the data shown in Fig. 10. This is likely to be a result of the lower timing jitter of the SNSPD system (~100 ps FWHM) compared with that of the InGaAs/InP SPAD system (~140 ps), since in both cases the background counts were dominated by the solar background rather than by detector dark counts.

6. Conclusions

In conclusion, we have constructed a single-photon depth imaging system using a compactly packaged, thermo-electrically cooled, gated-mode InGaAs/InP SPAD detector. When operating at a wavelength of 1550 nm, the gated-mode InGaAs/InP SPAD offers good temporal response, high single-photon detection efficiency and a relatively low dark count rate. Use of this wavelength takes advantage of the reduced atmospheric attenuation and lower solar background when compared to operation at shorter wavelengths. Also, in terms of eye safety considerations, this wavelength band is outside the retinal hazard region and therefore permits the use of a higher average optical power illumination beam in comparison to our previous measurements made at 842 nm. Depth resolutions of less than one centimeter were achieved from non-cooperative targets at stand-off distances of up to one kilometer for per-pixel acquisition times in the millisecond regime. These measurements were all performed using sub-milliwatt average optical power levels and in bright daylight. The rapid data acquisition meant that depth measurements on moving targets could also be investigated at a distance of 325 m. This gated detector offered depth resolution comparable to that obtained using a similar optical system which utilized a superconducting nanowire single-photon detector [25] operating at a temperature of approximately 4 K by means of a relatively cumbersome refrigerator system. The results presented in this paper point the way towards the practical implementation of an eye-safe, 1550 nm wavelength depth imaging system with realistic potential for compact construction and low-power operation (optical and electrical), which is therefore compatible with a number of remote 3-D imaging applications, including those using mobile platforms.

Acknowledgments

The team acknowledges the support of the EU FP7 project “MiSPiA” (Grant Agreement Number 257646). The team from Heriot-Watt University is affiliated to the Scottish Universities Physics Alliance (SUPA) and thanks the UK Engineering and Physical Sciences Research Council for support (Platform Grant Award: EP/F048041/1), and Aurora Maccarone for her help with the field trials.

References and links

1. B. Schwarz, “LIDAR: Mapping the world in 3D,” Nat. Photonics 4(7), 429–430 (2010). [CrossRef]  

2. M. C. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. 40(1), 10–19 (2001). [CrossRef]  

3. C. Mallet and F. Bretar, “Full-waveform topographic lidar: State-of-the-art,” ISPRS J. Photogramm. Remote Sens. 64(1), 1–16 (2009). [CrossRef]  

4. W. Becker, Advanced Time-Correlated Single Photon Counting Techniques (Springer, 2005).

5. G. S. Buller and A. M. Wallace, “Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,” IEEE J. Sel. Top. Quantum Electron. 13(4), 1006–1015 (2007). [CrossRef]  

6. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13(1), 231–243 (2004). [CrossRef]  

7. C. Ho, K. L. Albright, A. W. Bird, J. Bradley, D. E. Casperson, M. Hindman, W. C. Priedhorsky, W. R. Scarlett, R. C. Smith, J. Theiler, and S. K. Wilson, “Demonstration of literal three-dimensional imaging,” Appl. Opt. 38(9), 1833–1840 (1999). [CrossRef]   [PubMed]  

8. J. J. Degnan, “Photon-counting multikilohertz microlaser altimeters for airborne and spaceborne topographic measurements,” J. Geodyn. 34(3-4), 503–549 (2002). [CrossRef]  

9. M. A. Albota, B. F. Aull, D. G. Fouche, R. M. Heinrichs, D. G. Kocher, R. M. Marino, J. G. Mooney, N. R. Newbury, M. E. O’Brien, B. E. Player, B. C. Willard, and J. J. Zayhowski, “Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays,” Lincoln Lab. J 13, 351–370 (2002).

10. B. F. Aull, A. H. Loomis, D. J. Young, R. M. Heinrichs, B. J. Felton, P. J. Daniels, and D. J. Landers, “Geiger mode avalanche photodiodes for three-dimensional imaging,” Lincoln Lab. J. 13, 335–350 (2002).

11. A. McCarthy, R. J. Collins, N. J. Krichel, V. Fernández, A. M. Wallace, and G. S. Buller, “Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting,” Appl. Opt. 48(32), 6241–6251 (2009). [CrossRef]   [PubMed]  

12. P. A. Hiskett, C. S. Parry, A. McCarthy, and G. S. Buller, “A photon-counting time-of-flight ranging technique developed for the avoidance of range ambiguity at gigahertz clock rates,” Opt. Express 16(18), 13685–13698 (2008). [CrossRef]   [PubMed]  

13. C. Niclass, A. Rochas, P. A. Besse, and E. Charbon, “Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes,” IEEE J. Solid-State Circuits 40(9), 1847–1854 (2005). [CrossRef]  

14. D. Stoppa, L. Pancheri, M. Scandiuzzo, L. Gonzo, G. F. Dalla Betta, and A. Simoni, “A CMOS 3-D imager based on single photon avalanche diode,” IEEE Trans. Circuits Syst. Regul. Pap. 54(1), 4–12 (2007). [CrossRef]  

15. C. Niclass, M. Soga, H. Matsubara, S. Kato, and M. Kagami, “A 100-m Range 10-Frame/s 340 × 96-Pixel Time-of-Flight Depth Sensor in 0.18-µm CMOS,” IEEE J. Solid-State Circuits 48(2), 559–572 (2013). [CrossRef]  

16. H. Willebrand and B. S. Ghuman, Free Space Optics: Enabling Optical Connectivity in Today's Networks (Sams, Indianapolis, 2002).

17. L. S. Rothman, D. Jacquemart, A. Barbe, D. C. Benner, M. Birk, L. R. Brown, M. R. Carleer, C. Chackerian, K. Chance, L. H. Coudert, V. Dana, V. M. Devi, J. M. Flaud, R. R. Gamache, A. Goldman, J. M. Hartmann, K. W. Jucks, A. G. Maki, J. Y. Mandin, S. T. Massie, J. Orphal, A. Perrin, C. P. Rinsland, M. A. H. Smith, J. Tennyson, R. N. Tolchenov, R. A. Toth, J. Vander Auwera, P. Varanasi, and G. Wagner, “The HITRAN 2004 molecular spectroscopic database,” J. Quant. Spectrosc. Radiat. Transf. 96(2), 139–204 (2005). [CrossRef]  

18. R. Henderson and K. Schulmeister, Laser Safety (Institute of Physics Publishing, 2004).

19. G. S. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21(1), 012002 (2010). [CrossRef]  

20. P. Yuan, R. Sudharsanan, X. G. Bai, P. McDonald, E. Labios, B. Morris, J. P. Nicholson, G. M. Stuart, H. Danny, S. Van Duyne, G. Pauls, S. Gaalema, M. D. Turner, and G. W. Kamerman, “Three-dimensional imaging with 1.06 µm Geiger-mode LADAR camera,” Laser Radar Technology and Applications XVII 8379, 837902 (2012). [CrossRef]  

21. M. Entwistle, M. A. Itzler, J. Chen, M. Owens, K. Patel, X. D. Jiang, K. Slomkowski, S. Rangwala, and J. C. Campbell, “Geiger-mode APD Camera System for Single Photon 3-D LADAR Imaging,” Advanced Photon Counting Techniques VI, 8375 (2012).

22. M. A. Diagne, M. Greszik, E. K. Duerr, J. J. Zayhowski, M. J. Manfra, R. J. Bailey, J. P. Donnelly, and G. W. Turner, “Integrated array of 2-μm antimonide-based single-photon counting devices,” Opt. Express 19(5), 4210–4216 (2011). [CrossRef]   [PubMed]  

23. M. G. Tanner, C. M. Natarajan, V. K. Pottapenjara, J. A. O'Connor, R. J. Warburton, R. H. Hadfield, B. Baek, S. Nam, S. N. Dorenbos, E. B. Urena, T. Zijlstra, T. M. Klapwijk, and V. Zwiller, “Enhanced telecom wavelength single-photon detection with NbTiN superconducting nanowires on oxidized silicon,” Appl. Phys. Lett. 96(22), 221109 (2010). [CrossRef]  

24. R. E. Warburton, A. McCarthy, A. M. Wallace, S. Hernandez-Marin, R. H. Hadfield, S. W. Nam, and G. S. Buller, “Subcentimeter depth resolution using a single-photon counting time-of-flight laser ranging system at 1550 nm wavelength,” Opt. Lett. 32(15), 2266–2268 (2007). [CrossRef]   [PubMed]  

25. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21(7), 8904–8915 (2013). [CrossRef]   [PubMed]  

26. C. M. Natarajan, M. G. Tanner, and R. H. Hadfield, “Superconducting nanowire single-photon detectors: physics and applications,” Supercond. Sci. Technol. 25(6), 063001 (2012). [CrossRef]  

27. A. Lacaita, F. Zappa, S. Cova, and P. Lovati, “Single-photon detection beyond 1 µm: Performance of commercially available InGaAs/InP detectors,” Appl. Opt. 35(16), 2986–2996 (1996). [CrossRef]   [PubMed]  

28. G. Ribordy, J. D. Gautier, H. Zbinden, and N. Gisin, “Performance of InGaAs/InP avalanche photodiodes as gated-mode photon counters,” Appl. Opt. 37(12), 2272–2277 (1998). [CrossRef]   [PubMed]  

29. J. G. Rarity, T. E. Wall, K. D. Ridley, P. C. M. Owens, and P. R. Tapster, “Single-photon counting for the 1300-1600-nm range by use of peltier-cooled and passively quenched InGaAs avalanche photodiodes,” Appl. Opt. 39(36), 6746–6753 (2000). [CrossRef]   [PubMed]  

30. P. A. Hiskett, G. S. Buller, A. Y. Loudon, J. M. Smith, I. Gontijo, A. C. Walker, P. D. Townsend, and M. J. Robertson, “Performance and design of InGaAs/InP photodiodes for single-photon counting at 1.55 µm,” Appl. Opt. 39(36), 6818–6829 (2000). [CrossRef]   [PubMed]  

31. S. Pellegrini, R. E. Warburton, L. J. J. Tan, J. S. Ng, A. B. Krysa, K. Groom, J. P. R. David, S. Cova, M. J. Robertson, and G. S. Buller, “Design and performance of an InGaAs-InP single-photon avalanche diode detector,” IEEE J. Quantum Electron. 42(4), 397–403 (2006). [CrossRef]  

32. M. A. Itzler, X. D. Jiang, M. Entwistle, K. Slomkowski, A. Tosi, F. Acerbi, F. Zappa, and S. Cova, “Advances in InGaAsP-based avalanche diode single photon detectors,” J. Mod. Opt. 58(3-4), 174–200 (2011). [CrossRef]  

33. A. Tosi, F. Acerbi, M. Anti, and F. Zappa, “InGaAs/InP Single-Photon Avalanche Diode With Reduced Afterpulsing and Sharp Timing Response With 30 ps Tail,” IEEE J. Quantum Electron. 48(9), 1227–1232 (2012). [CrossRef]  

34. A. Tosi, A. Della Frera, A. B. Shehata, and C. Scarcella, “Fully programmable single-photon detection module for InGaAs/InP single-photon avalanche diodes with clean and sub-nanosecond gating transitions,” Rev. Sci. Instrum. 83(1), 013104 (2012). [CrossRef]   [PubMed]  

35. A. Tosi, A. Della Frera, A. Bahgat Shehata, C. Scarcella, F. Acerbi, and F. Zappa, “InGaAs/InP single-photon counting module running up to 133 MHz,” Quantum Sensing and Nanophotonic Devices IX 8268, 82681S (2012). [CrossRef]  

36. A. Nayak, E. Trucco, A. Ahmad, and A. M. Wallace, “SimBIL: appearance-based simulation of burst-illumination laser sequences,” IET Image Proc. 2(3), 165–174 (2008). [CrossRef]  

37. N. Namekata, S. Adachi, and S. Inoue, “1.5 GHz single-photon detection at telecommunication wavelengths using sinusoidally gated InGaAs/InP avalanche photodiode,” Opt. Express 17(8), 6275–6282 (2009). [CrossRef]   [PubMed]  

38. M. Ren, X. R. Gu, Y. Liang, W. B. Kong, E. Wu, G. Wu, and H. P. Zeng, “Laser ranging at 1550 nm with 1-GHz sine-wave gated InGaAs/InP APD single-photon detector,” Opt. Express 19(14), 13497–13502 (2011). [CrossRef]   [PubMed]  

39. N. J. Krichel, A. McCarthy, I. Rech, M. Ghioni, A. Gulinatti, and G. S. Buller, “Cumulative data acquisition in comparative photon-counting three-dimensional imaging,” J. Mod. Opt. 58(3-4), 244–256 (2011). [CrossRef]  

40. A. N. Bashkatov, E. A. Genina, V. I. Kochubey, and V. V. Tuchin, “Optical properties of human skin, subcutaneous and mucous tissues in the wavelength range from 400 to 2000 nm,” J. Phys. D Appl. Phys. 38(15), 2543–2555 (2005). [CrossRef]  

Supplementary Material (1)

Media 1: AVI (4158 KB)     
