
Design tool for TOF and SL based 3D cameras

Open Access

Abstract

Active illumination 3D imaging systems based on Time-of-flight (TOF) and Structured Light (SL) projection are in rapid development, and are constantly finding new areas of application. In this paper, we present a theoretical design tool that allows prediction of 3D imaging precision. Theoretical expressions are developed for both TOF and SL imaging systems. The expressions contain only physically measurable parameters and no fitting parameters. We perform 3D measurements with both TOF and SL imaging systems, showing excellent agreement between theoretical and measured distance precision. The theoretical framework can be a powerful 3D imaging design tool, as it allows for prediction of 3D measurement precision already in the design phase.

© 2017 Optical Society of America

Corrections

27 October 2017: A typographical correction was made to the author listing.

1. Introduction

Active illumination 3D imaging is a set of techniques where distance images are generated through the use of a camera and an illumination source. We can divide this field into two groups of techniques, namely Structured Light (SL), where the distance information is recovered through triangulation, and Time of Flight (TOF) techniques, where the distance information is recovered through the measurement of travel time for a light pulse [1].

Over the last decade, both of these groups have seen an impressive evolution. From a technical point of view, Structured Light techniques have been pushed forward by rapid development in computer performance for distance calculations [2], development of digital projectors (notably Digital Micromirror Device projectors) and high frame-rate, high resolution digital cameras. Time of Flight techniques have been pushed forward through the increasing availability of pulsed illumination sources (notably LEDs with nanosecond pulse duration) and rapid shutter camera chips [3–5].

From an end-user point of view, the lowered system costs have allowed entry into application areas within e.g. entertainment, machine vision and size measurements within industrial production and infrastructure. Further cost reductions and performance improvements are to be expected, as technological development continues and new systems constantly emerge.

In the design of a 3D imaging system, key design metrics can be identified. Two relevant metrics are Measurement Range and Measurement Precision. Other parameters, like camera resolution and frame rate, will also affect the performance of the 3D measurement technique. Generally, high frame rate, high 3D image resolution and long measurement range will decrease measurement precision. The task at hand is to choose the system design parameters so that they comply with the design metrics. For a TOF system, for example, the design parameters include Illumination Intensity, Pulse Duration and Camera Time Response; for SL systems, corresponding parameters can be identified. In order to choose the system design parameters correctly, we need a model that links system design parameters with system performance.

Calculations of distance noise from measurement noise have been reported previously for pulsed TOF [3,6] and for phased TOF [6–8]. The expressions vary somewhat, but show the same dependencies on pulse duration and signal levels. Comparisons against experimental results have been shown for pulsed TOF [3] and for phased TOF [8]. For structured light, some work on noise calculations can be found, e.g. [9,10]. Reference [10] uses a constant noise value for the system, which assumes that the system is dark-noise limited; this work therefore does not take into account what happens when the system is shot-noise limited, which is more often the case. Reference [9] derives expressions for different types of patterns and discusses the results quantitatively in terms of preferable pattern characteristics; however, the system noise and noise characteristics are not treated in detail. Neither of the articles provides quantitative distance maps. In both cases [9,10], the system analysis is specific to each system, whereas the model developed in this paper can quantify the performance of a new system once the system parameters are known.

In this paper, we develop a theoretical framework for predicting the performance of TOF and SL 3D imaging, based on error propagation of shot noise (signal and background) and dark noise (readout noise) inherent in the measurements. We perform experiments and show that the theoretical framework predicts the system performance with great accuracy, the only inputs being physically measurable design parameters that can be known before the system is assembled. The expressions that we derive are thus a powerful tool in the design phase of a 3D imaging system.

As a design example, we create a combined TOF and SL 3D imaging system, for improved precision over a long range and increased 3D frame rate. We use TOF distance data in combination with SL wrapped phase in order to reconstruct distance from SL without the use of Gray-code images. Similar ideas can be found in patents [11,12], motivated either by increased 3D frame rate or improved precision over longer range, but no measurements have been provided.

The article is divided in three sections. In the first section, we develop the theoretical framework for calculating distance precision in TOF and SL imaging, based on experimental parameters. In the second section, the theory is validated by quantitative comparison with 3D measurements. In the third and last section, we discuss the results and establish the algorithm that allows us to circumvent the use of Gray-codes for reconstructing distance information in SL measurements.

2. Theoretical estimation of the relative performances of time-of-flight and structured-light

We present theoretical estimates of the distance uncertainty for TOF and SL techniques as a function of system parameters. For the derivation of the theoretical expressions, we assume that the systems are subject to pixel readout noise and shot noise from both background illumination and the projected signal. All other sources of error have been neglected.

Distance uncertainty for time-of-flight

In a TOF system, the distance D is calculated from the round-trip travel time 2D/c of an emitted light pulse. A fast camera shutter allows for short and well-defined camera exposures. The travel time is often calculated by shifting the camera shutter timing such that the camera exposure and the return of the light pulse coincide. The received signal is then obtained through a convolution between the returning laser pulse and the intensifier gate function [3]. The signal reaches a maximum when the two overlap. The simplest form of a TOF measurement consists of a laser pulse with duration τpulse = Lpulse/c, equal to the round-trip time of the furthest object to be detected, and two measurements of the returning signal. The camera gate is open for a duration equal to the length of the laser pulse. In the first measurement the pixel gate opens with the emission of the front end of the laser pulse, and in the second measurement the gate opens at the moment where it closed in the first measurement (see Fig. 1).

Fig. 1 Illustration of TOF measurement timing.

Distance is calculated as:

$D = \frac{s_1}{s_1 + s_2}\,\frac{L_{pulse}}{2}$ (1)

where s1 and s2 are the number of photons obtained in measurement 1 and 2, respectively. The intensities sn are affected by noise, and this will affect the precision by which the distance can be retrieved. In the absence of ambient light, accounting only for shot noise in the measurement, the distance uncertainty, given as the standard deviation of the distance estimate in this measurement, is found to be:

$\sigma_{TOF} = \sqrt{\left(\frac{\partial D}{\partial s_1}\right)^2 (\Delta s_1)^2 + \left(\frac{\partial D}{\partial s_2}\right)^2 (\Delta s_2)^2} = \frac{1}{4}\,\frac{L_{pulse}}{\sqrt{N_{ph}}}$ (2)

Here, Nph = s1 + s2. Because of the assumption of shot-noise limitation, Δsn = √sn. Note that all intensity values must be presented in units of photoelectrons (e-), not in units of AD counts. As such, the e-/ADC conversion factor, or equivalently the full-well capacity of the pixel, must be known. The result is in line with previously reported values [3,6], to within a constant. The expression can be generalized to:

$\sigma_{TOF} = \frac{1}{2\sqrt{2m}}\,\frac{c\,\tau_{response}}{\sqrt{N_{ph}}}$ (3)

where Lpulse has been replaced with cτresponse, to take into account the total response time of the system, which may or may not be limited by the laser pulse duration, and m is the number of samples within the response time. This expression shows that the distance uncertainty for TOF is proportional to the system response time. Nph can be replaced by the signal-to-noise ratio (SNR) when including the contributions from dark noise and ambient noise. In the absence of ambient light, where Nph ∝ 1/D², we have:

$\sigma_{TOF} \propto D$ (4)
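To make the design use of Eq. (3) concrete, here is a minimal numerical sketch (Python with numpy; the function and variable names are ours, not from the paper) that evaluates the shot-noise-limited TOF uncertainty from the design parameters:

```python
import numpy as np

def sigma_tof(n_ph, tau_response, m=2, c=3.0e8):
    """Shot-noise-limited TOF distance uncertainty, Eq. (3).

    n_ph         -- total detected signal N_ph in photoelectrons (e-)
    tau_response -- total system response time [s]
    m            -- number of samples within the response time
    """
    return c * tau_response / (2.0 * np.sqrt(2.0 * m) * np.sqrt(n_ph))

# Example: a 6 ns response, two samples and 1e4 photoelectrons
# give about 4.5 mm of distance noise.
print(sigma_tof(1.0e4, 6.0e-9, m=2))
```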

Distance uncertainty for structured-light imaging

In SL-based 3D imaging techniques, one seeks to calculate a distance map D(x,y,z) from triangulation, through the relation [13]:

$D = B\,\frac{\sin(\theta_p)}{\sin(\theta_p + \theta_c)}$ (5)

Here, B is the baseline between the camera and the projector, and θp and θc are the angles that the projected ray and the camera ray, respectively, make with the baseline (see Fig. 2). θp must be reconstructed from the projected phase φ and the projector parameters. If the projected phase varies between 0 and φmax, the relation becomes

$\theta_p = -\frac{FOV}{2} + FOV\,\frac{\varphi}{\varphi_{max}} + \frac{\pi}{2}$ (6)

where FOV is the field of view of the projector.
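As a small illustration of Eqs. (5) and (6), the following sketch maps an unwrapped projector phase to distance. It assumes the sign convention of Eq. (6) as reconstructed above; in practice, the per-pixel θc would come from camera calibration, and all names here are ours:

```python
import numpy as np

def phase_to_distance(phi, phi_max, fov, baseline, theta_c):
    """Triangulated distance from projector phase, Eqs. (5)-(6).

    phi      -- unwrapped projector phase [rad]
    phi_max  -- maximum projected phase over the pattern [rad]
    fov      -- projector field of view [rad]
    baseline -- camera-projector baseline B [m]
    theta_c  -- angle between camera ray and baseline [rad]
    """
    theta_p = -fov / 2.0 + fov * phi / phi_max + np.pi / 2.0       # Eq. (6)
    return baseline * np.sin(theta_p) / np.sin(theta_p + theta_c)  # Eq. (5)
```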

Fig. 2 Geometry for Structured Light depth calculation.

A common way to calculate the phase map information is to successively project four sinusoidal patterns, I1, I2, I3 and I4, spatially shifted by a fraction L/4 of the spatial period L, onto a three-dimensional structure, and retrieve a wrapped phase map using Eq. (7) [14]:

$\varphi_w(x,y) = \arctan\left(\frac{I_1(x,y) - I_3(x,y)}{I_2(x,y) - I_4(x,y)}\right)$ (7)

φw(x,y) is restricted to the interval [0, 2π]. Recovery of the true phase from the wrapped phase is described by Eq. (8):

$\varphi(x,y) = \varphi_w(x,y) + 2\pi N_{gc}(x,y)$ (8)

where Ngc(x,y) is the Gray-code number at position (x,y) [14,15]. The intensities Ii are again affected by noise, and this affects the precision with which the true phase, and hence the distance, can be retrieved. Using propagation of error, the standard deviation in phase, Δφ, is expressed as:

$\Delta\varphi = \sqrt{\sum_{n=1}^{4}\left(\frac{\partial\varphi}{\partial I_n}\right)^2 (\Delta I_n)^2} = \frac{1}{A^2}\sqrt{(I_2 - I_4)^2(I_1 + I_3) + (I_1 - I_3)^2(I_2 + I_4)} = \frac{1}{A}\sqrt{A + 2C}$ (9)

where A is the peak-to-peak amplitude of the sinusoidal modulation and C is the background light intensity recorded on one pixel. We observe that the uncertainty in phase is simply the inverse of the signal-to-noise ratio of the measurement. Inserting Eq. (6) into Eq. (5) and propagating the errors, the distance uncertainty σSL as a function of experimental parameters is obtained:

$\sigma_{SL} = \frac{D^2}{B}\,\frac{\sin(\theta_c)}{\sin(\theta_p)}\,\frac{FOV}{\varphi_{max}}\,\frac{\sqrt{A + 2C}}{A}$ (10)

We see that in the absence of ambient light, i.e. C = 0, where the signal strength scales as 1/D², we have

$\sigma_{SL} \propto D^3$ (11)

The model developed in this paper assumes a 4-step phase-shifting algorithm. For an Nsteps-step phase-shifting algorithm, it can be shown that the depth resolution scales as $1/\sqrt{N_{steps}}$; see for example [9] for a derivation.
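A compact sketch of the 4-step pipeline and of Eq. (10) may help here. Note that np.arctan2 is used as the usual full-quadrant implementation of the arctangent in Eq. (7), and all names are ours:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """4-step wrapped phase, Eq. (7), mapped to [0, 2*pi)."""
    return np.mod(np.arctan2(i1 - i3, i2 - i4), 2.0 * np.pi)

def sigma_sl(d, baseline, theta_p, theta_c, fov, phi_max, a, c_bg):
    """Shot-noise-limited SL distance uncertainty, Eq. (10).

    a    -- peak-to-peak signal amplitude [photoelectrons]
    c_bg -- background level C on the pixel [photoelectrons]
    """
    return (d**2 / baseline) * (np.sin(theta_c) / np.sin(theta_p)) \
        * (fov / phi_max) * np.sqrt(a + 2.0 * c_bg) / a
```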

Relative performance of structured-light and time-of-flight and combination of the two methods

Based on the equations above, we evaluate the precision of TOF and SL in a hypothetical measurement situation. We choose the parameters shown in Table 1 below.

Table 1. Parameters for theoretical TOF and SL comparison.

The distance uncertainties σSL and σTOF for the two methods are shown in Fig. 3. We can see the D and D³ trends for TOF and SL, respectively, and we see that with this set of parameters, SL is expected to be more accurate at distances closer than approximately 10 meters, while TOF will be more accurate at larger distances.
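This kind of crossover can be reproduced qualitatively with a few lines of code. The constants below are illustrative placeholders of our own choosing, not the Table 1 values, and both signal levels are simply assumed to fall off as 1/D²:

```python
import numpy as np

d = np.linspace(1.0, 20.0, 400)   # distance [m]
n_ph = 1.0e6 / d**2               # TOF signal [e-], assumed 1/D^2
a = 1.0e5 / d**2                  # SL amplitude [e-], assumed 1/D^2

# Eq. (3) with tau = 6 ns, m = 2; Eq. (10) with C = 0 and a
# near-perpendicular geometry, sin(theta_c)/sin(theta_p) ~ 1.
sig_tof = 3.0e8 * 6.0e-9 / (2.0 * np.sqrt(4.0) * np.sqrt(n_ph))
sig_sl = (d**2 / 0.2) * (np.deg2rad(30.0) / (32.0 * np.pi)) / np.sqrt(a)

crossover = d[np.argmin(np.abs(sig_tof - sig_sl))]
print(f"SL more precise below ~{crossover:.1f} m")
```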

Fig. 3 Theoretical distance uncertainty for TOF and SL systems.

3. Experiment

Setup description

In our experiments, we combine a TOF system and an SL system. The setup is shown in Fig. 4. Both systems make use of a time-gated intensified CCD camera (ICCD) for image acquisition (Andor iStar 334-18F-03). The camera parameters are listed in Table 2. The TOF image acquisition was performed with the camera triggered by a short-pulse Q-switched DPSS laser from BrightSolutions SRL. The intensifier of the camera can be triggered at a high rate while the CCD frame rate is kept low, allowing for multiple-pulse accumulation while keeping the readout noise at a low value. The beam emitted by the laser is expanded by a set of two plano-concave lenses and homogenized using a set of diffusers located at the output of the second lens. The parameters for the TOF system are shown in Table 2. For the SL imaging setup, a digital micromirror device (DMD) from Texas Instruments was used to generate Gray-code and phase-shift patterns. A Luxeon Rebel ES LED with an emission wavelength of 567 nm illuminated the DMD through projection optics.

Fig. 4 Combined set up for TOF and SL distance imaging. The camera is seen in the centre of the image, the SL projector to the left, the laser to the right bottom and the beam expansion optic to the right of the camera.

Table 2. Experimental parameters.

As the LED had much lower power than the Q-switched laser, the ICCD gating time was increased to allow for sufficient signal levels. The SL parameters are shown in Table 2. In the experiments, we use both TOF and SL to measure the distance to flat panels with 5 gray-tone levels of reflectivity, placed at 3, 5, 7, 9 and 12 meters from the TOF-SL system. The measurements were performed under the influence of laboratory ambient light.

Distance extraction and uncertainty estimates

For TOF measurements, many alternative pulse and exposure timing algorithms exist [16–18]. In this work, a set of images is recorded, with the relative delay between laser pulse emission and camera gate shifted between each image. The recorded signal reaches a peak when the camera gate and the time of pulse return coincide, as discussed in previous sections. The top of the peak is considered to be the position of the target. Our distance extraction algorithm treats each pixel individually and runs as follows:

  • - Filter the signal with a Gaussian filter of length 6 and standard deviation 5.
  • - Find the maximum signal sn.
  • - Perform a parabolic fit around the signal peak. The measured distance is then obtained by Eq. (12); a minimal code sketch follows this list:

    $d = d_0 + \frac{c\,\Delta t}{2}\left(n + \frac{s_{n-1} - s_{n+1}}{2\,(s_{n-1} - 2 s_n + s_{n+1})}\right)$ (12)

    where d is the measured distance, d0 is the distance corresponding to delay step 0, c is the speed of light, Δt is the step length, n is the step with the maximum signal and sm is the signal at step m.
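A minimal per-pixel sketch of this procedure follows (Python with numpy and scipy; note that scipy's gaussian_filter1d approximates the fixed-length filter mentioned above with a truncated Gaussian kernel, and the names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def tof_distance(signal, d0, dt, c=3.0e8):
    """Per-pixel TOF distance from a delay sweep, following Eq. (12).

    signal -- 1D array of intensities, one per delay step
    d0     -- distance corresponding to delay step 0 [m]
    dt     -- delay step length [s]
    """
    s = gaussian_filter1d(np.asarray(signal, dtype=float), sigma=5)
    n = int(np.argmax(s))            # step with the maximum signal
    n = min(max(n, 1), len(s) - 2)   # keep both neighbours in range
    # Parabolic (three-point) peak interpolation, Eq. (12)
    offset = (s[n - 1] - s[n + 1]) / (2.0 * (s[n - 1] - 2.0 * s[n] + s[n + 1]))
    return d0 + 0.5 * c * dt * (n + offset)
```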

For SL measurements, we use the parameters listed in Table 2. For each individual pixel, distance is calculated by:

  • - Extraction of phase φw according to Eq. (7).
  • - Extraction of the Gray-code number Ngc according to the procedure outlined in [14].
  • - Extraction of projected angle θp according to Eq. (6).
  • - Calculation of distance according to Eq. (5).

Our structured light algorithm requires 11 exposures: one dark, one bright, 5 Gray-codes and 4 phase images. For uncertainty estimates, we select small regions in the image, where the signal intensity is uniform and there is little physical variation in distance. We subtract a planar surface from this image region, and take the standard deviation of the measured distance. This standard deviation is compared with the expected value from Eqs. (10) and (3).
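The plane-subtraction step used for these uncertainty estimates can be sketched as a least-squares fit over the selected region (a minimal sketch; function and variable names are ours):

```python
import numpy as np

def residual_std(patch):
    """Standard deviation of distance after removing a best-fit plane.

    patch -- 2D array of measured distances over a small, flat region
    """
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Least-squares plane a*x + b*y + c over the patch
    design = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(design, patch.ravel(), rcond=None)
    return (patch.ravel() - design @ coeffs).std()
```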

4. Results and discussion

Time-of-flight experiments

Examples of intensity and distance images obtained from TOF distance measurements are shown in Fig. 5 left and right respectively. The scene in the intensity image consists of multiple flat panels placed at different distances in the field of view.

Fig. 5 Intensity image (left) and distance image (right) from TOF measurements. Multiple flat panels at various distances from the camera are shown. The colour bar indicates distance in metres.

Extracting the distance uncertainty as outlined above, we get the results shown in Fig. 6 (colored crosses), which we compare with the theoretical expression in Eq. (3) using the system parameters from Table 2 and intensity values from the measurements. We see that over two orders of magnitude in signal intensity, the measured distance uncertainty follows the theoretical distance uncertainty given by Eq. (3), using τresponse = 6 ns and Nsamp = m = 6.

Fig. 6 Experimental uncertainty in distance estimate as function of signal intensity (crosses), as well as the theoretical expression for TOF measurements based on shot noise only (solid line), and total SNR, including readout noise (dashed line).

We see that the distance noise is completely dominated by shot noise. At low signals, we see a slight increase over the shot noise limit, which can be explained by the readout noise of around 9 photoelectrons. Note that a distance uncertainty of 2 mm, which is reached at the highest signal levels, corresponds to an ability to position the signal peak to within 13 ps, compared to the peak width of 6 ns and sample spacing of 1 ns.
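As a quick consistency check of the timing claim, converting the 2 mm distance uncertainty back into a round-trip timing uncertainty gives

$\sigma_t = \frac{2\,\sigma_D}{c} = \frac{2 \times 2\,\mathrm{mm}}{3 \times 10^{8}\,\mathrm{m/s}} \approx 13\,\mathrm{ps}$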

Structured-light experiments

From SL measurements, we extract data similar to those of the TOF measurements, shown in Fig. 7. In the distance image, we observe vertical lines corresponding to erroneous distance reconstruction. This is a result of incorrect Gray-code interpretation at the transition between two Gray-codes.

Fig. 7 Intensity image (left) and distance image (right) from SL measurements. Multiple targets shown. The colour bar indicates distance in metres.

As the LED used for illumination in the experiment was very weak, the background signal was significant compared to the signal. From Eq. (10), we see that the theoretical distance uncertainty depends on the position in the image, the distance from the camera, the signal strength and the background signal level. Instead of isolating all of these dependencies, we have plotted the theoretical distance uncertainty from Eq. (10), using the system parameters from Table 2, against the experimental distance uncertainty. The results are shown in Fig. 8. Again, the experiments follow the theory over almost three orders of magnitude in predicted distance noise. The experimental noise increases slightly more slowly than the predicted distance noise. This could be a result of uncertainties in the values used in the noise calculation, such as the baseline, fields of view, etc.

Fig. 8 Experimental uncertainty in distance estimate vs theoretical distance noise (crosses). The line theory = experiment is shown in black.

The two sections above show that our experiments follow theory very closely. We have shown that we are able to predict the distance resolving power of a 3D imaging system, whether it is a TOF system or an SL system. Assuming that the only error contributions are shot noise (signal and background) and readout noise, we can explain all of the observed distance uncertainties.

Combining time-of-flight and structured-light

As argued in previous sections, the combination of TOF distance data and SL wrapped phase data can be advantageous in certain applications. The algorithm we propose for SL distance reconstruction consists of the following steps:

  • - Extraction of phase φw from SL phase images, according to Eq. (7).
  • - Calculation of a Gray-code per pixel based on distance data from TOF (see explanation below).
  • - Extraction of the projected angle θp according to Eq. (6).
  • - Calculation of distance according to Eq. (5).

The Gray-code determination for a given pixel is performed as follows: using Eq. (8) and the wrapped phase previously extracted from SL, we construct an unwrapped phase value for every possible value of the Gray-code, i.e. for Ngc(x,y) = 1, 2, ..., Ngcmax. Each artificial unwrapped phase is converted to a distance, i.e. one distance per candidate Gray-code value. We then minimize the absolute value of the difference between the artificial distance and the TOF data. The result of the minimization gives the correct Gray-code for the (x,y) position under consideration; a minimal code sketch follows below.
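In the sketch below, the phase_to_distance callable is assumed to implement Eqs. (5)-(6) for the calibrated geometry, and all names are ours:

```python
import numpy as np

def graycode_from_tof(phi_w, d_tof, n_gc_max, phase_to_distance):
    """Per pixel, choose the Gray-code whose implied distance best
    matches the TOF measurement (least absolute difference).

    phi_w             -- wrapped phase map [rad]
    d_tof             -- TOF distance map, same shape [m]
    n_gc_max          -- largest candidate Gray-code number
    phase_to_distance -- maps an unwrapped phase map to a distance map
    """
    best_n = np.zeros(d_tof.shape, dtype=int)
    best_err = np.full(d_tof.shape, np.inf)
    for n in range(1, n_gc_max + 1):
        d_cand = phase_to_distance(phi_w + 2.0 * np.pi * n)  # Eq. (8)
        err = np.abs(d_cand - d_tof)
        better = err < best_err
        best_n[better] = n
        best_err[better] = err[better]
    return best_n
```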

In Fig. 9, we have performed these steps. The top left image is the TOF distance map. A standard deviation of 1 cm is indicated for one of the target plates. The top right image shows the same distance map, but with added synthetic distance noise based on the signal intensity in each pixel; this illustrates that even a noisy TOF distance map can suffice. The bottom left image represents the distance obtained from SL phase images plus Gray-code.

Fig. 9 Top left: Distance map from TOF. Bottom left: Distance map from SL phase and Gray-code. Top right: Distance map from TOF with added synthetic distance noise (used for Gray-code extraction in bottom right image). Bottom right: Combination image. Distance from SL phase and Gray-code from noisy TOF distance image.

The vertical lines corresponding to erroneous Gray-code extraction can be seen, as well as a false Gray-coding in the left part of the image. The bottom right image is the combination image, where the phase from SL has been combined with the Gray-code derived from the noisy TOF distance map. Using the noisy TOF distance map instead of Gray-codes, we successfully reconstruct a distance map with the same distance precision as the complete SL algorithm: the measured distances are identical and so is the distance precision (both algorithms show a standard deviation of 0.68 cm). Note that by using TOF data for Gray-code extraction, the vertical stripes in the 3D data are eliminated.

5. Discussion

We have developed a framework for predicting the theoretical distance precision of TOF and SL systems, assuming that the noise is dominated by shot noise from signal and background light and by readout noise. We have thus disregarded other potential sources of error, such as digitization error, projector instabilities, etc. The good correspondence between theory and experiments confirms that shot noise, and to a lesser extent readout noise, are the dominant sources of error in our systems. High-accuracy, high-precision SL systems require careful calibration of the camera and projector geometries as well as careful characterization of the projected pattern. High-accuracy TOF measurements require accurate knowledge of the timing between light pulse emission and camera exposure. We have not considered these types of errors in this work, but again, our results show that they are not dominant. Note that factors like the reflectivity of the scene, the pixel size (and thus camera resolution) and the integration time will influence the signal at the sensor for both SL and TOF, and thus affect the performance of a 3D imaging system. While camera resolution and integration time can be controlled by the experimenter, reflectivity in a real scene cannot. The influence of these parameters on the performance of a 3D imaging system is nonetheless captured by the theoretical expressions presented in section 2, either explicitly or implicitly.

By fusing TOF and SL, one can obtain optimal distance resolution over a long range. If the same camera, projection optics and light source are used for both TOF and SL, the coordinate systems of the two sub-systems will be identical (i.e. no registration is required), which is a definite advantage [19]. If short pulses are used for projecting the light patterns, in combination with gated acquisition, added robustness against background illumination can be obtained.

A short 3D frame acquisition time will minimize artifacts from motion in the scene. TOF measurements can give 3D data from two exposures (or one, if the image chip allows for two different timings on the chip at once). Our structured light algorithm requires 11 exposures. In dynamic scenes, or scenes with varying background illumination, this can be a significant shortcoming. In our combined TOF and SL system, the number of frames needed is reduced to 6: 2 frames for acquiring distance using TOF and 4 frames for acquiring the wrapped phase. This is a significant reduction in the number of frames needed to perform SL 3D reconstruction, and thus in the total acquisition time.

Examples of scenarios of interest for a combined TOF and SL system include robotic applications in dynamic environments with varying background illumination. Long-range distance information, e.g. for navigation, will be provided by TOF. When high distance precision is necessary, for example when approaching and picking objects of interest, Structured Light will provide high-precision data.

6. Conclusion

In this paper, we have presented a theoretical framework for predicting the performance of 3D imaging systems. Theory for both Time-of-Flight (TOF) and Structured Light (SL) systems has been developed. The equations allow the prediction of distance noise as a function of system design parameters. Through experiments, we verify the validity of the equations over a wide range of signal intensities and distances, showing that shot noise and readout noise dominate the distance noise. The theoretical framework is thus a powerful tool, allowing for more efficient 3D imaging system design, as the precision of the system can be predicted already in the design process.

As an example of an innovative 3D imaging system design, we have built and demonstrated a 3D system that combines TOF and SL measurements. The motivation for this fusion is twofold:

  • - Increased precision over a large distance range, through the use of TOF measurements at long range (precision ∝ D) and SL measurements at short range (precision ∝ D³)
  • - Reduced number of exposures required for 3D measurement, through the use of TOF 3D data instead of Gray-code images for phase unwrapping in SL measurements.

We have demonstrated that we can use coarse TOF distance data instead of Gray-codes to convert the SL wrapped phase to absolute phase and thus distance. By doing so, the total number of images necessary to recover distance information is reduced. Distance artefacts that can appear at the edge of two Gray-codes when performing SL are also eliminated. The simplest practical implementation of a combined TOF-SL system could be achieved by using a TOF sensor synchronized with a short-pulsed light source projected through a spatial light modulator device.

References and links

1. L. Pérez, Í. Rodríguez, N. Rodríguez, R. Usamentiaga, and D. F. García, “Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review,” Sensors (Basel) 16(3), E335 (2016). [PubMed]  

2. Ø. Skotheim, H. Schumann-Olsen, J. Thorstensen, A. N. Kim, M. Lacolle, K. Haugholt, and T. Bakke, “A Real-Time 3D Range Image Sensor based on a novel Tip-Tilt-Piston Micro-mirror and Dual Frequency Phase Shifting,” in Proc. SPIE 9393, Three-Dimensional Image Processing, Measurement (3DIPM), and Applications (2015), p. 93930A.

3. S. J. Kim, S. W. Han, B. Kang, K. Lee, J. D. K. Kim, and C. Y. Kim, “A three-dimensional time-of-flight CMOS image sensor with pinned-photodiode pixel structure,” IEEE Electron Device Lett. 31, 1272–1274 (2010).

4. D. Monnin, A. L. Schneider, F. Christnacher, and Y. Lutz, “A 3D Outdoor Scene Scanner Based on a Night-Vision Range-Gated Active Imaging System,” in 3D Data Processing, Visualization, and Transmission, Third International Symposium on (2006), pp. 938–945.

5. M. Beer, B. J. Hosticka, and R. Kokozinski, “SPAD-based 3D sensors for high ambient illumination,” in 2016 12th Conference on Ph.D. Research in Microelectronics and Electronics, PRIME 2016 (2016).

6. J. Illade-Quinteiro, V. M. Brea, P. López, D. Cabello, and G. Doménech-Asensi, “Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise,” Sensors (Basel) 15(3), 4624–4642 (2015). [PubMed]  

7. F. Mufti and R. Mahony, “Statistical analysis of measurement processes for time-of-flight cameras,” in Proceedings of SPIE-The International Society for Optical Engineering (2009), p. 74470I.

8. B. Büttgen, M. H. El Mechat, F. Lustenberger, and P. Seitz, “Pseudonoise Optical Modulation for Real-Time 3-D Imaging With Minimum Interference,” IEEE Trans. Circuits Syst. I Regul. Pap. 54, 2109–2119 (2007).

9. S. Savarese and P. Perona, “3D depth recovery with grayscale structured lighting,” Tech. Rep., Computational Vision, California Institute of Technology (1998).

10. Y. Wang, K. Liu, Q. Hao, D. L. Lau, and L. G. Hassebrook, “Period coded phase shifting strategy for real-time 3-D structured light illumination,” IEEE Trans. Image Process. 20(11), 3001–3013 (2011). [PubMed]  

11. S. J. Koppal and V. V. Appia, “Time-of-flight (TOF) assisted structured light imaging,” U.S. patent US 2015/0062558 A1 (2015).

12. D. M. Bloom and M. Leone, “Structured light and time of flight depth capture with a MEMS ribbon linear array spatial light modulator,” U.S. patent US 8,970,827 B2 (2015).

13. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3, 128–160 (2011).

14. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [PubMed]  

15. Ø. Skotheim and F. Couweleers, “Structured light projection for accurate 3D shape determination,” in Proc. 12th Int. Conf. Exp. Mech (2004).

16. F. Christnacher, S. Schertzer, N. Metzger, E. Bacher, M. Laurenzis, and R. Habermacher, “Influence of gating and of the gate shape on the penetration capacity of range-gated active imaging in scattering environments,” Opt. Express 23(26), 32897–32908 (2015). [PubMed]  

17. W. Xinwei, L. Youfu, and Z. Yan, “Multi-pulse time delay integration method for flexible 3D super-resolution range-gated imaging,” Opt. Express 23(6), 7820–7831 (2015). [PubMed]  

18. M. Laurenzis and E. Bacher, “Image coding for three-dimensional range-gated imaging,” Appl. Opt. 50(21), 3824–3828 (2011). [PubMed]  

19. C. Pfitzner, W. Antal, P. Hess, S. May, C. Merkl, P. Koch, R. Koch, and M. Wagner, “3D Multi-Sensor Data Fusion for Object Localization in Industrial Applications,” in Proceedings of ISR/Robotik 2014, 41st International Symposium on Robotics (2014), pp. 108–113.
