Model based scattering correction in time-of-flight cameras

Open Access

Abstract

In-camera light scattering is a systematic error of Time-of-Flight depth cameras that significantly reduces the accuracy of the systems. A completely new model is presented, based on raw data calibration and only one additional intrinsic camera parameter. It is shown that the approach effectively removes the errors of in-camera light scattering.

© 2014 Optical Society of America

1. Introduction

With the release of the Microsoft Kinect v2 in 2013, Time-of-Flight (ToF) depth imaging has finally made the leap into mass production and consumer markets. Yet most available systems still suffer from only partially understood systematic errors. These prevent them from reaching their full theoretical potential regarding accuracy and reproducibility of depth measurements.

While certain scene properties, such as reflective or strongly absorbing surfaces, impose obvious limitations that require smart post-processing of the data, systematic errors should be dealt with as early in the pipeline as possible to simplify the process.

In this paper, the effects of in-camera light scattering are investigated. Previous work on scattering was limited to the processed data most camera systems provide, leading to very complex models. By working directly on the raw data of the camera, we can use the simplest possible physical scattering model with just one single scene independent camera parameter, which proves to be a good approximation of the process.

The model requires a calibration of the raw data, which is also introduced here. With this approach we are able to reduce the depth error due to scattering by 90%, compared to 70% in the most recent related work (cf. [1]). Even though we only present work based on the PMD CamCube 3, similar effects are apparent in other cameras (cf. Sect. 5) and can probably be generalized to most ToF systems.

Preliminaries and notation

In the following we briefly recall some relevant basics about the acquisition of depth maps by ToF cameras. Details can be found, e.g., in [2–4].

To determine the depth of a scene, a ToF camera illuminates the scene with modulated IR light and records the reflected illumination at n different internal phase shifts. We refer to these recordings as sub-frames, denoted by I_i(x, y), i = 1, ..., n. We assume n = 4, which is the standard for most current ToF cameras. Furthermore, the camera has two taps, A and B. Each tap measures all four of the I_i, but tap B measures them in a different order: 3, 4, 1, 2. The two corresponding measurements are then combined or averaged for further processing. From the sub-frames I_i the amplitude a and phase φ of the reflected signal can be retrieved as

\[ a(x,y) = \tfrac{1}{2}\sqrt{\bigl(I_4(x,y)-I_2(x,y)\bigr)^2 + \bigl(I_1(x,y)-I_3(x,y)\bigr)^2}, \tag{1} \]
\[ \varphi(x,y) = \arctan\!\left(\frac{I_4(x,y)-I_2(x,y)}{I_1(x,y)-I_3(x,y)}\right), \tag{2} \]
where the phase φ(x, y) is directly proportional to the depth of the scene, i.e. the radial distance from the object at (x, y) to the camera position. To simplify the notation, we will omit the dependence of I_i and φ on the coordinates (x, y) in the following.
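
The amplitude and phase computation is straightforward to implement. The following minimal sketch (our own illustration, not part of the paper; function name and the use of NumPy are assumptions) evaluates Eqs. (1) and (2) per pixel:

```python
import numpy as np

def amplitude_phase(I1, I2, I3, I4):
    """Per-pixel amplitude and phase from the four sub-frames (2D arrays)."""
    d42 = I4 - I2
    d13 = I1 - I3
    a = 0.5 * np.sqrt(d42 ** 2 + d13 ** 2)  # Eq. (1)
    phi = np.arctan2(d42, d13)              # Eq. (2); arctan2 resolves the quadrant
    return a, phi
```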

In the standard error model it is assumed that each I_i is affected independently by additive zero-mean Gaussian noise of variance σ². It can be shown (cf. [5, 6]) that the resulting noise in φ is Gaussian with mean zero and variance σ_φ² = σ²/(2a²), depending on the amplitude a of the recorded signal. In particular, we observe from this model that any individual distortion of the I_i affects darker regions far more strongly than brighter regions.
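
This noise propagation can be checked numerically. The sketch below (our own, with arbitrarily chosen sub-frame values) adds independent Gaussian noise to the four sub-frames of a single pixel and compares the empirical phase standard deviation with the predicted value σ/(√2·a):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, trials = 2.0, 200_000
I_clean = np.array([120.0, 80.0, 60.0, 140.0])   # hypothetical I1..I4 of one pixel

I = I_clean[None, :] + rng.normal(0.0, sigma, size=(trials, 4))
phi = np.arctan2(I[:, 3] - I[:, 1], I[:, 0] - I[:, 2])

a = 0.5 * np.sqrt((I_clean[3] - I_clean[1]) ** 2 + (I_clean[0] - I_clean[2]) ** 2)
print(phi.std(), sigma / (np.sqrt(2) * a))        # empirical vs. predicted sigma_phi
```

For this amplitude the two values agree closely; shrinking a increases both, which is exactly the "darker regions are affected more strongly" behavior noted above.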

2. Intensity calibration

There exist several papers on distance calibration of ToF cameras which have taken intensity and integration time into account (e.g. [7–9]). In other publications (cf. [10]), the influence of the intensity on the distance error was doubted, because a physical explanation for it had not been proposed previously. However, in the next section, we identify the scattering effect as one intensity-dependent error source.

Before dealing with this effect, we introduce a new relative calibration for ToF cameras which is necessary for the subsequent signal decomposition. Our calibration approach is based on camera parameters and does not require an extensive lookup table, contrary to other approaches. All of these parameters can be measured from the dark signal of the camera. An explanation for the dependency of the depth measurement on temperature and integration time will also be proposed.

Close inspection of the raw data reveals a dark signal I_dark^i that can be decomposed into two parts: an integration-time-independent offset I_off^i and an integration-time-dependent signal I_dc^i caused by the dark current. I_dc^i follows a power function (cf. Fig. 1 left) with an exponent γ:

\[ I_\mathrm{dark}^i = I_\mathrm{off}^i + \bigl(I_\mathrm{dc}^i\bigr)^\gamma = I_\mathrm{off}^i + \bigl(i_\mathrm{dc}^i\, t_\mathrm{int}\bigr)^\gamma. \tag{3} \]
Furthermore, these parameters vary for each pixel and tap (e.g., γ̄ = 1.32 ± 0.08, average value and standard deviation over the whole sensor for tap A). The offset I_off^i is very sensitive to temperature changes. The dark current and the exponent γ are rather constant pixel parameters, with I_dc^i increasing linearly with the integration time t_int. This is very important for the calibration process, because the offset can be measured quickly, while measuring the exponent γ is much more complex.
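
A possible implementation of this per-pixel dark-signal calibration (a sketch under our own assumptions about data layout, not code from the paper) estimates I_off^i from a dark frame at a very short integration time and then fits γ and i_dc^i by a per-pixel line fit in log-log space, since Eq. (3) gives log(I_dark^i − I_off^i) = γ·log(i_dc^i) + γ·log(t_int):

```python
import numpy as np

def fit_dark_model(t_int, dark_frames, I_off):
    """Fit gamma and i_dc of Eq. (3) for every pixel.

    t_int:       1D array of integration times
    dark_frames: (len(t_int), H, W) mean dark signals at those integration times
    I_off:       (H, W) offset, e.g. a dark frame at a very short integration time
    """
    y = np.log(np.clip(dark_frames - I_off[None], 1e-6, None))   # (T, H, W)
    x = np.log(t_int)[:, None, None]
    x_mean, y_mean = x.mean(axis=0), y.mean(axis=0)
    gamma = ((x - x_mean) * (y - y_mean)).sum(axis=0) / ((x - x_mean) ** 2).sum(axis=0)
    i_dc = np.exp((y_mean - gamma * x_mean) / gamma)              # intercept = gamma*log(i_dc)
    return gamma, i_dc
```

Since only the offset is strongly temperature sensitive, only the quick offset measurement has to be repeated regularly, while γ and i_dc^i can be fitted once.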


Fig. 1 Left: Mean dark signal of a single pixel, averaged over 100 frames at two different camera temperatures. The fitted function is according to Eq. (3). The different temperatures are set by warming up the camera at a constant integration time but different frame rates. For recording the data points at varying integration times but constant camera temperatures, the integration time increase is compensated by an appropriately reduced frame rate. Center/Right: Mean dark signal sub-frame difference and error of a single pixel, compared to the first sub-frame, averaged over 100 frames for different frame rates (10 and 20fps) and different integration times (200 and 2000 μs). Camera temperature is in steady state at every measurement. The time on the x-axis gives the time since the last frame was recorded. The difference increases with lower frame rates and lower integration times.


The actually measured intensity signal I^i in each pixel and sub-frame is the sum of the dark signal and an additional, likewise integration-time-dependent light current signal I_lc^i:

\[ I^i = I_\mathrm{off}^i + \bigl(I_\mathrm{dc}^i + I_\mathrm{lc}^i\bigr)^\gamma = I_\mathrm{off}^i + \bigl(i_\mathrm{dc}^i\, t_\mathrm{int} + i_\mathrm{lc}^i\, t_\mathrm{int}\bigr)^\gamma. \tag{4} \]
The light current signal I_lc^i depends not only on the incident light and on the integration time, but also on the sensor modulation.
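
With these calibration parameters, Eq. (4) can be inverted to linearize a raw sub-frame, i.e. to recover the (still scattered) light current signal. A minimal sketch, assuming per-pixel calibration maps as produced above:

```python
import numpy as np

def linearize(I_raw, I_off, i_dc, gamma, t_int):
    """Invert Eq. (4): I_lc = (I_raw - I_off)**(1/gamma) - i_dc * t_int.

    I_raw, I_off, i_dc, gamma are per-pixel (H, W) maps for one sub-frame."""
    lin = np.clip(I_raw - I_off, 0.0, None) ** (1.0 / gamma)
    return lin - i_dc * t_int
```

This linearized signal is the input to the scattering model of the next section.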

A close examination of the dark signal of the four individual sub-frames reveals an increase in intensity (cf. Fig. 1 center/right). This is probably a very short-term temperature effect due to the difference in heating power during active and inactive time periods. Consequently, this sub-frame offset difference is reduced if the active periods of the sensor are prolonged (in relation to the passive periods) by increasing the integration time or the frame rate. This effect can also explain a dependency of the depth measurements on integration time and on temperature. This short-term effect strongly depends on the frame rate. Reproducible results can only be obtained with a constant and controllable frame rate, requiring a good timing control of the camera during acquisition. For the measurements presented here, the camera acquisition was software triggered in fixed time intervals. This setting was accurate enough for our purposes.

3. Internal scattering

Previous papers on internal scattering employ models based on a point spread function (cf. [1, 11]), empirical local scattering functions (cf. [12]), reference data (cf. [13]) or heuristic functions (cf. [14, 15]). All of these methods use the amplitude and phase data for the processing. While the point spread functions in [1, 11] have a physical justification, the omission of illumination changes in a scene with foreground and background objects and the restriction to the nonlinearly processed data are weaknesses of the approach. The models used in [14, 15] lack any physical motivation and are limited to very specific scene configurations of two parallel planes, perpendicular to the optical axis.

The assumption in our approach is that a small fraction of the light entering the camera lens is scattered diffusely and then spread over the whole sensor (cf. Fig. 2). This is the simplest possible physical model of scattering, as it is fully parameterized by the ratio of diffusely to directly transferred light. The scattering parameter is an intrinsic property of the optical system and completely scene independent. The actual scattering properties of the system might vary, but the model is a good first-order approximation, as demonstrated by the results. In order to be able to perform a simple correction, we need to apply our method to the raw data of the ToF camera.


Fig. 2 Scattering effect. Incident light is scattered diffusely and spread over the whole sensor area.


To model the scattering, we decompose the light current signal of each pixel and sub-frame, I_lc^i from Eq. (4), into the unscattered incident light I_li^i and a scattered part. The latter depends on the camera scattering parameter s and the total incident light Ī_li^i, averaged over the whole sub-frame:

\[ I_\mathrm{lc}^i := I_\mathrm{li}^i + s\, \bar I_\mathrm{li}^i. \tag{5} \]
This is an approximation, since light from outside the field of view can also enter the lens and cause scattering. The scattering parameter s is assumed to vary only slightly over the image domain (cf. Fig. 3 right). As a consequence, we approximate it by a constant value. We will see below that this single, global, camera-specific parameter suffices to effectively remove internal scattering. Inserting Eq. (5) into Eq. (4) we obtain
\[ I^i = I_\mathrm{off}^i + \bigl(I_\mathrm{dc}^i + I_\mathrm{li}^i + s\, \bar I_\mathrm{li}^i\bigr)^\gamma, \tag{6} \]
which can be reformulated to yield only the unscattered linear light signal:
\[ I_\mathrm{li}^i = \bigl(I^i - I_\mathrm{off}^i\bigr)^{1/\gamma} - I_\mathrm{dc}^i - s\, \bar I_\mathrm{li}^i. \tag{7} \]
Determining I_off^i, γ and I_dc^i is part of the calibration (cf. Sect. 2). What is left to determine is s and the average unscattered light Ī_li^i. The latter can easily be extracted from Eq. (7) by averaging it over the whole sub-frame:
\[ \bar I_\mathrm{li}^i = \frac{1}{1+s}\, \bar I_\mathrm{lc}^i = \frac{1}{1+s}\left( \overline{\bigl(I^i - I_\mathrm{off}^i\bigr)^{1/\gamma}} - \bar I_\mathrm{dc}^i \right), \tag{8} \]
where x̄ denotes the average of x over the whole image domain. Please note the impact of Eq. (6) on the depth measurement by the ToF camera: the systematic errors I_off^i, I_dc^i, s·Ī_li^i and γ affect the sub-frames I^i independently. When inserting I^i into Eq. (2), these errors can cause a significant distortion of the depth due to the nonlinear dependency of φ on I^i.
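
In code, the correction of one sub-frame then amounts to subtracting a single global offset from the linearized signal. A minimal sketch combining Eqs. (7) and (8) (our own formulation, equivalent to Eq. (14) derived at the end of this section):

```python
import numpy as np

def remove_scattering(I_lc, s):
    """Remove in-camera scattering from one linearized sub-frame I_lc (2D array).

    Eq. (8): mean unscattered light  I_li_bar = mean(I_lc) / (1 + s)
    Eq. (7): unscattered signal      I_li     = I_lc - s * I_li_bar
    """
    I_li_bar = I_lc.mean() / (1.0 + s)
    return I_lc - s * I_li_bar
```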


Fig. 3 Raw data of the scene to measure the scattering parameter s. The scattering surface is positioned at an angle to avoid direct reflections of the light sources. In the left part (measurement area), the unscattered incident light I_li^i is considered equal (cf. Eq. (9)). The two crops on the right show the difference of the two raw intensities in the measurement area on the left and the scattering parameter s for each pixel. The scattering parameter shows a slight scene dependency but is mostly dominated by noise. (Brightness and contrast adjusted.)


To measure the scattering parameter s, we propose the following approach. A scene is recorded twice with different reflectivity only in a specific area of the image (the scattering area), while the reflectivity in the other part (the measurement area) stays unchanged (cf. Fig. 3). It is important to only change the reflectivity and not the scene setup in general, as this would cause a difference in the overall illumination situation. The result is a difference in the mean unscattered light signal Ī_li^i of the whole frame, while I_li^i remains the same for the pixels in the measurement area. This area can be used to calculate s with Eqs. (5) and (8). Equation (5) results in:

\[ I_\mathrm{li,1}^i \overset{!}{=} I_\mathrm{li,2}^i \quad \text{(in measurement area)} \tag{9} \]
\[ I_\mathrm{lc,1}^i - s\, \bar I_\mathrm{li,1}^i = I_\mathrm{lc,2}^i - s\, \bar I_\mathrm{li,2}^i \tag{10} \]
\[ s = \frac{I_\mathrm{lc,1}^i - I_\mathrm{lc,2}^i}{\bar I_\mathrm{li,1}^i - \bar I_\mathrm{li,2}^i}. \tag{11} \]
This equation is valid for every pixel in the measurement area and, because we assume a constant scattering parameter s, it can be averaged. The average values over the measurement area are denoted as Ĩ^i to differentiate them from the average values Ī^i over the whole sub-frame. Employing Eq. (8) results in:
\[ s = \frac{\tilde I_\mathrm{lc,1}^i - \tilde I_\mathrm{lc,2}^i}{\frac{1}{1+s}\bigl(\bar I_\mathrm{lc,1}^i - \bar I_\mathrm{lc,2}^i\bigr)} \tag{12} \]
\[ s = \frac{\tilde I_\mathrm{lc,1}^i - \tilde I_\mathrm{lc,2}^i}{\bigl(\bar I_\mathrm{lc,1}^i - \bar I_\mathrm{lc,2}^i\bigr) - \bigl(\tilde I_\mathrm{lc,1}^i - \tilde I_\mathrm{lc,2}^i\bigr)}. \tag{13} \]
On top of averaging over the whole measurement area, s can also be averaged over the different sub-frames. Please note again that s is independent of the scene and thus the measurement of s can be performed offline. The scattering correction can be done directly with Eqs. (5), (7) and (8):
\[ I_\mathrm{li}^i = I_\mathrm{lc}^i - \frac{s}{1+s}\, \bar I_\mathrm{lc}^i. \tag{14} \]
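
As a compact illustration of the estimation procedure (a sketch with assumed inputs: two linearized sub-frames that differ only in the reflectivity of the scattering area, and a boolean mask of the measurement area), Eq. (13) becomes:

```python
import numpy as np

def estimate_s(I_lc_1, I_lc_2, measurement_mask):
    """Estimate the scattering parameter s from two recordings (Eq. (13))."""
    # Tilde quantities: means over the measurement area only
    N = I_lc_1[measurement_mask].mean() - I_lc_2[measurement_mask].mean()
    # Bar quantities: means over the whole sub-frame
    D = I_lc_1.mean() - I_lc_2.mean()
    return N / (D - N)
```

The result would then be averaged over the four sub-frames and over several scene arrangements, as described in the experiments below.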

4. Experiments

To determine s for a particular camera, a scene is set up (cf. Fig. 3) with no bright objects outside the frame. A bright object is placed in the foreground to serve as a strongly scattering source. It is purposely not aligned perpendicular to the optical axis to avoid reflections between its flat surface and the camera lens. Specular reflections of the light sources would either cause overexposures in these areas or force a reduction of the integration time. The reflecting object is covered with black cloth for a second recording. Removing the object would change the overall illumination of the scene and corrupt the measurement. The part of the frame without the scattering source is then used to calculate s from Eq. (13). The image on the very right of Fig. 3 shows the parameter s calculated for each pixel of the measurement area, averaged over the different subframes. For this particular setup, the average value of the whole area and standard deviation are s = 0.015 ± 0.005.

There remains a very weak scene dependency of s, which might be due to an imperfection in the calibration of the camera or residual light scattered from other parts of the room when the scattering target is not covered. The measurement is repeated for several different arrangements of the scene and the scattering parameter of the camera is found to be s = 0.017 ± 0.002.

As the scattering parameter is intrinsic to the camera, it can be used for a scattering correction of any scene recorded with the same camera. Only the calibration of the dark signal should be repeated for each measurement due to its strong temperature dependency (cf. Sect. 2).

Figures 4, 5 and 6 show the depths calculated from corrected and uncorrected raw data of different setups but with the scattering parameter obtained from the scheme above (cf. Fig. 3). At first we focus on a region with almost constant depth in Fig. 4 (red frame) and analyze surface plots of the area.


Fig. 4 3D surface plots: depth data from the previous scene. Left column: without white cylinder present. Right column: with white cylinder present. Surface maps from top to bottom: depth from uncalibrated raw, depth from calibrated raw, depth from calibrated raw with scattering correction. It is apparent that the calibration process reduces the noise in the data, while it does not affect the intensity-related distance error. The scattering compensation effectively removes the influence of the bright white cylinder in the foreground. The results of the depth measurement are then qualitatively indistinguishable from the scene without the cylinder. Furthermore, the scattering correction does not compromise measurements without strong scattering sources. This means it can safely be applied to any measurement without prior knowledge of the scene.



Fig. 5 Color maps of phase shift and differences. a and b: calibrated data. c: difference of b and a. d and e: scattering corrected data. f: difference of e and d. g: difference of d and a. h: difference of e and b. i: difference of e and a. The average phase values of the outlined areas can be found in Tab. 2. The images show clearly that the scattering effect is much stronger in the dark parts of the frame. The depth differences here are greatly reduced with the proposed scattering correction.



Fig. 6 Color maps of intensities and phase shift of a different scene. The top row shows the intensities of the raw data (left), after calibration (center) and after scattering correction (right). The intensity data after the calibration is very noisy because the pixels are not calibrated against each other but only linearized. The bottom row shows the phase data after calibration (left), after scattering correction (center) and the difference of both (right). The average phase values of the outlined areas can be found in Tab. 3. The scattering effect is much weaker here because of the smaller scattering object and the smaller depth difference. But the effect becomes apparent again in the dark spots and also in the background.


The plots nicely show the depth noise reduction introduced by the calibration step. However, the calibration does not affect the dependency of the depth values on the reflectivity in the scene with the scattering object (cf. Fig. 4 right column). Only the scattering correction achieves this goal.

Figure 5 shows the calibrated and corrected depth maps of the whole scene in Fig. 4 and an additional setup. It is apparent that with the added scattering object, the background surfaces are measured closer to the camera compared to the setup without scattering. This is due to the depth difference between the scattering object and the rest of the scene. Once the depth difference is larger than half the ambiguity range, the scattering will result in an increased background distance, due to the mixing of the two complex signals.

The scattering has a severe influence on the darker areas and barely affects the bright parts. This can be explained by the amount of scattered light, which is equal for each pixel (cf. Eq. (5)), while the unscattered light varies with the reflectivity and is therefore less dominant in dark areas. This becomes apparent in the difference images in Fig. 5.

The experiment shows that the correction process improves the depth measurement compared to raw and intensity-calibrated data. Its result has the same quality as the depth measured in a setup without a scattering object (Fig. 4 bottom left, Fig. 5e).

If we look at the difference of the corrected scene with and without the scattering object (Fig. 5f), there are areas in the bottom left corner and next to the cylinder, where the scattering correction overcompensates compared to the scene without a scattering object. Most likely the in-scene scattering introduced by the scattering object is responsible for these effects.

In the areas right next to the scattering object in the scene with the cone (Fig. 6) a strong local scattering effect can be observed, quite obvious in the difference image (Fig. 6 bottom right). This is actually not a special local scattering effect, but it is due to the reduced intensity where the cone occludes one of the light sources mounted left and right of the camera.

The behavior of the background is also notable in all of the examples. The area around the scenes has been covered with black cloth to avoid additional scattering effects from outside the scene. The data obtained from these areas is usually very noisy and unreliable. But the presented approach makes the measured values in these areas much more consistent.

To quantitatively evaluate the scattering correction, we compare the different depth values averaged over the considered region to those of the setup without scattering (cf. Tabs. 1–3). Clearly, the effect of scattering is greatly reduced in the box scene. For the crop in Fig. 4, the corrected depth value of −2.119 m differs from that of the measurement without scattering (−2.122 m) by only 3 mm. Table 2 gives some additional examples from Fig. 5 that show how much better the phase shifts of the different setups match after the scattering correction. The standard deviations presented in the tables are not only due to noise, but also due to non-uniform depth in the areas. In most of the samples, the deviation is reduced because the amplitude or intensity error is reduced and the areas with different reflectivity behave more uniformly.


Table 1. Mean depth of the different depth data from the crops in Fig. 4 in the same order.


Table 2. Mean phase shift of the different measurements from the areas highlighted in Fig. 5. There is a slight overcompensation in area C of the cylinder scene, probably due to in-scene scattering.


Table 3. Mean phase shift of the different measurements from the areas highlighted in Fig. 6. The scattering is much weaker here and hard to evaluate quantitatively. Except for the unreliable background data of area A, the mean values of the corrected data both with and without scattering converge nicely.

In the cone-setup (Tab. 3), the depth differences are very small, except for the background area. The size of the scattering object is small, as is its distance to the background objects. Still, the effect on the dark points is well visible in the color maps (cf. Fig. 6) and reduced by the scattering correction.

5. General applicability

Since our model is based on effects of the optical system, we believe it is applicable to most cameras currently on the market. As an example of a different system with similar scattering distortions, we considered the Bluetechnix Argos3D camera, which uses a different PMD sensor, a wider lens and a different modulation frequency. The scattering errors are very similar to those of the CamCube 3 (cf. Fig. 7). In the scene with the scattering object, the background, especially the dark areas, appears closer to the camera than without it.


Fig. 7 Example for scattering in a different camera (Bluetechnix Argos3D). Left: recorded without scattering object in the foreground. Right: recorded with scattering object in the foreground.


We want to point out that access to the raw data is crucial for our approach and, in general, for research in this area. Unfortunately, this access is not possible with the interfaces of most cameras available on the market, such as the Argos3D camera considered above. Consequently, the model and our scattering correction cannot be applied to such systems.

6. Conclusion

We have introduced a new model of, and a correction for, in-camera light scattering in Time-of-Flight depth cameras. To this end we considered a simple scattering model, which already provides a sufficient correction for most applications. Previous publications used much more complicated models, but did not operate on the raw data. In our work, this processing of the raw data turned out to be crucial for an efficient and accurate scattering correction.

Future work will focus on extended models to meet the physical properties of the camera even better. Another refinement of the model could be to incorporate vignetting. This should increase the subjective influence of scattering sources at the frame borders. At the same time, the scattering intensity will decrease slightly with increasing distance from the frame center.

Acknowledgments

The authors thank the anonymous reviewers for greatly improving the quality of this manuscript through their insightful and detailed remarks. The work presented in this article has been co-financed by the Intel Visual Computing Institute in Saarbrücken. The content is the sole responsibility of the authors.

References and links

1. W. Karel, S. Ghuffar, and N. Pfeifer, "Modelling and compensating internal light scattering in time of flight range cameras," The Photogrammetric Record 27, 155–174 (2012).

2. M. Hansard, S. Lee, O. Choi, and R. Horaud, Time-of-Flight Cameras (Springer, 2013).

3. D. Lefloch, R. Nair, F. Lenzen, H. Schäfer, L. Streeter, M. J. Cree, R. Koch, and A. Kolb, "Technical foundation and calibration methods for time-of-flight cameras," in Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications, vol. 8200 of LNCS (Springer, 2013), pp. 3–24.

4. M. Schmidt, "Analysis, modeling and dynamic optimization of 3D time-of-flight imaging systems," Dissertation, IWR, Fakultät für Physik und Astronomie, Univ. Heidelberg (2011).

5. M. Frank, M. Plaue, K. Rapp, U. Köthe, B. Jähne, and F. Hamprecht, "Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras," Optical Engineering 48, 13602 (2009).

6. F. Lenzen, K. I. Kim, H. Schäfer, R. Nair, S. Meister, F. Becker, C. S. Garbe, and C. Theobalt, "Denoising strategies for time-of-flight data," in Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications, vol. 8200 of LNCS (Springer, 2013), pp. 25–45.

7. T. Kahlmann, F. Remondino, and H. Ingensand, "Calibration for increased accuracy of the range imaging camera SwissRanger," in Proceedings of the ISPRS Commission V Symposium 'Image Engineering and Vision Metrology' (ISPRS, 2006), pp. 136–141.

8. M. Lindner and A. Kolb, "Calibration of the intensity-related distance error of the PMD ToF-camera," in Optics East (International Society for Optics and Photonics, 2007).

9. M. Lindner, I. Schiller, A. Kolb, and R. Koch, "Time-of-flight sensor calibration for accurate range sensing," Comput. Vis. Image Underst. 114, 1318–1328 (2010).

10. J. Godbaz, M. Cree, and A. Dorrington, "Understanding and ameliorating non-linear phase and amplitude responses in AMCW Lidar," Remote Sensing 4, 21–42 (2012).

11. J. Mure-Dubois and H. Hügli, "Real-time scattering compensation for time-of-flight camera," in Proceedings of the ICVS Workshop on Camera Calibration Methods for Computer Vision Systems (CCMVS2007) (2007), pp. 117–122.

12. T. Kavli, T. Kirkhus, J. T. Thielemann, and B. Jagielski, "Modelling and compensating measurement errors caused by scattering in time-of-flight cameras," in Two- and Three-Dimensional Methods for Inspection and Metrology VI (2008), 706604.

13. S. Jamtsho and D. D. Lichti, "Modelling scattering distortion in 3D range camera," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 38, 299–304 (2010).

14. D. D. Lichti, X. Qi, and T. Ahmed, "Range camera self-calibration with scattering compensation," ISPRS Journal of Photogrammetry and Remote Sensing 74, 101–109 (2012).

15. D. D. Lichti, J. C. Chow, E. Mitishita, J. A. S. Centeno, F. M. M. d. Silva, R. A. Barrios, and I. Contreras, "New models for scattering bias compensation in time-of-flight range camera self-calibration," Journal of Surveying Engineering 140, 04014003 (2014).
