
Multi-object distance determination by analysis of CoC variation for dynamic structured light


Abstract

A multi-object distance determination method is achieved using 932 nm structured light with a single camera as the data receiver. The structured light generated by a liquid crystal on silicon spatial light modulator (LCoS-SLM) facilitates dynamic image projection on targets. A series of moving light strip images was captured and collected for data analysis. This method lifts the single-object limitation of conventional distance determination and removes the camera-to-light-source angle requirement of the triangulation method. The average error of this method was approximately 3% in the range of 700 mm to 1900 mm from the LCoS-SLM without further optimization. It provides a potential compact design for indoor multi-object distance determination in the future.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Object distance determination has been widely applied in many fields [1–6]. It is essential for unmanned vehicles and robot operations. Laser-based techniques can be categorized as time-of-flight (ToF) and structured light triangulation [7–11]. Both techniques have their advantages and disadvantages. ToF technologies for distance determination include indirect time-of-flight (i-ToF) [12–16] and direct time-of-flight (d-ToF) [17–22] methods. Both methods calculate the time difference between the emitted laser light signal and the light reflected by an object at the detector. Among these, i-ToF technology is widely used because it benefits from the mature complementary metal oxide semiconductor (CMOS) sensor process. However, i-ToF has drawbacks, such as double-frequency ambiguity when the distance exceeds the measurable range and thermal effects in the CMOS sensor. In contrast, d-ToF technology uses pulsed laser light and a single-photon avalanche diode (SPAD) as a detector to avoid the double-frequency drawback. This method exhibits a considerably longer detection range. Distance determination based on structured light triangulation requires structured light generation and a camera as a receiver. The analysis is based on the triangulation of the light source, object, and camera. Furthermore, various structured light patterns can be generated and applied accurately. However, the required angle and distance between the light source and the camera limit its application.

This study proposes a multi-object distance determination method based on structured light generated by a liquid crystal on silicon spatial light modulator (LCoS-SLM) and a single camera as the data receiver. The structured light exhibits a depth-of-field phenomenon before and after the focal point, and the captured image can be analyzed using the circle-of-confusion (CoC) equation. Furthermore, the structured light can be swept in sequence in the infrared region using the dynamic features of the fast-switching LCoS. Finally, the dual-distance model is applied as the analysis tool, yielding distance results for multiple objects. As a result, a fixed distance between the light source and the camera is no longer required, and the entire volume of the optical system can be significantly reduced for compact applications.

2. Circle of confusion structured light generated from LCoS-SLM

The depth of field (DoF) [23–29] is defined as the range of distances over which an image captured by a camera or projected by a projector remains acceptably sharp. A hologram image generated by a computer-generated hologram (CGH) and projected via the LCoS-SLM follows the same phenomenon: the image gradually blurs as it moves away from the DoF range. This phenomenon can be mathematically expressed using the CoC [30], as shown in Eq. (1), according to the settings shown in Fig. 1(a):

$$C = A\frac{|S_2 - S_1|}{S_1}\,,$$
where C is the diameter of the diffuse circle, A is the lens diameter (which is also the aperture of the system), $S_1$ is the image distance, $S_2$ is the distance between the diffuse circle and the lens, $f_1$ is the original light source position, and $f_2$ is the equivalent light source position after movement. In Fig. 1(a), the red line represents the original imaging position, whereas the blue line indicates the position of the imaging surface. Figures 1(b)–(d) show the simulated light distributions at the focal plane and at ±200 mm, computed using the iterative Fourier transform algorithm (IFTA) [31–33]. A single-point light spot is used as the target image, set at a focal length of 1000 mm after the light is reflected from the LCoS, as shown in Fig. 1(c). Figures 1(b) and (d) show the simulated light distributions at 800 mm and 1200 mm, respectively. The light field distribution spreads into a rectangular shape when moving away from the target distance. Hence, the expansion of the rectangular shape corresponds to the distance from the target plane.
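As a quick numerical illustration of Eq. (1), the short Python sketch below evaluates the diffuse-circle diameter C at a few defocus positions. The aperture and distances used are illustrative placeholders, not the parameters of the actual system.

```python
# Minimal sketch of Eq. (1): circle-of-confusion diameter versus defocus.
# A (aperture), S1 (in-focus plane), and the S2 values are illustrative
# assumptions, not the parameters used in the paper.

def coc_diameter(A_mm: float, S1_mm: float, S2_mm: float) -> float:
    """Diameter C of the diffuse circle, C = A * |S2 - S1| / S1."""
    return A_mm * abs(S2_mm - S1_mm) / S1_mm

if __name__ == "__main__":
    A, S1 = 10.0, 1000.0                       # hypothetical aperture and focal plane [mm]
    for S2 in (800.0, 1000.0, 1200.0):
        print(f"S2 = {S2:6.1f} mm -> C = {coc_diameter(A, S1, S2):.2f} mm")
```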

Fig. 1. (a) Schematic diagram of CoC system and the simulated light distribution at (b) 800 mm, (c) 1000 mm, and (d) 1200 mm.

The LCoS-SLM serves as a digitally reconfigurable reflective mirror acting as a digital lens, and the CoC of the LCoS-SLM can be expressed as follows:

$$C_{LCoS\text{-}SLM} = A_{panel}\frac{|S_2 - S_1|}{S_1}\,,$$
where $C_{LCoS\text{-}SLM}$ is the circle of confusion of the LCoS-SLM, $A_{panel}$ is the shape and size of the LCoS panel, $S_1$ is the focal length from the CGH to the panel, and $S_2$ is the distance from the blurred area to the panel. However, CGH imaging is not typically based on a single point of light. Thus, the out-of-focus image can be regarded as the sum of all the defocused point light sources. The possible light distribution of the CGH at the nonfocused position can be written as follows:
$$C(X,Y) = \int_\Sigma A_{panel}\frac{|S_2 - S_1|}{S_1}\,G(X,Y)\,,$$
where C is the image at the nonfocused position, G is the image of the CGH in focus, $S_1$ is the focal distance from the CGH to the panel, and $S_2$ is the distance from the blurred area to the LCoS panel.
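To make the superposition picture of Eq. (3) concrete, the sketch below approximates the defocused image by convolving the in-focus image G with a normalized rectangular kernel whose size follows Eq. (2). This is only an illustrative approximation under assumed panel dimensions and an assumed mm-to-pixel scale; it is not the CGH propagation model itself.

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch of Eq. (3): the defocused image approximated as the in-focus image G
# convolved with a rectangular kernel (the panel-shaped CoC of Eq. (2)).
# Panel dimensions, S1, S2, and the mm-to-pixel scale are illustrative.

def defocused_image(G: np.ndarray, A_panel_mm: tuple[float, float],
                    S1_mm: float, S2_mm: float, mm_per_px: float) -> np.ndarray:
    """Blur G with a rectangle whose height/width follow A_panel * |S2 - S1| / S1."""
    scale = abs(S2_mm - S1_mm) / S1_mm
    h_px = max(1, int(round(A_panel_mm[0] * scale / mm_per_px)))
    w_px = max(1, int(round(A_panel_mm[1] * scale / mm_per_px)))
    kernel = np.ones((h_px, w_px)) / (h_px * w_px)   # normalized rectangular CoC
    return fftconvolve(G, kernel, mode="same")

if __name__ == "__main__":
    G = np.zeros((256, 256))
    G[128, 128] = 1.0                                # single in-focus point source
    blurred = defocused_image(G, (7.0, 12.3), 1000.0, 1200.0, 0.1)  # illustrative panel size
    print(blurred.shape, blurred.max())
```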

3. Experimental design

3.1 Structured light measurement and light ratio

The focal length of the CGH can be digitally tuned in advance, and the resulting circle of confusion of the LCoS-SLM can be adopted for calibration at a planned distance. Because the out-of-focus pattern of the structured light expands, the expanded structured light patterns eventually overlap. For example, the vertical light strips become blurred and spread in the left and right directions when the projection screen moves away from the focal plane, as shown in Fig. 2.

Fig. 2. Experimental setup design.

The ratio of the bright striped line to the defined area, as shown in Fig. 3, can be calculated as follows:

$$\text{Light ratio} = \frac{\text{Total width of the reconstructed image}}{\text{Length of the defined area}}$$

Using Eq. (4), the light ratio equals 0.5 when the light strips form a binary grating with equal bright and dark widths in each photoperiod of the defined area. In principle, the light ratio can take any value, but it is limited by the initial intensity and the camera. For the subsequent experiments, the light ratio is set to 0.1. Each light strip in the reconstructed image can be expressed using Eq. (5):

$$X = N_p \cdot \Delta x = \frac{\lambda z}{\Delta s}\,,$$
where X is the width of the light strip pattern, $N_p$ is the number of pixels (1920), $\Delta x$ is the pixel pitch at the projection plane, $\lambda$ is the wavelength, z is the projection distance, and $\Delta s$ is the pixel pitch of the LCoS-SLM, which is 6.4 µm. Based on the DoF, the deformed image is larger than the image at the focal plane. The deformation width is expressed by Eq. (6) as follows:
$$W_\text{D} = W_\text{LB} + A_{panel}\frac{|S_2 - S_1|}{S_1}\,,$$
where $W_\text{D}$ is the deformation width and $W_\text{LB}$ is the light bar width.
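The sketch below evaluates Eqs. (5) and (6) numerically using the parameters quoted in the text (932 nm wavelength, 6.4 µm pixel pitch, 1920 pixels, light ratio 0.1). The panel aperture, the number of light bars, and the defocus distances are illustrative assumptions.

```python
# Minimal numerical sketch of Eqs. (5) and (6). Wavelength, pixel pitch, pixel
# count, and the 0.1 light ratio come from the text; A_panel, the bar count,
# and S1/S2 below are illustrative assumptions.

WAVELENGTH_MM = 932e-6        # 932 nm expressed in mm
PIXEL_PITCH_MM = 6.4e-3       # LCoS-SLM pixel pitch, 6.4 um in mm
N_PIXELS = 1920               # Np in Eq. (5)

def width_x(z_mm: float) -> float:
    """Eq. (5): X = Np * dx = lambda * z / ds."""
    return WAVELENGTH_MM * z_mm / PIXEL_PITCH_MM

def deformation_width(w_lb_mm: float, a_panel_mm: float,
                      s1_mm: float, s2_mm: float) -> float:
    """Eq. (6): W_D = W_LB + A_panel * |S2 - S1| / S1."""
    return w_lb_mm + a_panel_mm * abs(s2_mm - s1_mm) / s1_mm

if __name__ == "__main__":
    z = 1000.0                                 # projection distance [mm]
    x = width_x(z)                             # pattern width X from Eq. (5)
    delta_x = x / N_PIXELS                     # projected pixel pitch [mm]
    w_lb = 0.1 * x / 4                         # bright-bar width: 4 bars (assumed), light ratio 0.1
    print(f"X = {x:.1f} mm, dx = {delta_x * 1e3:.1f} um, W_LB = {w_lb:.2f} mm")
    print(f"W_D at S2 = 1150 mm: {deformation_width(w_lb, 12.3, 1000.0, 1150.0):.2f} mm")
```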

Fig. 3. Out-of-focus image prediction.

Because the projected LCoS-SLM image can be measured, the light ratio can be analyzed from the cross section of the captured image at any position; the cross-sectional profile resembles a square-wave signal, and the light ratio is analogous to its duty ratio. The photoperiod light ratio can be rewritten as

$$\text{Light ratio} = W_\text{D} \div \frac{W_\text{RI}}{N_\text{LB}}\,,$$
where $N_\text{LB}$ is the number of light bars and $W_\text{RI}$ is the reconstructed image width.
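As a minimal illustration of Eq. (7), the sketch below estimates the light ratio from the cross section of a captured image, treating the profile like a square wave and its bright fraction like a duty ratio. The mean-intensity threshold and the synthetic profile are assumptions for demonstration only.

```python
import numpy as np

# Sketch of Eq. (7): light ratio estimated from an image cross section,
# analogous to the duty ratio of a square-wave signal. The thresholding rule
# and the synthetic profile are illustrative assumptions.

def light_ratio(cross_section: np.ndarray, n_light_bars: int,
                reconstructed_width_px: int) -> float:
    """Light ratio = W_D / (W_RI / N_LB), with W_D taken as the mean bright-bar width."""
    bright = cross_section > cross_section.mean()        # assumed threshold
    w_d = bright.sum() / n_light_bars                     # mean bright-bar width [px]
    return w_d / (reconstructed_width_px / n_light_bars)

if __name__ == "__main__":
    # Synthetic square wave: 4 bars, each 10 px bright out of a 100 px period.
    profile = np.tile(np.r_[np.ones(10), np.zeros(90)], 4)
    print(light_ratio(profile, n_light_bars=4, reconstructed_width_px=profile.size))
    # -> 0.1, matching the light ratio used in the experiments
```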

The light ratio can be calculated at any in- or out-of-focus position, starting from the initial setting value at the focal plane and reaching 1 when adjacent light strips become connected. We found that the size of the image varies linearly with distance and that the photoperiod changes faster before and after the focal plane.

To improve the distance analysis of multiple objects, a series of moving strips between two adjacent light strips of the original reconstructed image is generated based on the dynamic diffraction features of the CGHs, as sketched below. The number of moving steps per cycle is defined as M swipe steps; this number depends on the swiping time of each cycle, the camera resolution, and the initial light intensity. In our experimental setup, 40 moving steps were used for each cycle. The entire scan is captured by the camera and sent to a computer for data analysis.
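The sketch below illustrates how such a swipe sequence of target images could be prepared: a binary strip pattern is shifted by one increment per frame so that M = 40 steps cover one photoperiod. The image size, photoperiod, and the 0.1 light ratio are placeholders, and the CGH synthesis step (e.g., IFTA) is omitted.

```python
import numpy as np

# Sketch of the swipe-sequence target generation: a binary strip target is
# shifted step by step so that M steps cover one photoperiod. Image size and
# photoperiod are illustrative; CGH synthesis (IFTA) is not included here.

def swipe_targets(width_px: int = 1920, height_px: int = 1080,
                  period_px: int = 200, light_ratio: float = 0.1,
                  m_steps: int = 40) -> list[np.ndarray]:
    """Return M target frames, each a vertically striped image shifted by period/M."""
    bar_px = int(period_px * light_ratio)                 # bright-bar width per period
    base = (np.arange(width_px) % period_px) < bar_px     # one row of the strip pattern
    frames = []
    for step in range(m_steps):
        shifted = np.roll(base, step * period_px // m_steps)
        frames.append(np.tile(shifted, (height_px, 1)).astype(np.float32))
    return frames

if __name__ == "__main__":
    frames = swipe_targets()
    print(len(frames), frames[0].shape)   # 40 frames, each 1080 x 1920 (illustrative size)
```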

This study uses a PCU3-02-932 LCoS panel targeting 932 nm with a response time of 9.84 ms (∼100 Hz). The calibration method is described in the literature [34]. The output CGH frame rate is set to 20 Hz, which allows the panel to be illuminated at its optimal efficiency. A Blackfly BFLY-U3-13S2M camera equipped with a Sony ICX445 (1/3" mono CCD) sensor with a resolution of 1288 × 964 was used as the receiver.

3.2 Dual distance model

The entire workflow for using the dual-distance model to determine the distances of unidentified objects is shown in Fig. 4. The workflow includes setting the standards and obtaining the distance results, as described below.

Fig. 4. Data process scheme for setting standard and getting final distance results.

The original light ratio is set to 0.1, and the targets of the grating-structured light with a fixed photoperiod are set to 1000 mm and 1600 mm. In addition, the focal length is 150 mm, and the distance between the lens and the LCoS-SLM is 100 mm. Each set of target distances contains 40 swipe images acquired by the charge-coupled device (CCD) before computational processing. The acquired images are processed for noise reduction to enable subsequent calculations. Each pixel of a captured image is binarized to 0 or 1 depending on whether its intensity is lower or higher, respectively, than the average intensity of the same pixel over all captured images. Standard values (d11, d12) and (d21, d22) are calculated using Eqs. (6) and (7) for the focal planes at 1000 mm and 1600 mm, respectively, as shown in Fig. 5(a); the second subscripts 1 and 2 indicate before and after the focal plane, respectively. For each pixel of an undetermined object, the light ratio is calculated as the average of all the binarized captured images in each cycle; the resulting values c1 and c2 represent the datasets obtained with the two standards over the range of 700 mm to 1900 mm. Finally, the distance values d′11 and d′12 are obtained when c1 is interpolated into the model of the first group (d11 and d12), and d′21 and d′22 are obtained when c2 is interpolated into the model of the second group (d21 and d22), as shown in Fig. 5(b). The per-pixel processing is sketched below.
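The sketch below outlines this per-pixel processing under simplifying assumptions: each of the 40 swipe frames is binarized against the per-pixel mean intensity, the per-pixel light ratio is taken as the fraction of bright frames, and the ratio is interpolated into the "before" and "after" branches of one standard (cf. d′11 and d′12). The standard-curve numbers are placeholders, not measured calibration data.

```python
import numpy as np

# Sketch of the per-pixel processing: binarize each swipe frame against the
# per-pixel mean, compute the per-pixel light ratio, then interpolate the ratio
# into one monotonic branch of a standard curve. All standard-curve numbers
# below are placeholders for illustration.

def per_pixel_light_ratio(frames: np.ndarray) -> np.ndarray:
    """frames: (M, H, W) stack of captured swipe images -> (H, W) light-ratio map."""
    mean = frames.mean(axis=0, keepdims=True)
    binarized = frames > mean                 # 1 where brighter than the per-pixel mean
    return binarized.mean(axis=0)             # fraction of bright frames per pixel

def branch_distance(ratio_map: np.ndarray, branch_ratio: np.ndarray,
                    branch_dist_mm: np.ndarray) -> np.ndarray:
    """Interpolate the ratio map into one monotonic branch (ratio vs. distance)."""
    order = np.argsort(branch_ratio)          # np.interp needs increasing abscissae
    return np.interp(ratio_map, branch_ratio[order], branch_dist_mm[order])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((40, 96, 128))        # small stand-in for the 1288 x 964 CCD stack
    ratio = per_pixel_light_ratio(frames)
    # Placeholder branches of the 1000 mm standard: the ratio is minimal at
    # focus and grows on either side of the focal plane.
    before_r, before_d = np.array([0.30, 0.20, 0.10]), np.array([700.0, 850.0, 1000.0])
    after_r, after_d = np.array([0.10, 0.20, 0.30]), np.array([1000.0, 1150.0, 1300.0])
    d11 = branch_distance(ratio, before_r, before_d)   # candidate in front of the target
    d12 = branch_distance(ratio, after_r, after_d)     # candidate behind the target
    print(d11.shape, d12.shape)
```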

Fig. 5. (a) Four standards (d11∼d22) are obtained by calculation when the distance between the two targets is divided into the front and the back; d11: in front of the first target, d12: behind the first target, d21: in front of the second target, and d22: behind the second target. (b) Illustration of getting distance value (d): the c1 and c2 are obtained from the experiments. The values of d’11, d’12, d’21, and d’22 are obtained from Fig. 5(a). The distance d is obtained after comparing the closest points.

The distance of the object is determined from the closest interpolated values, d′12 and d′21 in the example shown in Fig. 5(b). To minimize errors, the average of the extracted distance values is used. Beyond the range between the two standards, the distance determination may depend solely on the interpolation of d′11 or d′22. This selection step is sketched below.
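A minimal sketch of the selection step follows: among the candidate distances produced by the two standards, the pair of values that agree most closely is kept and averaged. The candidate values shown are illustrative only.

```python
import numpy as np

# Sketch of the final selection: each standard contributes two candidate
# distances per pixel; the adjacent pair with the smallest spread is averaged
# (cf. d'12 and d'21 in Fig. 5(b)). The example candidates are illustrative.

def closest_pair_distance(candidates_mm: np.ndarray) -> float:
    """candidates_mm: 1-D array of candidate distances for one pixel."""
    vals = np.sort(candidates_mm)
    gaps = np.diff(vals)
    i = int(np.argmin(gaps))                 # adjacent pair with the smallest spread
    return float(vals[i:i + 2].mean())       # average of the two closest candidates

if __name__ == "__main__":
    # e.g. d'11, d'12, d'21, d'22 for a pixel whose true distance is ~1250 mm
    print(closest_pair_distance(np.array([830.0, 1242.0, 1259.0, 1700.0])))  # ~1250.5
```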

4. Experimental results

4.1 Analysis of distance results

First, the error of the proposed dual-standard distance system was evaluated. Measurements started at 700 mm from the LCoS panel and proceeded in 50 mm intervals. Figure 6 shows the deviation of each measurement from 700 mm to 1900 mm. The average error was 3.38%.

Fig. 6. Distance errors of a single screen.

In addition, the error was lower in the middle of the range, between the two standards; there, the data from the two standards overlap, yielding better accuracy. However, both ends of the range exhibited higher determination errors.

4.2 Actual object test

Two dolls, a smiling shield on the right and a big mouth on the left, were used as unidentified objects. The smiling shield was set at 1030 mm, whereas the big mouth was set at 1250 mm, as shown in Fig. 7(a). Two sets of photo data were obtained using 1000 mm and 1600 mm as the dual standards to perform the measurements. Figures 7(b) and 7(c) show selected photographs.

Fig. 7. Actual object pictures, (a) original picture, (b) target stripe at 1000 mm, (c) target stripe at 1600 mm.

The distance information of each pixel, converted from the corresponding light ratio, was reconstructed as shown in Fig. 8(a). Interestingly, the reconstructed image shows the two dolls. It also suggests that surfaces with different reflectivity, such as the eyes, may interfere with the distance evaluation. To further analyze the data, we cropped the pixels with similar distance information, as shown in Fig. 8(b) and (c), for distances of 1000 mm and 1250 mm, respectively.

Fig. 8. Analysis of the picture: (a) the distance information, (b) the pixel point at 1000 mm, (c) the pixel point at 1250 mm.

Surfaces with different reflectivity and randomly missing pixel information may affect the distance analysis. We therefore cropped the figure to set the calculation boundary and used the average of the pixels inside it to minimize errors. As a result, the smiling shield on the right and the big mouth on the left were determined to be at 994 mm and 1275 mm, respectively. The differences were 36 mm and 25 mm, corresponding to errors of 3.5% and 2%, respectively. We then randomly placed the two dolls between 800 mm and 1800 mm for further tests. Figure 9 shows the calculated positions of the objects and the errors. These results suggest that the weaker laser intensity at longer distances yields higher signal noise and limits the detection range. This drawback may need to be addressed in future studies.

Fig. 9. Errors of the system on two actual objects.

5. Conclusion

Multi-object distance determination was achieved using a single camera with a dynamic holographic system targeting 932 nm. This method overcomes the single-object limitation of conventional distance determination as well as the camera-to-light-source angle requirement of the triangulation method. Without further optimization, the proposed method had an average error of approximately 3%. Furthermore, the method is not limited by the architectural design, because the dynamic CGH projection by the LCoS-SLM has its own DoF. All analysis was based on the captured images, and the only requirement for camera placement was a clear view of the projected patterns. The resolution of the camera could further enhance the accuracy; however, this is not discussed in this study. Furthermore, the distance determination was analyzed independently for each pixel in the sequence of captured photos. The analysis is not affected by the texture frequency of objects in space; instead, it is affected by light absorption at the analytical wavelength. As a result, this multi-object distance determination method based on a single camera with a dynamic holographic system can greatly reduce the space requirements for future indoor applications.

Funding

National Science and Technology Council (NSTC 112-2218-E-011-005-MBK).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imag. 13(1), 231–243 (2004). [CrossRef]  

2. R. Horaud, M. Hansard, G. Evangelidis, et al., “An overview of depth cameras and range scanners based on time-of-flight technologies,” Mach. Vis. Appl. 27(7), 1005–1020 (2016). [CrossRef]

3. V. Kabashnikov and B. Kuntsevich, “Distance determination based on the delay time-intensity profile analysis in range-gated imaging,” Appl. Opt. 56(30), 8378–8384 (2017). [CrossRef]  

4. S. L. Camenzind, J. F. Fricke, J. Kellner, et al., “Dynamic and precise long-distance ranging using a free-running dual-comb laser,” Opt. Express 30(21), 37245–37260 (2022). [CrossRef]  

5. B. D. Padullaparthi, J. Tatum, and K. Iga, VCSEL Industry (Wiley, 2021), Chap. 8.

6. Z. Wang, C. Zhang, Z. Wang, et al., “Real-time three-dimensional measurement techniques: A review,” Optik 202, 163627 (2020). [CrossRef]  

7. X. Li, Y. Huang, and Q. Hao, “Automated robot-assisted wide-field optical coherence tomography using structured light camera,” Biomed. Opt. Express 14(8), 4310–4325 (2023). [CrossRef]  

8. A. Georgopoulos, C. Ioannidis, and A. Valanis, “Assessing The Performance Of A Structured Light Scanner,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS Archives) 38(5), 251–255 (2010).

9. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3(2), 128–160 (2011). [CrossRef]  

10. M. Gupta, A. Agrawal, A. Veeraraghavan, et al., “Structured light 3D scanning in the presence of global illumination,” presented at the CVPR (2011).

11. D. You, Z. You, P. Zhou, et al., “Theoretical analysis and experimental investigation of the Floyd-Steinberg-based fringe binary method with offset compensation for accurate 3D measurement,” Opt. Express 30(15), 26807–26823 (2022). [CrossRef]  

12. M. Perenzoni and D. Stoppa, “Figures of Merit for Indirect Time-of-Flight 3D Cameras: Definition and Experimental Evaluation,” Remote Sens. 3(11), 2461–2472 (2011). [CrossRef]  

13. S. C. Shin, M. Kye, I. Hwang, et al., “Indirect-ToF system optimization for sensing range enhancement with patterned light source and adaptive binning,” presented at the 2021 International Image Sensor Workshop (2021).

14. H. Kawachi, T. Nakamura, K. Iwata, et al., “Snapshot super-resolution indirect time-of-flight camera using a grating-based subpixel encoder and depth-regularizing compressive reconstruction,” Opt. Continuum 2(6), 1368–1383 (2023). [CrossRef]  

15. S. Bellisai, D. Bronzi, F. A. Villa, et al., “Single-photon pulsed-light indirect time-of-flight 3D ranging,” Opt. Express 21(4), 5086–5098 (2013). [CrossRef]  

16. D. Stoppa and A. Simoni, “Single-photon detectors for time-of-flight range imaging,” in Single-Photon Imaging, 1st ed, P. Seitz, eds. (Springer, 2011), pp. 275–300.

17. E. Charbon, M. Fishburn, R. Walker, et al., eds. (Springer Berlin, Heidelberg, 2013), pp. 11–38.

18. B. Markovic, S. Tisa, F.A. Villa, et al., “A high-linearity, 17 ps precision time-to-digital coverter based on a single-stage delay Vernier loop fine interpolation,” IEEE Trans. Circuits Syst. I 60(3), 557–569 (2013). [CrossRef]  

19. R. E. Warburton, A. McCarthy, A. M. Wallace, et al., “Enhanced performance photon-counting time-of-flight sensor,” Opt. Express 15(2), 423–429 (2007). [CrossRef]  

20. Y. Shi, D. Hu, R. Xue, et al., “High speed time-of-flight displacement measurement based on dual-comb electronically controlled optical sampling,” Opt. Express 30(5), 8391–8398 (2022). [CrossRef]  

21. M. J. Sun, M. Edgar, G. Gibson, et al., “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

22. X. Wang, P. Song, W. Zhang, et al., “A systematic non-uniformity correction method for correlation-based ToF imaging,” Opt. Express 30(2), 1907–1924 (2022). [CrossRef]  

23. A. P. Pentland, “A New Sense for Depth of Field,” in Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-9(4) (IEEE, 1987), pp. 523–531.

24. R. Kosara, S. Miksch, and H. Hauser, “Semantic Depth of Field,” presented at the IEEE Symposium on Information Visualization (2001).

25. M. Kraus and M. Strengert, “Depth-of-Field Rendering by Pyramidal Image Processing,” Computer Graphics Forum 26(3), 645–654 (2007). [CrossRef]  

26. C. Soler, K. Subr, F. Durand, et al., “Fourier depth of field,” ACM Trans. Graph 28(2), 1–12 (2009). [CrossRef]  

27. C. G. Luo, X. Xiao, M. Martínez-Corral, et al., “Analysis of the depth of field of integral imaging displays based on wave optics,” Opt. Express 21(25), 31263–31273 (2013). [CrossRef]  

28. P. Dai, G. Lv, Z. Wang, et al., “Enhanced depth of field of computer-generated holographic stereograms by twice close-range capturing with occlusion processing,” SID 30(3), 233–243 (2022). [CrossRef]  

29. J. Y. Son, O. Chernyshov, C. H. Lee, et al., “Depth resolution in three-dimensional images,” J. Opt. Soc. Am. A 30(5), 1030–1038 (2013). [CrossRef]  

30. D. Durmus and W. Davis, “Blur perception and visual clarity in light projection systems,” Opt. Express 27(4), A216–A223 (2019). [CrossRef]  

31. F. Wyrowski and O. Bryngdahl, “Iterative Fourier-transform algorithm applied to computer holography,” J. Opt. Soc. Am. A 5(7), 1058–1065 (1988). [CrossRef]  

32. K. Watanabe and T. Inoue, “Energy adjustment pulse shaping algorithm part I: accuracy improvement of phase retrieval IFTA,” Opt. Express 28(10), 14807–14814 (2020). [CrossRef]  

33. K. Choi, H. Kim, and B. Lee, “Synthetic phase holograms for auto-stereoscopic image displays using a modified IFTA,” Opt. Express 12(11), 2454–2462 (2004). [CrossRef]  

34. J. P. Yang, F. Y. Wu, P. S. Wang, et al., “Characterization of the spatially anamorphic phenomenon and temporal fluctuations in high-speed, ultra-high pixels-per-inch liquid crystal on silicon phase modulator,” Opt. Express 27(22), 32168–32183 (2019). [CrossRef]  
