
Automated reconstruction of multiple objects with individual movement based on PSP

Open Access

Abstract

Many methods have been proposed to reconstruct moving objects based on phase shifting profilometry (PSP). High-quality reconstruction results can be achieved when a single moving object, or multiple objects sharing the same movement, are measured. However, errors are introduced when multiple objects with individual movements are reconstructed. This paper proposes an automated method to track and reconstruct multiple objects with individual movements. First, the objects are identified automatically and their bounding boxes are obtained. Second, using the identified object images before movement, the objects are tracked by the KCF algorithm in the successive fringe patterns after movement. Third, the SIFT method is applied to the tracked object images, and each object's movement is described individually by a rotation matrix and a translation vector. Finally, the multiple objects are reconstructed based on their different movement information. Experiments are presented to verify the effectiveness of the proposed method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

3D reconstruction of dynamic scenes is the basis for various applications such as assembly-line product inspection [1–5]. Phase shifting profilometry (PSP) is one of the most popular technologies for 3D reconstruction because of its high accuracy, high speed, and robustness to background light noise. Multiple sinusoidal fringe patterns (normally at least three) with phase shifts are projected onto the object surface from one angle and captured by a camera from another angle. Based on the multiple captured fringe patterns, the phase information is retrieved to reconstruct the object. Because multiple fringe patterns are employed, the object is required to remain static during the reconstruction. Object movement not only introduces mismatch among the captured fringe patterns, but also modifies the phase shift amount between them. Therefore, with traditional PSP, errors are introduced into the reconstruction result when a moving object is measured.

Recently, the issues caused by movement have attracted intensive attention. Feng et al. [6] proposed a motion compensation method for the reconstruction of rigid moving objects; the ripples introduced by motion are reduced by exploiting the statistical nature of the fringe patterns. The method works well when the motion-introduced phase shifts are constant, but may fail when the phase shifts vary across the object surface (such as under rotation). Flores et al. employed an iterative algorithm to reconstruct an object moving on a linear travel stage [7]. The algorithm does not need calibration between the moving velocity and the image acquisition; however, the movement direction is limited. Liu et al. reconstructed moving objects by estimating the motion between two subsequent 3D frames [8]; the algorithm assumes that the object moves at a constant speed between the two successive 3D frames. Wang et al. compensated the errors caused by motion by employing the Hilbert transform [9]. The motion-introduced error appears at twice the frequency of the projected fringes; without introducing additional fringe patterns, the Hilbert transform shifts the phase of the fringe pattern by π/2, and the fringe pattern generated by the Hilbert transform is used to compensate the errors caused by motion. Duan et al. reconstructed an object with 2D movement by using a composite fence image [10]: the motion of the object is tracked by the composite fence image, and the phase map is retrieved and refined with a reference phase map. However, the motion is limited to two dimensions. Liu et al. proposed a method to reconstruct dynamic objects by calculating the differences between the computed phase maps [11]. Motion-induced phase shift estimation and error compensation are achieved; however, the method assumes the motion has constant speed or constant acceleration. Qian et al. proposed a hybrid Fourier-transform phase-shifting profilometry method to reconstruct scenes that contain both static and moving regions [12]. PSP is employed for the stationary regions, and FTP is used to compensate the motion-introduced error of PSP; however, both FTP and PSP are required by the algorithm.

The author reconstructed the moving object by introducing the movement information into the reconstruction model [13–17]. An object with 2D movement was reconstructed first [13]: the movement is tracked and described mathematically by a rotation matrix and translation vector; the influence caused by the movement is analyzed; a new reconstruction model incorporating the movement information is presented; and the correct phase information is retrieved by solving the equations given by the new model. In Ref. [13], the movement is tracked by placing three markers on the object surface in advance, which is not desirable for automatic reconstruction. In Ref. [15], the object is reconstructed automatically by employing the scale-invariant feature transform (SIFT) algorithm to track the movement. In Ref. [14], an object with 3D movement is measured by using an iterative least-squares algorithm. Isolated multiple objects were then reconstructed successfully with the algorithm described in Ref. [17]; however, the multiple objects are required to have the same movement. In practice, it is more common that the objects have different movements.

This paper presents a new algorithm to reconstruct multiple objects with individual movements. First, the area of interest for each object is identified. Then, the objects are tracked individually, and the rotation matrix and translation vector describing each movement are obtained separately. Finally, each object is reconstructed by introducing its own movement information into the reconstruction model. The proposed method not only reconstructs multiple objects with individual movements, but can also reconstruct a specific object selected from among them.

This paper is organized as follows. Section 2 presents the principle of the traditional PSP. In Section 3, the multiple objects are tracked individually and the reconstruction model is described. In Section 4, experimental results are given to verify the effectiveness of the proposed method. Section 5 concludes this paper.

2. Principle of the traditional PSP

Assume $N$-step PSP is employed. The captured fringe patterns can be expressed as

$${I_n}(x,y) = a + b\cos (\phi (x,y) + 2\pi (n - 1)/N)$$
where $n = 1,2,3, \cdots ,N$; ${I_n}(x,y)$ is the intensity distribution of the $n$th fringe pattern; $a$ is the ambient light intensity and $b$ is the amplitude of the sinusoidal fringe intensity; $\phi (x,y)$ is the phase map to be retrieved.

The phase map $\phi (x,y)$ can be obtained by

$$\phi (x,y) = \arctan \frac{{ - \sum\limits_{n = 1}^N {{I_n}(x,y)\sin 2\pi (n - 1)/N} }}{{\sum\limits_{n = 1}^N {{I_n}(x,y)\cos 2\pi (n - 1)/N} }}$$

The phase map obtained by $\arctan ({\cdot} )$ is wrapped into $[-\pi, \pi]$, which leads to ambiguity among different fringes. Phase unwrapping [18] can be applied to remove the phase discontinuities, yielding an unwrapped phase map with monotonic values. Finally, the object is reconstructed based on the phase information and the calibration parameters.
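For concreteness, the wrapped phase computation of Eq. (2) can be sketched in a few lines of NumPy (an illustrative implementation added for this edit, not part of the original paper); `np.arctan2` is used so that the full $[-\pi, \pi]$ range is recovered:

```python
import numpy as np

def wrapped_phase(patterns):
    """Wrapped phase map from N phase-shifted fringe patterns, per Eq. (2).

    patterns: sequence of N images I_n(x, y) with phase shifts 2*pi*(n-1)/N.
    """
    I = np.asarray(patterns, dtype=np.float64)      # shape (N, H, W)
    N = I.shape[0]
    shifts = 2 * np.pi * np.arange(N) / N           # 2*pi*(n-1)/N
    num = -np.tensordot(np.sin(shifts), I, axes=1)  # -sum_n I_n sin(...)
    den = np.tensordot(np.cos(shifts), I, axes=1)   #  sum_n I_n cos(...)
    return np.arctan2(num, den)                     # wrapped into [-pi, pi]
```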

From Eq. (2) it can be seen that the phase information of the object is retrieved from the intensity values of the object across the different fringe patterns, so the object is required to remain static between patterns. For a specific point on the object, the phase shift amounts among the fringe patterns are determined in advance. Therefore, when the object moves during the capture of the fringe patterns, not only is the position of the object mismatched, but the designed phase shift amount is also violated, and errors are introduced into the result.

3. Proposed method

3.1 Movement tracking for multiple objects

In order to reconstruct multiple objects with different movements, the objects need to be tracked individually. The number of objects and the boundary of each object in the scene are identified first. Then, the high-speed Kernelized Correlation Filter (KCF) tracking algorithm is employed to track the movement of each object [19]. The identified objects are used as the targets, and the object images in the successive fringe pattern after movement are employed as the input of the KCF; the output of the KCF is the corresponding object boundary in the new fringe pattern. Finally, with the corresponding object images defined by the boundaries, the SIFT algorithm is applied to retrieve the feature points and their correspondences. The rotation matrix and translation vector that describe the object movement mathematically are then obtained by the singular value decomposition (SVD) algorithm from the coordinates of the feature points. The above operations are repeated for all the objects to be measured, resulting in individual tracking of every object. The details are given below.

Assume three objects are measured in the scene, as shown in Fig. 1(a). The bounding boxes of the different objects are identified first, before the measurement (Fig. 1(b)). Each bounding box contains the whole object, and the captured image is thus divided into four parts in Fig. 1(b). Therefore, the captured fringe pattern expressed in Eq. (1) can be rewritten as:

$${I_n}(x,y) = I_n^1(x,y) \cup I_n^2(x,y) \cup I_n^3(x,y) \cup I_n^r(x,y)$$
where “$\cup$” is the merge operation; $I_n^1(x,y)$, $I_n^2(x,y)$, $I_n^3(x,y)$ and $I_n^r(x,y)$ are the areas of interest of object 1, object 2, object 3, and the remaining area, respectively.

Fig. 1. Multiple objects tracking. (a) The three objects to be measured; (b) the bounding box for each object; (c)–(d) the movement tracking by KCF between the images of the objects before and after movement.

Two successive frames $I_1(x,y)$ and $I_2(x,y)$ are captured, as shown in Figs. 1(c) and 1(d). Compared with Fig. 1(c), the objects have moved differently in Fig. 1(d). In order to track each object's movement individually, the whole object included in its bounding box is used as the target, and the KCF is employed to track the movement between the target and the successive object image. In Fig. 1(c), the identified objects in $I_1(x,y)$ are employed as templates and are compared with the whole image $I_2(x,y)$ successively based on KCF. All the objects in $I_2(x,y)$ are then identified with their new positions and new bounding boxes, as shown in Fig. 1(d). The objects identified in $I_2(x,y)$ can be used as the new templates to track the objects in the following fringe patterns.
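As a minimal sketch of this tracking step (assuming OpenCV's KCF implementation; depending on the build, the factory is `cv2.TrackerKCF_create` or `cv2.legacy.TrackerKCF_create`, and some builds expect a 3-channel input), each identified bounding box can be propagated into the next frame as follows:

```python
import cv2

def track_objects(prev_img, boxes, next_img):
    """Propagate each object's bounding box from prev_img into next_img."""
    new_boxes = []
    for box in boxes:                       # box = (x, y, w, h) in pixels
        tracker = cv2.TrackerKCF_create()   # or cv2.legacy.TrackerKCF_create()
        tracker.init(prev_img, tuple(int(v) for v in box))  # object template
        ok, new_box = tracker.update(next_img)
        new_boxes.append(new_box if ok else box)  # keep old box on failure
    return new_boxes
```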

Based on the areas of interest obtained by KCF, each object is tracked between the fringe patterns before and after movement, and a rotation matrix and translation vector are employed to describe the movement mathematically. As only rigid objects and 2D movement are considered, the rotation matrix and translation vector can be obtained from the coordinates of at least three corresponding points on the object surface. The SIFT algorithm is applied between the tracked areas of the object before and after movement to obtain the feature points and their correspondences. Finally, with the corresponding feature points on the object, the SVD method is applied to retrieve the rotation matrix and translation vector. For the objects in Fig. 1, the rotation matrix and translation vector are obtained for object 1, object 2, and object 3, respectively.
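This step can be sketched with a hypothetical helper built on OpenCV SIFT and a Kabsch-style SVD fit; outlier rejection (e.g., a ratio test or RANSAC), which a practical implementation would include, is omitted for brevity:

```python
import cv2
import numpy as np

def rigid_transform_2d(img_before, img_after):
    """Estimate 2D rotation R and translation t such that
    points_after ~= R @ points_before + t, from SIFT correspondences."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_before, None)
    k2, d2 = sift.detectAndCompute(img_after, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).match(d1, d2)
    P = np.float64([k1[m.queryIdx].pt for m in matches])  # before movement
    Q = np.float64([k2[m.trainIdx].pt for m in matches])  # after movement
    cp, cq = P.mean(axis=0), Q.mean(axis=0)                # centroids
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))        # 2x2 cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```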

Please note that, for any tracking algorithm based on image intensity values, the ideal input is the object image without fringe patterns, since the fringes affect the accuracy of the tracking and feature point retrieval. Therefore, similarly to Ref. [15], the fringe patterns are projected in red and a color camera is used to capture them. The red component, which contains the fringe pattern information, is used to retrieve the phase; the blue component, which excludes the fringe patterns, provides the pure object image used for tracking and feature point retrieval. When a red object is reconstructed, the fringe patterns can instead be projected in blue and the red component used to track the object movement. In general, the color of the fringe pattern should differ strongly from the color of the object.
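Splitting the channels is straightforward in practice; a sketch with OpenCV (note OpenCV loads images in BGR order; the file name is only a placeholder):

```python
import cv2

bgr = cv2.imread("captured_frame.png")  # placeholder file name
fringe = bgr[:, :, 2]  # red channel: fringe patterns, used for phase retrieval
plain = bgr[:, :, 0]   # blue channel: fringe-free image, for KCF and SIFT
```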

3.2 Phase retrieval

This paper only considers objects with two-dimensional movement. The movement information, described by the rotation matrix and translation vector, can be employed to reconstruct the moving object with high accuracy [15]. The influence of the movement on the phase value is analyzed, and the reconstruction model describing the fringe pattern with movement is given. Then, a least-squares algorithm is employed to retrieve the correct phase information. The objects with different movements are reconstructed respectively by repeating these steps. The reconstruction process for object 1 is illustrated as follows; the detailed derivation can be found in Ref. [13]. The fringe pattern including the movement information of object 1 can be described as:

$$I_n^1(x,y) = a + b\cos \{ \phi [f_n^1(x,y),g_n^1(x,y)] + \Phi (x,y) + 2\pi (n - 1)/N\}$$
where $f_n^1(x,y)$ and $g_n^1(x,y)$ are functions of the rotation matrix and translation vector of object 1; $\phi ({\cdot} )$ is the phase map of the reference plane; $\Phi (x,y)$ is the phase variation caused by the object height. In Eq. (4), $I_n^1(x,y)$ and $\phi [f_n^1(x,y),g_n^1(x,y)]$ are known; $a$, $b$ and $\Phi (x,y)$ are unknown. Therefore, when $N \ge 3$, Eq. (4) can be solved by the following procedure.

Equation (4) can be rewritten as

$$I_n^1(x,y) = a + B(x,y)\cos \delta + C(x,y)\sin \delta$$
where $B(x,y) = b\cos \Phi (x,y)$, $C(x,y) = -b\sin \Phi (x,y)$, and $\delta = \phi [f_n^1(x,y),g_n^1(x,y)] + 2\pi (n - 1)/N$. Denoting the captured fringe pattern as $\tilde{I}_n^1(x,y)$, the sum of the squared errors for each pixel is
$$S(x,y) = \sum\limits_{n = 1}^N {{{[I_n^1(x,y) - \tilde{I}_n^1(x,y)]}^2}}$$
Minimizing Eq. (6) in the least-squares sense gives
$${\mathbf X}(x,y) = {{\mathbf A}^{ - 1}}(x,y){\mathbf B}(x,y)$$
where
$${\mathbf A}(x,y) = \left[ {\begin{array}{ccc} N&{\sum\limits_{n = 1}^N {\cos \delta } }&{\sum\limits_{n = 1}^N {\sin \delta } }\\ {\sum\limits_{n = 1}^N {\cos \delta } }&{\sum\limits_{n = 1}^N {{{\cos }^2}\delta } }&{\frac{1}{2}\sum\limits_{n = 1}^N {\sin 2\delta } }\\ {\sum\limits_{n = 1}^N {\sin \delta } }&{\frac{1}{2}\sum\limits_{n = 1}^N {\sin 2\delta } }&{\sum\limits_{n = 1}^N {{{\sin }^2}\delta } } \end{array}} \right], $$
$${\mathbf X}(x,y) = {\left[ {\begin{array}{ccc} a&{B(x,y)}&{C(x,y)} \end{array}} \right]^T}, $$
$${\mathbf B}(x,y) = {\left[ {\begin{array}{ccc} {\sum\limits_{n = 1}^N {\tilde{I}_n^1(x,y)} }&{\sum\limits_{n = 1}^N {\tilde{I}_n^1(x,y)\cos \delta } }&{\sum\limits_{n = 1}^N {\tilde{I}_n^1(x,y)\sin \delta } } \end{array}} \right]^T}$$

The unknown parameters $a$, $B(x,y)$ and $C(x,y)$ can be obtained by solving Eqs. (7)–(10). The phase information $\Phi (x,y)$ is then determined by

$$\Phi (x,y) = {\tan ^{ - 1}}[ - C(x,y)/B(x,y)]$$

The above process can be applied to each of the other objects with its own movement information.
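A per-pixel sketch of Eqs. (5)–(11) in NumPy (illustrative only; it assumes the motion-corrected phase terms $\delta$ have already been computed from the rotation matrix and translation vector following Ref. [13]):

```python
import numpy as np

def phase_with_motion(I_meas, delta):
    """Least-squares phase retrieval of Eqs. (5)-(11) for one object.

    I_meas: (N, H, W) captured fringe patterns of the object.
    delta:  (N, H, W) known terms phi[f_n(x,y), g_n(x,y)] + 2*pi*(n-1)/N.
    Returns Phi(x, y), the phase variation caused by the object height.
    """
    c, s = np.cos(delta), np.sin(delta)
    N = I_meas.shape[0]
    # 3x3 normal matrix of Eq. (8) and right-hand side of Eq. (10), per pixel;
    # note that (1/2) sum sin(2*delta) = sum sin(delta)*cos(delta).
    A = np.stack([
        np.stack([np.full(c.shape[1:], float(N)), c.sum(0), s.sum(0)], -1),
        np.stack([c.sum(0), (c * c).sum(0), (c * s).sum(0)], -1),
        np.stack([s.sum(0), (c * s).sum(0), (s * s).sum(0)], -1)], -2)
    B = np.stack([I_meas.sum(0), (I_meas * c).sum(0), (I_meas * s).sum(0)], -1)
    X = np.linalg.solve(A, B[..., None])[..., 0]  # X = [a, B(x,y), C(x,y)]
    return np.arctan2(-X[..., 2], X[..., 1])      # Eq. (11)
```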

3.3 Automated object identification

In order to achieve the automatic reconstruction of multiple objects, the objects should be identified automatically. The first step is to separate the objects from the background. In this paper, the Otsu method is employed to distinguish the target from the background; the result is shown in Fig. 2. It should be noted that the intensity difference between the object and the background should be large enough to achieve good performance.

Fig. 2. The object separation from the background based on the Otsu method. (a) The object image; (b) the separation result.

Normally, the object can be identified by directly finding its contour. However, as the Otsu method uses a threshold to separate the object from the background, some pixels on the object are classified as background, as shown in Fig. 2(b), which may divide the object into multiple parts and mislead the contour detection. A dilation operation connects the adjacent pixels and merges these parts into one. The effect of dilating Fig. 2(b) is shown in Fig. 3: the “noise” points on the object surface are removed and the object is identified correctly. Please note that, as the dilation operation extends the area of the object, the objects in the scene are required to keep a distance from each other.

Fig. 3. Effect of image dilation.

Finally, the bounding box for each object is derived from the obtained contour information, as shown in Fig. 4. The object in each bounding box is used as the template to track the movement in the following fringe patterns.
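The identification pipeline of this subsection (Otsu threshold, dilation, contour extraction, bounding boxes) can be sketched with OpenCV as follows; the kernel size, iteration count, and minimum-area filter are illustrative values, not those used in the paper:

```python
import cv2

def identify_objects(gray, dilate_iter=2, min_area=500):
    """Automatic object identification on a fringe-free grayscale image.

    Returns one bounding box (x, y, w, h) per detected object.
    """
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.dilate(mask, kernel, iterations=dilate_iter)  # merge split parts
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]  # drop residual noise blobs
```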

Fig. 4. Identification of the object by its bounding box.

In summary, the reconstruction of multiple objects with individual movements can be implemented by the steps shown in Fig. 5 (a code sketch of the complete pipeline follows the list):

Fig. 5. The flow chart of the multiple-object reconstruction.

Step 1: Project the fringe patterns in red onto the object surfaces and capture them with a color camera;

Step 2: Identify the objects and determine the bounding box for each object based on the blue component of the captured image;

Step 3: Use the identified objects in the bounding boxes as the targets, apply the KCF algorithm to track the object movement between the targets and the successive captured fringe pattern, and obtain the corresponding bounding boxes in the successive fringe pattern;

Step 4: Obtain the feature points and their correspondences between the targets and their corresponding bounding boxes based on the SIFT algorithm;

Step 5: Determine the rotation matrix and translation vector for each object;

Step 6: Retrieve the phase information in the red component by Eq. (11) with the movement information of each object;

Step 7: Reconstruct each object with its individual movement information until all the objects are reconstructed.
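Putting the steps together, a hypothetical driver might look as follows; it reuses the helper sketches above, and the caller-supplied `motion_corrected_delta` stands in for the computation of the phase terms $\delta_n$ from the $(R, t)$ pairs per Ref. [13], which is not spelled out here. Note that the $(R, t)$ estimates below are in crop coordinates; a full implementation would offset the translations by the box origins:

```python
import numpy as np

def reconstruct_scene(frames, motion_corrected_delta):
    """Sketch of Steps 1-7. `frames` is a sequence of captured BGR images;
    `motion_corrected_delta` is a caller-supplied callable implementing the
    delta_n computation of Ref. [13]."""
    blue = [f[:, :, 0] for f in frames]            # fringe-free tracking images
    red = np.asarray([f[:, :, 2] for f in frames], np.float64)  # fringe images
    phases = []
    for box in identify_objects(blue[0]):          # Step 2
        motions = []
        for k in range(1, len(frames)):            # Steps 3-5
            (new_box,) = track_objects(blue[k - 1], [box], blue[k])
            x, y, w, h = (int(v) for v in box)
            nx, ny, nw, nh = (int(v) for v in new_box)
            R, t = rigid_transform_2d(blue[k - 1][y:y + h, x:x + w],
                                      blue[k][ny:ny + nh, nx:nx + nw])
            motions.append((R, t))
            box = new_box
        delta = motion_corrected_delta(motions, red.shape)   # per Ref. [13]
        phases.append(phase_with_motion(red, delta))         # Step 6, Eq. (11)
    return phases                                  # Step 7: one map per object
```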

4. Experiments

The experimental system employs a color camera (Allied Vision Manta 504C, resolution 2452 × 2056) and a projector (Wintech DLP PRO 4500, resolution 912 × 1140) to reconstruct the objects to be measured. Two car models, shown in Fig. 6(a), are used to verify the performance of the proposed method. 3-step PSP with red fringe patterns is used, and the objects are moved individually in two dimensions (rotation and translation on the reference plane). The captured fringe patterns of the PSP are shown in Figs. 6(b)–6(d).

Fig. 6. Two car models used in the experiment. (a) The car model image without fringe patterns; (b)–(d) the captured moving-object fringe patterns of 3-step PSP.

The fringe pattern information and the object information are obtained by separating the red and blue components of the captured image. For the captured image in Fig. 7(a), the red component contains only the fringe pattern information, as presented in Fig. 7(b). In the blue component, the red fringe patterns are filtered out and the pure object image is obtained, as shown in Fig. 7(c). Please note that the blue component captures only the blue part of the ambient light; when the ambient light is weak, blue background light should be added to the projected fringe pattern.

Fig. 7. Images in different components. (a) The captured image of the object; (b) the red component of Fig. 7(a); (c) the blue component of Fig. 7(a).

With the pure object image in the blue component, the objects are identified and their bounding boxes obtained. A rectangular bounding box, shown in Fig. 8(a), is identified to enclose the object area. Then, the KCF algorithm is applied to the image in the bounding box and the successively captured image; the output of the KCF is the new bounding box in the successive image, as shown in Fig. 8(b). The object is then tracked by repeating this procedure. Please note that when multiple objects are reconstructed, the bounding boxes of all the objects are required to be identified.

Fig. 8. Object identification and tracking. (a) The bounding box identified before the measurement; (b) the tracking result with KCF in the successive image.

The KCF algorithm only identifies the object area between successive images. In order to describe the movement by a rotation matrix and translation vector, corresponding points on the object before and after movement are required. The SIFT algorithm is applied to the images of the object before and after movement, and the feature points are extracted as shown in Fig. 9. For each specific object, the corresponding points in the bounding box are used to calculate the rotation matrix and translation vector based on the SVD algorithm.

Fig. 9. The feature points obtained by the SIFT algorithm and their correspondences.

With the obtained rotation matrices and translation vectors, the reconstruction is implemented by the proposed algorithm and by the traditional PSP algorithm. The results are shown in Fig. 10: Figs. 10(a) and 10(b) are the reconstruction results obtained by the proposed algorithm, and Figs. 10(c) and 10(d) are the results with the traditional PSP algorithm. It is apparent that errors are introduced by the object movement with the traditional PSP algorithm. The accuracy is increased significantly with the proposed method, and the errors caused by the movement are removed.

Fig. 10. The reconstruction results with the traditional PSP and the proposed algorithm. (a) The front view of the result with the proposed algorithm; (b) the mesh display of Fig. 10(a); (c) the front view of the result with the traditional PSP; (d) the mesh display of Fig. 10(c).

In order to evaluate the accuracy of the proposed algorithm, the RMS (root mean square) measurement error is calculated for the above experiment. The traditional 6-step PSP algorithm is applied to the static objects, and the reconstructed result is used as the reference. For the result of the proposed method shown in Figs. 10(a) and 10(b), the RMS error is 0.0773 mm; for the results of the traditional PSP shown in Figs. 10(c) and 10(d), the RMS error is 8.981 mm. A significant improvement is thus achieved by the proposed algorithm for the reconstruction of multiple moving objects with individual movements. The residual error is shown in Fig. 11. It can be seen that, with the traditional PSP, the movement causes serious errors where the object has sharp height changes (such as at the edges of the object).
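The RMS figure reported here is a standard computation; for reference, a minimal sketch with hypothetical height maps (reconstructed result and static 6-step PSP reference, both in millimetres):

```python
import numpy as np

def rms_error(z, z_ref):
    """RMS error between a reconstructed height map and the reference (mm)."""
    return np.sqrt(np.nanmean((z - z_ref) ** 2))  # over valid pixels
```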

Fig. 11. The residual error with the traditional PSP and the proposed algorithm. (a) The residual error of the proposed algorithm; (b) the residual error of the traditional PSP.

Please note that, when there are multiple objects in the scene and not all of them need to be reconstructed, the proposed algorithm can reconstruct the specific objects and ignore the others. This is achieved by selecting the objects of interest after the object identification at the beginning of the reconstruction. During the reconstruction, there is no limitation on the movement speed; however, the capture speed should be fast enough to obtain clear images of the object.

5. Conclusion

This paper proposes an automated approach to reconstruct multiple objects with individual movements based on PSP; the movements of the objects in the scene can differ from each other. First, the objects are identified by their contours and the bounding box for each object is obtained. Then, based on the KCF algorithm, the object image within the bounding box is used as the template to track the object movement in the following fringe pattern images. The corresponding feature points of the object before and after movement are obtained by the SIFT algorithm, and the rotation matrix and translation vector describing the object movement are calculated. Finally, with the movement information of each object, the objects with individual movements are reconstructed with high accuracy.

Funding

National Natural Science Foundation of China (61705060, U1904120); Science and Technology Department of Henan Province (202102210314).

Acknowledgments

The author would like to thank Dr. Shuming Jiao (Nanophotonics Research Center, Shenzhen University) for valuable discussions on moving object tracking. The exchange of ideas helped to inspire the work in this paper.

Disclosures

The authors declare no conflicts of interest.

References

1. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018).

2. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Laser Eng. 106, 119–131 (2018).

3. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013).

4. Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting Gray-code light,” Opt. Express 27(16), 22631–22644 (2019).

5. S. Jiao, M. Sun, Y. Gao, T. Lei, Z. Xie, and X. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019).

6. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Laser Eng. 103, 127–138 (2018).

7. J. Flores, M. Stronik, A. Munoz, G. Torales, S. Ordones, and A. Cruz, “Dynamic 3D shape measurement by iterative phase shifting algorithms and colored fringe patterns,” Opt. Express 26(10), 12403–12414 (2018).

8. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express 26(10), 12632–12637 (2018).

9. Y. Wang, Z. Liu, C. Jiang, and S. Zhang, “Motion induced phase error reduction using a Hilbert transform,” Opt. Express 26(26), 34224–34235 (2018).

10. M. Duan, Y. Jin, C. Xu, X. Xu, C. Zhu, and E. Chen, “Phase-shifting profilometry for the robust 3-D shape measurement of moving objects,” Opt. Express 27(16), 22100–22115 (2019).

11. X. Liu, T. Tao, Y. Wan, and J. Kofman, “Real-time motion-induced-error compensation in 3D surface-shape measurement,” Opt. Express 27(18), 25265–25279 (2019).

12. J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, “Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry,” Opt. Express 27(3), 2713–2731 (2019).

13. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013).

14. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39(23), 6715–6718 (2014).

15. L. Lu, Y. Ding, Y. Luan, Y. Yin, Q. Liu, and J. Xi, “Automated approach for the surface profile measurement of moving objects based on PSP,” Opt. Express 25(25), 32120–32131 (2017).

16. L. Lu, Y. Yin, Z. Su, X. Ren, Y. Luan, and J. Xi, “General model for phase shifting profilometry with an object in motion,” Appl. Opt. 57(36), 10364–10369 (2018).

17. L. Lu, Z. Jia, Y. Luan, and J. Xi, “Reconstruction of isolated moving objects with high 3D frame rate based on phase shifting profilometry,” Opt. Commun. 438, 61–66 (2019).

18. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016).

19. J. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-Speed Tracking with Kernelized Correlation Filters,” IEEE Trans. Pattern Anal. Mach. Intell. 37(3), 583–596 (2015).
