Optica Publishing Group

Motion-induced error compensation for phase shifting profilometry

Open Access

Abstract

This paper proposes a novel method to substantially reduce motion-induced phase error in phase-shifting profilometry. We first estimate the motion of an object from the difference between two subsequent 3D frames. Then, by leveraging the projector’s pinhole model, we determine the motion-induced phase shift error from the estimated motion. A generic phase-shifting algorithm that accounts for phase shift error is then used to compute the phase. Experiments demonstrate that the proposed algorithm effectively improves measurement quality by compensating for the phase shift error introduced by rigid and nonrigid motion on a standard single-projector, single-camera digital fringe projection system.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Phase-shifting profilometry (PSP) is widely employed in 3D shape measurement because of its accuracy, resolution, speed, and robustness to noise. In general, PSP uses multiple fringe images with precisely known phase shifts to accurately recover the phase, and thus the measured scene should remain stationary while these phase-shifted fringe images are being captured.

For practical PSP systems, phase shifts are not always precisely known, and the unknown phase shifts introduce measurement errors. In a conventional laser interferometry system, the phase shift error could be introduced by the displacement error of the mirror driven by a piezoelectric device. The phase-shift error, in general, could also be introduced by motion of the object, i.e., the object moves during the time of capturing the required number of phase-shifted fringe patterns for phase determination. The former introduces homogeneous phase shift error that remains the same across the entire measurement surface, while the latter introduces nonhomogeneous phase shift error that could vary from point to point.

Researchers have developed methods to address the homogeneous phase-shift error problem. Wang and Han [1] proposed to recover the phase from randomly phase-shifted interferograms by iteratively solving for the phase and the actual phase shifts using least-squares optimization. This method has relatively stable and fast convergence, yet assumes that the background intensity and modulation amplitude have no pixel-to-pixel variation. By analyzing the region where two images with different phase shifts have zero intensity difference, Guo et al. [2] developed a method to determine the actual phase shift. Gao et al. [3] proposed a method to extract the actual phase shift in interferometry with arbitrary unknown phase shifts. This method first computes a rough estimate of the phase shift based on the statistical properties of the object’s phase, and then employs an iterative approach to further refine the estimate.

Researchers have also attempted to address the more complex nonhomogeneous phase-shift error problems, especially those introduced by object motion, which are common in practice. Lu et al. [4] proposed a method to reduce artifacts caused by planar motion parallel to the imaging plane: they placed a few markers on the object and analyzed the movement of the markers to estimate its rigid-body motion. Lu et al. [5] later improved this method [4] to handle error induced by translation in the direction of object height; in the new method, the motion is estimated using the arbitrary phase shift extraction method developed by Wang and Han [1]. However, this method only works well for homogeneous background intensity and modulation amplitude. Feng et al. [6] proposed to apply the homogeneous phase-shift extraction method of Gao et al. [3] to the nonhomogeneous motion artifact problem by segmenting objects with different rigid shifts. However, this approach still assumes that the phase shift error within a single segmented object is homogeneous, and as a result it may not work well for dynamically deformable objects where each point can introduce a different phase shift error.

This paper proposes a novel method to substantially reduce the nonhomogeneous motion-induced phase shift error. We first estimate the motion of an object from the difference between two subsequent 3D frames. We then take advantage of the projector’s pinhole model to determine the motion-induced phase shift error from the estimated motion. A generic phase-shifting algorithm considering phase shift error is then utilized to compute the phase. Since the proposed method treats each point independently, we experimentally demonstrate that it substantially reduces motion artifacts even for dynamically deformable objects (e.g., human facial expressions).

2. Principle

2.1. Phase-shifting profilometry

Assume the k-th fringe image of a generic M-step phase-shifting algorithm can be described as,

$$I_k(u^c,v^c) = I'(u^c,v^c) + I''(u^c,v^c)\cos\left[\Phi(u^c,v^c) - \delta_k\right], \tag{1}$$
where (uc, vc) denotes the pixel location in the camera image coordinates, I′(uc, vc) is the average intensity, I″(uc, vc) the intensity modulation, δk is the phase shift for the k-th frame, and Φ(uc, vc) is the phase to be solved for. If we define
$$\begin{bmatrix} a_0(u^c,v^c) \\ a_1(u^c,v^c) \\ a_2(u^c,v^c) \end{bmatrix} = \begin{bmatrix} M & \sum\cos\delta_k & \sum\sin\delta_k \\ \sum\cos\delta_k & \sum\cos^2\delta_k & \sum\cos\delta_k\sin\delta_k \\ \sum\sin\delta_k & \sum\cos\delta_k\sin\delta_k & \sum\sin^2\delta_k \end{bmatrix}^{-1} \begin{bmatrix} \sum I_k \\ \sum I_k\cos\delta_k \\ \sum I_k\sin\delta_k \end{bmatrix}, \tag{2}$$
then the phase can be calculated as [7]
$$\phi(u^c,v^c) = \tan^{-1}\left[\frac{a_2(u^c,v^c)}{a_1(u^c,v^c)}\right]. \tag{3}$$
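As a concrete sketch, the least-squares solution of Eqs. (2)–(3) can be implemented for arbitrary known phase shifts in a few lines (a minimal NumPy sketch; the function name and array layout are our own, not from the paper):

```python
import numpy as np

def wrapped_phase(images, deltas):
    """Least-squares wrapped phase from M fringe images with known
    (possibly non-uniform) phase shifts, following Eqs. (2)-(3).

    images: (M, H, W) array of intensities I_k
    deltas: (M,) array of phase shifts delta_k in radians
    Returns the wrapped phase phi in (-pi, pi].
    """
    images = np.asarray(images, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    M = len(deltas)
    c, s = np.cos(deltas), np.sin(deltas)
    # 3x3 normal matrix of Eq. (2); sums run over the M frames
    A = np.array([[M,       c.sum(),       s.sum()],
                  [c.sum(), (c * c).sum(), (c * s).sum()],
                  [s.sum(), (c * s).sum(), (s * s).sum()]])
    # Right-hand side of Eq. (2), one 3-vector per pixel
    b = np.stack([images.sum(axis=0),
                  np.tensordot(c, images, axes=1),
                  np.tensordot(s, images, axes=1)])
    # Solve for a0, a1 = I'' cos(Phi), a2 = I'' sin(Phi) at every pixel
    a = np.linalg.solve(A, b.reshape(3, -1))
    # Eq. (3): phi = atan2(a2, a1)
    return np.arctan2(a[2], a[1]).reshape(images.shape[1:])
```

Because the shifts enter only through the normal matrix, the same routine works for ideal shifts and for the motion-corrected shifts discussed in Sec. 2.2.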

The arctangent function in Eq. (3) gives the wrapped phase within (−π, π] with 2π discontinuities. The desired continuous phase Φ(uc, vc) can be obtained by applying a phase unwrapping algorithm to determine k(uc, vc), the number of 2π’s to be added at each point, that is

$$\Phi(u^c,v^c) = \phi(u^c,v^c) + k(u^c,v^c) \times 2\pi, \tag{4}$$
where k(uc, vc) is an integer often referred to as the fringe order.
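Equation (4) leaves the fringe order open; one common choice (and the family used in this paper via three-frequency temporal unwrapping) is to pick k against a continuous reference phase, e.g. a lower-frequency phase scaled to the high frequency. A minimal sketch (the helper name is ours; it assumes the reference lies within π of the true phase):

```python
import numpy as np

def unwrap_with_reference(phi, phi_ref):
    """Recover Phi = phi + 2*pi*k as in Eq. (4), choosing the integer
    fringe order k that brings the unwrapped phase closest to a
    continuous reference phase phi_ref (temporal phase unwrapping)."""
    k = np.round((phi_ref - phi) / (2 * np.pi))
    return phi + 2 * np.pi * k
```

The reference itself can come from a unit-frequency pattern, whose phase is already continuous, multiplied up to the high fringe frequency.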

2.2. Proposed error compensation method

Phase-shifting profilometry works well if ϕ(uc, vc) can be accurately determined using Eq. (3), which requires the phase shift δk to be precisely known. However, if the object moves between frames, the actual phase shift $\bar{\delta}_k$ differs from the ideal phase shift δk by an error ϵk, i.e.,

$$\bar{\delta}_k(u^c,v^c) = \delta_k(u^c,v^c) + \epsilon_k(u^c,v^c), \tag{5}$$
where ϵk(uc, vc) is motion dependent and thus can vary from point to point.

Figure 1 illustrates how the motion of a deformable object introduces phase shift error. The camera image point C1 corresponds to point S1 on the object surface if there is no motion, but actually corresponds to $\bar{S}_1$ if the object moves. The corresponding projected fringe pattern points are P1 and $\bar{P}_1$, respectively. These two projector points correspond to two different phase values Φ1 and $\bar{\Phi}_1$; we define the difference between these phase values as the phase shift error,

$$\epsilon_1 = \bar{\Phi}_1 - \Phi_1. \tag{6}$$


Fig. 1 Object motion could introduce nonhomogeneous phase shift error.


Similarly, a phase shift error also occurs for another camera image point C2, whose corresponding object surface point is S2 if the object does not move and $\bar{S}_2$ if it does. The phase difference between the corresponding projector points P2 and $\bar{P}_2$ is,

$$\epsilon_2 = \bar{\Phi}_2 - \Phi_2. \tag{7}$$

In general, ϵ1 differs from ϵ2 for different image points, and thus the phase shift error introduced by object motion is nonhomogeneous.

In order to determine the phase shift error ϵ from object motion, we propose to utilize the pin-hole model of a projector [8], which describes the relation between the 3D world coordinates (xw, yw, zw) and the 2D projector coordinate (up, vp) as

$$s^p \begin{Bmatrix} u^p \\ v^p \\ 1 \end{Bmatrix} = \mathbf{A}^p \left[\mathbf{R}^p, \mathbf{t}^p\right] \begin{Bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{Bmatrix} = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} \\ P_{21} & P_{22} & P_{23} & P_{24} \\ P_{31} & P_{32} & P_{33} & P_{34} \end{bmatrix} \begin{Bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{Bmatrix}, \tag{8}$$
where sp is a scaling factor, Rp and tp are the rotation matrix and the translation vector between the world coordinate system and the lens coordinate system, and Ap is the intrinsic parameter matrix describing the relation between the lens coordinate system and its image plane coordinate system. These parameters can be determined by structured light system calibration [8].

From the pin-hole model described in Eq. (8), we can see that if a point at (xw, yw, zw) moves by Δxw in the x direction, the corresponding point on the projector plane changes accordingly, from (up, vp) to $(\bar{u}^p, \bar{v}^p)$. Mathematically, the relationship between movement along the x direction and the corresponding change on the projector plane can be calculated as,

$$\frac{\partial u^p}{\partial x^w} = \frac{P_{11}\left(P_{32} y^w + P_{33} z^w + P_{34}\right) - P_{31}\left(P_{12} y^w + P_{13} z^w + P_{14}\right)}{\left(P_{31} x^w + P_{32} y^w + P_{33} z^w + P_{34}\right)^2}. \tag{9}$$
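This derivative follows from the quotient rule applied to $u^p = (P_{11}x^w + P_{12}y^w + P_{13}z^w + P_{14})/(P_{31}x^w + P_{32}y^w + P_{33}z^w + P_{34})$. A quick numerical sanity check of the closed form against central finite differences (the projection matrix here is randomly generated, not a calibrated one):

```python
import numpy as np

def up(P, x, y, z):
    """u^p coordinate of the pinhole projection in Eq. (8)."""
    return (P[0, 0]*x + P[0, 1]*y + P[0, 2]*z + P[0, 3]) / \
           (P[2, 0]*x + P[2, 1]*y + P[2, 2]*z + P[2, 3])

def dup_dxw(P, x, y, z):
    """Closed-form derivative of Eq. (9): sensitivity of the projector
    column u^p to world-coordinate motion along x^w."""
    den = P[2, 0]*x + P[2, 1]*y + P[2, 2]*z + P[2, 3]
    return (P[0, 0]*(P[2, 1]*y + P[2, 2]*z + P[2, 3])
            - P[2, 0]*(P[0, 1]*y + P[0, 2]*z + P[0, 3])) / den**2
```

The same quotient-rule pattern gives the partials with respect to yw and zw used below.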

Similarly, motion along the y and z directions results in corresponding point changes on the projector plane. If the phase varies along the up direction on the projector, remains constant along the vp direction, and the fringe period is λ pixels on the projector, then the overall phase shift error induced by motion can be mathematically represented as,

$$\epsilon(u^c,v^c) = \frac{2\pi}{\lambda}\left(\frac{\partial u^p}{\partial x^w}\Delta x^w + \frac{\partial u^p}{\partial y^w}\Delta y^w + \frac{\partial u^p}{\partial z^w}\Delta z^w\right). \tag{10}$$

This equation indicates that the motion-induced phase-shift error for a given point (xw, yw, zw) can be determined if the motion of that point is known. In this research, we propose to estimate the object motion as,

$$\Delta x^w = \frac{\bar{x}^w - x^w}{N}; \quad \Delta y^w = \frac{\bar{y}^w - y^w}{N}; \quad \Delta z^w = \frac{\bar{z}^w - z^w}{N}, \tag{11}$$
where (xw, yw, zw) are the coordinates of a point in the current 3D frame, $(\bar{x}^w, \bar{y}^w, \bar{z}^w)$ are the coordinates of the same point in the subsequent 3D frame, and N is the number of frames between the beginnings of the two sequences of high-frequency fringe images that ultimately provide the wrapped phase for two successive 3D measurements. Here we assume that the object motion is slow relative to the camera speed, so that $(u^c, v^c) \approx (\bar{u}^c, \bar{v}^c)$ and the object moves at an approximately constant speed during the capture of two successive 3D frames.
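Given the per-frame motion from Eq. (11), the per-point error of Eq. (10) can be sketched numerically, with central finite differences of the pinhole projection standing in for the analytic partials like Eq. (9) (a sketch under our own variable names; P would come from calibration):

```python
import numpy as np

def phase_shift_error(P, X, dX, wavelength, h=1e-6):
    """Motion-induced phase-shift error of Eq. (10).

    P: 3x4 projection matrix from Eq. (8)
    X: world point (x_w, y_w, z_w)
    dX: estimated per-frame motion from Eq. (11)
    wavelength: fringe period on the projector, in pixels
    """
    X = np.asarray(X, dtype=float)

    def up(Xp):
        # u^p of the pinhole projection, Eq. (8)
        return (P[0, :3] @ Xp + P[0, 3]) / (P[2, :3] @ Xp + P[2, 3])

    # central differences approximate the partials of u^p w.r.t. x, y, z
    grad = np.array([(up(X + h * e) - up(X - h * e)) / (2 * h)
                     for e in np.eye(3)])
    # Eq. (10): convert the projector-column displacement to radians
    return 2 * np.pi / wavelength * float(grad @ np.asarray(dX, dtype=float))
```

In practice this would be evaluated per pixel, with X and dX taken from the two successive 3D reconstructions.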

However, this phase-shift error estimate is only approximate. To further improve accuracy, we iteratively apply the same method to the updated 3D reconstructions from the previous step(s) until the algorithm converges; our experiments found that this process typically converges within 2 or 3 iterations. The estimated phase-shift error determines the actual phase shift $\bar{\delta}_k$ in Eq. (5), which is then used to calculate the phase using Eqs. (2)–(3).
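The effect of feeding the corrected shifts back into Eqs. (2)–(3) can be illustrated on a single pixel: intensities generated with erroneous shifts are demodulated once with the ideal shifts and once with the corrected ones (a toy sketch with made-up numbers; ls_phase is our own single-pixel helper implementing Eq. (2)):

```python
import numpy as np

def ls_phase(I, deltas):
    """Least-squares phase of Eqs. (2)-(3) for a single pixel."""
    c, s = np.cos(deltas), np.sin(deltas)
    A = np.array([[len(deltas), c.sum(), s.sum()],
                  [c.sum(), (c * c).sum(), (c * s).sum()],
                  [s.sum(), (c * s).sum(), (s * s).sum()]])
    b = np.array([I.sum(), (I * c).sum(), (I * s).sum()])
    a0, a1, a2 = np.linalg.solve(A, b)
    return np.arctan2(a2, a1)

phi_true = 1.2
deltas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # ideal three-step shifts
eps = np.array([0.0, 0.15, 0.30])                       # motion errors, Eq. (5)
I = 0.5 + 0.4 * np.cos(phi_true - (deltas + eps))       # what the camera sees
naive = ls_phase(I, deltas)                             # ignores the motion
corrected = ls_phase(I, deltas + eps)                   # uses the actual shifts
```

With the actual shifts the three equations are consistent and the phase is recovered exactly; with the ideal shifts a residual phase error remains, which is the motion artifact the proposed method removes.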

3. Experiment

We built a PSP system to evaluate the performance of our proposed method. The system includes a complementary metal-oxide-semiconductor (CMOS) camera (model: PointGrey Grasshopper3 GS3-U3-23S6M) with an 8 mm lens (model: Computar M0814-MP2) and a digital light processing (DLP) development kit (model: LightCrafter 4500). The projector’s resolution is 912 × 1140 pixels, and the camera’s resolution was set to 640 × 480 pixels. The system was calibrated using the method described by Li et al. [9]. Both the projector and the camera were set to 120 Hz. We employed a three-step phase-shifting algorithm for wrapped phase retrieval and a three-frequency temporal phase-unwrapping algorithm.

We evaluated the performance of the proposed motion error compensation method by measuring a moving sphere with a diameter of 79.2 mm. For this experiment, the sphere was moving at approximately 80 mm/s. Visualization 1 shows all 3D frames of the measurement sequence, and Fig. 2 shows one representative 3D frame. Figure 2(a) shows the raw 3D result without our proposed method. As expected, the reconstructed sphere surface is not smooth, because the 3D shape measurement speed is not sufficient to keep up with the motion of the object. After applying our proposed phase-shift error compensation algorithm, the quality was drastically improved, as shown in Fig. 2(b).


Fig. 2 Measurement result of a moving sphere (associated with Visualization 1). (a) 3D results from standard phase-shifting method; (b) 3D result using our proposed phase-shift error compensation method; (c) error map of the result shown in (a) (mean 0.482 mm, standard deviation 0.209 mm); (d) error map of the result shown in (b) (mean 0.032 mm, standard deviation 0.037 mm).


To quantitatively evaluate the improvement achieved by our proposed method, we compared the measurement results with an ideal sphere. Figures 2(c) and 2(d) show the corresponding error maps, i.e., the differences between the measured data and the ideal sphere. Before error compensation, the mean error is 0.482 mm with a standard deviation of 0.209 mm. After applying our phase-shift error compensation method, the mean error is reduced to 0.032 mm and the standard deviation to 0.037 mm.

To verify that our proposed method also works for geometry more complex than a smooth surface, we measured a surface with small, detailed features. Figure 3 shows the results of this experiment. Figure 3(a) shows a photograph of the object, and Fig. 3(b) shows the result without our proposed phase-shift error compensation method; the measured surface exhibits vertical stripes introduced by motion. After applying our proposed method, the vertical stripes almost disappear, as shown in Fig. 3(c). To visually verify the performance against a ground truth, we also measured the same object while it was stationary, shown in Fig. 3(d). Visually, our proposed method produced the same result as the ground truth, demonstrating that it works well for non-smooth objects.


Fig. 3 Experiment results of an object with complex geometry. (a) the photograph of the object; (b) 3D result without employing motion error compensation while the object is moving; (c) 3D result from proposed method while the object is moving; (d) 3D result while the object is stationary (i.e., ground truth).


We also evaluated the performance of our proposed method to measure dynamically deformable objects. In this experiment, we measured human facial expressions. Figure 4 shows one representative 3D frame, and Visualization 2 shows the entire video sequence. Figure 4(a) shows the 3D reconstruction without adopting our phase-shift error compensation algorithm. This result clearly shows measurement errors (non-smooth surface geometry). We then employed our error compensation algorithm to process the same frame, and Fig. 4(b) shows the result. The motion artifacts are less obvious and the measurement quality is significantly improved. This experiment demonstrated that our proposed method can successfully alleviate motion artifacts even for dynamically deformable objects with complex surface geometry.


Fig. 4 Experiment results of a dynamically deformable complex object, human facial expression (associated with Visualization 2). (a) 3D result without employing our proposed method; (b) 3D result after employing our proposed phase shift error compensation algorithm; (c) texture mapped 3D result of (a); (d) texture mapped 3D result of (b).


In addition, we noticed that our proposed method can also significantly improve texture quality. Figure 4(c) shows the texture of the frame before applying our phase-shift error compensation method, showing some vertical stripes on the image. Figure 4(d) shows the texture after applying our phase-shift error compensation method, and those stripes almost disappear. All these experimental data clearly demonstrate the effectiveness of our proposed motion error compensation method.

4. Conclusion

This paper has presented a novel method to reduce the nonhomogeneous phase shift error caused by object motion. The principle behind the proposed method was elucidated, and our experimental results demonstrated that it can effectively compensate for motion-induced phase-shift error and enhance measurement quality for objects undergoing rigid motion as well as dynamically deformable objects.

Funding

National Science Foundation (NSF) (CMMI-1531048).

References and links

1. Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. 29, 1671–1673 (2004).

2. C.-S. Guo, B. Sha, Y.-Y. Xie, and X.-J. Zhang, “Zero difference algorithm for phase shift extraction in blind phase-shifting holography,” Opt. Lett. 39, 813–816 (2014).

3. P. Gao, B. Yao, N. Lindlein, K. Mantel, I. Harder, and E. Geist, “Phase-shift extraction for generalized phase-shifting interferometry,” Opt. Lett. 34, 3553–3555 (2009).

4. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21, 30610–30622 (2013).

5. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39, 6715–6718 (2014).

6. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018).

7. H. Schreiber and J. H. Bruning, Optical Shop Testing, 3rd ed. (John Wiley & Sons, 2007).

8. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

9. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53, 3415–3426 (2014).

Supplementary Material (2)

Visualization 1: motion artifact reduction
Visualization 2: motion artifact reduction


