
Real-time motion-induced-error compensation in 3D surface-shape measurement

Open Access

Abstract

Object motion can introduce an unknown phase shift, and thus measurement error, in multi-image phase-shifting methods of fringe projection profilometry. This paper presents a new method to estimate the unknown phase shifts and reduce the motion-induced error by using three phase maps computed over a multiple measurement sequence and calculating the differences between the phase maps. The pixel-wise estimation of the motion-induced phase shifts permits phase-error compensation for non-homogeneous surface motion. Experiments demonstrated the ability of the method to reduce motion-induced error in real time, for shape measurement of surfaces with high depth variation, and of moving and deforming surfaces.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical non-contact three-dimensional (3D) surface-shape measurement has diverse applications in manufacturing [1], biomedical engineering [2], computer vision, and heritage digitization [3,4]. Fringe projection profilometry (FPP) is a common technique that permits full-field 3D surface-shape measurement with high accuracy [5]. One or more spatially-continuous fringe patterns are projected onto the object surface by a projector, and images of the deformed patterns are captured by a camera. A phase map can be obtained from a single sinusoidal fringe pattern using Fourier transform profilometry (FTP) [6], or from multiple phase-shifted fringe patterns using phase-shifting profilometry (PSP) [7]. The computed phase map is generally wrapped in a range from −π to π with 2π discontinuities. Phase-unwrapping [8] or system-geometric-constraint [9,10] methods can solve the phase ambiguity of the wrapped phase map and permit determination of camera-projector correspondences. The object surface shape can then be reconstructed by stereovision techniques using geometric and optical calibration parameters of the pre-calibrated FPP system. For measurement of static object surfaces, PSP, which uses multiple fringe patterns, may be preferred over FTP because of the higher accuracy of multiple-image PSP [11]. However, for applications requiring dynamic 3D measurement of moving or deforming surfaces, object surface motion between successive camera-captured images (when multiple patterns are used) will cause phase errors and consequently reduce measurement accuracy.

There have been several approaches to improve accuracy in dynamic measurement [11]. Increasing image-acquisition speed to reduce the effect of object motion during measurement has been achieved using high-speed projectors and cameras [12]. For example, Goes Before Optics (GOBO) projection can achieve high pattern-switching speeds with a high-speed spinning wheel [13]. However, high-speed pattern projection and image capture come at a higher system cost. High projection speed has also been achieved using defocused binary patterns to generate sinusoidal fringe patterns [14]. While this can reduce the effect of object motion between successive images on measurement accuracy, further reduction of motion-induced errors would be desirable.

Reducing the number of projected patterns can also reduce the effect of object motion on measurement accuracy. The single-image FTP [6] method is well suited to dynamic measurement of fast-moving objects. However, FTP is limited to a finite maximum slope of object depth variation, beyond which the fundamental frequency component overlaps other components and cannot be retrieved unambiguously. This leads to phase errors in regions of high depth variation. Windowed Fourier transform [15] and wavelet transform [16] methods can achieve higher phase accuracy than FTP [17]; however, their higher computational cost is not favorable for real-time measurement. Other single-image methods use a composite fringe pattern [18] or a color fringe pattern [19], where several sinusoidal fringe patterns with different frequencies or phase shifts are embedded into one image. However, complex image demodulation and high sensitivity to object surface color and ambient light lead to a decreased signal-to-noise ratio (SNR) of the extracted patterns.

Because of the high accuracy obtained using multiple phase-shifted-pattern methods, there has been growing interest in developing methods to compensate for errors caused by object motion between the successive images captured during measurement. Marker-based [20,21] and Scale Invariant Feature Transform (SIFT) based [22] methods can compensate for measurement error due to planar object motion but do not handle motion in the depth direction. In PSP, fringe patterns are projected with known phase shifts, and PSP phase-analysis algorithms solve for the phase shift due to the object surface geometry. However, object surface motion causes an additional unknown phase shift in the captured images, resulting in motion-induced phase error and thus depth measurement error. Several error-compensation methods can handle errors due to motion in the depth direction. One phase-error compensation method for dynamic 3D measurement is based on the assumption that the motion-induced phase-shift error is homogeneous within a single object [23]. However, the estimation of the phase-shift error may not be accurate if the object is deforming and the motion-induced phase-shift error is non-homogeneous. One pixel-independent phase-shift-error estimation method used FTP to compute phase-map differences between successive captured fringe images [24]. Another method fused FTP and PSP surface reconstruction, guided by phase-based pixel-wise motion detection [25]. Although the latter two methods handle object motion well, their measurement accuracy is limited where there is high depth variation, due to the use of FTP. Another method used the Hilbert transform to generate an additional set of fringe images and an additional phase map, and substantially compensated motion-induced error by averaging the original and additional phase maps [26]. However, use of the Hilbert transform requires additional processing to suppress errors at fringe edges [27].

Iterative methods [28,29] can estimate non-homogeneous phase-shift error and then compute phase using the generic phase-shifting algorithm while accounting for the phase-shift error. These methods can achieve very high accuracy, with little motion-induced error, after several iterations. While they work well for fast-moving or deforming surfaces, they are computationally expensive and not suitable for real-time applications. A method that can perform pixel-wise motion-induced-error compensation in real-time measurement and also handle objects with large depth variation is still needed.

This paper presents a new motion-induced-error compensation method that is non-iterative and thus suitable for real-time 3D surface-shape measurement of dynamic surfaces. The method performs pixel-wise estimation of the phase shift due to surface motion, without assuming homogeneous motion across the surface. To permit real-time measurement, system geometry constraints are used to solve the phase ambiguity of the wrapped phase map, without requiring the additional patterns of temporal phase-unwrapping methods.

2. Principle and method

2.1. Phase-shifting method

The intensity of each pixel of camera-captured N-step phase-shifted fringe images can be modeled by:

$$I_n(x,y) = A(x,y) + B(x,y)\cos[\phi(x,y) - \theta_n], \tag{1}$$
where A(x, y) and B(x, y) represent the unknown background intensity and amplitude of modulation, respectively, and ϕ(x, y) is the unknown phase map. θn (n = 1, 2, …, N) is the known phase shift for the n-th projected fringe pattern. Equation (1) can be rewritten as:
$$I_n(x,y) = A(x,y) + B_1(x,y)\cos(\theta_n) + B_2(x,y)\sin(\theta_n), \tag{2}$$
where B1(x, y) = B(x, y)cos[ϕ(x, y)] and B2(x, y) = B(x, y)sin[ϕ(x, y)]. The following equation can be obtained [30]:
$$X(x,y) = (M^T M)^{-1} M^T I(x,y), \tag{3}$$
where $X(x,y) = [A(x,y), B_1(x,y), B_2(x,y)]^T$, $I(x,y) = [I_1(x,y), I_2(x,y), \ldots, I_N(x,y)]^T$, and
$$M = \begin{bmatrix} 1 & \cos(\theta_1) & \sin(\theta_1) \\ 1 & \cos(\theta_2) & \sin(\theta_2) \\ \vdots & \vdots & \vdots \\ 1 & \cos(\theta_N) & \sin(\theta_N) \end{bmatrix}. \tag{4}$$
The wrapped phase can then be computed by:

$$\phi(x,y) = \tan^{-1}\left[\frac{B_2(x,y)}{B_1(x,y)}\right]. \tag{5}$$

For the standard 4-step phase-shifting method, the phase shift is $\theta_n = 2\pi(n-1)/N$ with $N = 4$, and the wrapped phase can be computed as:

$$\phi(x,y) = \tan^{-1}\left[\frac{I_2(x,y) - I_4(x,y)}{I_1(x,y) - I_3(x,y)}\right]. \tag{6}$$
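As an illustration of the phase computation above, a minimal NumPy sketch follows (our own illustration and naming; the paper provides no code). It implements the generic least-squares solution of Eqs. (3)–(5) and the standard 4-step formula of Eq. (6):

```python
import numpy as np

def wrapped_phase_least_squares(images, thetas):
    """Generic N-step least-squares phase retrieval, Eqs. (2)-(5).

    images: (N, H, W) array of fringe images I_n.
    thetas: (N,) known phase shifts theta_n (the same for all pixels here;
            a per-pixel variant is sketched later for Eq. (21)).
    Returns the wrapped phase map phi in (-pi, pi].
    """
    N = len(thetas)
    # Design matrix M of Eq. (4): rows [1, cos(theta_n), sin(theta_n)].
    M = np.stack([np.ones(N), np.cos(thetas), np.sin(thetas)], axis=1)
    I = images.reshape(N, -1)
    # X = (M^T M)^{-1} M^T I, Eq. (3); lstsq solves it stably for all pixels at once.
    X = np.linalg.lstsq(M, I, rcond=None)[0]   # rows: A, B1, B2
    B1, B2 = X[1], X[2]
    # Eq. (5); arctan2 resolves the quadrant ambiguity of tan^{-1}.
    return np.arctan2(B2, B1).reshape(images.shape[1:])

def wrapped_phase_4step(I1, I2, I3, I4):
    """Standard 4-step formula of Eq. (6), theta_n = 2*pi*(n-1)/4."""
    return np.arctan2(I2 - I4, I1 - I3)
```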

2.2. Motion-induced phase error

If the object is moving or deforming between successive captured images, the phase at each pixel in the captured images will have an additional unknown phase shift due to the object surface motion. This phase-shift error can be determined pixel by pixel using the system geometric and optical parameters and the object motion, if the motion is known [28]. The object-motion-induced phase-shift error for the standard 4-step phase-shifting method, where the object surface is moving toward the camera at varying speed, is shown in Fig. 1. When capturing fringe images with phase-shift error due to motion, I1(x,y), I2(x,y), I3(x,y) and I4(x,y), the position of the object surface for any camera pixel is at p1(x, y), p2(x, y), p3(x, y) and p4(x, y), respectively. Three unknown phase-shift errors ε1(x, y), ε2(x, y) and ε3(x, y) are caused by the object motion from p1 to p2, p2 to p3, and p3 to p4, respectively. Similar phase-shift errors can be observed for any direction of object motion [23]. Since the position of the object is unknown and changing during the measurement, an object position P(x, y) (red dashed line) that has zero phase-shift error (i.e. no motion-induced error) can be defined as a reference to aid in determining the phase-shift error in each image In(x,y) (n = 1, 2, 3, 4).

Fig. 1 Object surface motion-induced phase-shift error.

It is assumed that the object moves in one direction during the image acquisition of four phase-shifted images, and that the position P(x, y) is between the middle two positions p2 and p3, where the phase-shift error between positions p2 and P is half the phase-shift error between p2 and p3, ε2(x, y)/2. Note that P is not necessarily at mid-depth between p2 and p3.

The intensity of 4-step phase-shifted fringe images considering object motion can be described by the following equations:

$$\begin{aligned}
I_1(x,y) &= A(x,y) + B(x,y)\cos[\phi(x,y) - \varepsilon_2(x,y)/2 - \varepsilon_1(x,y)],\\
I_2(x,y) &= A(x,y) + B(x,y)\cos[\phi(x,y) - \pi/2 - \varepsilon_2(x,y)/2],\\
I_3(x,y) &= A(x,y) + B(x,y)\cos[\phi(x,y) - \pi + \varepsilon_2(x,y)/2],\\
I_4(x,y) &= A(x,y) + B(x,y)\cos[\phi(x,y) - 3\pi/2 + \varepsilon_2(x,y)/2 + \varepsilon_3(x,y)].
\end{aligned} \tag{7}$$
ϕ(x, y) is the unknown phase related to the object surface geometry to be solved for. Since the motion-induced phase-shift errors are unknown, the wrapped phase ϕ′(x, y) given by the standard 4-step phase-shifting method:
$$\phi'(x,y) = \tan^{-1}\left[\frac{I_2(x,y) - I_4(x,y)}{I_1(x,y) - I_3(x,y)}\right], \tag{8}$$
will contain some motion-induced phase error Δϕ(x, y):
$$\Delta\phi(x,y) = \phi'(x,y) - \phi(x,y). \tag{9}$$
Camera pixel coordinates (x, y) may be omitted hereinafter for brevity. Equation (8) can be rewritten as:
$$\phi'(x,y) = \tan^{-1}\left[\frac{\sin(\phi - \varepsilon_2/2) + \sin(\phi + \varepsilon_2/2 + \varepsilon_3)}{\cos(\phi - \varepsilon_2/2 - \varepsilon_1) + \cos(\phi + \varepsilon_2/2)}\right]. \tag{10}$$
For a very small phase-shift error ε, sin(ε)≈ε and cos(ε)≈1. Then, Eq. (10) can be approximated as:
$$\phi'(x,y) \approx \tan^{-1}\left[\frac{2\sin\phi + \varepsilon_3\cos\phi}{2\cos\phi + \varepsilon_1\sin\phi}\right]. \tag{11}$$
The motion-induced phase error Δϕ(x, y) (Eq. (9)) can be derived as:
$$\Delta\phi(x,y) = \tan^{-1}\left[\frac{\varepsilon_3\cos^2\phi - \varepsilon_1\sin^2\phi}{2 + (\varepsilon_1 + \varepsilon_3)\sin\phi\cos\phi}\right]. \tag{12}$$
By using Taylor series approximation, the motion-induced phase error can be further approximated as:

$$\Delta\phi(x,y) \approx \frac{\varepsilon_3 - \varepsilon_1}{4} + \frac{\varepsilon_3 + \varepsilon_1}{4}\cos 2\phi. \tag{13}$$

2.3. Simulation of motion-induced phase error

Equation (13) shows that the motion-induced phase error varies approximately sinusoidally with 2ϕ. The mean phase error (DC component) is close to zero if the object motion is at constant speed, where ε1(x, y) ≈ ε3(x, y). The phase-error amplitude increases as the phase-shift errors ε1 and ε3 increase. A phase measurement simulation was performed using a given phase-shift error between successive positions, ε1 = ε2 = ε3 = 0.2 rad, assuming constant-speed motion. Note that the phase maps in this simulation are unwrapped. The phase map ϕ′(x, y) with motion-induced phase error, simulated using the standard 4-step phase-shifting method (Eq. (8)) (black curve), is shown with the simulated phase at the four positions p1, p2, p3, p4 (four colored lines) in Fig. 2(a).

Fig. 2 Phase measurement simulation with constant speed motion: (a) unwrapped phase computed by standard 4-step PSP (black) and phase at four positions p1, p2, p3, p4; (b) motion-induced phase error computed using ϕ′−ϕ (blue) and simulated using Eq. (13) (red).

A phase map ϕ(x, y) without error is simulated by generating images using Eq. (7) and assuming that the object has no motion, thus using ε1 = ε2 = ε3 = 0 rad. The computed motion-induced phase error using ϕ′−ϕ, and the simulated phase error using Eq. (13), are shown in Fig. 2(b).

A simulation of phase measurement with varying speed object motion was performed using a given phase-shift error between successive positions: ε1 = 0.15 rad, ε2 = 0.2 rad, and ε3 = 0.25 rad. The phase map ϕ′(x, y) with motion-induced phase error, simulated using the standard 4-step phase-shifting method (Eq. (8)) (black curve), is shown with the simulated phase at the four positions p1, p2, p3, p4 (four colored lines) in Fig. 3(a). The computed motion-induced phase error (blue) using ϕ′−ϕ and the simulated phase error (red) using Eq. (13) are shown in Fig. 3(b). The small difference between the computed and simulated phase curves in both Figs. 2(b) and 3(b) shows that the motion-induced phase error can be approximated by Eq. (13).
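These simulations can be reproduced with a short NumPy script. The sketch below is our reconstruction under stated assumptions (unit background and modulation, A = B = 1, and a linear phase ramp; only the ε values come from the text): it synthesizes the four motion-corrupted images per Eq. (7), computes ϕ′ with Eq. (8), and compares the error ϕ′−ϕ (Eq. (9)) against the approximation of Eq. (13):

```python
import numpy as np

phi = np.linspace(0.0, 6 * np.pi, 2000)        # true (unwrapped) phase at P (assumed ramp)
e1, e2, e3 = 0.15, 0.20, 0.25                  # phase-shift errors (varying-speed case)

# Four motion-corrupted fringe images per Eq. (7), with A = B = 1 assumed.
I1 = 1 + np.cos(phi - e2 / 2 - e1)
I2 = 1 + np.cos(phi - np.pi / 2 - e2 / 2)
I3 = 1 + np.cos(phi - np.pi + e2 / 2)
I4 = 1 + np.cos(phi - 3 * np.pi / 2 + e2 / 2 + e3)

# Standard 4-step phase (Eq. (8)), unwrapped and 2*pi-aligned for comparison.
phi_meas = np.unwrap(np.arctan2(I2 - I4, I1 - I3))
phi_meas -= 2 * np.pi * np.round((phi_meas[0] - phi[0]) / (2 * np.pi))

err_computed = phi_meas - phi                                  # Eq. (9)
err_model = (e3 - e1) / 4 + (e3 + e1) / 4 * np.cos(2 * phi)    # Eq. (13)
print(np.max(np.abs(err_computed - err_model)))  # small residual (approximation error)
```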

Fig. 3 Phase measurement simulation with varying speed object motion: (a) unwrapped phase computed by standard 4-step PSP (black) and phase at four positions p1, p2, p3, p4; (b) motion-induced phase error computed using ϕ′−ϕ (blue) and simulated using Eq. (13) (red).

2.4. Motion-induced phase-shift error estimation

Although the wrapped phase can be obtained once the phase shift is known, using the phase-shifting method (Section 2.1), the computed phase map will have unknown phase error (Section 2.2) from the additional unknown phase shift due to object surface motion. Determination of the unknown phase-shift error caused by object motion is key to retrieving the phase without motion artifacts. The approach in this paper is to compute three phase maps at intervals over a multiple measurement sequence and estimate the phase-shift error by calculating the differences between the computed phase maps. To generate the three phase maps over the sequence, the standard 4-step phase-shifted images (frames) at p1 to p4 (Fig. 1) are used together with the previous two frames at p−1 and p0, and the subsequent two frames at p5 and p6, where the object is moving from position p−1 to position p6 (Fig. 4).

Fig. 4 Object motion at eight successive frames and motion-induced phase-shift errors.

Consider a phase-shift error εi caused by object motion from any position pi to the subsequent position pi+1 (Fig. 4). While it is possible to compute five different phase maps ϕk (k = 0, 1, 2, 3, 4), only ϕ0, ϕ2 and ϕ4 are used, as discussed below. The phase maps ϕ0, ϕ2 and ϕ4 are computed using the four images captured at p−1, p0, p1, p2; at p1, p2, p3, p4; and at p3, p4, p5, p6, respectively, using the standard 4-step phase-shifting method for all three phase maps; ϕ2 is the phase map being corrected for the current measurement. Referring to Fig. 4, according to Eq. (13), the motion-induced phase error of phase map ϕ2 is:

$$\Delta\phi_2(x,y) \approx \frac{\varepsilon_3 - \varepsilon_1}{4} + \frac{\varepsilon_3 + \varepsilon_1}{4}\cos 2\phi_2. \tag{14}$$
Similarly, the motion-induced phase error of phase map ϕ4 is:
$$\Delta\phi_4(x,y) \approx \frac{\varepsilon_5 - \varepsilon_3}{4} + \frac{\varepsilon_5 + \varepsilon_3}{4}\cos 2\phi_4. \tag{15}$$
The difference of phase maps ϕ2 and ϕ4 consists of a default phase shift of –π and a phase shift due to object motion:
$$\phi_4(x,y) - \phi_2(x,y) = -\pi + \varepsilon_3 + (\varepsilon_2 + \varepsilon_4)/2. \tag{16}$$
For motion at constant speed or constant acceleration, the following approximation holds for any successive three phase-shift errors:
$$\varepsilon_{i+2}(x,y) - \varepsilon_{i+1}(x,y) \approx \varepsilon_{i+1}(x,y) - \varepsilon_i(x,y). \tag{17}$$
Although the computed phase maps ϕ2 and ϕ4 have phase error, the phase errors Δϕ2(x, y) and Δϕ4(x, y) will be similar according to Eqs. (14) and (15), if the phase-shift errors are small. Calculating the difference of the two phase maps with similar phase error can partially cancel the effect of the phase error. Considering that there is a default phase shift of –π between ϕ2 and ϕ4, computing the π offset phase difference ϕ4ϕ2+π gives:
$$\phi_4(x,y) - \phi_2(x,y) + \pi = (\phi_4 + \Delta\phi_4) - (\phi_2 + \Delta\phi_2) + \pi \approx 2\varepsilon_3 + \frac{\varepsilon_4 - \varepsilon_2}{2}\cos(2\phi_2) - 2\varepsilon_3\varepsilon_4\sin(2\phi_2). \tag{18}$$
The π offset phase difference ϕ4−ϕ2+π (Eq. (18)) contains a DC component, which is approximately twice the phase-shift error ε3, and a small periodic AC component correlated to 2ϕ2. A simulation of ϕ4−ϕ2+π for phase-shift errors ε3 = 0.1, 0.15 and 0.2 rad, assuming constant-speed object motion, gives DC components of 0.2, 0.3, and 0.4 rad, respectively, i.e., twice the phase-shift errors (Fig. 5). As the phase-shift error ε3 increases, the sinusoidal error increases, since the phase errors of phase maps ϕ2 and ϕ4 become more different.
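A minimal sketch of this simulation follows (our reconstruction; A = B = 1 and the linear phase ramp are assumptions, not stated in the paper). It synthesizes eight frames with the repeating 4-step projection sequence and cumulative constant-speed motion, computes ϕ2 and ϕ4 with the standard 4-step formula, and checks that the mean of ϕ4−ϕ2+π is approximately 2ε3:

```python
import numpy as np

def four_step(Ia, Ib, Ic, Id):
    # Standard 4-step formula, Eqs. (6)/(8)
    return np.arctan2(Ib - Id, Ia - Ic)

phi_true = np.linspace(-np.pi, np.pi, 2000, endpoint=False)  # geometry phase (assumed ramp)

for e in (0.10, 0.15, 0.20):            # constant speed: every eps_i equals e
    f = {}
    for j in range(-1, 7):              # frames at positions p_-1 .. p_6 (Fig. 4)
        theta_nom = ((j - 1) % 4) * np.pi / 2   # repeating 4-step projection sequence
        f[j] = 1.0 + np.cos(phi_true + e * j - theta_nom)  # cumulative motion phase
    phi2 = four_step(f[1], f[2], f[3], f[4])
    phi4 = four_step(f[3], f[4], f[5], f[6])
    diff = np.angle(np.exp(1j * (phi4 - phi2 + np.pi)))  # wrap to (-pi, pi]
    print(f"eps = {e:.2f}:  mean of phi4-phi2+pi = {diff.mean():.3f}  (expect ~{2 * e:.2f})")
```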

Fig. 5 Simulation of the π offset phase difference of two phase maps with different phase-shift errors.

The phase-shift error ε3 can be estimated by (ϕ4ϕ2+π)/2; however, there is a residual AC component that contributes to inaccuracy in ε3. To improve the accuracy of the ε3 estimation, in this paper, an averaging operation is further performed over a small region around each pixel after computing (ϕ4ϕ2+π)/2, to eliminate the additional sinusoidal error (AC component). The phase-shift error ε3(x, y) can thus be estimated by:

$$\varepsilon_3(x,y) = \left\langle \frac{\phi_4 - \phi_2 + \pi}{2} \right\rangle_V. \tag{19}$$
Here $\langle\cdot\rangle_V$ denotes an averaging operator, where the average is computed over all valid pixels within a small image region V around pixel (x, y). The window size of the small region is set to the wavelength of the projected fringe. Performing this averaging operation independently at each pixel is computationally expensive; an integral image [31] is therefore used to reduce the computational cost, making the operation suitable for parallel computing. Similarly, by computing the π offset phase difference of phase maps ϕ0 and ϕ2, the phase-shift error ε1(x, y) can be estimated by:
$$\varepsilon_1(x,y) = \left\langle \frac{\phi_2 - \phi_0 + \pi}{2} \right\rangle_V. \tag{20}$$
Phase-shift error ε2(x, y) is estimated by ε2(x, y) = [ε1(x, y) + ε3(x, y)]/2. After the estimates of ε1, ε2 and ε3 are computed for each pixel, the wrapped phase ϕ(x, y), which has reduced motion-induced phase error, can be obtained by the phase-shifting method Eqs. (1)–(5) (Section 2.1), where:

$$\begin{aligned}
\theta_1(x,y) &= \varepsilon_2/2 + \varepsilon_1, & \theta_2(x,y) &= \pi/2 + \varepsilon_2/2,\\
\theta_3(x,y) &= \pi - \varepsilon_2/2, & \theta_4(x,y) &= 3\pi/2 - \varepsilon_2/2 - \varepsilon_3.
\end{aligned} \tag{21}$$
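The sketch below is our illustration of this estimation and correction step, not the authors' code. It assumes NumPy/SciPy, uses scipy.ndimage.uniform_filter as a stand-in for the paper's integral-image box average, and omits the quality-map masking of valid pixels for brevity; it implements Eqs. (19)–(20) and the per-pixel generic phase-shifting solve of Eqs. (1)–(5) with the shifts of Eq. (21):

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box average; stands in for the integral image [31]

def wrap(a):
    """Wrap an angle array to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

def estimate_phase_shift_errors(phi0, phi2, phi4, win):
    """Pixel-wise motion-induced phase-shift errors, Eqs. (19)-(20).

    phi0, phi2, phi4: wrapped phase maps from the three 4-image windows.
    win: averaging window size (the fringe wavelength in the paper).
    """
    e3 = uniform_filter(wrap(phi4 - phi2 + np.pi) / 2, size=win)
    e1 = uniform_filter(wrap(phi2 - phi0 + np.pi) / 2, size=win)
    e2 = (e1 + e3) / 2
    return e1, e2, e3

def corrected_phase(I, e1, e2, e3):
    """Wrapped phase with reduced motion error: Eqs. (1)-(5) with Eq. (21) shifts.

    I: (4, H, W) images at p1..p4. The design matrix M now varies per pixel,
    so the least-squares solve is batched over pixels.
    """
    th = np.stack([e2 / 2 + e1,
                   np.pi / 2 + e2 / 2,
                   np.pi - e2 / 2,
                   3 * np.pi / 2 - e2 / 2 - e3])             # (4, H, W), Eq. (21)
    M = np.stack([np.ones_like(th), np.cos(th), np.sin(th)], axis=-1)  # (4, H, W, 3)
    M = np.moveaxis(M, 0, -2)                                # (H, W, 4, 3)
    b = np.moveaxis(I, 0, -1)[..., None]                     # (H, W, 4, 1)
    X = np.linalg.pinv(M) @ b                                # batched (M^T M)^{-1} M^T I, Eq. (3)
    return np.arctan2(X[..., 2, 0], X[..., 1, 0])            # Eq. (5)
```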

It should be noted that during an object measurement, the 3D surface reconstruction is based on a phase map computed using only four images (at positions p1, p2, p3, p4 in Fig. 4). The eight images (at positions p−1 to p6 in Fig. 4) and the three phase maps ϕ0, ϕ2 and ϕ4 are used only to estimate the phase-shift error. A quality map [32] is used to determine a valid measurement region following computation of all phase maps above.

2.5. Unwrapped phase using geometry-constraints

To solve the phase ambiguity of the wrapped phase map, system geometry-constraint based methods have the advantage over temporal phase-unwrapping methods of not requiring additional patterns. In this paper, wrapped phase maps with reduced motion-induced phase error are used to determine the correspondence between the projector and the left and right cameras, and geometry constraints are used to minimize the number of candidate points for correspondence [33–35]. Because of the very short baseline between the left camera and projector, only one candidate position remains in the measurement volume, so there is no need to embed additional information in the fringe patterns to achieve correspondence reliability. The object surface shape can finally be reconstructed by stereovision techniques.
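As a rough, simplified sketch of the fringe-order pruning idea (our illustration only; the actual correspondence search follows Refs. [33–35] and uses the full calibrated geometry of both cameras), the depth range of the measurement volume maps, for each camera pixel, to an interval of possible projector columns, which in turn limits the admissible 2π fringe orders. We assume here that absolute phase increases linearly across projector columns:

```python
import numpy as np

def fringe_order_candidates(phi_wrapped, u_min, u_max, wavelength):
    """Candidate fringe orders k allowed by the depth-range constraint.

    phi_wrapped: wrapped phase at a camera pixel, in (-pi, pi].
    u_min, u_max: projector-column interval that the depth range of the
        measurement volume maps to for this pixel (from calibration geometry;
        hypothetical inputs in this sketch).
    wavelength: fringe wavelength in projector pixels (24 in the experiments).

    Absolute-phase candidates are phi_wrapped + 2*pi*k, corresponding to
    projector column u = wavelength * (phi_wrapped + 2*pi*k) / (2*pi).
    """
    k_lo = int(np.ceil((2 * np.pi * u_min / wavelength - phi_wrapped) / (2 * np.pi)))
    k_hi = int(np.floor((2 * np.pi * u_max / wavelength - phi_wrapped) / (2 * np.pi)))
    return list(range(k_lo, k_hi + 1))  # short baseline -> typically a single candidate
```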

2.6. Summary of method

The new motion-induced-error compensation method is summarized as follows:

  • 1. Continuously project standard π/2-shifted fringe patterns onto the surface. Capture fringe-pattern images and perform lens distortion correction.
  • 2. Compute three wrapped phase maps ϕ0, ϕ2 and ϕ4 over eight successive captured images using the standard 4-step phase-shifting method (Section 2.4). A quality map is used to determine the valid measurement region.
  • 3. Estimate the motion-induced phase-shift errors ε1, ε3 using Eqs. (19) and (20). Estimate the motion-induced phase-shift error ε2 using ε2(x, y) = [ε1(x, y) + ε3(x, y)]/2.
  • 4. For each camera, compute a single wrapped phase map with reduced motion-induced phase error using Eq. (21) and the phase-shifting method Eqs. (1)–(5) (Section 2.1).
  • 5. Compute 3D coordinates at all camera pixels employing system geometry-constraint based methods (Section 2.5) using the single wrapped phase map for each camera.

3. Experiments and results

To verify the performance of the new motion-induced-error compensation method, an experimental system was developed consisting of two monochrome cameras (Basler acA1300-200um) using 800 × 600 images and a DLP projector (Wintech PRO4500) with 912 × 1140 resolution. The left camera was placed beneath the projector to achieve a very short left-camera-projector baseline (26.5 mm), while the camera-camera baseline was long (116.9 mm). The system was calibrated using the method described in [36]. The working distance of the system from the object was approximately 700 mm. The image capture from the two cameras was synchronized with the pattern projection, which ran at 120 Hz. The depth range of the measurement volume used for geometry-constraint based phase unwrapping was set to 200 mm. The wavelength of the projected fringe pattern was set to 24 pixels.

3.1. Qualitative evaluation

The performance of the motion-induced-error compensation method was first evaluated by measurement of a moving multi-step object (Fig. 6(a)). The object was moved by hand during the measurement, with motion consisting of both translation (in the depth direction) and rotation. The first captured fringe image of the 4-step PSP method is shown in Fig. 6(b). The 3D measurement using the standard 4-step PSP method had large motion artifacts in the form of ripples on the reconstructed surfaces of both the multi-step object and the hand, as shown in Fig. 6(c). There was less motion artifact when using the new motion-induced-error compensation method (Fig. 6(d)). The depth of points located on the red line segment (300th row) in Fig. 6(b) without error compensation shows motion artifacts in the form of ripples (Fig. 7(a)). Using the new error-compensation method, the motion artifacts are again seen to be largely eliminated in the depth plot (Fig. 7(b)). A comparison of 3D surface-shape measurements of the moving object over the entire measurement sequence with and without error compensation is shown in Visualization 1, and a comparison of the single-row depth plots computed over the measurement sequence with and without error compensation is shown in Visualization 2.

Fig. 6 3D measurement of moving multi-step object (associated with Visualization 1): (a) image of stepped object; (b) one captured fringe image; (c) measurement result using standard 4-step PSP method; (d) measurement result using new motion-induced-error compensation method.

Fig. 7 Depth of a row of points located on the red line segment in Fig. 6(b) (associated with Visualization 2): (a) measurement result using standard 4-step PSP method; (b) measurement result using new motion-induced-error compensation method.

3.2. Quantitative evaluation

To quantitatively evaluate the performance of the new motion-induced-error compensation method, a double-hemisphere object (with true radii 50.800 ± 0.015 mm and distance between hemisphere centers 120.000 ± 0.005 mm, based on manufacturing specification and precision) was measured while it was moving at an approximate speed of 17 cm/s in the depth direction. The surface was reconstructed using the standard 4-step PSP method and the new error-compensation method, respectively (Fig. 8). The reconstructed surface had motion artifacts in the form of ripples for the standard measurement (Fig. 8(a)), and fewer artifacts when using the new error-compensation method (Fig. 8(b)).

Fig. 8 3D measurement of moving double hemisphere object (associated with Visualization 3): (a) measurement result using standard 4-step PSP method; (b) measurement result using new motion-induced-error compensation method.

For comparison with the new error-compensation method, in addition to the measurement by the standard 4-step PSP method, a surface measurement was performed by a single-image real-time FTP method using only one of the four captured images. The measurement error for the three methods was determined by least-squares fitting of a sphere to each reconstructed 3D hemisphere point cloud. The sphere-fitting residual errors (indicated by color) for the new error-compensation method were mostly under 0.1 mm (Fig. 9(c)). These errors, which may be partly due to non-constant surface speed and acceleration, were lower than the errors for the standard 4-step PSP (Fig. 9(a)), seen as ripples of multiple colors with errors as high as 0.5 mm, and lower than the errors for single-image FTP (Fig. 9(e)), which were as high as 0.5 mm, mainly at the edges of the hemisphere. The measurement-error distributions for the three methods in Figs. 9(b), 9(d), and 9(f), respectively, show lower errors for the motion-induced-error compensation method (Fig. 9(d)) compared to the other two methods (Figs. 9(b) and 9(f)). Even with the use of multiple images in the new method, the error reduction is highly effective, bringing the errors (Figs. 9(c) and 9(d)) to a lower level than those of the single-image FTP (Figs. 9(e) and 9(f)). As discussed earlier, the measurement accuracy of the FTP method is limited where depth variation is large (Fig. 9(e)), as seen at the outer edges, where errors are approximately 0.5 mm.

Fig. 9 Errors in 3D measurement of a moving double-hemisphere object using: (a) standard 4-step PSP method, (c) new motion-induced-error compensation method, (e) single-image FTP method; and measurement error distribution (number of points versus error (mm)) for the three methods: (b) standard 4-step PSP, (d) error compensation, and (f) single-image FTP.

The calculated radii of the two hemispheres, distance between two sphere centers, root mean square (RMS) error based on differences between measured points on the hemisphere and the true radius, and sphere-fitting standard deviation (SD), are shown in Table 1 for the three measurement methods. The RMS errors and the sphere fitting SD are much lower using the new error compensation method compared to the standard PSP and single-image FTP methods. The uncorrected motion artifacts contribute to the higher RMS and SD errors of the standard PSP. The limited measurement accuracy at regions of large depth variation contributes to the higher error of the FTP method. The new error compensation method can reduce the motion-induced error and also handle the large depth variation of an object surface.

Table 1. Measurement results for the standard PSP method, new motion-induced-error compensation method, and single-image FTP method.

3.3. Real-time measurement

To verify the performance of the new motion-induced-error compensation method for real-time applications, real-time measurements were performed on a desktop computer with an NVIDIA GeForce GTX1080ti graphics card and an Intel i7-3820 processor. Four standard 4-step PSP fringe patterns were pre-stored in the projector and then projected sequentially. System calibration and geometric parameters were pre-calculated and stored on the GPU before measurement. All computations were performed on the GPU. The pixel-wise computation permitted parallel computing to achieve real-time motion-induced-error compensation during surface-shape measurement.
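The paper's GPU implementation is not published; as a sketch of why the pixel-wise formulation parallelizes well, the NumPy expressions above can run on the GPU essentially unchanged with an array library such as CuPy (our choice of library, not the authors'; `images_host` is a hypothetical (4, H, W) array of captured frames):

```python
import cupy as cp  # drop-in, NumPy-compatible GPU arrays (our assumption, not the authors' code)

I = cp.asarray(images_host)                   # transfer the four frames to the GPU
phi2 = cp.arctan2(I[1] - I[3], I[0] - I[2])   # Eq. (6): one elementwise kernel over all pixels
cp.cuda.Stream.null.synchronize()             # wait for kernel completion when timing
```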

A moving manikin head was measured by the system. The measurement, including 3D reconstruction of a point cloud using motion-induced-error compensation and display, was performed in real time with image capture. The reconstructed point cloud was displayed using OpenGL, with colors blue to red representing near to far (Fig. 10 and Visualization 4). The mean GPU runtime of a single 3D measurement with motion-induced-error compensation was approximately 25 ms. The display rate (including 3D reconstruction, data transfer, and rendering) was approximately 30 fps (the number shown in Visualization 4, recorded with Bandicam software, is the current frame rate in fps).

Fig. 10 Real-time measurement result of a manikin head after motion-induced-error compensation (associated with Visualization 4).

A further measurement using real-time motion-induced-error compensation was performed on a deforming surface, a deflating balloon. The reconstructed 3D point cloud is displayed in Fig. 11, with grayscale texture (Fig. 11(a)) and with colors blue to red representing near to far (Fig. 11(b)), and in Visualization 5. The display rate dropped slightly to 25 fps, due to the additional rendering of the grayscale texture (Fig. 11(a)). The video demonstrates that the error-compensation method was effective even for real-time measurement with non-rigid-body motion (deforming surfaces).

Fig. 11 Real-time measurement result of a deflating balloon after motion-induced-error compensation (associated with Visualization 5).

3.4. Discussion

The new motion-induced-error compensation method in this paper estimates the motion-induced phase-shift error and reduces the motion artifact by using three phase maps computed over a multiple measurement sequence and calculating the differences between the phase maps. A phase map with reduced motion-induced error is then computed from four images using the estimated phase-shift error. The pixel-wise estimation of the motion-induced phase shifts permits phase-error compensation for non-homogeneous surface motion. The new method has low computational cost, making it suitable for real-time 3D measurement. The experimental results demonstrated the effectiveness of the new real-time motion-induced-error compensation method in 3D surface-shape measurement for objects with large depth variations and deforming surfaces.

The motion-induced phase-shift-error estimation was based on the assumption that the phase-shift error ε is small, such that sin(ε) ≈ ε and cos(ε) ≈ 1. If the phase-shift error is not small, possibly due to fast surface motion or low camera speed, the relationship between the approximated phase error Δϕ(x, y) and the phase-shift errors ε1 and ε3 in Eq. (13) becomes less accurate. This inaccuracy can be seen in the mismatch of the curves in Figs. 2(b) and 3(b). A greater ε would result in greater inaccuracy of Eq. (13) and a greater mismatch between the curves of Eqs. (9) and (13). The inaccuracy in the approximated phase error Δϕ(x, y) would lead to inaccurate estimation of ε1 and ε3 in Eqs. (19)–(20), and ultimately greater residual errors in the phase map computed by Eqs. (1)–(5). The residual phase errors would appear as residual measurement errors (ripples) in the reconstructed surface. Faster surface motion and lower camera speed would thus contribute to higher residual phase and measurement errors. The motion-induced phase-shift error for non-homogeneous surface motion was also estimated under the assumption that the object motion at the analyzed pixel has constant speed or constant acceleration. Non-constant speed and acceleration would contribute to higher residual phase and measurement errors. Future research may focus on real-time measurement of surfaces with faster motion and non-constant acceleration.

The method focused on motion-induced phase shift errors due to motion in the depth direction. Further research will also investigate methods to handle greater planar motion (perpendicular to depth) combined with motion in the depth direction.

4. Conclusion

A new real-time motion-induced-error compensation method was developed for dynamic 3D surface-shape measurement. Three phase maps are computed over a multiple measurement sequence and the unknown phase shifts due to surface motion are estimated by calculating the differences between the computed phase maps. A phase map with reduced motion-induced error is then computed from four images using the estimated phase shift error. The method achieved higher measurement accuracy than the standard PSP and single-image FTP, reducing the motion artifact due to surface motion, while also handling measurement of surfaces with high depth variation. The motion-induced phase shift estimation and error compensation are performed pixel-wise, which enables parallel computing using a GPU to reduce the processing time for real-time measurement. Experiments demonstrated the ability of the method to reduce motion-induced error in real time, for shape measurement of surfaces with high depth variation, and moving and deforming surfaces.

Funding

Natural Sciences and Engineering Research Council of Canada; University of Waterloo; China Scholarship Council.

References

1. K. Zhong, Z. Li, X. Zhou, Y. Li, Y. Shi, and C. Wang, “Enhanced phase measurement profilometry for industrial 3D inspection automation,” Int. J. Adv. Manuf. Technol. 76(9–12), 1563–1574 (2014).

2. A. J. Das, T. A. Valdez, J. A. Vargas, P. Saksupapchon, P. Rachapudi, Z. Ge, J. C. Estrada, and R. Raskar, “Volume estimation of tonsil phantoms using an oral camera with 3D imaging,” Biomed. Opt. Express 7(4), 1445–1457 (2016).

3. W. H. Su and W. T. Co, “A real-time, full-field, and low-cost velocity sensing approach for linear motion using fringe projection techniques,” Opt. Lasers Eng. 81, 11–20 (2016).

4. R. Ramm, C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, “High-resolution mobile optical 3D scanner with color mapping,” Proc. SPIE 10331, 103310D (2017).

5. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

6. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983).

7. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105 (1984).

8. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).

9. C. Jiang, B. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Lasers Eng. 91, 232–241 (2017).

10. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Lasers Eng. 51(11), 1213–1222 (2013).

11. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).

12. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).

13. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Lasers Eng. 87, 90–96 (2016).

14. J.-S. Hyun, B. Li, and S. Zhang, “High-speed high-accuracy three-dimensional shape measurement using digital binary defocusing method versus sinusoidal method,” Opt. Eng. 56(7), 074102 (2017).

15. Q. Kemao, “Applications of windowed Fourier fringe analysis in optical measurement: A review,” Opt. Lasers Eng. 66, 67–73 (2015).

16. L. R. Watkins, “Review of fringe pattern phase recovery using the 1-D and 2-D continuous wavelet transforms,” Opt. Lasers Eng. 50(8), 1015–1022 (2012).

17. L. Huang, Q. Kemao, B. Pan, and A. K. Asundi, “Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 141–148 (2010).

18. C. Guan, L. Hassebrook, and D. Lau, “Composite structured light pattern for three-dimensional video,” Opt. Express 11(5), 406–417 (2003).

19. Z. Zhang, D. P. Towers, and C. E. Towers, “Snapshot color fringe projection for absolute three-dimensional metrology of video sequences,” Appl. Opt. 49(31), 5947 (2010).

20. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013).

21. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the performance of fringe pattern profilometry using multiple triangular patterns for the measurement of objects in motion,” Opt. Eng. 53(11), 112211 (2014).

22. L. Lu, Y. Ding, Y. Luan, Y. Yin, Q. Liu, and J. Xi, “Automated approach for the surface profile measurement of moving objects based on PSP,” Opt. Express 25(25), 32120–32131 (2017).

23. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018).

24. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3D sensing with Fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. 9(3), 396–408 (2015).

25. J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, “Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry,” Opt. Express 27(3), 2713–2731 (2019).

26. Y. Wang, Z. Liu, C. Jiang, and S. Zhang, “Motion induced phase error reduction using a Hilbert transform,” Opt. Express 26(26), 34224–34235 (2018).

27. H. Chen, Y. Yin, Z. Cai, W. Xu, X. Liu, X. Meng, and X. Peng, “Suppression of the nonlinear phase error in phase shifting profilometry: considering non-smooth reflectivity and fractional period,” Opt. Express 26(10), 13489–13505 (2018).

28. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express 26(10), 12632–12637 (2018).

29. L. Lu, Y. Yin, Z. Su, X. Ren, Y. Luan, and J. Xi, “General model for phase shifting profilometry with an object in motion,” Appl. Opt. 57(36), 10364–10369 (2018).

30. N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications (Springer, 2012).

31. P. Viola and M. J. Jones, “Robust Real-time Object Detection,” in Proceedings of IEEE Workshop on Statistical and Computational Theories of Vision (IEEE, 2001), pp. 137–154.

32. S. Zhang, X. Li, and S. T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50–57 (2007).

33. T. Tao, Q. Chen, S. Feng, J. Qian, Y. Hu, L. Huang, and C. Zuo, “High-speed real-time 3D shape measurement based on adaptive depth constraint,” Opt. Express 26(17), 22440–22456 (2018).

34. T. Tao, Q. Chen, J. Da, S. Feng, Y. Hu, and C. Zuo, “Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system,” Opt. Express 24(18), 20253–20269 (2016).

35. X. Liu and J. Kofman, “High-frequency background modulation fringe patterns based on a fringe-wavelength geometry-constraint model for 3D surface-shape measurement,” Opt. Express 25(14), 16618–16628 (2017).

36. X. Liu and J. Kofman, “Real-time 3D surface-shape measurement using background-modulated modified Fourier transform profilometry with geometry-constraint,” Opt. Lasers Eng. 115, 217–224 (2019).

Supplementary Material (5)

Visualization 1: 3D measurement of a moving multi-step object.
Visualization 2: Depth of a row of points.
Visualization 3: 3D measurement of a moving double-hemisphere object.
Visualization 4: Real-time measurement result of a manikin head after motion-induced-error compensation.
Visualization 5: Real-time measurement result of a deflating balloon after motion-induced-error compensation.
