Optica Publishing Group

High-speed three-dimensional shape measurement based on cyclic complementary Gray-code light

Open Access

Abstract

The binary defocusing technique has been widely used in high-speed three-dimensional (3D) shape measurement because it breaks the bottlenecks of high-speed fringe projection and the projector's nonlinear response. However, it is challenging for this technique to realize a two- or multi-frequency phase-shifting algorithm, because high-quality sinusoidal fringe patterns with different periods are difficult to generate simultaneously under the same defocusing degree. To bypass this challenge, we propose a high-speed 3D shape measurement technique for dynamic scenes based on cyclic complementary Gray-code (CCGC) patterns. In the proposed method, the projected phase-shifting sinusoidal fringes keep the same frequency, which ensures the optimum defocusing degree for the binary dithering technique. The wrapped phase is calculated by a phase-shifting algorithm and unwrapped with the aid of complementary Gray-code (CGC) patterns in a simple and robust way. The cyclic coding strategy then further extends the unambiguous phase measurement range and improves the measurement accuracy compared with the traditional Gray-coding strategy for the same number of projected patterns. High-quality 3D results of three complex dynamic scenes (a cooling fan and a standard ceramic ball with a free-falling table tennis ball, collapsing building blocks, and the impact of a Newton's cradle) were obtained at a frame rate of 357 fps, verifying the feasibility and validity of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With recent advances in hardware and computer science, it has become desirable to obtain three-dimensional (3D) shape information of high-speed dynamic scenes in order to analyze the details of shape changes. High-speed, high-accuracy 3D shape measurement has therefore been increasingly sought in fields such as biomedical engineering and machine vision [1–3], and it can also be applied in fields such as experimental mechanics and material mechanics, where instantaneous dynamic scenes need to be analyzed [4–9].

Over the past years, a number of fringe projection techniques have been developed by improving the structured-light coding strategy [10–13] or by modifying existing digital projectors for high-speed applications [14,15]. Square binary patterns can become pseudo-sinusoidal patterns when the projector is properly defocused [16]. In addition, DLP platforms (e.g., DLP Discovery, DLP Light Commander and DLP LightCrafter) can display binary images at much faster rates (kHz level), whereas they can display 8-bit grayscale images at only a few hundred Hz [1]. On the other hand, off-the-shelf high-speed cameras offer frame rates above 1 kHz with sufficient image resolution. Therefore, high-speed 3D shape measurement at kHz-level frame rates can be achieved if all the projected patterns are binary.

The well-established Fourier transform profilometry (FTP) [17–20] requires only one fringe pattern to reconstruct one 3D measurement result. However, it has difficulty measuring objects with sharp edges, abrupt changes, or non-uniform surface reflectivity. The two-frequency [21,22] or multi-frequency phase-shifting techniques [23] are suitable for reconstructing dynamic complex or isolated objects, but it is difficult to simultaneously achieve high-quality sinusoidal fringe patterns of different periods under the same defocusing degree [24]. Heist et al. [8] achieved 3D shape measurement at over 330 Hz using an array projector. They [25] also developed a projector that uses a rotating slide structure to project aperiodic sinusoidal fringe patterns at high frame rates, realizing high-speed data acquisition (more than 1300 Hz). These two techniques improve the measurement speed compared with techniques using ordinary commercial DLP projectors; however, the projectors must be specially designed and fabricated. In addition, Laughner et al. [26] combined the phase-shifting technique and the Gray-code technique to recover accurate rabbit cardiac deformation at 200 frames per second, with 10 images required to recover one 3D frame. Nevertheless, the boundaries between two adjacent code words, especially at black-to-white transitions, are not sharply defined because of the non-uniform reflectivity of the tested surface, background light intensity, noise and defocus, which introduces errors and can cause phase unwrapping to fail [27]. In high-speed measurement, the Gray codes are also defocused because the binary defocusing method is used, so even more phase unwrapping errors occur.

Combining our previous 3D shape measurement technique based on complementary Gray-code (CGC) light [27] with a new cyclic coding strategy, a new high-speed 3D shape measurement technique is proposed in this work. First, three-step phase-shifting sinusoidal fringes with the same frequency are produced using the dithering technique [28] to ensure the best defocusing degree. The wrapped phase is then calculated using a phase-shifting algorithm and unwrapped with the aid of Gray-code (GC) patterns. The robustness of phase unwrapping is improved by projecting an additional Gray-code pattern, because the decoded results of the traditional Gray codes and the complementary Gray codes are different and complementary at the boundaries of black-white transitions; detailed information can be found in [27]. Finally, using the reverse relationship between the last Gray-code patterns in consecutive projected pattern sequences, the newly introduced cyclic coding strategy doubles the number of periods of the projected sinusoidal fringe, which extends the unambiguous phase measurement range and improves the measurement accuracy without any additional patterns. Experiments are performed to verify the performance of the proposed technique.

The rest of the paper is organized as follows: Section 2 illustrates the principle of the proposed cyclic complementary Gray-code (CCGC) fringe projection method. Section 3 presents experimental results for three complex scenes, including a cooling fan and a standard ceramic ball with a free-falling table tennis ball, collapsing building blocks, and the impact of a Newton's cradle, to validate the proposed method. Section 4 discusses the strengths and limitations of the proposed method, and Section 5 summarizes this paper.

2. Cyclic complementary Gray-code fringe projection

2.1 Principle of complementary Gray-code projection

The binary dithering technique was developed to achieve satisfactory image rendering and color reduction, and it works well for producing high-quality sinusoidal patterns with the binary defocusing technique, especially for large periods [28]. This technique is employed to produce the sinusoidal fringes in our work, as shown in Figs. 1(a)-1(c). Phase-shifting methods have been extensively adopted in optical metrology because they are non-contact, highly precise, fast, and easy to implement under computer control. At least three fringe patterns are required for 3D shape recovery [29], so the three-step phase-shifting algorithm is commonly used for high-speed 3D shape measurement. The three fringe patterns can be described as:

I1(x,y,n) = α(x,y,n){ap + bp cos[ϕ(x,y,n) − 2π/3] + β1(x,y,n)} + β2(x,y,n)
I2(x,y,n) = α(x,y,n){ap + bp cos[ϕ(x,y,n)] + β1(x,y,n)} + β2(x,y,n)
I3(x,y,n) = α(x,y,n){ap + bp cos[ϕ(x,y,n) + 2π/3] + β1(x,y,n)} + β2(x,y,n)
where ap and bp are the mean value and the amplitude of the sinusoidal fringe pattern designed in projector space, α(x, y, n) is the object reflectivity, β1(x, y, n) is the ambient light falling on the object surface, β2(x, y, n) is the ambient light directly entering the camera, and n indexes the projected pattern sequence (3 phase-shifting dithered patterns and 4 Gray-code patterns per sequence). As shown in Fig. 1(d), ϕ(x, y, n) is the wrapped phase of the modulated light field, which can be obtained by solving Eqs. (1)-(3).
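As a concrete illustration of the dithering step, the sketch below converts an ideal sinusoidal fringe into a 1-bit pattern by ordered dithering. The pattern size, fringe period and the 4×4 Bayer threshold matrix are illustrative assumptions, not the authors' exact dithering parameters.

```python
import numpy as np

# Illustrative ordered (Bayer) dithering of one sinusoidal fringe pattern
# into a 1-bit image, as used together with the binary defocusing technique.
H, W, T = 64, 256, 32          # height, width, fringe period in pixels (toy values)

x = np.arange(W)
sinus = 0.5 + 0.5 * np.cos(2 * np.pi * x / T)      # ideal fringe in [0, 1]
sinus = np.tile(sinus, (H, 1))

# 4x4 Bayer threshold matrix, normalized to (0, 1)
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0
thresh = np.tile(bayer4, (H // 4, W // 4))

binary = (sinus > thresh).astype(np.uint8)         # 1-bit dithered pattern
```

After proper defocusing of the projector, such a binary pattern appears quasi-sinusoidal to the camera.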


Figure 1 Sketch map of complementary Gray-code method. (a)-(c) Phase-shifting dithered patterns. (d) The wrapped phase. (e)-(g) Traditional Gray-code patterns. (h) The additional Gray-code pattern.


ϕ(x,y,n) = tan−1[√3(I1(x,y,n) − I3(x,y,n)) / (2I2(x,y,n) − I1(x,y,n) − I3(x,y,n))]

The calculation result of Eq. (4) changes within the range of (-π, π] due to the value range of the arc-tangent function.
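In practice Eq. (4) can be evaluated with a quadrant-aware arctangent so that the full (−π, π] range is obtained directly. The minimal NumPy sketch below (synthetic data; the function name is ours) checks that the wrapped phase is recovered from three shifted fringe images:

```python
import numpy as np

# Three-step phase-shifting calculation of Eq. (4): the wrapped phase is
# recovered from fringe images with phase shifts of -2*pi/3, 0 and +2*pi/3.
def wrapped_phase(I1, I2, I3):
    # arctan2 keeps the full (-pi, pi] range of the arc-tangent in Eq. (4)
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# synthetic self-check with a known phase ramp
phi_true = np.linspace(-3.0, 3.0, 100)
a, b = 0.5, 0.4
I1 = a + b * np.cos(phi_true - 2 * np.pi / 3)
I2 = a + b * np.cos(phi_true)
I3 = a + b * np.cos(phi_true + 2 * np.pi / 3)
phi = wrapped_phase(I1, I2, I3)    # matches phi_true on (-pi, pi)
```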

Gray codes are used to unwrap the wrapped phase. In the traditional Gray-code encoding, the width of a code word equals the projected grating period. If the number of grating periods is N, the number of projected Gray-code patterns (CP) is log2 N. In our work, N is chosen as 8 to pursue high-speed measurement, so 3 traditional Gray-code patterns are needed, as shown in Figs. 1(e)-1(g). However, the Gray-code patterns are also defocused, since the binary defocusing technique is employed to produce the sinusoidal fringe patterns. The boundaries between black and white are therefore not well defined, and wrong codes are easily obtained in these boundary areas, which means phase errors caused by wrong codes will occur in the phase unwrapping process. As shown in Fig. 1(h), an additional Gray-code pattern CP4, whose code-word width is half the grating period, is projected. As marked by the dotted circles in Figs. 1(e)-1(g), the black-white transitions of CP4 are located at different positions from those of CP1-3; therefore, the errors of CP4 are separated from those of CP1-3. The three traditional Gray-code patterns produce a traditional decoding result k1, and all 4 Gray-code patterns produce another decoding result k2. Since the black-white boundaries of k1 and k2 are complementary, the unwrapped phase can be correctly recovered using k1 and k2.
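The relationship between the number of grating periods and the number of Gray-code patterns can be sketched as follows, assuming a standard reflected-binary Gray code; with the paper's choice N = 8, three patterns suffice, and adjacent code words differ in exactly one bit:

```python
import math

def gray_code(k):
    return k ^ (k >> 1)          # reflected-binary Gray code

N = 8                             # number of grating periods, as in the paper
n_patterns = int(math.log2(N))    # 3 traditional Gray-code patterns
words = [format(gray_code(k), f"0{n_patterns}b") for k in range(N)]
# adjacent periods differ in exactly one bit, so no two patterns share
# a code-word boundary at the same location
```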

2.2 Cyclic coding strategy of complementary Gray-code light

It is well known that measurement accuracy improves as the number of grating periods increases. Three traditional Gray-code patterns and one additional Gray-code pattern can only encode eight grating periods, and increasing the number of grating periods requires more patterns to be projected, which is undesirable for high-speed measurement. The cyclic coding strategy of complementary Gray-code light is therefore proposed. As shown in Figs. 2(a)-2(g), the numbers of periods of the sinusoidal and Gray-code patterns are doubled, and the projection order of the Gray-code patterns is reversed to improve the robustness of phase unwrapping in dynamic measurement. The new Gray codes are called cyclic complementary Gray-code patterns (CCP). The left and right 8 grating periods of the CCP can each be uniquely encoded by complementary Gray-code light, but there is no difference between the left and right 8 grating periods in the spatial domain. So, as shown in Figs. 2(g) and 2(n), the right half of CCP1 is designed to be cyclically reversed in the temporal sequence; that is, the right half of CCP1 is always the reverse code of that pattern in the previous pattern sequence. An exclusive-OR operation between sequential CCP1 patterns, as shown in Figs. 2(o) and 2(p), is then performed to calculate the third decoding result k3, as shown in Fig. 2(q). The right half of CCP1 in an odd pattern sequence (n odd) follows the traditional (non-reversed) Gray code, while in an even pattern sequence (n even) it is the reversed code word. In order to obtain the correct k1 and k2, CCP1 in the even pattern sequences should be corrected, as shown in Fig. 2(r), using an exclusive-OR operation with the decoding result k3. Finally, the wrapped phase with 16 periods can be unwrapped using k1, k2 and k3 without additional Gray-code patterns.


Fig. 2 Sketch map of cyclic coding strategy of complementary Gray-code light. (a)-(c) Phase-shifting dithered patterns in pattern sequence n-1. (d)-(g) Cyclic complementary Gray-code patterns (CCP) in pattern sequence n-1. (h)-(n) Corresponding patterns in pattern sequence n. (o) CCP1 in pattern sequence n-1. (p) Uncorrected CCP1 in pattern sequence n. (q) Phase order k3. (r) Corrected CCP1 in pattern sequence n.


2.3 Phase unwrapping based on cyclic complementary Gray-code light

In order to remove the discontinuities of the wrapped phase using phase orders k1, k2 and k3, the decoding process of the CCP is shown in Fig. 3 and the details are as follows:


Fig. 3 Calculation of phase orders k1, k2 and k3.


  • 1) First, a suitable threshold is chosen to binarize the Gray-code patterns in the different sequences; then CCP1-4(x, y, n) in sequence n and CCP1(x, y, n-1) in sequence n-1 are obtained.
  • 2) Phase order k3(x, y, n) is calculated using the exclusive OR operation between CCP1(x, y, n) and CCP1(x, y, n-1) as shown in Eq. (5).
    k3(x,y,n) = CCP1(x,y,n) ⊕ CCP1(x,y,n−1)
  • 3) Correct the CCP1 in the even pattern sequences using the following equation:
    CCP1(x,y,n) = [mod(n+1,2) · k3(x,y,n)] ⊕ CCP1(x,y,n)
  • 4) Calculate a decimal number from the three traditional Gray-code patterns, CCP1(x, y, n), CCP2(x, y, n) and CCP3(x, y, n), using Eq. (7).
    V1(x,y,n) = Σ(i=1..3) CCPi(x,y,n) · 2^(3−i)
  • 5) Obtain the phase order k1(x, y, n) by looking up the known unique relationship between the decimal number V1(x, y, n) and the decoding number i(V1(x, y, n)). The phase order k1(x, y, n) is equal to the decoding number i(V1(x, y, n)).
  • 6) Calculate a new decimal number from all four Gray-code patterns, CCP1(x, y, n), CCP2(x, y, n), CCP3(x, y, n) and CCP4(x, y, n), using Eq. (8).
    V2(x,y,n) = Σ(i=1..4) CCPi(x,y,n) · 2^(4−i)
  • 7) Look up the decoding number i(V2(x, y, n)) and calculate the phase order k2(x, y, n) using Eq. (9).
    k2(x,y,n)=INT((i(V2(x,y,n))+1)/2)

Finally, phase unwrapping can be executed using these three phase orders by the following equation:

Φ(x,y,n) = ϕ(x,y,n) + 2π[k2(x,y,n) + 8k3(x,y,n)],        if ϕ(x,y,n) ≤ −π/2
Φ(x,y,n) = ϕ(x,y,n) + 2π[k1(x,y,n) + 8k3(x,y,n)],        if −π/2 < ϕ(x,y,n) < π/2
Φ(x,y,n) = ϕ(x,y,n) + 2π[k2(x,y,n) + 8k3(x,y,n)] − 2π,   if ϕ(x,y,n) ≥ π/2
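The decoding steps 1)-7) and the unwrapping rule of Eq. (10) can be sketched per pixel as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function and variable names are ours, and the inputs are assumed to be already binarized 0/1 integer arrays.

```python
import numpy as np

def gray_to_index(v):
    """Inverse of the reflected Gray code: code-word value -> decoding number."""
    k = np.zeros_like(v)
    while np.any(v):
        k = k ^ v
        v = v >> 1
    return k

def unwrap(phi, ccp, ccp1_prev, n):
    """phi: wrapped phase; ccp: binarized CCP1-4 (0/1 arrays); n: sequence index."""
    ccp1, ccp2, ccp3, ccp4 = ccp
    k3 = ccp1 ^ ccp1_prev                          # step 2), Eq. (5)
    ccp1 = (((n + 1) % 2) * k3) ^ ccp1             # step 3): correct even sequences
    v1 = ccp1 * 4 + ccp2 * 2 + ccp3                # step 4), Eq. (7)
    k1 = gray_to_index(v1)                         # step 5)
    v2 = ccp1 * 8 + ccp2 * 4 + ccp3 * 2 + ccp4     # step 6), Eq. (8)
    k2 = (gray_to_index(v2) + 1) // 2              # step 7), Eq. (9)
    # Eq. (10): use k1 near the fringe center, k2 near the 2*pi jumps
    return np.where(phi <= -np.pi / 2,
                    phi + 2 * np.pi * (k2 + 8 * k3),
                    np.where(phi < np.pi / 2,
                             phi + 2 * np.pi * (k1 + 8 * k3),
                             phi + 2 * np.pi * (k2 + 8 * k3) - 2 * np.pi))
```

The look-up of the decoding number i(V) in steps 5) and 7) is realized here by the standard Gray-to-binary conversion, which is one possible form of that unique relationship.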

2.4 Error correction caused by binarization and motion

As shown in Figs. 4(a) and 4(b), the black-white transition edges of CCP1 in adjacent sequences cannot match completely, owing to the binarization of the CCP and the motion of the object. Therefore, a few order errors occur in k3(x, y, n) and CCP1(x, y, n), as shown in Figs. 4(c) and 4(d). All of these errors occur at the black-white edges, and the width of the wrong codes depends on the shooting speed, the projection speed and the speed of the moving object. As shown in Fig. 4(f), k2(x, y, n) is used in the edge area (the gray part of k2(x, y, n)), where the code remains the same. The width of the gray part is half of one fringe period, so if the motion-induced phase difference is within π, the wrong codes in CCP1(x, y, n) will be narrower than the gray part. The shooting and projection speed is far faster than the speed of the moving target objects in our measuring system; therefore, k1(x, y, n) and k2(x, y, n) can be used to unwrap the wrapped phase according to Eq. (10) without any errors. However, errors in k3(x, y, n) will cause 16π phase-jump errors in Eq. (10). These errors can be corrected in the phase-to-height mapping process.


Fig. 4 Error caused by binarization and motion. (a) CCP1 in sequence n-1. (b) Uncorrected CCP1 in sequence n. (c) Phase order k3 with errors in sequence n. (d) Corrected CCP1 with errors in sequence n. (e) CCP4 in sequence n. (f) Phase order k2 with errors in sequence n. (g) Phase order k1 with errors in sequence n.


In order to obtain the height of each point on the object, the unwrapped phase should be converted to height using a phase-to-height mapping algorithm [30]. Equation (11) can be used to reconstruct the height of a measured object:

1/h(x,y,n) = u(x,y) + v(x,y)ΔΦ(x,y,n) + w(x,y)ΔΦ²(x,y,n),
where ΔΦ(x, y, n) is the absolute phase of the measured object relative to the reference plane. Four planes with known height distributions are measured; the 3 unknown coefficients u(x, y), v(x, y) and w(x, y) can then be calculated and saved as system parameters for future phase-to-height mapping.
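One possible way to fit u(x, y), v(x, y) and w(x, y) from the four reference planes is a per-pixel least-squares solve of Eq. (11), sketched below with synthetic stand-in data; this is our illustration, not necessarily the authors' calibration code.

```python
import numpy as np

def calibrate(heights, dPhi):
    """heights: (J,) known plane heights; dPhi: (J, H, W) absolute phase maps."""
    A = np.stack([np.ones_like(dPhi), dPhi, dPhi ** 2], axis=-1)  # (J, H, W, 3)
    y = (1.0 / heights)[:, None, None] * np.ones_like(dPhi)       # target 1/h
    AtA = np.einsum('jhwa,jhwb->hwab', A, A)                      # normal equations
    Aty = np.einsum('jhwa,jhw->hwa', A, y)
    return np.linalg.solve(AtA, Aty)        # per-pixel (u, v, w)

# synthetic self-check: planes whose phase and height obey Eq. (11) exactly
u, v, w = 0.01, 0.002, 0.0001               # toy coefficients
p = np.array([1.0, 2.0, 3.0, 4.0])          # toy plane phases
heights = 1.0 / (u + v * p + w * p ** 2)
coef = calibrate(heights, p[:, None, None] * np.ones((4, 3, 5)))
# coef[..., 0] recovers u, coef[..., 1] recovers v, coef[..., 2] recovers w
```

With four planes and three unknowns per pixel, the system is overdetermined; the normal-equation solve gives the least-squares fit.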

In our system, a flat board was repeatedly moved in increments of 50 millimeters (mm) to cover the entire calibrated 150-mm measuring volume. The absolute phase of the 4 planes at 4 different heights, relative to the reference plane, is shown in Fig. 5(a). A height of 10 mm maps to at most a 0.96π phase difference in our system, so the phase difference across the whole measurement volume is within 16π. A peaks model with a peak-to-peak value of 60 mm was simulated to explain the error-correction process. As shown in Fig. 5(a), the absolute phase of the peaks model was obtained using the calibrated parameters. The cross-section of the result is shown in Fig. 5(b), which indicates that the correct absolute phase lies within the calibrated measuring volume while the wrong phase suffers a 16π phase-jump error. Therefore, two thresholds Φ1 and Φ2 with a 16π phase difference are used to correct the errors; in our system, Φ1 is 15.2π and Φ2 is −0.8π. Because the phase difference of the whole measurement volume is within 16π, the wrong phase falls outside the interval between the two thresholds. Equation (12) can then be used to correct the phase errors in k3(x, y, n) caused by binarization and motion.


Fig. 5 Correction of the phase-jump errors in phase-to-height mapping process. (a) Phase-to-height mapping space. (b) The middle cross-section of (a).


ΔΦ(x,y,n) = ΔΦ(x,y,n) − 16π,   if ΔΦ(x,y,n) > Φ1
ΔΦ(x,y,n) = ΔΦ(x,y,n) + 16π,   if ΔΦ(x,y,n) < Φ2
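Eq. (12) amounts to folding out-of-range phase values back by 16π. A minimal sketch with the thresholds given in the text (Φ1 = 15.2π, Φ2 = −0.8π):

```python
import numpy as np

# Threshold correction of Eq. (12): absolute phase outside [Phi2, Phi1]
# is folded back by one ambiguity interval of 16*pi.
PHI1, PHI2 = 15.2 * np.pi, -0.8 * np.pi

def correct_jumps(dPhi):
    out = dPhi.copy()
    out[out > PHI1] -= 16 * np.pi   # phase too high: fold down
    out[out < PHI2] += 16 * np.pi   # phase too low: fold up
    return out
```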

3. Experiments and results

Experiments have been conducted to test the performance of the proposed method. As shown in Fig. 6, we developed a measuring system including a digital-light-processing (DLP) projector (LightCrafter 4500) and a high-speed camera (Photron FASTCAM Mini UX50). The projector resolution is 912 × 1140 pixels, and the lens mounted on the camera has a focal length of 16 mm and an aperture of f/1.4. In all our experiments, the image refreshing rate of the projector was set to 2500 Hz, and the camera resolution was set to 1280 × 800 pixels with an image acquisition speed of 2500 Hz, synchronized with the projector via a trigger signal.


Fig. 6 Photograph of the high-speed 3D shape measurement system.


3.1. The cooling fan and standard ceramic ball with a free-falling table tennis ball

In the first experiment, a CPU cooling fan, a standard ceramic ball and a free-falling table tennis ball were measured using the proposed CCGC method, the CGC method and the traditional GC method, to determine the accuracy of the proposed method and to compare the performance of the three methods. For a fair comparison, 22 patterns in total, covering both the CGC method (8 patterns in one pattern sequence) and the CCGC method (14 patterns in two pattern sequences), were projected sequentially onto the same scene, as shown in Fig. 7. The associated Visualization 1 shows the continuously captured images at 15 Hz. As shown in Fig. 7(a), the first 3 patterns and the last 4 patterns are used for the GC method, and all 8 patterns are used for the CGC method. The period of the sinusoidal pattern in all three methods is 70 pixels. Then, 3D results can be reconstructed by the GC and CGC methods using 7 patterns (the first 3 and last 4) and all 8 patterns, respectively, as shown in Fig. 7(a), and two 3D results can be restored by the CCGC method using 14 patterns, as shown in Figs. 7(b) and 7(c).


Fig. 7 Continuously captured fringe images for three methods (Visualization 1). (a) A sequence of 8 continuous fringe images for the GC and CGC methods. (b) Seven continuous fringe images for the CCGC method in sequence n-1. (c) Seven continuous fringe images for the CCGC method in sequence n.


As shown in Fig. 8(a), the radius of the standard ceramic ball is 12.6994 mm, measured by a coordinate measuring machine, and the radius of the table tennis ball is about 20 mm, measured with a vernier caliper. Figure 8(b) shows the absolute phase calculated by the CCGC method; phase-jump errors occur on the fan blades and the table tennis ball. The phase distribution along the red line in Fig. 8(b) is shown in Figs. 8(c) and 8(d), and the value of the phase-jump errors is 16π. Therefore, the errors can be corrected using Eq. (12), and the correct phase can be obtained, as shown in Fig. 8(e). After phase-to-height mapping with the calibrated system parameters, the 3D result can be reconstructed, as shown in Fig. 8(f). Sphere fitting was performed on the reconstructed 3D geometries of the table tennis ball and the standard ceramic ball, as shown in Figs. 8(g) and 8(h). The measured radii of the two balls are 20.052 mm and 12.692 mm, respectively; the root-mean-square (RMS) errors of the two balls are 0.165 mm and 0.098 mm, respectively. The error distributions of the two balls are shown in Figs. 8(i) and 8(j). Note that owing to the inaccuracy of the vernier-caliper measurement and the motion of the table tennis ball, the measurement accuracy for the table tennis ball is lower than that for the standard ceramic ball. This experiment clearly shows the effectiveness and high measurement accuracy of the proposed CCGC method.


Fig. 8 Data processing and accuracy analyzing of the CCGC method. (a) Test scene consisting of a cooling fan for CPU, a standard ceramic ball and a free-falling table tennis at T = 100ms. (b) Uncorrected absolute phase. (c)-(d) One cross-section (highlighted in (b)) of the uncorrected absolute phase. (e) Corrected absolute phase. (f) Reconstructed result. (g) Sphere fitting of the table tennis in (f). (h) Sphere fitting of the standard ceramic ball in (f). (i) Error distribution of the measured table tennis. (j) Error distribution of the measured standard ceramic ball.


To compare the performance of the CCGC method with the CGC and GC methods, the 3D results of the GC and CGC methods were also calculated. The numbers of projected patterns and grating periods, the reconstructed results and the reconstruction success rates of the three methods are summarized in Fig. 9. The same three phase-shifting patterns are used in all three methods, so the same wrapped phase is obtained; however, three different phase unwrapping strategies are used, so the number of projected patterns and the reconstruction success rate differ between methods. The data indicate that the CCGC method requires the same number of projected patterns as the GC method but achieves a higher reconstruction success rate (100%) than the GC method (88.5%). They also show that the CGC and CCGC methods have the same reconstruction success rate (100%), but the CCGC method requires fewer projected patterns than the CGC method. This comparative assessment clearly shows that the proposed CCGC method retains the advantages of the CGC and GC methods and outperforms both.


Fig. 9 Comparative assessment of GC, CGC and CCGC methods.


3.2. Collapsing building blocks

In the second experiment, the collapsing process of a pile of building blocks was measured to demonstrate the performance of the proposed method in a high-speed and complex scene. Wooden building blocks (block size ~30 × 30 × 30 mm3) were piled up, and the collapse was started by pulling the lower-left block. This is a challenging scene to measure because of the many discontinuous surfaces, including the gaps between blocks and the sharp edges of individual blocks, and because of the different reflectivities of the block surfaces [8]. The total collapsing process took about 490 ms. The three phase-shifting fringe images were averaged to create a texture map for each pattern sequence, as shown in Fig. 10(a), and the corresponding reconstructed results are shown in Fig. 10(b) and Visualization 2. The reconstruction rate is 357 fps and the replay rate is 30 fps. The depth range of the scene is comparatively large, but it stays within the 150-mm calibrated measurement volume, so the errors caused by binarization and motion can be corrected using Eq. (12). The experimental results show that the proposed method can perform absolute 3D shape measurement of multiple randomly moving objects with sharp edges.


Fig. 10 Measurement of collapsing building blocks. (a) Representative collapsing scenes at different time points. (b) Corresponding 3D reconstructions (Visualization 2).


3.3. Impact process of Newton’s cradle

In the last experiment, the impact process of a Newton's cradle was recorded and measured. Figure 11(a) shows the texture maps calculated by averaging the three phase-shifting images at different sampling times with a 58.8-ms interval, presenting a complete impact and swing, and Fig. 11(b) shows the corresponding 3D reconstructed results. T = 0 ms is the start of the swing, T = 117.6 ms is the moment when the left ball hits the second ball, and T = 235.2 ms is the moment when the right ball reaches its highest position. The results indicate that the middle three balls remained almost still during the whole process and that the left ball transferred its energy to the right ball through the impact. To further analyze the impact process, the trajectory and velocity of each ball can be obtained by tracing the center of the ball. The sphere-fitting method was used to find the ball centers, and the trajectories of the left and right balls were traced, as shown in Fig. 11(c). The gray shiny surface is the reconstructed result with the texture map at the first moment (T = 0 ms); the black spheres are the fitted spheres, and the two gray spheres are, respectively, the fitted sphere of the left ball at the hitting moment (T = 117.6 ms) and that of the right ball at the last moment (T = 235.2 ms). The colored lines are the trajectories of the left and right balls, which indicate that the starting position of the left ball was below the horizontal plane and that a non-central impact occurred between the left two balls, meaning that more momentum was lost in this impact. In addition, the instantaneous speed of a ball can be estimated by differentiating its position with respect to time, and the color along the trajectory represents the speed of the ball.
Figure 11(d) shows the velocities of the left and right balls as functions of time. The velocity of the left ball generally increased over time, the velocities of the two balls were exchanged at the hitting moment (T = 117.6 ms), and afterwards the left ball remained almost still while the velocity of the right ball decreased to around zero. Because of the non-central impact, the ball velocity decreased from 0.74 m/s (left ball) to 0.69 m/s (right ball) at the hitting moment. The velocity oscillation of the two balls around the hitting moment is caused by the shaking of the balls, as shown in Fig. 11(e). The six texture maps in Fig. 11(e) correspond to the moments circled in Fig. 11(d). The angle between the tangential direction of the outer balls and the normal direction of the string is less than 90 degrees at moments 1 and 6, equal to 90 degrees at moments 2 and 5, and more than 90 degrees at moments 3 and 4. Figures 11(d) and 11(e) indicate that the velocity of the ball increases when this angle is less than 90 degrees and decreases when it is more than 90 degrees. The whole reconstructed process is provided in Visualization 3, which clearly shows the shaking of the balls. In this experiment, the reconstruction rate is 357 fps and the replay rate is 15 fps.


Fig. 11 Measurement of the impact process of the Newton’s cradle. (a) Representative scenes at different time points. (b) Corresponding 3D reconstructions (Visualization 3). (c) The 3D point cloud of the scene at the first moment, with the color line showing the trajectory and velocity of the left and right ball. (d) The velocity profile of the left and right balls. (e) Different postures of the balls at the corresponding moments in (d).


The maximum velocity of the ball is 0.74 m/s, which is far lower than the projection and capture speed, so no phase unwrapping errors occur in this experiment. However, if the speed of the measured object increases until the motion-induced phase error between adjacent CCP1 patterns exceeds π, errors will occur in the decoding order.

To evaluate the maximum measurable velocity of a moving object, motion in the two directions that induce phase errors is considered: one along the depth direction (Z-axis), and the other along the direction perpendicular to the fringe lines (X-axis). In the depth direction (Z-axis), the least height difference hmin corresponding to a π phase difference is 150/14.4 = 10.41 mm, as shown in Fig. 5. With the image refreshing rate of the projector and the image acquisition rate of the camera both equal to r, and one pattern sequence containing m images, the maximum velocity VZmax of measured objects in the depth direction (Z-axis) can be calculated by the following equation:

VZmax = hmin · r/m.

In the direction perpendicular to the fringe lines (X-axis), the fourth reference plane at 150-mm height is nearest to the camera, so a π phase difference along the X-axis causes the least distance difference on this plane. As shown in Fig. 12(a), the unwrapped phase of the fourth reference plane is obtained, and the pixel coordinates are converted to world coordinates along the X- and Y-axes. To calculate the least distance difference dmin caused by a π phase difference, the derivative of the X-axis distance with respect to phase along the middle row of Fig. 12(a) is shown in Fig. 12(b). Thus dmin = 2.07π × 10−3 m, and VXmax can be calculated by the following equation:

VXmax = dmin · r/m.
In our system, r = 2500 and m = 7, so VZmax = 0.01041 × 2500/7 ≈ 3.72 m/s and VXmax = 2.07 × π × 10−3 × 2500/7 ≈ 2.32 m/s. Both are far higher than the speed of the moving target (0.74 m/s) in this experiment, so object motion does not induce phase unwrapping errors in our measurement. To further increase the maximum measurable speed, one can increase the projection and capture rate r, decrease the number of patterns m in one sequence, or reduce the angle between the projector and the camera to increase hmin and dmin.
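The numbers above can be reproduced directly from the two velocity formulas, using the system parameters given in the text:

```python
import math

# Velocity limits from the text: r is the projection/capture rate (Hz),
# m the number of patterns per sequence, h_min and d_min the least height
# and lateral displacement corresponding to a pi phase difference (m).
r, m = 2500, 7
h_min = 0.150 / 14.4              # least height step for a pi phase change
d_min = 2.07e-3 * math.pi         # least lateral step for a pi phase change
vz_max = h_min * r / m            # maximum speed along Z, ~3.72 m/s
vx_max = d_min * r / m            # maximum speed along X, ~2.32 m/s
```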


Fig. 12 Evaluation of the maximum measuring velocity in the perpendicular direction of the fringe direction. (a) Unwrapping phase of the fourth reference. (b) The derivative of distance in X-axis to phase in the middle row of (a).


4. Discussion

Our proposed method has the following advantages compared with other high-speed 3D shape measurement techniques.

  • Assurance of the optimum defocusing degree for the binary dithering technique. Since only three phase-shifting sinusoidal patterns with a single frequency are generated in our approach to obtain the wrapped phase distribution, the optimum defocusing degree can be guaranteed for the binary dithering technique, and patterns of higher sinusoidal quality can thus be obtained.
  • Simple and robust phase unwrapping in dynamic 3D shape measurement. Multi-wavelength (heterodyne) and multi-frequency (hierarchical) phase unwrapping algorithms [31] are widely used in high-speed 3D shape measurement. The multi-wavelength method increases the unambiguous measurement range at the cost of signal-to-noise ratio, so it is sensitive to noise during phase unwrapping. The multi-frequency method needs to project a unit-frequency sinusoidal pattern, but it is difficult to generate a high-quality unit-frequency sinusoidal pattern with only two gray levels. In addition, the optimal defocusing degree cannot be ensured for the unit-frequency pattern, because the high-frequency sinusoidal patterns are optimally defocused to pursue higher measurement accuracy. In the proposed CCGC method, by contrast, the Gray-code patterns have only two gray levels, which gives high robustness and noise immunity, and the cyclic complementary strategy eliminates errors in the transition areas between black and white codes. Complex dynamic scenes can therefore be measured without phase unwrapping errors.
  • Absolute 3D shape measurement of multiple randomly moving objects with sharp edges. As shown in measuring experiment of collapsing building blocks, our proposed approach is capable of restoring absolute 3D geometries for many spatially isolated and randomly moving objects with sharp edges, which is a challenging scene to measure in previous approaches.
  • Larger unambiguous phase measurement range and higher measuring accuracy. Our proposed cyclic coding strategy doubles the period of the projected sinusoidal fringe, extends the unambiguous phase measurement range and consequently improves the accuracy of measurement without any additional patterns compared with our previously proposed work [27].
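To make the cyclic complementary bookkeeping concrete, the following NumPy sketch (our own simplification, not the authors' code) shows how the coarse phase order k3 can be obtained from the reversal of CCP1 between consecutive sequences (Eq. (5)) and how CCP1 is corrected before Gray decoding (Eq. (6)), assuming an exclusive-or for the combination step:

```python
import numpy as np

def k3_from_ccp1(ccp1_prev: np.ndarray, ccp1_curr: np.ndarray) -> np.ndarray:
    """Coarse phase order k3 in {0, 1} from the cyclic reversal of CCP1
    between sequence n-1 and sequence n (cf. Eq. (5))."""
    return np.bitwise_xor(ccp1_prev, ccp1_curr)

def correct_ccp1(ccp1_curr: np.ndarray, k3: np.ndarray, n: int) -> np.ndarray:
    """Undo the reversal of CCP1 in every other sequence (cf. Eq. (6)):
    in sequences where mod(n+1, 2) = 1, pixels with k3 = 1 are flipped back."""
    return np.bitwise_xor(((n + 1) % 2) * k3, ccp1_curr)

# Toy example: CCP1 fully reversed between sequences, so k3 = 1 everywhere
# and the correction recovers the previous sequence's code.
prev = np.array([0, 1, 1, 0])
curr = np.array([1, 0, 0, 1])
k3 = k3_from_ccp1(prev, curr)
restored = correct_ccp1(curr, k3, n=2)
```

Here the binary codes are already-thresholded camera captures; in the real pipeline the remaining Gray-code patterns are then decoded into V1 and V2 as in Eqs. (7) and (8).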

However, the proposed method also has a limitation that remains to be addressed:

  • Limited measuring depth range. As mentioned in Sec. 2.4, the phase difference across the whole measurement volume must stay within 16π so that errors caused by binarization and motion can be corrected. This limits the measurable depth range of our approach, although the range is relatively large for general measurements. In future work, an assumption of motion continuity could be used to correct these errors and remove the depth limitation.
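The 16π constraint can be illustrated with a minimal sketch of the fold correction of Eq. (12); the thresholds Φ1 and Φ2 below are placeholders for the bounds of the valid phase-difference range, not values from the paper:

```python
import math

def correct_phase_difference(dphi: float, phi1: float, phi2: float) -> float:
    """Fold a phase difference back into the valid range (cf. Eq. (12)):
    values above phi1 are reduced by 16*pi, values below phi2 raised by 16*pi."""
    if dphi > phi1:
        return dphi - 16.0 * math.pi
    if dphi < phi2:
        return dphi + 16.0 * math.pi
    return dphi
```

Because the correction adds or subtracts a fixed 16π, it is only unambiguous when the true phase span of the volume fits within one 16π interval, which is exactly the depth limitation noted above.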

5. Conclusion

A novel high-speed 3D shape measurement technique using cyclic complementary Gray-code light has been proposed. The method employs the dithering technique to generate binary patterns and the binary defocusing technique to produce three phase-shifting sinusoidal patterns of a single frequency, from which the wrapped phase is calculated with a phase-shifting algorithm. The wrapped phase is then unwrapped pixel by pixel without ambiguity using four additional complementary Gray-code patterns, which eliminates phase-unwrapping errors caused by defocus, non-uniform reflectivity of the tested surface, background light intensity, and noise. Finally, exploiting the reversal of the last Gray-code pattern between consecutive sequences, the cyclic coding strategy doubles the period of the projected sinusoidal fringe, extends the unambiguous phase measurement range, and consequently improves measurement accuracy without additional patterns. Comparative experiments verified that the proposed CCGC method outperforms the conventional GC and CGC methods through higher robustness, a larger unambiguous phase measurement range, and fewer projected patterns. Experimental results on dynamic scenes demonstrated that the technique can reliably obtain high-quality shape and texture information of moving objects at a rate of 357 fps.

Funding

National Natural Science Foundation of China (61675141, 61722506, 61705105, 11574152) and National Key R&D Program of China (2017YFF0106403).

Acknowledgments

The authors would like to thank Dr. Lei Huang of Brookhaven National Laboratory, USA, for his valuable advice and helpful discussions.

References

1. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

2. K. R. Ford, G. D. Myer, and T. E. Hewett, “Reliability of landing 3D motion analysis: implications for longitudinal analyses,” Med. Sci. Sports Exerc. 39(11), 2021–2028 (2007). [CrossRef]   [PubMed]  

3. E. Malamas, E. Petrakis, M. Zervakis, L. Petit, and J. Legat, “A survey on industrial vision systems, applications and tools,” Image Vis. Comput. 21(2), 171–188 (2003). [CrossRef]  

4. X. Su and Q. Zhang, “Dynamic 3D shape measurement: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]  

5. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

6. V. Tiwari, M. Sutton, and S. Mcneill, “Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation,” Exp. Mech. 47(4), 561–579 (2007). [CrossRef]  

7. Q. Zhang, X. Su, Y. Cao, Y. Li, L. Xiang, and W. Chen, “Optical 3D shape and deformation measurement of rotating blades using stroboscopic structured illumination,” Opt. Eng. 44(11), 113601 (2005). [CrossRef]  

8. S. Heist, A. Mann, P. Kühmstedt, P. Schreiber, and G. Notni, “Array projection of aperiodic sinusoidal fringes for high-speed three-dimensional shape measurement,” Opt. Eng. 53(11), 112208 (2014). [CrossRef]  

9. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]  

10. G. A. Ayubi, J. A. Ayubi, J. M. Di Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35(21), 3682–3684 (2010). [CrossRef]   [PubMed]  

11. Y. Wang and S. Zhang, “Optimal pulse width modulation for sinusoidal fringe generation with projector defocusing,” Opt. Lett. 35(24), 4121–4123 (2010). [CrossRef]   [PubMed]  

12. W. Lohry and S. Zhang, “3D shape measurement with 2D area modulated binary patterns,” Opt. Lasers Eng. 50(7), 917–921 (2012). [CrossRef]  

13. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51(19), 4477–4490 (2012). [CrossRef]   [PubMed]  

14. C. Zuo, Q. Chen, G. Gu, S. Feng, and F. Feng, “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express 20(17), 19493–19510 (2012). [CrossRef]   [PubMed]  

15. L. Zhu, Y. Cao, D. He, and C. Chen, “Real-time tricolor phase measuring profilometry based on CCD sensitivity calibration,” J. Mod. Opt. 64(4), 379–387 (2017). [CrossRef]  

16. X.-Y. Su, W.-S. Zhou, G. von Bally, and D. Vukicevic, “Automated phase-measuring profilometry using defocused projection of a Ronchi grating,” Opt. Commun. 94(6), 561–573 (1992). [CrossRef]  

17. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982).

18. X. Su, J. Li, L. Guo, and W. Su, “An improved Fourier transform profilometry,” Proc. SPIE 0954 (1989).

19. Q. Zhang and X. Su, “An optical measurement of vortex shape at a free surface,” Opt. Laser Technol. 34(2), 107–113 (2002). [CrossRef]  

20. Q. Zhang and X. Su, “High-speed optical measurement for the drumhead vibration,” Opt. Express 13(8), 3110–3116 (2005). [CrossRef]   [PubMed]  

21. Y. Wang, J. I. Laughner, I. R. Efimov, and S. Zhang, “3D absolute shape measurement of live rabbit hearts with a superfast two-frequency phase-shifting technique,” Opt. Express 21(5), 5822–5832 (2013). [CrossRef]   [PubMed]  

22. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013). [CrossRef]  

23. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19(6), 5149–5155 (2011). [CrossRef]   [PubMed]  

24. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3D surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24(25), 28549–28560 (2016). [CrossRef]   [PubMed]  

25. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Lasers Eng. 87, 90–96 (2016). [CrossRef]  

26. J. I. Laughner, S. Zhang, H. Li, C. C. Shao, and I. R. Efimov, “Mapping cardiac surface mechanics with structured light imaging,” Am. J. Physiol. Heart Circ. Physiol. 303(6), H712–H720 (2012). [CrossRef]   [PubMed]  

27. Q. Zhang, X. Su, L. Xiang, and X. Sun, “3D shape measurement based on complementary Gray-code light,” Opt. Lasers Eng. 50(4), 574–579 (2012). [CrossRef]  

28. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]   [PubMed]  

29. K. V. Creath, “Phase-Measurement Interferometry Techniques,” Prog. Opt. 26, 349–393 (1988). [CrossRef]  

30. W. Li, X. Su, and Z. Liu, “Large-scale three-dimensional object measurement: a practical coordinate mapping and image data-patching method,” Appl. Opt. 40(20), 3326–3333 (2001). [CrossRef]   [PubMed]  

31. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

Supplementary Material (3)

Visualization 1
Visualization 2
Visualization 3



Figures (12)

Fig. 1 Sketch map of the complementary Gray-code method. (a)-(c) Phase-shifting dithered patterns. (d) The wrapped phase. (e)-(g) Traditional Gray-code patterns. (h) The additional Gray-code pattern.
Fig. 2 Sketch map of the cyclic coding strategy of complementary Gray-code light. (a)-(c) Phase-shifting dithered patterns in pattern sequence n-1. (d)-(g) Cyclic complementary Gray-code patterns (CCP) in pattern sequence n-1. (h)-(n) Corresponding patterns in pattern sequence n. (o) CCP1 in pattern sequence n-1. (p) Uncorrected CCP1 in pattern sequence n. (q) Phase order k3. (r) Corrected CCP1 in pattern sequence n.
Fig. 3 Determination of the phase orders k1, k2 and k3.
Fig. 4 Errors caused by binarization and motion. (a) CCP1 in sequence n-1. (b) Uncorrected CCP1 in sequence n. (c) Phase order k3 with errors in sequence n. (d) Corrected CCP1 with errors in sequence n. (e) CCP4 in sequence n. (f) Phase order k2 with errors in sequence n. (g) Phase order k1 with errors in sequence n.
Fig. 5 Correction of the phase-jump errors in the phase-to-height mapping process. (a) Phase-to-height mapping space. (b) The middle cross-section of (a).
Fig. 6 Photograph of the high-speed 3D shape measurement system.
Fig. 7 Continuously captured fringe images for the three methods (Visualization 1). (a) A sequence of 8 continuous fringe images for the GC and CGC methods. (b) Seven continuous fringe images for the CCGC method in sequence n-1. (c) Seven continuous fringe images for the CCGC method in sequence n.
Fig. 8 Data processing and accuracy analysis of the CCGC method. (a) Test scene consisting of a CPU cooling fan, a standard ceramic ball and a free-falling table tennis ball at T = 100 ms. (b) Uncorrected absolute phase. (c)-(d) One cross-section (highlighted in (b)) of the uncorrected absolute phase. (e) Corrected absolute phase. (f) Reconstructed result. (g) Sphere fitting of the table tennis ball in (f). (h) Sphere fitting of the standard ceramic ball in (f). (i) Error distribution of the measured table tennis ball. (j) Error distribution of the measured standard ceramic ball.
Fig. 9 Comparative assessment of the GC, CGC and CCGC methods.
Fig. 10 Measurement of collapsing building blocks. (a) Representative collapsing scenes at different time points. (b) Corresponding 3D reconstructions (Visualization 2).
Fig. 11 Measurement of the impact process of the Newton's cradle. (a) Representative scenes at different time points. (b) Corresponding 3D reconstructions (Visualization 3). (c) The 3D point cloud of the scene at the first moment, with the colored lines showing the trajectories and velocities of the left and right balls. (d) The velocity profiles of the left and right balls. (e) Different postures of the balls at the corresponding moments in (d).
Fig. 12 Evaluation of the maximum measuring velocity perpendicular to the fringe direction. (a) Unwrapped phase of the fourth reference. (b) Derivative of X-axis distance with respect to phase in the middle row of (a).

Equations (14)

\[ I_1(x,y,n) = \alpha(x,y,n)\{a^p + b^p\cos[\phi(x,y,n) - 2\pi/3] + \beta_1(x,y,n)\} + \beta_2(x,y,n) \tag{1} \]
\[ I_2(x,y,n) = \alpha(x,y,n)\{a^p + b^p\cos[\phi(x,y,n)] + \beta_1(x,y,n)\} + \beta_2(x,y,n) \tag{2} \]
\[ I_3(x,y,n) = \alpha(x,y,n)\{a^p + b^p\cos[\phi(x,y,n) + 2\pi/3] + \beta_1(x,y,n)\} + \beta_2(x,y,n) \tag{3} \]
\[ \phi(x,y,n) = \tan^{-1}\frac{\sqrt{3}\,[I_1(x,y,n) - I_3(x,y,n)]}{2I_2(x,y,n) - I_1(x,y,n) - I_3(x,y,n)} \tag{4} \]
\[ k_3(x,y,n) = CCP_1(x,y,n) \oplus CCP_1(x,y,n-1) \tag{5} \]
\[ CCP_1'(x,y,n) = [\mathrm{mod}(n+1,2)\cdot k_3(x,y,n)] \oplus CCP_1(x,y,n) \tag{6} \]
\[ V_1(x,y,n) = \sum_{i=1}^{3} CCP_i(x,y,n)\cdot 2^{(3-i)} \tag{7} \]
\[ V_2(x,y,n) = \sum_{i=1}^{4} CCP_i(x,y,n)\cdot 2^{(4-i)} \tag{8} \]
\[ k_2(x,y,n) = \mathrm{INT}\{[\,i(V_2(x,y,n)) + 1\,]/2\} \tag{9} \]
\[ \Phi(x,y,n) = \begin{cases} \phi(x,y,n) + 2\pi[k_2(x,y,n) + 8k_3(x,y,n)], & \phi(x,y,n) \le -\pi/2 \\ \phi(x,y,n) + 2\pi[k_1(x,y,n) + 8k_3(x,y,n)], & -\pi/2 < \phi(x,y,n) < \pi/2 \\ \phi(x,y,n) + 2\pi[k_2(x,y,n) + 8k_3(x,y,n)] - 2\pi, & \phi(x,y,n) \ge \pi/2 \end{cases} \tag{10} \]
\[ \frac{1}{h(x,y,n)} = u(x,y) + v(x,y)\,\Delta\Phi(x,y,n) + w(x,y)\,\Delta\Phi^2(x,y,n) \tag{11} \]
\[ \Delta\Phi(x,y,n) = \begin{cases} \Delta\Phi(x,y,n) - 16\pi, & \Delta\Phi(x,y,n) > \Phi_1 \\ \Delta\Phi(x,y,n) + 16\pi, & \Delta\Phi(x,y,n) < \Phi_2 \end{cases} \tag{12} \]
\[ V_{Z\max} = h_{\min}\, r/m \tag{13} \]
\[ V_{X\max} = d_{\min}\, r/m \tag{14} \]