Optica Publishing Group

Fringe-width encoded patterns for 3D surface profilometry

Open Access

Abstract

This paper presents a new fringe projection method for surface-shape measurement that uses novel fringe-width encoded fringe patterns. Specifically, the width of each fringe in the projected patterns is adjusted to serve as the codeword. The wrapped phase, which carries the coding information, is obtained with the conventional wrapped-phase calculation method, and the fringe order can be identified from the wrapped phase. After the fringe order is corrected with a region-growing algorithm, the fringe order and the wrapped phase can be used directly to reconstruct the surface. Static and dynamic measurements demonstrate the ability of the method to perform 3D shape measurement with as few as three projected patterns, a single camera, and a single projector.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, three-dimensional shape measurement has been widely used in industrial inspection, medicine, preservation of historical relics, and other fields [1–5]. However, researchers have never stopped pursuing higher-speed and higher-accuracy measurement methods. Fringe projection profilometry (FPP) is one of the most widely used techniques at present [6–10]. An FPP system usually consists of a camera, a digital projector, and a computer. During measurement, the projector projects pre-set fringe patterns onto the object while the camera captures the fringe images deformed by the object's surface; these captured images contain the depth information of the object. Through the geometric relationship between the projector and the camera, the three-dimensional shape of the object can be reconstructed.

Current research on improving FPP proceeds mainly along two lines. The first is to improve the hardware of the measurement system. Besides using cameras and projectors with higher resolution and frame rate, another camera can be introduced. Tao et al. [11] proposed a dual-frequency phase-shift technique based on a multi-view fringe projection system: five fringe images are projected, a reference phase is obtained from two low-frequency images through the geometric constraints between the two cameras, and this reference phase is then used to unwrap the high-frequency wrapped phase. Li et al. [12] proposed a high-speed 3D measurement framework with multi-view phase shifting; unlike traditional methods, this framework finds the corresponding points directly from the wrapped-phase images without phase unwrapping. Although these two methods improve measurement accuracy to some extent, the extra cameras increase the complexity and cost of the measurement system, and the measurement range is further reduced by the difference in field of view between the cameras. The second line of research concerns the phase unwrapping algorithm. Traditional FPP methods are generally divided into spatial and temporal phase unwrapping algorithms [13,14]. Although spatial phase unwrapping needs fewer fringe patterns to retrieve the absolute phase, it does not work well for complex contours or separated objects because it relies on surrounding pixels. Temporal phase unwrapping [15–19] is a more general approach that carries out the unwrapping by projecting additional images, and many advances have been built on this idea.

The multi-wavelength method [20–22] projects several phase-shift patterns of different frequencies and uses the relationship between the low-frequency and high-frequency patterns to obtain the fringe order, thereby unwrapping the phase. The idea of the gray-code method [23–25] is similar to that of the phase-code method [26,27]: codewords are embedded into extra projected patterns, and the fringe order is determined by decoding these extra images. These methods can unwrap the wrapped phase, but the many extra projected images reduce the measurement speed.

To overcome the need for extra projections in temporal phase unwrapping, and thus improve measurement speed, some new ideas have been put forward. Wu et al. [28] proposed four specially intensity-coded fringe patterns that realize pixel-by-pixel phase unwrapping by embedding codewords in the phase-shift domain without additional patterns. An et al. [29] proposed unwrapping the phase through the geometric relationship of the system, but a reference plane must be generated in advance, and there is a restriction on the height of the measured object.

In this article, to avoid the extra images that other temporal phase unwrapping methods require, we present a new fringe projection method that uses novel fringe-width encoded fringe patterns. The fringe width encodes a De-Bruijn sequence that identifies the fringe order, which is then used directly to reconstruct the surface of the target object. Since only three projected patterns, a single camera, and a single projector are required in the minimal configuration, the proposed method improves the measuring speed at no additional cost.

2. Methods

2.1 Phase-shifting algorithm

Over the years, many fringe analysis techniques have been proposed to retrieve phase information. Phase-shifting methods are widely used due to their accuracy and robustness. Without loss of generality, this paper verifies the feasibility of the proposed fringe-width encoding method with the three-step phase-shifting algorithm. The three phase-shifting patterns can be written mathematically as:

$$\begin{aligned} &{{I_1}({u,v} )= A({u,v} )+ B({u,v} )\cos ({\phi ({u,v} )- 2\pi /3} )}\\& {{I_2}({u,v} )= A({u,v} )+ B({u,v} )\cos ({\phi ({u,v} )} )}\\ &{{I_3}({u,v} )= A({u,v} )+ B({u,v} )\cos ({\phi ({u,v} )+ 2\pi /3} )} \end{aligned}$$
where ${I_i}(u,v)$ represents the intensity of point $(u,v)$ in the $i$-th pattern ($i = 1,2,3$), and A, B, $\phi $ are the average intensity, the intensity modulation, and the phase, respectively. The wrapped phase $\varphi $ can be obtained by combining the equations of Eq. (1):
$$\varphi ({u,v} )= \arctan \left\{ {\frac{{\sqrt 3 [{{I_3}(u,v) - {I_1}(u,v)} ]}}{{2{I_2}(u,v) - {I_1}(u,v) - {I_3}(u,v)}}} \right\}$$

The phase $\varphi (u,v)$ is wrapped into the range $[0,2\pi )$ within each fringe period, so it is also called the wrapped phase. To unwrap $\varphi (u,v)$ and restore the complete unwrapped phase, a spatial or temporal phase unwrapping algorithm can be used. The core task of phase unwrapping is to determine the fringe order $K(u,v)$ for each point. The unwrapped phase $\phi (u,v)$ can then be calculated as follows:

$$\phi ({u,v} )= 2\pi \times K(u,v) + \varphi ({u,v} )$$

The spatial phase unwrapping algorithm can only generate a relative phase, while a temporal phase unwrapping algorithm such as the multi-frequency phase-shifting algorithm requires extra projection patterns. We propose a new algorithm to determine the fringe order $K(u,v)$ without adding projection patterns.
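As a minimal numerical sketch of Eqs. (2) and (3) (assuming the three captured images are floating-point arrays on a common pixel grid), the wrapped phase can be computed with the two-argument arctangent, which extends Eq. (2) to the full $[0, 2\pi)$ range; the sign convention follows Eq. (2) as printed and is absorbed by the system calibration:

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from the three phase-shifted images, Eq. (2)."""
    # arctan2 resolves the quadrant; mod maps the result into [0, 2*pi)
    phi = np.arctan2(np.sqrt(3.0) * (I3 - I1), 2.0 * I2 - I1 - I3)
    return np.mod(phi, 2.0 * np.pi)

def unwrap(phi_wrapped, K):
    """Absolute phase from the wrapped phase and the fringe order map, Eq. (3)."""
    return 2.0 * np.pi * K + phi_wrapped
```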

2.2 Fringe width encoded pattern

Different from the conventional multi-frequency phase-shifting method, the fringe width of the patterns we project contains coded information. To distinguish the fringe order $K(u,v)$, the unique subsequences of a De-Bruijn sequence are adopted. There are ${2^n}$ distinct subsequences of length $n$ composed of the elements $\{ 0,1\} $. Each element of the De-Bruijn sequence starts exactly one such $n$-length subsequence, so the length of the De-Bruijn sequence is ${2^n}$. For instance, the De-Bruijn sequence for elements $\{ 0,1\} $ and 6-length ($n = 6$) subsequences is $0000100110101111\ldots $ (only the first 16 elements are shown here). In this case, each contiguous 6-length subsequence appears exactly once without repetition. These subsequences can be used to distinguish different fringe orders: every unique 6-length subsequence corresponds to a fringe order, as shown in Table 1. If one wants to distinguish the fringe orders with fewer subsequences, we suggest combining geometric constraints [29,30].


Table 1. The look-up table between the subsequence and the fringe order in the example
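Any binary De-Bruijn sequence $B(2, n)$ can serve as the codeword source. A standard construction (the FKM algorithm via Lyndon-word concatenation; it produces the lexicographically smallest sequence, which need not match the specific sequence quoted above) can be sketched as:

```python
def de_bruijn(k, n):
    """B(k, n) De-Bruijn sequence via Lyndon-word concatenation (FKM algorithm)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

For $n = 6$ this yields 64 elements, and every contiguous 6-length window, taken cyclically, is unique.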

After we have the De-Bruijn sequence, as shown in Fig. 1(a), we first generate three-step phase-shifting patterns (${I_1}$, ${I_2}$, ${I_3}$) in which the number of fringe periods equals the length of the De-Bruijn sequence. Taking ${I_1}$ as an example, each fringe period is labelled 0 or 1 according to the De-Bruijn sequence. Fringes labelled 0 and 1 differ in width: the width of a period labelled 1 is doubled, while a period labelled 0 keeps its original width. That is, if the number of pixels across one period labelled 0 is $S$, then the number of pixels across a period labelled 1 is $2S$. After the width of every period has been adjusted, the original ${I_1}$ becomes $I_1^{\prime}$ with coding information.


Fig. 1. The patterns of the proposed method in the example. (a) The original three-step phase-shifting patterns. (b) Three fringe-width encoded fringe patterns.


As shown in Fig. 1(b), for the original three-step phase-shifting patterns (${I_1}$, ${I_2}$, ${I_3}$), the width of each fringe period is adjusted according to the De-Bruijn sequence, and finally our novel fringe-width encoded fringe patterns ($I_1^{\prime}$, $I_2^{\prime}$, $I_3^{\prime}$) are generated.
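The pattern generation described above can be sketched as follows; the parameter names (`S` for the base period width, `A` and `B` for intensity offset and modulation, and the small image height) are illustrative choices, not values from the paper:

```python
import numpy as np

def encoded_patterns(labels, S=20, A=0.5, B=0.5, height=4):
    """Three fringe-width encoded patterns I1', I2', I3' from De-Bruijn labels."""
    cols = []
    for lab in labels:
        w = 2 * S if lab else S                   # label 1 -> doubled period width
        cols.append(np.arange(w) * 2 * np.pi / w) # one full 0..2*pi period
    phase = np.concatenate(cols)                  # per-column phase along u^p
    shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3) # the three phase shifts of Eq. (1)
    return [A + B * np.cos(phase + d)[None, :].repeat(height, axis=0)
            for d in shifts]
```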

2.3 Initial fringe order map acquisition

In our patterns, the wrapped phase $\varphi $ can similarly be obtained by substituting $I_1^{\prime}$, $I_2^{\prime}$, $I_3^{\prime}$ (which correspond to ${I_1}$, ${I_2}$, ${I_3}$) into Eq. (2). As shown in Fig. 3(b), the calculated wrapped phase $\varphi $ inherits the width-encoding pattern from the fringe patterns. Recovering the coding information from $\varphi $ and the fringe order of each period involves the following steps:


Fig. 2. The growing processes. (a)The center seed pixel and eight neighborhood pixels. (b)New seed pixel obtained after a growing process.



Fig. 3. The principle of steps 2.2 to 2.4. (a) One cross-section of the three patterns. (The first six periods along the horizontal direction are shown.) (b) One cross-section of the wrapped phase computed from (a). (c) Extract the phase 2π discontinuity from (b). (d) The corresponding labels for all phase bands on a cross-section. (e) The corresponding process between the obtained labels and the look-up table. (f) The initial fringe order map. (Some errors still exist) (g) The final correct fringe order map after correction.


2.3.1 Division of each period of fringe

For every cross section of the wrapped phase $\varphi $, we extract the place where the phase $2\pi$ discontinuity occurs as the demarcation point shown in Fig. 3(c). Through these demarcation points, many phase bands can be obtained, and the next step is to analyze these phase bands.

2.3.2 Determination of the label of each phase band

For the phase bands obtained from the first step, we set up a sliding window: every $(n + 1)$ consecutive phase bands are selected as a group, and the window slides one phase band to the right each time. For each group, we find the narrowest and the widest phase band, and half of the sum of their widths is taken as a reference threshold ${\tau _1}$. If the width of a phase band in the group is greater than ${\tau _1}$, the band is labelled 1; otherwise it is labelled 0.

After all the groups have completed the analysis, we have obtained the corresponding labels for all phase bands on a cross section of the wrapped phase $\varphi $ shown in Fig. 3(d).
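A sketch of this sliding-window labelling, operating on the list of phase-band widths extracted from one cross section (the function name and the overwrite-by-later-windows behaviour are implementation choices):

```python
def band_labels(widths, n=6):
    """Label each phase band 0/1 with a sliding window of n+1 consecutive bands;
    tau1 = (narrowest + widest) / 2 within the window (Section 2.3.2)."""
    labels = [None] * len(widths)
    for i in range(len(widths) - n):              # slide one band at a time
        group = widths[i:i + n + 1]
        tau1 = (min(group) + max(group)) / 2.0
        for j, w in enumerate(group, start=i):
            labels[j] = 1 if w > tau1 else 0      # later windows may relabel
    return labels
```

Because a De-Bruijn sequence with $n = 6$ has runs of at most six identical labels, every window of $n + 1$ bands contains both widths, so ${\tau _1}$ always falls between them.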

2.3.3 Acquisition of the initial fringe order map

For one cross section, since we have obtained the labels of all phase bands on that section and we also have the original De-Bruijn sequence on hand, we only need to match the subsequences to get the fringe order of each phase band. If the length of the subsequence is $n$, the fringe order $K(u,v)$ can be looked up from every unique contiguous $n$-length subsequence in a look-up table similar to Table 1. For example, as shown in Fig. 3(e), if the label of a phase band is 0 and the labels of the next five phase bands are 00010, then the fringe order of every pixel $(u,v)$ in that band is found to be $K(u,v) = 1$ from Table 1.

So far, we have obtained the fringe orders on one cross section. Repeating the above steps (2.3.1 to 2.3.3) for each cross section yields the initial fringe order map over the entire ROI, as shown in Fig. 3(f).
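The subsequence matching can be sketched with a dictionary built from the cyclic extension of the De-Bruijn sequence; following the example above (subsequence 000010 ↦ K = 1), fringe orders are taken here as 1-based, which is an assumption about Table 1's numbering. Bands near the right edge, which lack $n - 1$ successors, would inherit their orders from band continuity in practice:

```python
def fringe_orders(labels, seq, n=6):
    """Fringe order of each band from its n-length label subsequence (cf. Table 1)."""
    ext = seq + seq[:n - 1]                       # cyclic extension of the sequence
    lut = {tuple(ext[i:i + n]): i + 1             # subsequence -> 1-based order
           for i in range(len(seq))}
    return [lut[tuple(labels[i:i + n])]
            for i in range(len(labels) - n + 1)]
```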

2.4 Fringe order map correction

In reality, due to shadows, occlusion by the object itself, etc., there are still some errors in the initial fringe order map obtained from the previous step, so the fringe order map must be corrected. Although the fringe order may be wrong in some regions with sharp changes in height, the initial fringe order in most of the remaining regions is still reliable and can be used to correct the errors. The fringe order map correction includes the following steps:

2.4.1 Phase period segmentation based on region growing algorithm

First, the region of each phase period should be segmented, and all pixels in one region should have the same fringe order. The region growing algorithm is applied to the wrapped phase $\varphi $, and the criterion governing the growing process is expressed as follows:

$$D = |{\varphi ({u_{nei}},{v_{nei}}) - \varphi ({u_{cen}},{v_{cen}})} |$$
$$({u_{nei}},{v_{nei}}) = \begin{cases} \text{new seed} & \text{if } D < {\tau_2}\\ \text{not a seed} & \text{if } D \ge {\tau_2} \end{cases}$$
where $\varphi ({u_{cen}},{v_{cen}})$ is the phase value of the center seed pixel and $\varphi ({u_{nei}},{v_{nei}})$ is the phase value of a pixel in its eight-neighborhood. For each of the eight neighborhood pixels, if the absolute phase difference $D$ from the central seed pixel is less than the threshold ${\tau _2}$, the pixel grows into a new seed pixel. Figure 2 shows the growing process.

The growth repeats until no new seed pixel is generated, at which point the region of the phase period, consisting of all seed pixels, has been segmented.
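Eqs. (4) and (5) amount to a breadth-first flood fill over the 8-neighborhood; a sketch (with an illustrative default for $\tau_2$) is:

```python
import numpy as np
from collections import deque

def grow_region(phi, seed, tau2=1.0):
    """Segment one fringe-period region by region growing (Eqs. (4)-(5)):
    an 8-neighbor joins if its phase differs from the current seed by < tau2."""
    h, w = phi.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        uc, vc = queue.popleft()
        for du in (-1, 0, 1):
            for dv in (-1, 0, 1):
                un, vn = uc + du, vc + dv
                if (0 <= un < h and 0 <= vn < w and not region[un, vn]
                        and abs(phi[un, vn] - phi[uc, vc]) < tau2):
                    region[un, vn] = True         # grows as a new seed pixel
                    queue.append((un, vn))
    return region
```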

2.4.2 Correction based on the voting mechanism

In each obtained region, by using the voting mechanism, the fringe order with the highest frequency is selected as the unified fringe order of this region.

After the fringe order correction, the correct fringe order can be obtained as shown in Fig. 3(g). Figure 3 shows the content of steps 2.2 to 2.4, from the projection patterns to the acquisition of the fringe order map.
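The voting step can be sketched as a per-region majority vote over the initial fringe order map (the region masks are assumed to come from the segmentation of Section 2.4.1):

```python
import numpy as np

def correct_orders(K, regions):
    """Majority vote within each segmented region (Section 2.4.2)."""
    K = K.copy()
    for mask in regions:                              # boolean mask per region
        vals, counts = np.unique(K[mask], return_counts=True)
        K[mask] = vals[np.argmax(counts)]             # most frequent order wins
    return K
```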

2.5 Corresponding relation

In order to reconstruct the three-dimensional surface, the corresponding relation between the pixel $({u^p},{v^p})$ of the projected image and the phase $\varphi (u,v)$ of the captured image is required.

First, we use a pinhole model to describe the projection from the 3D world coordinates $({X^w},{Y^w},{Z^w})$ to the 2D imaging coordinates $(u,v)$. The linear pinhole model can be mathematically represented as:

$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \left[ {\begin{array}{ccc} {{f_u}}&\gamma &{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{cccc} {{r_{11}}}&{{r_{12}}}&{{r_{13}}}&{{t_1}}\\ {{r_{21}}}&{{r_{22}}}&{{r_{23}}}&{{t_2}}\\ {{r_{31}}}&{{r_{32}}}&{{r_{33}}}&{{t_3}} \end{array}} \right]\left[ {\begin{array}{c} {{X^w}}\\ {{Y^w}}\\ {{Z^w}}\\ 1 \end{array}} \right]$$
where s is a scaling factor, ${f_u}$ and ${f_v}$ represent the effective focal length in the u direction and $v$ direction respectively, $\gamma$ is the skew factor, $({u_0},{v_0})$ is the principal point. These parameters constitute the intrinsic parameter matrix, while ${r_{ij}}$ and ${t_i}$ are the extrinsic parameters.

Intrinsic and extrinsic parameter matrix can be obtained by calibration, and we use the following projection matrix M to represent the intrinsic and extrinsic parameter matrix:

$$\begin{array}{c} M = \left[ {\begin{array}{ccc} {{f_u}}&\gamma &{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{cccc} {{r_{11}}}&{{r_{12}}}&{{r_{13}}}&{{t_1}}\\ {{r_{21}}}&{{r_{22}}}&{{r_{23}}}&{{t_2}}\\ {{r_{31}}}&{{r_{32}}}&{{r_{33}}}&{{t_3}} \end{array}} \right]\\ = \left[ {\begin{array}{cccc} {{m_{11}}}&{{m_{12}}}&{{m_{13}}}&{{m_{14}}}\\ {{m_{21}}}&{{m_{22}}}&{{m_{23}}}&{{m_{24}}}\\ {{m_{31}}}&{{m_{32}}}&{{m_{33}}}&{{m_{34}}} \end{array}} \right] \end{array}$$

The projection process of the projector can be regarded as the inverse of image acquisition by the camera, so the same lens model used for the camera is applicable to the projector [31]. With the calibration parameters of the camera-projector system, the following two equations can be obtained:

$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = M\left[ {\begin{array}{c} {{X^w}}\\ {{Y^w}}\\ {{Z^w}}\\ 1 \end{array}} \right],\textrm{ }{s^p}\left[ {\begin{array}{c} {{u^p}}\\ {{v^p}}\\ 1 \end{array}} \right] = {M^p}\left[ {\begin{array}{c} {{X^w}}\\ {{Y^w}}\\ {{Z^w}}\\ 1 \end{array}} \right]$$
where superscript p represents projector, while no superscript represents the camera.

After the camera-projector system is calibrated, M and ${M^p}$ are known. Equation (8) provides six equations with a total of seven unknowns, so an extra constraint is required to find the unique solution. The corresponding relation between the pixel $({u^p},{v^p})$ of the projected image and the pixel $(u,v)$ of the captured image is the last piece of the puzzle. Assume that the fringe patterns vary sinusoidally along the ${u^p}$ direction and remain constant along the ${v^p}$ direction, and that the fringe order at $(u,v)$ is $K(u,v)$. The corresponding relation can then be expressed as follows:

$${u^p} = \begin{cases} W + \varphi (u,v) \times S/(2\pi ) & \text{if } K(u,v) \text{ is labelled } 0\\ W + \varphi (u,v) \times 2S/(2\pi ) & \text{if } K(u,v) \text{ is labelled } 1 \end{cases}$$
$$W = {w_0} \times S + {w_1} \times 2S$$

Here $W$ is the starting position of the fringe order $K(u,v)$ and can be obtained from Eq. (10): ${w_0}$ is the number of periods labelled 0 that appear before $K(u,v)$ in the De-Bruijn sequence, and similarly ${w_1}$ is the number of periods labelled 1 that appear before $K(u,v)$. $S$ ($2S$) is the number of pixels across one fringe period labelled 0 (1). Figure 4 illustrates the meanings of the variables in Eq. (10): taking $K(u,v) = 10$ as an example, the corresponding label is 0, ${w_0} = 6$, and ${w_1} = 3$.
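Eqs. (9) and (10) can be sketched as follows, using the example sequence from the text and 1-based fringe orders as in Fig. 4 (so the periods preceding order $K$ are the first $K - 1$ labels):

```python
import numpy as np

SEQ = [0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]  # example sequence prefix

def project_u(phi, K, labels, S):
    """Projector column u^p for a camera pixel, Eqs. (9)-(10); K is 1-based."""
    w0 = labels[:K - 1].count(0)          # label-0 periods before fringe order K
    w1 = (K - 1) - w0                     # label-1 periods before fringe order K
    W = w0 * S + w1 * 2 * S               # Eq. (10): starting column of period K
    period = 2 * S if labels[K - 1] else S
    return W + phi * period / (2.0 * np.pi)
```

For the worked example above ($K = 10$, $S$ pixels per label-0 period), $W = 6S + 3 \times 2S = 12S$.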


Fig. 4. The respective meanings of the variables in Eq. (10) (take $K(u,v)\textrm{ = }10$ as an example).


Through the relationship between $({u^p},{v^p})$ and $(u,v)$, we can solve the equations obtained by Eq. (8) and finally obtain the three-dimensional information of the object.
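Once $u^p$ is known, Eq. (8) reduces to three independent linear equations in $(X^w, Y^w, Z^w)$: two from the camera rows (eliminating $s$) and one from the projector column (eliminating $s^p$ and the unused $v^p$). A sketch, assuming `M` and `Mp` are the calibrated 3×4 projection matrices:

```python
import numpy as np

def triangulate(M, Mp, u, v, up):
    """Solve Eq. (8) for (X, Y, Z): two rows from the camera pixel (u, v)
    and one row from the projector column u^p."""
    A = np.array([
        M[0] - u * M[2],      # camera:    u * (M[2] @ P) = M[0] @ P
        M[1] - v * M[2],      # camera:    v * (M[2] @ P) = M[1] @ P
        Mp[0] - up * Mp[2],   # projector: up * (Mp[2] @ P) = Mp[0] @ P
    ])
    # A @ [X, Y, Z, 1]^T = 0  ->  A[:, :3] @ XYZ = -A[:, 3]
    return np.linalg.solve(A[:, :3], -A[:, 3])
```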

3. Experiments

3.1 Static experiments

To verify the feasibility of the proposed method, an experimental system was developed consisting of a single CCD camera (Model: G-505B, resolution: 2056×2452) and a digital-light-processing projector (Model: Acer H750, resolution: 1024×768). In the experiments, the projector was placed about 900 mm in front of the reference plane, and the camera was placed approximately 350 mm to the side of the projector. For the projection patterns, without loss of generality, we chose the three-step phase-shifting algorithm as the basis. The length of the De-Bruijn sequence is ${2^n} = 64$ ($n = 6$), according to the optimal frequency of our system [32,33]. For this De-Bruijn sequence, a look-up table similar to Table 1 was generated; each contiguous 6-length subsequence appears exactly once without repetition. Moreover, to reduce the influence of projector nonlinearity, we pre-distorted the generated patterns [34]. After an image is captured by the camera, the Otsu algorithm [35] is used to preprocess it and remove shadows and low-brightness areas.
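For reference, Otsu's threshold [35] maximizes the between-class variance of the image histogram; a self-contained NumPy sketch for an 8-bit image (a stand-in for whatever implementation was actually used) is:

```python
import numpy as np

def otsu_mask(img):
    """Keep pixels above the Otsu threshold (shadow / low-brightness removal)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability at each t
    mu = np.cumsum(p * np.arange(256))          # class-0 cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    t = np.nanargmax(sigma_b)                   # threshold maximizing variance
    return img > t
```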

First, we compared the accuracy of the proposed method with the widely used conventional three-frequency three-step algorithm, which needs nine patterns. In this experiment, we measured a standard ceramic plate. Figure 5(a) shows one captured pattern of the ceramic plate, and Fig. 5(b) shows one cross-section of the retrieved wrapped phase map and the fringe order map. As shown in Fig. 5(c) and (d), we evaluated the flatness of the 3D reconstruction results of the two methods in Geomagic Studio 2017, which fits an ideal plane by least squares. The plane-fitting standard deviations of the 3D reconstruction errors are 0.03188 mm for the conventional method and 0.03741 mm for the proposed method. The comparative experiments illustrate that the proposed method achieves accuracy comparable to that of the conventional method.


Fig. 5. (a) One captured pattern of the ceramic plate. (b) One cross-section of the retrieved wrapped phase map and the fringe order map. (c) and (d) The 3D reconstruction results of the two methods and their error distributions: (c) the proposed method and (d) the conventional method.


To further check the measuring accuracy of the proposed method in 3D reconstruction, we used the same two methods to measure two ceramic standard spheres (A and B, connected by a black rod). The measured standard spheres are shown in Fig. 6(a). The wrapped phase map obtained by Eq. (2) is shown in Fig. 6(b). The identified fringe order maps before and after correction are shown in Fig. 6(c) and Fig. 6(d), respectively. The fringe order of the captured patterns is assigned correctly in most regions, although some errors may remain in shadowed and obscured regions. Figure 6(e) shows the final 3D point cloud recovered from the wrapped phase and the fringe order map. Figure 6(f) shows the 3D reconstruction results.


Fig. 6. The measurement results of two ceramic standard spheres of the proposed method. (a) The measured standard spheres. (b) The wrapped phase map. (c) The identified fringe order map before correction. (d) The identified fringe order map after correction. (e) The final 3D reconstructed point cloud. (f) The 3D reconstruction results of the proposed method.


Table 2 shows the results of measuring the two standard spheres with the two methods. The reference values of sphere A and sphere B were obtained with a Coordinate Measuring Machine (CMM) at a temperature of 19.6 degrees Celsius and a relative humidity of 51%. The reference diameters of sphere A and sphere B are 50.8070 mm and 50.8048 mm, each with 1 μm uncertainty, and the reference center-to-center distance of the two spheres is 100.0343 mm with 2 μm uncertainty. For the conventional method, the measured diameter of sphere A is 50.8154 mm (measuring error +0.0084 mm) and that of sphere B is 50.8077 mm (measuring error +0.0029 mm); the measured center-to-center distance is 100.0465 mm (measuring error +0.0122 mm), and the sphere-fitting standard deviations of spheres A and B are 0.0496 mm and 0.0538 mm, respectively. For the proposed method, the measured diameters of sphere A and sphere B are 50.8158 mm (measuring error +0.0088 mm) and 50.8133 mm (measuring error +0.0085 mm), respectively; the measured center-to-center distance is 100.0479 mm (measuring error +0.0136 mm), and the sphere-fitting standard deviations of spheres A and B are 0.0533 mm and 0.0652 mm, respectively. The accuracy depends largely on the position and posture of the target object, the system calibration, and the gamma correction. The experimental results further show that the proposed method achieves high accuracy in 3D measurement.


Table 2. The comparison of the results in measuring two standard spheres by the two methods

Measurement was also performed on a colored plastic turtle using our method. The 3D reconstruction result of the colored plastic turtle (Fig. 7(a)) is shown in Fig. 7(b). These results demonstrate the ability of the proposed method to perform 3D shape measurement with only three projected patterns, a single camera, and a single projector.


Fig. 7. The proposed method in measuring the colored plastic turtle. (a) The 2D photograph of the measured colored plastic turtle. (b) The 3D reconstruction model with color.


3.2 Dynamic experiments

To validate the feasibility of the proposed method in high-speed measurement, we conducted experiments on two common vibration scenes. For this work, we developed a high-speed measurement system that includes a high-speed camera (Model: Phantom S210, resolution: 1024×768, frame rate: 1730 fps) and a Texas Instruments DLP LightCrafter 4500 DMD kit (resolution: 912×1140, frame rate: 120 fps (8-bit) / 4225 fps (1-bit)). The frame rate of projection and acquisition in the dynamic experiments is 1700 frames/s, with the projector in 1-bit projection mode. The technique used for converting the binary patterns to fringe patterns is based on the work of Zhang [36].

In the first experimental scene, we built a vibrating cantilever beam to verify the performance of the proposed method in a high-speed scenario. The cantilever beam is made of a steel ruler wrapped in white paper; one end is clamped on a frame while the other end is excited by hand. First, we manually press the cantilever beam to bend it. After the hand is released, the cantilever beam starts to vibrate, and the high-speed measurement system begins acquisition. To quantitatively study the vibration process, we select three points on the cantilever beam and examine the amplitude in the height direction at different positions. Visualization 1 shows all 3D frames of the measurement sequence. As shown in Fig. 8(a), the first point P1 is a corner of the free end of the cantilever beam, and the second and third points P2 and P3 are located at the edge of the beam on the same side as P1, at distances of around 70 mm and 140 mm from P1, respectively. Figures 8(b), (c), and (d) show the 3D reconstructions of the cantilever beam at different times during the vibration; dots of different colors identify the three-dimensional points corresponding to P1, P2, and P3. Figure 8(e) shows the amplitude in the height direction of P1, P2, and P3 (calculated relative to their heights in the static state before vibration). Two phenomena can be observed: first, the maximum amplitude of P1 is larger than that of P2, and that of P2 is larger than that of P3, i.e., the closer a point is to the free end, the greater its amplitude; second, the amplitude of P1 decreases gradually from around 8 mm to 4 mm, i.e., the vibration of the cantilever beam decays gradually with time.


Fig. 8. The experiment of a vibrating cantilever beam (associated with Visualization 1). (a) A pattern captured during vibration. (b), (c) and (d) Corresponding 3D reconstruction at different times. (e) The amplitude of height direction of P1, P2, P3 as a function of time.


In the second experimental scene, we measured and reconstructed a paper plane vibrating in the wind, with a static box placed next to it for comparison; the paper plane and the static box are isolated from each other. As shown in Fig. 9(a), the paper plane is glued to a small bracket, and the wind is generated by a fan, with the paper plane on the windward side. After the fan is activated, the paper plane starts to vibrate under the wind, while the static box remains stationary. Similarly, we choose P1 and P2, the corners of the wings on both sides, to observe the wing vibration of the paper plane; P3 is chosen on the static box, whose displacement is essentially zero. Visualization 2 shows the entire video sequence. Figure 9(b) shows the 3D reconstruction of the paper plane during vibration, with P1 and P2 being the three-dimensional points corresponding to the wing corners. Figure 9(c) shows the displacement in height of points P1, P2, and P3.


Fig. 9. The experiment of a paper plane blown by the wind (associated with Visualization 2). (a) A pattern captured during vibration. (b) Corresponding 3D reconstruction. (c) The displacement in height of points P1, P2 and P3 as a function of time.


The results of these dynamic experiments verify that the proposed method can be used for high-speed measurement and provides high-accuracy quantitative evaluation of feature points on the surface of objects. These results also show the great potential of high-speed measurement in engineering applications.

4. Conclusions

This paper proposes a new fringe projection method that uses novel fringe-width encoded fringe patterns. With the new fringe-width encoding and the correction based on the region growing algorithm, the fringe order of the captured patterns is assigned correctly in most regions. The measurement results demonstrate the ability of the proposed method to perform 3D shape measurement with as few as three projected patterns, a single camera, and a single projector. The proposed method improves the measuring speed at no additional cost and is suitable for high-speed measurement.

Funding

Key Technologies Research and Development Program (2018YFB2001400).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. Gomes, O. R. P. Bellon, and L. Silva, “3D reconstruction methods for digital preservation of cultural heritage: A survey,” Pattern Recognit. Lett. 50, 3–14 (2014).

2. C. S. Kim, W. Kim, K. Lee, and H. Yoo, “High-speed color three-dimensional measurement based on parallel confocal detection with a focus tunable lens,” Opt. Express 27(20), 28466–28479 (2019).

3. S. Ordones, M. Servin, M. Padilla, I. Choque, J. L. Flores, and A. Munoz, “Shape defect measurement by fringe projection profilometry and phase-shifting algorithms,” Opt. Eng. 59, 10 (2020).

4. S. C. Yang, G. X. Wu, Y. X. Wu, J. Yan, H. F. Luo, Y. N. Zhang, and F. Liu, “High-accuracy high-speed unconstrained fringe projection profilometry of 3D measurement,” Opt. Laser Technol. 125, 6 (2020).

5. D. Duadi, N. Ozana, N. Shabairou, M. Wolf, Z. Zalevsky, and A. Primov-Fever, “Non-contact optical sensing of vocal fold vibrations by secondary speckle patterns,” Opt. Express 28(14), 20040–20050 (2020).

6. Y. S. Hu, J. T. Xi, J. F. Chicharo, W. Q. Cheng, and Z. K. Yang, “Inverse Function Analysis Method for Fringe Pattern Profilometry,” IEEE Trans. Instrum. Meas. 58(9), 3305–3314 (2009).

7. Budianto and D. P. K. Lun, “Inpainting for Fringe Projection Profilometry Based on Geometrically Guided Iterative Regularization,” IEEE Trans. Image Process. 24(12), 5531–5542 (2015).

8. Q. H. Guo, J. T. Xi, L. M. Song, Y. G. Yu, Y. K. Yin, and X. Peng, “Fringe Pattern Analysis With Message Passing Based Expectation Maximization for Fringe Projection Profilometry,” IEEE Access 4, 4310–4320 (2016).

9. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).

10. C. W. Zhang, H. Zhao, J. C. Qiao, C. Q. Zhou, L. Zhang, G. L. Hu, and H. H. Geng, “Three-dimensional measurement based on optimized circular fringe projection technique,” Opt. Express 27(3), 2465–2477 (2019).

11. T. Y. Tao, Q. Chen, S. J. Feng, Y. Hu, J. Da, and C. Zuo, “High-precision real-time 3D shape measurement using a bi-frequency scheme and multi-view system,” Appl. Opt. 56(13), 3646–3653 (2017).

12. Z. W. Li, K. Zhong, Y. F. Li, X. H. Zhou, and Y. S. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38(9), 1389–1391 (2013).

13. Z. Z. Wang, “Review of real-time three-dimensional shape measurement techniques,” Measurement 156, 15 (2020).

14. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Structured-light-field 3D imaging without phase unwrapping,” Opt. Lasers Eng. 129, 106047 (2020).

15. W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3D shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019).

16. D. Zheng, K. Qian, F. Da, and H. S. Seah, “Ternary Gray code-based phase unwrapping for 3D measurement using binary patterns with projector defocusing,” Appl. Opt. 56(13), 3660–3665 (2017).

17. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

18. Z. Wu, C. Zuo, W. Guo, T. Tao, and Q. Zhang, “High-speed three-dimensional shape measurement based on cyclic complementary Gray-code light,” Opt. Express 27, 1283–1297 (2019).

19. S. Xing and H. Guo, “Temporal phase unwrapping for fringe projection profilometry aided by recursion of Chebyshev polynomials,” Appl. Opt. 56(6), 1591–1602 (2017).

20. L. Song, Y. Chang, Z. Li, P. Wang, G. Xing, and J. Xi, “Application of global phase filtering method in multi frequency measurement,” Opt. Express 22(11), 13641–13647 (2014).

21. Y. Hu, Q. Chen, Y. Liang, S. Feng, T. Tao, and C. Zuo, “Microscopic 3D measurement of shiny surfaces based on a multi-frequency phase-shifting scheme,” Opt. Lasers Eng. 122, 1–7 (2019).

22. J. Zhang, B. Luo, X. Su, Y. Wang, X. Chen, and Y. Wang, “Depth range enhancement of binary defocusing technique based on multi-frequency phase merging,” Opt. Express 27(25), 36717–36730 (2019).

23. Z. J. Wu, W. B. Guo, and Q. C. Zhang, “High-speed three-dimensional shape measurement based on shifting Gray-code light,” Opt. Express 27(16), 22631–22644 (2019).

24. B. Cai, Y. Yang, J. Wu, Y. Wang, M. Wang, X. Chen, K. Wang, and L. Zhang, “An improved gray-level coding method for absolute phase measurement based on half-period correction,” Opt. Lasers Eng. 128, 106012 (2020).

25. X. He, D. Zheng, Q. Kemao, and G. Christopoulos, “Quaternary gray-code phase unwrapping for binary fringe projection profilometry,” Opt. Lasers Eng. 121, 358–368 (2019).

26. Y. Wang, L. Liu, J. Wu, X. Chen, and Y. Wang, “Enhanced phase-coding method for three-dimensional shape measurement with half-period codeword,” Appl. Opt. 58(27), 7359–7366 (2019).

27. Y. Wang, L. Liu, J. Wu, X. Song, X. Chen, and Y. Wang, “Dynamic three-dimensional shape measurement with a complementary phase-coding method,” Opt. Lasers Eng. 127, 105982 (2020).

28. G. Wu, Y. Wu, L. Li, and F. Liu, “High-resolution few-pattern method for 3D optical measurement,” Opt. Lett. 44(14), 3602–3605 (2019).

29. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016).

30. C. Jiang, B. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Lasers Eng. 91, 232–241 (2017).

31. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

32. J. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” J. Opt. Soc. Am. A 20(1), 106–115 (2003).

33. M. Zhang, Q. Chen, T. Tao, S. Feng, Y. Hu, H. Li, and C. Zuo, “Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection,” Opt. Express 25(17), 20381–20400 (2017).

34. Z. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010).

35. N. Otsu, “A Threshold Selection Method from Gray-Level Histograms,” IEEE Trans. Syst., Man, Cybern. 9(1), 62–66 (1979).

36. J. Dai and S. Zhang, “Phase-optimized dithering technique for high-quality 3D shape measurement,” Opt. Lasers Eng. 51(6), 790–795 (2013).

Supplementary Material (2)

Visualization 1: The data correspond to the results of the high-speed measurement experiments in the manuscript.
Visualization 2: The data correspond to the results of the high-speed measurement experiments in the manuscript.



Figures (9)

Fig. 1. The patterns of the proposed method in the example. (a) The original three-step phase-shifting patterns. (b) Three fringe-width encoded fringe patterns.
Fig. 2. The growing process. (a) The center seed pixel and its eight neighborhood pixels. (b) The new seed pixel obtained after a growing process.
Fig. 3. The principle of steps 2.2 to 2.4. (a) One cross-section of the three patterns (the first six periods along the horizontal direction are shown). (b) One cross-section of the wrapped phase computed from (a). (c) The phase 2π discontinuities extracted from (b). (d) The corresponding labels for all phase bands on a cross-section. (e) The correspondence between the obtained labels and the look-up table. (f) The initial fringe order map (some errors still exist). (g) The final correct fringe order map after correction.
Fig. 4. The respective meanings of the variables in Eq. (10) (taking K(u,v) = 10 as an example).
Fig. 5. (a) One captured pattern of the ceramic plate. (b) One cross-section of the retrieved wrapped phase map and the fringe order map. (c) and (d) The 3D reconstruction results of the two methods and their error distributions: (c) the proposed method and (d) the conventional method.
Fig. 6. The measurement results of two ceramic standard spheres with the proposed method. (a) The measured standard spheres. (b) The wrapped phase map. (c) The identified fringe order map before correction. (d) The identified fringe order map after correction. (e) The final 3D reconstructed point cloud. (f) The 3D reconstruction results of the proposed method.
Fig. 7. The proposed method in measuring the colored plastic turtle. (a) The 2D photograph of the measured colored plastic turtle. (b) The 3D reconstruction model with color.
Fig. 8. The experiment of a vibrating cantilever beam (associated with Visualization 1). (a) A pattern captured during vibration. (b), (c) and (d) Corresponding 3D reconstructions at different times. (e) The amplitude in the height direction of P1, P2 and P3 as a function of time.
Fig. 9. The experiment of a paper plane blown by the wind (associated with Visualization 2). (a) A pattern captured during the experiment. (b) Corresponding 3D reconstruction. (c) The displacement in height of points P1, P2 and P3 as a function of time.

Tables (2)

Table 1. The look-up table between the subsequence and the fringe order in the example
Table 2. The comparison of the results in measuring two standard spheres by the two methods

Equations (10)

$$\begin{aligned} I_1(u,v) &= A(u,v) + B(u,v)\cos\big(\phi(u,v) - 2\pi/3\big) \\ I_2(u,v) &= A(u,v) + B(u,v)\cos\big(\phi(u,v)\big) \\ I_3(u,v) &= A(u,v) + B(u,v)\cos\big(\phi(u,v) + 2\pi/3\big) \end{aligned} \tag{1}$$

$$\varphi(u,v) = \arctan\left\{ \frac{\sqrt{3}\,\big[I_3(u,v) - I_1(u,v)\big]}{2I_2(u,v) - I_1(u,v) - I_3(u,v)} \right\} \tag{2}$$

$$\phi(u,v) = 2\pi \times K(u,v) + \varphi(u,v) \tag{3}$$
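As an illustration, the wrapped-phase computation of Eqs. (1)–(2) can be sketched in a few lines of NumPy. The function name below is illustrative, not from the paper, and the sign of the numerator is chosen so that synthetic patterns generated per Eq. (1) reproduce the encoded phase directly; published variants of the three-step formula differ by an overall sign convention.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from three phase-shifted fringe images.

    I1, I2, I3 are images captured under the patterns of Eq. (1),
    i.e. with phase shifts of -2*pi/3, 0 and +2*pi/3 respectively.
    Returns the wrapped phase in (-pi, pi].
    """
    # arctan2 resolves the quadrant automatically, so the result is
    # already wrapped to (-pi, pi] without extra sign logic.
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```

On noise-free synthetic fringes this recovers the encoded phase exactly; on real images the fringe order map K(u,v) of Eq. (3) is still needed to obtain the absolute phase.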
$$D = \big| \varphi(u_{nei}, v_{nei}) - \varphi(u_{cen}, v_{cen}) \big| \tag{4}$$

$$(u_{nei}, v_{nei}) = \begin{cases} \text{new seed} & \text{if } D < \tau/2 \\ \text{not a seed} & \text{if } D > \tau/2 \end{cases} \tag{5}$$
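A minimal sketch of the region-growing test of Eqs. (4)–(5), assuming a breadth-first growth over 8-connected neighbours: a neighbour whose wrapped phase differs from the current seed pixel by less than τ/2 becomes a new seed. All names are illustrative, and the paper's full correction step may apply additional checks.

```python
import numpy as np
from collections import deque

def grow_region(phase, seed, tau):
    """Grow a region of consistent wrapped phase from a seed pixel.

    phase : 2-D wrapped-phase map.
    seed  : (row, col) of the initial seed pixel.
    tau   : threshold; a neighbour joins when |D| < tau/2 (Eq. (5)).
    Returns a boolean mask of the grown region.
    """
    h, w = phase.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        uc, vc = queue.popleft()          # current seed (u_cen, v_cen)
        for du in (-1, 0, 1):
            for dv in (-1, 0, 1):         # 8-connected neighbourhood
                un, vn = uc + du, vc + dv
                if (du, dv) == (0, 0) or not (0 <= un < h and 0 <= vn < w):
                    continue
                if region[un, vn]:
                    continue
                D = abs(phase[un, vn] - phase[uc, vc])   # Eq. (4)
                if D < tau / 2:           # Eq. (5): accept as new seed
                    region[un, vn] = True
                    queue.append((un, vn))
    return region
```

A phase jump larger than the threshold (e.g. a 2π discontinuity between fringe periods) stops the growth, so each grown region stays within one phase band.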
$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{6}$$

$$M = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix} \tag{7}$$

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \quad s_p \begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = M_p \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{8}$$
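Given the calibrated projection matrices M and M_p of Eq. (8), eliminating the scale factors s and s_p yields three linear equations in the three world coordinates, which can be solved per pixel. The following is a sketch under that standard formulation (function and variable names are illustrative):

```python
import numpy as np

def triangulate(M, Mp, uv, up):
    """Solve Eq. (8) for the world point (Xw, Yw, Zw).

    M, Mp : 3x4 projection matrices of the camera and projector.
    uv    : (u, v) camera pixel coordinates.
    up    : matched projector column u_p for that pixel.
    """
    u, v = uv
    # Each projected coordinate gives one linear equation:
    # (m_1 - u * m_3) . X = u * m_34 - m_14, etc.
    A = np.array([
        M[0, :3] - u * M[2, :3],      # camera u equation
        M[1, :3] - v * M[2, :3],      # camera v equation
        Mp[0, :3] - up * Mp[2, :3],   # projector u_p equation
    ])
    b = np.array([
        u * M[2, 3] - M[0, 3],
        v * M[2, 3] - M[1, 3],
        up * Mp[2, 3] - Mp[0, 3],
    ])
    # Three equations, three unknowns; least squares also tolerates
    # an over-determined system if v_p were available as well.
    Xw, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Xw
```

With the camera pixel grid fixed, only u_p varies per pixel, which is why recovering u_p via Eqs. (9)–(10) is sufficient for 3D reconstruction.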
$$u_p = \begin{cases} W + \varphi(u,v) \times S/(2\pi) & \text{if } K(u,v) \text{ is labelled } 0 \\ W + \varphi(u,v) \times 2S/(2\pi) & \text{if } K(u,v) \text{ is labelled } 1 \end{cases} \tag{9}$$

$$W = w_0 \times S + w_1 \times 2S \tag{10}$$
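Eqs. (9)–(10) can be evaluated per pixel once the fringe order is decoded. The sketch below assumes w0 and w1 count the narrow (width S) and wide (width 2S) fringes preceding the current one, and that the wrapped phase has been mapped to [0, 2π); both assumptions and all names are illustrative readings of the equations, not the paper's implementation.

```python
import numpy as np

def projector_column(phi, w0, w1, S, label):
    """Projector column u_p from Eqs. (9)-(10).

    phi   : wrapped phase at the pixel, assumed mapped to [0, 2*pi).
    w0,w1 : counts of preceding narrow and wide fringes (assumed
            interpretation of Eq. (10)).
    S     : width in projector pixels of a narrow fringe.
    label : 0 for a narrow fringe, 1 for a wide fringe.
    """
    W = w0 * S + w1 * 2 * S            # Eq. (10): columns before this fringe
    width = S if label == 0 else 2 * S  # Eq. (9): current fringe width
    return W + phi * width / (2 * np.pi)
```

The wrapped phase thus interpolates linearly within the current fringe, while W accumulates the full widths of all preceding fringes.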