
Adaptive horizontal scaling method for speckle-assisted fringe projection profilometry


Abstract

The phase-shifting method is widely used in fringe projection profilometry to obtain high-precision wrapped phase maps. The wrapped phase map must be converted to an absolute phase map to recover 3D information. Speckle-pattern-based phase unwrapping requires only one additional auxiliary pattern, showing great potential for fast 3D measurements. In this paper, a speckle-assisted four-step phase-shifting method was proposed for 3D measurement. This method requires five structured light patterns to complete the 3D measurement: four phase-shifting fringe patterns and one speckle pattern used to remove the phase ambiguity. The main challenge of speckle-based phase unwrapping is the mismatch problem, which often occurs on very steep surfaces. To improve the speckle matching accuracy, an adaptive horizontal scaling method was proposed. A voting strategy based on phase-connected regions was also proposed to reduce the computational overhead. The experiments demonstrate its superior performance, and an accuracy of 0.21 mm was achieved.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) is an optical 3D measurement technology with the merits of non-contact operation and high accuracy. FPP has been widely used in medical imaging, manufacturing, archeology, parcel sorting, robotics, and computer vision [1–4]. A traditional FPP system consists of one or two cameras and a projector. By projecting a set of encoded fringe patterns onto an object and capturing the deformed fringe images, the 3D information of the object can be recovered by decoding and triangulation. FPP has had great success in static 3D measurement but encounters challenges in dynamic measurement. The main limitation is the projection time: typically, traditional FPP needs to project ten to twenty patterns for 3D modeling, which consumes a lot of time [5]. An intuitive way to resolve this problem is to reduce the number of patterns. Over the past few decades, many studies have addressed reducing the number of projection patterns, among which the phase-shifting method is undoubtedly one of the most promising, as it requires as few as three images to recover a high-precision wrapped phase map [6–12]. This wrapped phase map cannot be directly used to obtain the measured object's depth information, because the triangulation calculation requires an absolute phase map with a continuous phase distribution. The phase-shifting method uses the arctangent function to calculate the phase values, which yields a value range of −π to π within one stripe period and a phase discontinuity of 2π between adjacent stripe periods, called phase ambiguity [13]. Various methods have been presented to eliminate the phase ambiguity, including temporal unwrapping methods and spatial unwrapping methods. Spatial unwrapping methods usually unwrap by directly analyzing the initial wrapped phase map, with no demand for additional patterns [14–18]. However, such methods do not work well when the measured objects are spatially discontinuous. To overcome the challenges of discontinuous surfaces, numerous temporal unwrapping methods have been proposed [19–23]. Temporal unwrapping methods can obtain a reliable absolute phase map with the aid of additional patterns. Sansoni proposed a method using Gray code to eliminate the phase ambiguity [24]. This method performs stably but requires too many additional patterns. Multiple-frequency methods use at least one more wrapped phase map for phase unwrapping [25–27]. However, due to measurement noise, two additional wrapped phase maps are necessary to obtain a reliable result, meaning at least nine patterns are needed to complete the phase unwrapping. To further reduce the number of projection patterns, some researchers introduced speckle patterns into the phase-shifting method to eliminate the phase ambiguity [28–31]. The main challenge of speckle-based methods is the mismatch problem, which often occurs on very steep surfaces. Zhang proposed a speckle-assisted fringe-based method that needs only three patterns to obtain an absolute phase map [28]. To overcome the impact of mismatches, this method uses a voting strategy to perform the phase unwrapping by region rather than by pixel. Feng proposed a speckle-assisted fringe-based method to further reduce the number of patterns to two [29]. This method uses a multiple-correlation-peak correction algorithm to correct mismatches. W. Lohry and S. Zhang proposed a phase remapping method (PRM) to improve the accuracy of speckle matching for speckle-assisted methods. This method uses phase information to rectify geometric distortion [30]. Yin proposed a speckle-assisted four-step phase-shifting method [31]. This method uses optimized composite fringe patterns, adaptive-window image correlation, and a regional diffusion compensation technique to obtain a reliable absolute phase map.

To overcome the mismatch problem in speckle-assisted phase unwrapping, we propose an adaptive horizontal scaling method (AHSM) to rectify the distorted local speckle images. The horizontal scaling parameters differ for each pixel to be matched; they are determined from the wrapped phase maps, so no additional images are required. Based on AHSM, we propose a speckle-assisted four-step phase-shifting method for 3D measurement. This method uses five structured light patterns to complete the 3D measurement: four phase-shifting fringe patterns to obtain the wrapped phase maps and one speckle pattern to remove the phase ambiguity. Obviously, the computational overhead would be huge if the horizontal scaling process were performed for the speckle matching of every pixel. Therefore, we propose a voting strategy based on phase-connected regions, in which the matching process needs to be carried out on only a few pixels, significantly reducing the computational overhead. The experiments demonstrate its superior performance, and an accuracy of 0.21 mm was achieved when measuring a dumbbell-shaped object with a 201.10 mm center-to-center distance.

2. Principle

2.1 Absolute phase recovery

The absolute phase recovery contains two procedures. The first is to obtain a high-precision wrapped phase map with phase ambiguity. The second is to perform a robust phase unwrapping process to remove the phase ambiguity. The four-step phase-shifting method is commonly used to obtain a high-precision wrapped phase map. By projecting four phase-shifting patterns onto the surface and using the camera to capture the distorted fringe patterns, we obtain four images ${I_n}(n = 1,2,3,4)$. These images can be expressed as:

$${I_n}(x,y) = {I_A}(x,y) + {I_B}(x,y)\cos [\phi (x,y) + \frac{\pi }{2}(n - 1)] \tag{1}$$
where $(x,y)$ represents the pixel coordinate, ${I_n}(x,y)$ represents the intensity value, ${I_A}(x,y)$ represents the average intensity, ${I_B}(x,y)$ represents the intensity modulation, $\phi (x,y)$ represents the wrapped phase value, n represents the image number. We can solve for $\phi (x,y)$ by the following equation:
$$\phi (x,y) = {\tan ^{ - 1}}\frac{{\sum\nolimits_{n = 1}^4 {{I_n}(x,y)\sin [(n - 1)\pi /2]} }}{{\sum\nolimits_{n = 1}^4 {{I_n}(x,y)\cos [(n - 1)\pi /2]} }} \tag{2}$$
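As a concrete illustration, the following NumPy sketch evaluates Eq. (2). With the π/2 shifts of Eq. (1), the numerator sum reduces to ${I_2} - {I_4}$ and the denominator sum to ${I_1} - {I_3}$, and `arctan2` keeps the result over the full wrapped range. The function name and array-based interface are illustrative, not taken from the paper.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped phase map from four pi/2 phase-shifted images, per Eq. (2).

    For the shifts in Eq. (1), sum(I_n * sin[(n-1)pi/2]) = I2 - I4 and
    sum(I_n * cos[(n-1)pi/2]) = I1 - I3, so Eq. (2) becomes a single
    two-argument arctangent evaluated pixel-wise.
    """
    num = I2.astype(np.float64) - I4  # numerator of Eq. (2)
    den = I1.astype(np.float64) - I3  # denominator of Eq. (2)
    return np.arctan2(num, den)       # wrapped phase in (-pi, pi]
```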

Because the arctangent function is used in the calculation and its value ranges from $-\pi$ to $\pi$, the obtained wrapped phase map has periodically occurring phase discontinuities of $2\pi$. Assuming the phase-shifting fringe pattern has K sinusoidal periods, we can number each sinusoidal period as $k = 1,2,3,\ldots ,K$, where k is called the phase order. The absolute phase value can be expressed as:

$${\phi _a}(x,y) = \phi (x,y) + 2\pi (k - 1) \tag{3}$$

The essence of phase unwrapping is to find the correct phase order for each pixel. In this paper, we choose a speckle pattern as the auxiliary pattern to tackle this issue because of its advantage of single-shot phase unwrapping. For convenience, we take one pixel as an example to introduce the process of speckle-assisted phase unwrapping. Suppose a space point P is illuminated by the projector and captured by the left and right cameras. The corresponding pixels in the left and right cameras are ${p_{left}}$ and ${p_{right}}$ respectively; the pixel coordinate of ${p_{left}}$ is known and that of ${p_{right}}$ is unknown. The left camera has been calibrated with both the projector and the right camera. Since the phase orders are integers ranging from 1 to K, we can assume in turn that the phase order of ${p_{left}}$ is $k = 1,2,3,\ldots ,K$ and calculate the corresponding 3D coordinate. The resulting set of 3D coordinates can be transformed into pixels in the right camera using the calibration parameters; these pixels are the candidates for speckle matching with ${p_{left}}$. It is worth noting that a geometric constraint of 300∼900 mm is used to reduce the number of candidate pixels before speckle matching [32]. The general process of speckle matching for ${p_{left}}$ is to find, among the candidate pixels, the pixel with the largest zero-mean normalized cross-correlation (ZNCC) with ${p_{left}}$ [28]. ZNCC is a criterion that measures the similarity of two pixels. To compute the ZNCC value of two pixels, we first take two pixel windows f and g centered at the two pixels. f and g are called matching windows and have the same size. Then the ZNCC value is calculated as:

$$\gamma (f,g) = \frac{{\sum\nolimits_{x,y} {(f(x,y) - \bar{f})(g(x,y) - \bar{g})} }}{{\sqrt {\sum\nolimits_{x,y} {{{(f(x,y) - \bar{f})}^2}} \sum\nolimits_{x,y} {{{(g(x,y) - \bar{g})}^2}} } }} \tag{4}$$
where $\bar{f}$ and $\bar{g}$ represent the average intensities of f and g. The higher the ZNCC value, the higher the similarity between the two pixels. After finding the best matching pixel, i.e., the candidate with the largest ZNCC value with ${p_{left}}$, we assign its phase order and depth to ${p_{left}}$. If no mismatch occurs, the best matching pixel is ${p_{right}}$. By performing the above phase unwrapping process on each pixel in the left camera, we can recover the absolute phase map and the depth map.
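To make the matching criterion concrete, here is a minimal NumPy sketch of Eq. (4), assuming f and g are equally sized matching windows already cropped from the left and right speckle images (for instance the 17 × 17 windows used later in the experiments). The helper name is illustrative.

```python
import numpy as np

def zncc(f, g):
    """Zero-mean normalized cross-correlation of two equally sized
    matching windows, per Eq. (4); returns a similarity in [-1, 1]."""
    f = f.astype(np.float64) - f.mean()   # subtract f-bar
    g = g.astype(np.float64) - g.mean()   # subtract g-bar
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return (f * g).sum() / denom if denom > 0 else 0.0
```

Speckle matching for ${p_{left}}$ then amounts to evaluating this function against the window of every candidate pixel and keeping the candidate with the largest value.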

2.2 Adaptive horizontal scaling method for speckle matching

The accuracy of speckle matching largely determines the performance of 3D measurement. Therefore, the speckle matching method should be sufficiently robust and accurate. Usually, the ZNCC method performs well for flat surfaces. In this case, the measured surface resembles the object in Fig. 1(a): it is captured by the left and right cameras from symmetrical angles, resulting in similar images. If the speckle pattern is projected onto this object, the images of the same local speckle pattern in both cameras are similar too, which helps the ZNCC method find the correct pixel pair. But for some steep surfaces, it is difficult to find the correct pixel pair using the ZNCC method. Figure 1(b) simulates a steep surface captured by the two cameras. The shooting angles of the two cameras differ considerably, resulting in a large difference between the images of the same local speckle pattern. Figure 1(c) and Fig. 1(d) show real images of a statue captured from different perspectives at the same height; the sub-windows show the details of the same local speckle pattern projected on a steep surface.


Fig. 1. The impact of different camera angles. (a) The two cameras capture a flat surface; (b) the two cameras capture a steep surface; (c) the real image captured by the left camera; (d) the real image captured by the right camera.


It can be seen that the same local speckle pattern appears significantly different in the left and right cameras. The local speckle pattern covers more pixels along the horizontal axis in the left camera than in the right camera, meaning the local speckle pattern in the left camera has a larger horizontal scale. Since the two cameras are placed at the same height, there is no significant difference along the vertical axis. Because of the different horizontal scales of the same local speckle pattern in the two cameras, the ZNCC value calculated at the correct candidate pixel may not be the largest one, which results in a mismatch. To analyze this problem further, the shooting angle is decomposed into a horizontal angle and a vertical angle to consider the impact of each on the scale of the local speckle pattern in the two cameras. Since the difference lies mainly in the horizontal angles, the local speckle images are scaled to different degrees mainly along the horizontal axis. This observation suggests an intuitive way to resolve the mismatches caused by different shooting angles: in each ZNCC calculation, if we can re-scale the right matching window horizontally by a suitable parameter so that it has a horizontal scale similar to the left matching window, then the accuracy of the ZNCC method can be improved. Following this idea, we propose AHSM. Next, the principle of this method is introduced in detail.

Figure 2(a) and Fig. 2(e) show the four-step phase-shifting fringe images captured by the left and right cameras, and the corresponding wrapped phase maps are shown in Fig. 2(b) and Fig. 2(f). By randomly picking a pixel ${p_a}$ in the left camera and finding the corresponding pixel ${p_b}$ in the right camera, we obtain the details of the wrapped phase distributed along the horizontal axis around ${p_a}$ and ${p_b}$, as shown in Fig. 2(c) and Fig. 2(g). The slope of the wrapped phase around ${p_a}$ is lower than that around ${p_b}$. This is because the wrapped phase value range of every sinusoidal period is $- \pi \sim \pi$, while the horizontal scale of the sinusoidal periods around ${p_a}$ in the left camera's image is greater than that around ${p_b}$ in the right camera's image. For the same value range, a larger scale means a lower slope, so the slope is inversely proportional to the scale. It follows that the slope of the wrapped phase around a pixel reveals the scale of the local projection pattern around that pixel. Let the slope around ${p_a}$ be ${s_1}$ and the slope around ${p_b}$ be ${s_2}$; we can then re-scale the local image around ${p_b}$ by a factor of ${{{s_2}} / {{s_1}}}$ to give it a scale similar to the local image around ${p_a}$. By detecting the slope of the wrapped phase around each pixel in the left and right cameras, we obtain the slope maps of the two cameras shown in Fig. 2(d) and Fig. 2(h), where higher pixel intensity indicates higher slope. When we carry out a ZNCC calculation on any pixel pair, we first obtain the slopes of the two pixels and re-scale the original right matching window to obtain a new right matching window for the ZNCC calculation. This re-scaling process is called adaptive horizontal scaling. Figure 3(a-d) show the details of the speckle matching process for the center pixel of the sub-windows in Fig. 1(c-d), where Fig. 3(a) is the left matching window, Fig. 3(b) is the original right matching window of the correct candidate pixel, and Fig. 3(c) is the new right matching window obtained by AHSM. Obviously, the similarity between the left and right matching windows is improved by AHSM. This improvement is also demonstrated in Fig. 3(d), which shows the ZNCC values of all the candidate pixels, including the correct one. In this example, the correct pixel already has the largest ZNCC value before AHSM is applied. Although there is no mismatch, its ZNCC value is not significantly larger than those of the other candidates, which implies a lack of robustness. With AHSM, the gap between the value of the correct candidate pixel and those of the other candidates becomes larger, indicating improved robustness of the ZNCC method. Figure 3(e-h) show the details of another randomly picked pixel. We can see from Fig. 3(h) that the ZNCC value of the correct candidate pixel calculated from the original image is not the largest one, meaning a mismatch occurs. With AHSM, this mismatch is corrected and the ZNCC value of the correct pixel becomes significantly larger than those of the other candidates.
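The horizontal re-scaling step itself is simple. The sketch below is a minimal illustration rather than the authors' implementation: a feature at horizontal offset dx in the left window is sampled at offset $dx \cdot {s_1}/{s_2}$ in the right image, which is equivalent to stretching the right window by ${s_2}/{s_1}$. Border handling and sub-pixel row alignment are omitted.

```python
import numpy as np

def scaled_right_window(right_img, xc, yc, s1, s2, half=8):
    """Right matching window re-scaled horizontally by s2/s1, where s1 and
    s2 are the slope-map values at the left and right pixels. half=8 gives
    a 17x17 window, the size used in the experiments."""
    ratio = s1 / s2                              # source step per output pixel
    cols = xc + np.arange(-half, half + 1) * ratio
    base = np.arange(right_img.shape[1], dtype=np.float64)
    win = np.empty((2 * half + 1, 2 * half + 1))
    for i, row in enumerate(range(yc - half, yc + half + 1)):
        # linear interpolation along the row at the scaled column positions
        win[i] = np.interp(cols, base, right_img[row].astype(np.float64))
    return win
```

The ZNCC of Eq. (4) is then computed between the left window and this re-scaled window instead of the original one.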


Fig. 2. The process of obtaining slope maps of left and right cameras. (a) The four-step phase-shifting images of the left camera; (b) the wrapped phase map of the left camera; (c) the details of the wrapped phase distributed along the horizontal axis around ${p_a}$; (d) the slope map of the left camera. (e-h) corresponding results of the right camera.


Fig. 3. Comparison of the proposed method and conventional ZNCC method. (a) The local speckle image of the pixel corresponding to Fig. 1(c-d) in the left camera; (b) the original local speckle image in the right camera corresponding to (a); (c) the local speckle image obtained from (b) by using horizontal scaling; (d) the ZNCC values calculated from (b) and (c) respectively; (e-h) the corresponding results of another pixel.


As described in the introduction, PRM [30] uses phase information to rectify geometric distortion, which is similar to our method, so it is worth discussing the differences between the two. The first difference is in how the rectification is implemented, which leads to a different amount of computation. PRM uses common kernel smoothing to rectify the image. For a given disparity d, each pixel in the left image can find a corresponding pixel in the right image, and the new intensity of the right pixel is calculated by a kernel smoother. To determine the kernel smoother for each pixel, every pixel in the kernel window must be assigned a weight based on phase similarity. If the disparity d changes, the kernel for each pixel needs to be recomputed. This is an elaborate process that takes into account the effect of each pixel in the kernel window on the intensity of the target pixel. It can therefore achieve high speckle matching accuracy, but at a huge computational cost. Our method instead seeks a balance between accuracy and computational overhead. We use the slope map as the reference to rectify the geometric distortion. The slope maps need to be computed only once, and only some of the pixels in the right camera need to be rectified instead of all of them. During the rectification, all pixels within the local pixel window are treated as a whole and share the same rectifying parameter. The rectification of PRM requires kernel smoothing and a Laplacian of Gaussian filter, whereas our method uses only linear scaling. Therefore, our method is a relatively simple process that does not require much computation. The second difference is in the measurement range. PRM assumes that the disparity is the same for all pixels, so to get a good speckle matching result the images need to be tested over a wide range of disparities. However, if the depth range of the measured object is too large, it is difficult to achieve a good speckle matching result for all pixels even at the best global disparity. In our method, the speckle matching for each pixel is independent. Instead of using a global disparity to find the candidate pixels, we consider all possible disparities and use a geometric constraint to reduce the number of candidate pixels. The depth range of the geometric constraint is 300∼900 mm, which is wide enough. Therefore, the main improvements of our method are the reduced amount of computation and the removal of the measurement range limitation.

2.3 Voting strategy based on phase-connected regions

Although AHSM can improve the performance of ZNCC on steep surfaces, a challenge remains if we apply AHSM to high-speed measurement. Horizontally scaling a matching window, which usually comprises hundreds of pixels, involves many linear interpolation operations, and such a scaling process must be performed at least a dozen times to complete the speckle matching for one pixel. Performing this process for every pixel is obviously time-consuming. Inspired by the voting strategy proposed by Zhang [28], we propose a voting strategy based on phase-connected regions to accelerate the speckle matching process and correct mismatches. With this method, the measured surface is divided into many phase-connected regions, and all pixels in each phase-connected region have the same phase order. For each region, the speckle matching process only needs to be performed on a few pixels, and the phase order of the region is determined by voting.

In section 2.1, we discussed the 2π phase discontinuities caused by the periodicity of the phase-shifting fringe patterns. This type of phase discontinuity occurs at the boundaries between different sinusoidal periods and divides the measured surface into many sub-regions. Each sub-region corresponds to one sinusoidal period of the fringe pattern, so all pixels in the same sub-region have the same phase order. Since the wrapped phase within each sub-region is continuously distributed, we call such a sub-region a phase-connected region. Besides, there is another type of phase discontinuity, caused by spatial discontinuities or occlusion of the measured surface. This type may further divide one phase-connected region into several regions, so the number of phase-connected regions in an image is generally more than K. Based on the uniqueness of the phase order in the same phase-connected region, ideally we only need to perform speckle matching on one pixel to determine the phase order of all pixels in that region. The phase-connected region segmentation can be easily accomplished by the two-pass algorithm originally proposed for binary images. In the first pass, iterate through each pixel of the wrapped phase map, column by column within each row. At each step, if the current pixel is not background, check its connectivity with the pixel above it and the pixel to its left. If the current pixel has no connected neighbors, label it uniquely; otherwise, assign the minimum label of the neighbors to the current pixel and store the equivalence between neighboring labels. In the second pass, relabel each non-background pixel with its lowest equivalent label. Since wrapped phase maps differ from binary images, we use Eq. (5) and Eq. (6) to check the connectivity of each pixel: a pixel that satisfies Eq. (5) or Eq. (6) is considered connected to its above or left pixel, respectively.

$$\textrm{above pixel}: \;\;|{{\phi_{current}} - {\phi_{above}}} |< T \tag{5}$$
$$\textrm{left pixel}: \;\;0 < {\phi _{current}} - {\phi _{left}} < T \tag{6}$$
where ${\phi _{current}}$ represents the wrapped phase value of the current pixel, ${\phi _{above}}$ and ${\phi _{left}}$ represent the wrapped phase values of the above and left pixels respectively, and T represents a predefined threshold.
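The following sketch shows one possible implementation of the two-pass labeling with the connectivity tests of Eqs. (5) and (6). The union-find bookkeeping for label equivalences is a standard choice rather than something specified in the paper, and `mask` (the non-background pixels) is an assumed input.

```python
import numpy as np

def phase_connected_regions(phase, mask, T=0.3 * np.pi):
    """Two-pass segmentation of a wrapped phase map into phase-connected
    regions, checking connectivity with Eq. (5) (above) and Eq. (6) (left)."""
    H, W = phase.shape
    labels = np.zeros((H, W), dtype=np.int32)
    parent = [0]                                  # union-find; index 0 = background

    def find(a):                                  # root with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for y in range(H):                            # first pass
        for x in range(W):
            if not mask[y, x]:
                continue
            up = labels[y - 1, x] if y and abs(phase[y, x] - phase[y - 1, x]) < T else 0
            lf = labels[y, x - 1] if x and 0 < phase[y, x] - phase[y, x - 1] < T else 0
            if not up and not lf:                 # no connected neighbor: new label
                parent.append(len(parent))
                labels[y, x] = len(parent) - 1
            else:                                 # take the minimum neighbor label
                labels[y, x] = min(l for l in (up, lf) if l)
                if up and lf and up != lf:        # record the equivalence
                    ra, rb = find(up), find(lf)
                    parent[max(ra, rb)] = min(ra, rb)

    for y in range(H):                            # second pass: lowest equivalent label
        for x in range(W):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```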

Since AHSM cannot completely avoid mismatches, especially for some very complex surfaces, it is unreliable to perform speckle matching on only one pixel per phase-connected region. In practice, we perform speckle matching on fewer than D pixels per phase-connected region and determine the phase order by voting, as sketched below. For a phase-connected region containing more than D pixels, we downsample all pixels in the region by a ratio of 1/D to get a new pixel set DS, and then iterate through the pixels in DS to carry out speckle matching. For regions containing fewer than D pixels, speckle matching is performed on all pixels. In this way, we significantly reduce the amount of calculation, and D can be set flexibly according to actual needs: the larger D is, the more robust the phase unwrapping, and the more computation is required. Because mismatches occur on so few pixels that they do not affect the voting result, they are corrected by the voting strategy. After obtaining the phase orders of all pixels by this method, we can obtain the absolute phase map by Eq. (3). Figure 4(a) shows the result of phase-connected region segmentation, and Fig. 4(b) gives the absolute phase map obtained by the voting strategy. Figures 4(c)–4(f) show four local details of the phase-connected region segmentation, whose positions are marked in Fig. 4(a).
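A sketch of the per-region voting follows. `match_order` is a hypothetical callable standing in for the full AHSM-based speckle matching of one pixel (Sections 2.1 and 2.2), and the downsampling mirrors the 1/D ratio described above.

```python
from collections import Counter

def region_phase_order(region_pixels, match_order, D=30):
    """Phase order of one phase-connected region by majority vote.
    Regions with more than D pixels are downsampled by a ratio of 1/D;
    smaller regions are matched in full."""
    sampled = region_pixels[::D] if len(region_pixels) > D else region_pixels
    votes = Counter(match_order(p) for p in sampled)
    return votes.most_common(1)[0][0]     # a few mismatched votes are outvoted
```

The winning order k is assigned to every pixel of the region, and the absolute phase then follows from Eq. (3).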


Fig. 4. Absolute phase recovery based on phase-connected regions. (a) Results of phase-connected region segmentation; (b) the absolute phase map obtained by using voting strategy based on phase-connected regions; (c-f) local details of phase-connected region segmentation.


The voting strategy in this paper works well for most scenes but may lead to errors on some abrupt surfaces or overlapping regions. In these areas, the wrapped phase difference between the current pixel and neighboring pixels at different heights may be smaller than the threshold T. This causes two separate regions to be treated as one phase-connected region and produces errors in the region that is in the minority during voting. To solve this problem, a left-right consistency check [33], which has been widely used in stereo matching, is required. Since both the right camera and the projector are calibrated with the left camera, the right images can undergo a phase unwrapping process just like the left images. After obtaining the absolute phase map and depth map of the right images, the absolute phase value of each pixel in the right camera can be mapped to the left camera using the calibration parameters. Owing to the different perspectives of the two cameras, for the same local area, errors caused by abrupt surfaces or overlapping regions will occur in at most one of the two cameras. If the absolute phase values of a pixel pair differ, an error may have occurred in the left or right camera, and the current pixel in the left camera is labeled as suspicious. By performing the left-right consistency check, all suspicious regions in the left camera can be detected. Since only the regions where errors occur are detected, the two regions that were originally merged into one connected region by abrupt surfaces or occlusions can be separated. Then, the phase-connected region segmentation and voting procedure are re-executed for these suspicious regions to obtain the correct absolute phase.
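As a minimal sketch of the check, assuming the right camera's absolute phase map has already been warped into the left view using the calibration parameters (that mapping is not shown), suspicious pixels can be flagged by a simple per-pixel comparison; the tolerance value is an assumption.

```python
import numpy as np

def suspicious_mask(phi_left, phi_right_mapped, tol=np.pi):
    """Left-right consistency check: flag left-camera pixels whose absolute
    phase disagrees with the value mapped over from the right camera.
    NaNs in phi_right_mapped mark pixels without a right-camera match."""
    diff = np.abs(phi_left - phi_right_mapped)
    return np.isfinite(diff) & (diff > tol)
```

Segmentation and voting are then re-executed only within the flagged regions.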

As described previously, our voting strategy is inspired by the one proposed by Zhang [28]; both methods use the idea of phase-connected regions and a voting strategy to overcome mismatches and reduce the amount of calculation. The main difference between the two methods lies in the definition of phase-connected regions, which leads to different implementations of the voting strategy. Zhang's method defines each continuous surface of the measured object as a phase-connected region. Since the 2π discontinuities within a continuous surface are removed by spatial phase unwrapping before region segmentation [34], the phase-connected region in Zhang's method is detected from a relatively unwrapped phase map. Based on the assumption that the relatively unwrapped phase values of all pixels within a phase-connected region share a common disparity from the absolute phase map of a reference plane, the absolute phase can be determined through a voting procedure. Thus Zhang's method treats the continuous surface as a whole during voting, and the quality of the spatial phase unwrapping directly affects the quality of the absolute phase map. If the measured object contains complex surfaces unsuitable for spatial phase unwrapping, it is difficult to obtain a good absolute phase map. To remove this limitation, our method defines each stripe period as a phase-connected region in which the wrapped phase values are continuously distributed. This means that one phase-connected region in Zhang's method may be subdivided into multiple phase-connected regions in ours. Based on the fact that all pixels within a stripe period share a common phase order, we can determine the absolute phase directly by voting, without a spatial phase unwrapping procedure. Compared with Zhang's method, which votes within an entire continuous surface, our method votes within a smaller area. By subdividing the voting area, our method improves the robustness of the voting strategy for complex surfaces.

2.4 Flow chart of the proposed method

In the previous sections, we discussed the details of the absolute phase recovery. The depth of the measured surface can be calculated from the absolute phase map using the calibration parameters; this process is detailed in our previous work [35]. Figure 5 shows the flow chart of the proposed method. First, we capture the phase-shifting images and speckle images of the two cameras. Then, the wrapped phase maps are calculated by Eq. (2). The left wrapped phase map is used to perform the phase-connected region segmentation and to obtain the downsampled pixel set; meanwhile, the left and right slope maps are obtained from the wrapped phase maps. After completing all of the above, we iterate through the pixels in the downsampled pixel set. At each step, we use the AHSM-based ZNCC method to complete the speckle matching and determine the phase order of the current pixel. Then, the phase order of each phase-connected region is determined by voting and assigned to all pixels in that region. In this way, the absolute phase map is obtained. Finally, the depth information of the measured surface is recovered from the absolute phase map via triangulation. For measured objects containing complex surfaces, a left-right consistency check is required to overcome mismatches.


Fig. 5. The flow chart of the proposed method.


3. Experiments and results

To investigate the performance of the proposed method, we established the 3D modeling system shown in Fig. 6. It is composed of two commodity cameras, which capture 8-bit gray-scale images at 1024 × 1280 resolution, and a projector module containing a one-axis MEMS-mirror system and a commodity speckle emitter. The one-axis MEMS-mirror system is a new fringe projection technology with the merits of afocal projection, small size, low cost, and low power consumption. It can project fringe patterns with a resolution of 1024 pixels along the horizontal axis. Since it is an afocal projection system, the measurement range of the proposed system is 350∼800 mm, limited only by the cameras' depth of field [36]. The one-axis MEMS-mirror system was calibrated with the left camera [35]. The four-step phase-shifting fringe pattern in this paper had 64 periods, and the width of each period was 16 pixels. Since the geometric constraint should not affect the measurement range, its depth range in this paper is 300∼900 mm, which is wider than the designed measurement range. With this depth range, the number of candidate pixels can be reduced from 64 to around 16. Considering the balance between computational time and speckle matching accuracy, the size of the matching window was 17 × 17. The threshold T for the phase-connected segmentation was 0.3π, and the downsampling parameter D was 30.


Fig. 6. The composition of the proposed system.


3.1 Comparison of ZNCC based on adaptive horizontal scaling with conventional ZNCC

To verify the improvement of the AHSM-based ZNCC in speckle matching accuracy, two cartons and a plaster statue were measured to compare the performance of the conventional ZNCC method and our method. Figure 7 shows a qualitative comparison between the two methods: Fig. 7(a) and 7(b) show the absolute phase maps of the carton obtained by the conventional ZNCC method and our method respectively, Fig. 7(c) shows the details of the two absolute phase maps, Fig. 7(d), 7(e), and 7(f) show the corresponding results for the plaster statue, and Fig. 7(g), 7(h), and 7(i) show the corresponding results for the combination of the carton and the plaster statue. To compare the two methods quantitatively, Table 1 gives the mismatch rates corresponding to Fig. 7, where test No. 1 is the carton, test No. 2 the plaster statue, and test No. 3 the combination of the two. Both methods achieve a similar level of speckle matching accuracy for the carton, because the measured surface of the carton is relatively flat to the left and right cameras: the scales of the local speckle pattern in the two cameras are similar, so the conventional ZNCC works well without further processing. However, for the plaster statue, which contains many steep surfaces, many mismatches occur with the conventional ZNCC method. It is worth noting that the mismatch rate of the conventional ZNCC method in test No. 3 is better than in test No. 2 because the carton in test No. 3 lowers the overall mismatch rate. Usually, complex processes including cost aggregation and disparity optimization are needed to remove these mismatches, which are computationally expensive. With our method, most of the mismatches of the conventional ZNCC are avoided, and only a small number of mismatches occur on some very small isolated surfaces. The main reason for these remaining mismatches is that the phase-connected regions containing the mismatched pixels are small and very steep, so very few pixels participate in the vote, and among them the mismatched pixels outnumber the correctly matched ones. The results demonstrate that AHSM and the voting strategy based on phase-connected regions can significantly improve the accuracy of ZNCC in speckle matching.


Fig. 7. Comparison of our method with conventional ZNCC. (a) Absolute phase map of the carton obtained by using conventional ZNCC; (b) absolute phase map of the carton obtained by using our method; (c) local details of (a) and (b); (d-f) corresponding results for the plaster statue; (g-i) corresponding results for the carton and the plaster statue.



Table 1. Mismatch rates of our method and conventional ZNCC

3.2 Precision analysis

To investigate the accuracy of 3D measurement, the established system was used to measure a standard plane and a standard dumbbell-shaped object. The dumbbell-shaped object consists of two spheres with a diameter of 38.1 ± 0.01 mm and a center-to-center distance of 201.09 ± 0.01 mm. It was measured at nine different positions to reveal the accuracy over the entire measuring range of the proposed system. The measurement results are shown in Table 2: the root mean square center-to-center distance and error are 201.08 mm and 0.21 mm. To further show the details on the sphere, the error map of a sphere in test No. 1 is shown in Fig. 8(a). Most errors are in the range of −0.03∼0.03 mm. Because the light source of the MEMS mirror is a laser, laser speckle is inevitable; this produces a few spikes on the surface of the sphere with errors around 0.3 mm or −0.3 mm. The root mean square error of the entire sphere is 0.06 mm. The standard plane is a square plate with a side length of about 150 mm. It was placed at four different distances and measured from different perspectives at each distance. The results are shown in Table 3, and the error map of test No. 4 is shown in Fig. 8(b). The root mean square errors increase with measurement distance, which is reasonable for a measurement system based on triangulation: with the baseline length kept constant, the system becomes more sensitive to noise as the measurement distance increases.


Fig. 8. (a) Error map of the dumbbell-shaped object; (b) error map of the standard plane.



Table 2. Measurement results of the dumbbell-shaped object.


Table 3. Measurement results of the standard plane.

3.3 3D reconstructions

To compare the 3D reconstruction performance of our method and the conventional ZNCC method, two plaster statues were measured from different perspectives. The results are shown in Fig. 9, where Fig. 9(a) and 9(b) show the front and side of the first statue, Fig. 9(c) shows the side of the second statue, Fig. 9(d)–9(f) show the corresponding results of our method, and Fig. 9(g)–9(i) show the corresponding results of the conventional ZNCC method. Since the proposed system can only reconstruct areas captured by both cameras, there are some holes caused by occlusion. Except for the occluded areas, the measured objects were reconstructed well by our method: there is no obvious noise in the results, and the details of the nose, eyes, mouth, and hair were reconstructed well. In the results of the conventional ZNCC method, by contrast, there are many outliers caused by mismatches, which seriously degrade the quality of the 3D reconstruction. It is worth noting that the second statue is much smaller than the first, so its surface height variation is smaller; this results in fewer outliers in Fig. 9(i) than in Fig. 9(g) and 9(h).


Fig. 9. 3D reconstruction of plaster statues. (a) The front of the first statue; (b) the side of the first statue; (c) the side of the second statue; (d-f) corresponding results of our method; (g-i) corresponding results of conventional ZNCC method.


In summary, we conducted three experiments to test the performance of our method. In the first experiment, we compared the performance of our method and the conventional ZNCC method on absolute phase recovery; the results demonstrate the improvement of our method over the conventional ZNCC. Then, the standard plane and the dumbbell-shaped object were measured to test the accuracy of our method. Finally, two plaster statues were reconstructed to compare the performance of our method and the conventional ZNCC method in 3D reconstruction.

4. Conclusion

In this paper, a speckle-assisted four-step phase-shifting method was proposed for 3D measurement. This method uses five images, including four phase-shifting images and one speckle image, to complete the 3D measurement. The four phase-shifting images are used to obtain the wrapped phase map, and the speckle image is used to perform the phase unwrapping. To avoid mismatches caused by the same local speckle pattern being distorted to different degrees in the left and right cameras, we proposed AHSM to rectify the local speckle image. The horizontal scaling parameters are calculated from the four phase-shifting images, with no need for additional images. In addition, we proposed a voting strategy based on phase-connected regions to simplify the process of absolute phase recovery. Three experiments were conducted to evaluate the performance of our method. The results demonstrate that our method can significantly improve the accuracy of speckle matching compared to the conventional ZNCC method. The dumbbell-shaped object and the standard plane were measured, and an accuracy of 0.21 mm was achieved. Finally, two plaster statues were measured to demonstrate that our method can perform 3D reconstruction well on objects with complex surfaces.

Funding

Key-Area Research and Development Program of Guangdong Province (2021B0101410001); National Natural Science Foundation of China (62074128, U21B2035).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Zhang, “High-speed 3-D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]

2. S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3-D measurements with fringe projection profilometry: A review,” Meas. Sci. Technol. 29(12), 122001 (2018). [CrossRef]  

3. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020). [CrossRef]

4. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]

5. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010). [CrossRef]  

6. H. Zhao, W. Chen, and Y. Tan, “Phase-unwrapping algorithm for the measurement of three-dimensional object shapes,” Appl. Opt. 33(20), 4497 (1994). [CrossRef]  

7. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

8. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

9. L. Kinell and M. Sjödahl, “Robustness of reduced temporal phase unwrapping in the measurement of shape,” Appl. Opt. 40(14), 2297–2303 (2001). [CrossRef]  

10. X. Peng, Z. Yang, and H. Niu, “Multi-resolution reconstruction of 3-D image with modified temporal unwrapping algorithm,” Opt. Commun. 224(1-3), 35–44 (2003). [CrossRef]  

11. H. Yu, X. Chen, Z. Zhang, C. Zuo, Y. Zhang, D. Zheng, and J. Han, “Dynamic 3-d measurement based on fringe-to-fringe transformation using deep learning,” Opt. Express 28(7), 9405–9418 (2020). [CrossRef]  

12. S. Zhang and P. S. Huang, “High-resolution, real-time three-dimensional shape measurement,” Opt. Eng. 45(12), 123601 (2006). [CrossRef]  

13. Y. Wang, K. Liu, Q. Hao, D. L. Lau, and L. G. Hassebrook, “Period coded phase shifting strategy for real-time 3-D structured light illumination,” IEEE Trans. Image Process. 20(11), 3001–3013 (2011). [CrossRef]  

14. R. Goldstein, H. Zebker, and C. Werner, “Satellite radar interferometry: two-dimensional phase unwrapping,” Radio Sci. 23(4), 713–720 (1988). [CrossRef]  

15. D. C. Ghiglia and M. D. Pritt, Two-dimensional phase unwrapping: theory, algorithms, and software (John Wiley and Sons, 1998).

16. D. C. Ghiglia and L. A. Romero, “Minimum Lp-norm two-dimensional phase unwrapping,” J. Opt. Soc. Am. A 13(10), 1999–2013 (1996). [CrossRef]  

17. T. R. Judge and P. J. Bryanston-Cross, “A review of phase unwrapping techniques in fringe analysis,” Opt. Lasers Eng. 21(4), 199–239 (1994). [CrossRef]  

18. E. Zappa and G. Busca, “Comparison of eight unwrapping algorithms applied to Fourier-transform profilometry,” Opt. Lasers Eng. 46(2), 106–116 (2008). [CrossRef]  

19. J. M. Huntley and H. Saldner, “Temporal phase-unwrapping algorithm for automated interferogram analysis,” Appl. Opt. 32(17), 3047–3052 (1993). [CrossRef]  

20. J. Burke, T. Bothe, W. Osten, and C. F. Hess, “Reverse engineering by fringe projection,” in International Symposium on Optical Science and Technology (2002), pp. 312–324.

21. J. M. Huntley and H. O. Saldner, “Error—reduction methods for shape measurement by temporal phase unwrapping,” J. Opt. Soc. Am. A 14(12), 3188–3196 (1997). [CrossRef]  

22. S. Zhang, “Digital multiple wavelength phase shifting algorithm,” Proc. SPIE 7432, 74320N (2009). [CrossRef]  

23. C. Polhemus, “Two-wavelength interferometry,” Appl. Opt. 12(9), 2071–2074 (1973). [CrossRef]  

24. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: Analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

25. D. P. Towers, J. D. C. Jones, and C. E. Towers, “Optimum frequency selection in multi-frequency interferometry,” Opt. Lett. 28(11), 887 (2003). [CrossRef]  

26. Z. Wang, D. Nguyen, and J. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010). [CrossRef]  

27. Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase-shifting interferometry,” Appl. Opt. 24(6), 804–807 (1985). [CrossRef]  

28. Y. Zhang, Z. Xiong, and F. Wu, “Unambiguous 3D measurement from speckle-embedded fringe,” Appl. Opt. 52(32), 7797–7805 (2013). [CrossRef]  

29. S. Feng, Q. Chen, and C. Zuo, “Graphics processing unit–assisted real-time three-dimensional measurement using speckle-embedded fringe,” Appl. Opt. 54(22), 6865–6873 (2015). [CrossRef]  

30. W. Lohry and S. Zhang, “High-speed absolute three-dimensional shape measurement using three binary dithered patterns,” Opt. Express 22(22), 26752–26762 (2014). [CrossRef]

31. W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3d shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019). [CrossRef]  

32. Y. An, J. S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

33. C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017).

34. S. Zhang, X. Li, and S. T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50–57 (2007). [CrossRef]  

35. D. Yang, D. Qiao, and C. Xia, “Curved light surface model for calibration of a structured light 3d modeling system based on striped patterns,” Opt. Express 28(22), 33240–33253 (2020). [CrossRef]  

36. J. Tauscher, W. O. Davis, D. Brown, M. Ellis, Y. Ma, M. E. Sherwood, D. Bowman, M. P. Helsel, S. Lee, and J. W. Coy, “Evolution of MEMS scanning mirrors for laser projection in compact consumer electronics,” Proc. SPIE 7594, 75940A (2010). [CrossRef]  
