
High-frequency background modulation fringe patterns based on a fringe-wavelength geometry-constraint model for 3D surface-shape measurement

Open Access

Abstract

A new fringe projection method for surface-shape measurement was developed using four high-frequency phase-shifted background modulation fringe patterns. The pattern frequency is determined using a new fringe-wavelength geometry-constraint model that allows only two corresponding-point candidates in the measurement volume. The correct corresponding point is selected with high reliability using a binary pattern computed from the background intensity encoded in the fringe patterns. Equations of the geometry-constraint parameters permit parameter calculation prior to measurement, thus reducing measurement computational cost. Experiments demonstrated the ability of the method to perform 3D shape measurement for a surface with geometric discontinuity and for spatially isolated objects.

© 2017 Optical Society of America

1. Introduction

Three-dimensional (3D) surface-shape measurement by non-contact optical methods has become increasingly important in mechanical engineering, industrial inspection, computer vision, virtual reality, medical imaging, and entertainment [1–5]. Fringe projection profilometry (FPP) is a common full-field surface-shape measurement technique, where one or more spatially-continuous fringe patterns are projected onto the object surface and images of the deformed patterns are captured by a camera. A phase map is computed from images of the patterns formed on the object surface using fringe phase analysis techniques. The camera-projector system can be calibrated as a stereovision system, and the object surface height can be computed by stereovision techniques, where the phase map is used to aid in determining correspondences between camera and projector images or between images of multiple cameras [6]. Standard phase analysis techniques produce a wrapped phase map ranging from −π to π with 2π discontinuities, resulting in phase ambiguity [7]. Approaches to solve the phase ambiguity problem include spatial phase unwrapping, temporal phase unwrapping, and geometry-constraint based methods.

In spatial phase unwrapping, the phase is unwrapped by comparing the phase at neighboring pixels and adjusting the 2π phase discontinuities by following a continuous path on the wrapped phase map [8]. However, the calculated phase is relative to a point on the phase map, and phase errors will propagate through the path, thus limiting the technique to measurement of objects without surface-geometry discontinuities that cause fringe shifting over 2π radians [9].

Temporal phase unwrapping methods determine fringe order for every camera pixel by projecting an additional sequence of patterns. The phase is unwrapped by adding the detected fringe order to the wrapped phase, thus generating a continuous absolute phase map, where the fringe order information is pre-defined in the patterns [10,11]. The fringe order information can be obtained by using multi-frequency (multi-wavelength) phase-shifted patterns [12], binary-coding or gray-coding patterns [13,14], phase-coding patterns [15], background or amplitude modulation patterns [16,17] and aperiodic fringe patterns [18]. All of these temporal phase unwrapping methods can work well in retrieving the absolute phase map. The large number of images required with most temporal phase unwrapping methods, however, makes the measurement of dynamic surfaces (moving or deforming) difficult without using high-speed projection techniques.

Geometry-constraint based techniques can solve the phase ambiguity problem for discontinuous object surfaces without performing phase unwrapping and without the need for additional patterns as in temporal phase unwrapping. Geometry-constraint based techniques typically require one projector and two cameras, whereby phase maps can be obtained from two different perspectives (two camera viewpoints). For a pixel in one camera image, the corresponding point in the other camera image should have the same wrapped phase value. If the correspondences between the two camera images can be found for all pixels [6], the object surface can then be reconstructed directly by stereovision techniques. To reduce the search for corresponding points, the epipolar line is commonly used. However, based on the wrapped phase value alone, there may be several candidate corresponding points that can be found along the epipolar line, even in a pre-defined measurement volume [19]. To reduce the number of candidates, the fringe patterns used in geometry-constraint based techniques are usually designed with a relatively low frequency [20]. However, high-frequency fringe patterns lead to lower phase errors [21] and are therefore preferred. The challenge with high-frequency fringe patterns, however, is that there would be a large number of candidate points along the epipolar line.

Statistical patterns [22] and triangular patterns [19] have been embedded into high-frequency fringe patterns to provide additional information for selecting the correct corresponding points. However, the amplitude of the fringe patterns needs to be adjusted to embed the extra signals, which decreases the signal-to-noise ratio (SNR) of the phase-shifted patterns. The triangular-wave method is also sensitive to noise in the triangular-wave intensity. Recently, geometry-constraint based methods with a single-camera, single-projector system were developed that require no correspondence search; the phase map is unwrapped using artificial absolute phase maps generated from geometry constraints [23,24]. However, prior knowledge of the approximate surface geometry is required.

High computational cost is another common challenge for geometry-constraint based methods using high-frequency fringe patterns. The computation lies mainly in calculating the epipolar-geometry parameters and the coordinates of a large number of candidate points, and in searching for and refining the correct corresponding points. While parallel computing techniques can be employed to accelerate the entire process [20], a method that combines high-frequency fringe patterns, for lower phase error, with less computation, for faster measurement, would be ideal. Furthermore, the frequency of the fringe patterns used in geometry-constraint based methods is usually determined empirically. A method of theoretically determining the maximum fringe frequency that can be used in geometry-constraint based methods is still needed: a mathematical model that relates the fringe frequency to the geometry constraints.

This paper presents a new fringe projection method for surface-shape measurement that uses novel high-frequency background modulation fringe patterns. The highest pattern frequency that allows only two corresponding-point candidates in the measurement volume, enabling reliable corresponding-point selection, is determined using a novel model that relates the fringe wavelength to system-geometry constraints. Newly developed mathematical expressions for the geometry-constraint parameters permit their calculation immediately after system calibration, prior to measurement, so that a stored look-up table (LUT) can be used to reduce computation during measurement. The method requires only four high-frequency fringe patterns for full-field surface-shape measurement and handles objects with surface discontinuities.

2. Principle and method

2.1 Four-step phase-shifting background modulation fringe patterns

In this paper, the four-step phase-shifting method is used, and the phase-shifted patterns are encoded with different background values for different phase periods. The projected fringe images Ii (i = 1, 2, 3, 4) for four phase-shifted patterns are given by the following equations:

$$
\begin{aligned}
I_1(x,y) &= B(x,y)+A(x,y)\cos\varphi(x,y)+B_0(x,y)\\
I_2(x,y) &= B(x,y)+A(x,y)\cos\left[\varphi(x,y)-\pi/2\right]\\
I_3(x,y) &= B(x,y)+A(x,y)\cos\left[\varphi(x,y)-\pi\right]+B_0(x,y)\\
I_4(x,y) &= B(x,y)+A(x,y)\cos\left[\varphi(x,y)-3\pi/2\right],
\end{aligned}
\tag{1}
$$
where B(x, y) and A(x, y) represent the background intensity and amplitude of modulation, respectively. B0(x, y) represents a background offset whose value changes between phase periods: B0 = 0.2B when the phase period is odd, and B0 = −0.2B when the phase period is even. φ(x, y) represents the phase distribution (phase map) and can be computed by:

$$\varphi(x,y)=\tan^{-1}\!\left[\frac{I_2(x,y)-I_4(x,y)}{I_1(x,y)-I_3(x,y)}\right]. \tag{2}$$

The wrapped phase map φ(x, y) in Eq. (2) is independent of the background offset. The background offset can be extracted by:

$$B_0(x,y)=\frac{I_1(x,y)+I_3(x,y)-I_2(x,y)-I_4(x,y)}{2}. \tag{3}$$

The background offset B0(x, y) is then binarized by using a threshold value of 0, generating a binary pattern C(x, y), where C(x, y) = 1 when B0(x, y) ≥ 0, and C(x, y) = 0 when B0(x, y) < 0. The binary pattern C(x, y) can thus denote each neighbouring odd and even phase period for every camera pixel. The extraction of the binary pattern in this paper is simple; however, the binarization could have error at edges of sinusoidal fringes (edge points) since the extracted background offset B0(x, y) does not usually have a sharp cut-off, resulting in a mismatch between phase map and binary pattern. Therefore, a binary pattern correction is necessary.
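As an illustration, the fringe analysis of this section can be implemented in a few lines. The following Python/NumPy sketch assumes the four captured images are arrays of equal shape; the function name is illustrative rather than from the paper.

```python
import numpy as np

def analyze_fringe_images(I1, I2, I3, I4):
    """Fringe analysis for the background-modulation patterns (Sec. 2.1)."""
    I1, I2, I3, I4 = (np.asarray(I, dtype=float) for I in (I1, I2, I3, I4))
    # Eq. (2), implemented with the four-quadrant arctangent so the
    # wrapped phase covers the full (-pi, pi] range.
    phi = np.arctan2(I2 - I4, I1 - I3)
    # Eq. (3): the background offset encoded in patterns I1 and I3.
    B0 = (I1 + I3 - I2 - I4) / 2.0
    # Threshold at 0: C = 1 marks odd phase periods, C = 0 even ones.
    C = (B0 >= 0).astype(np.uint8)
    return phi, B0, C
```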

2.2 Binary pattern correction

Since the wrapped-phase analysis in this paper is independent of the background offset, the wrapped phase map still has high accuracy and reliability and can be used to guide the binary pattern correction. The mismatch between wrapped phase map and binary pattern tends to occur at edge points and their neighbouring pixels within a 1 pixel distance. Three examples of phase map values and their corresponding binary pattern values when C changes from 0 to 1 are shown in Fig. 1. The phase value φ jumps between (x, y) and (x, y + 1); however, the binary pattern C jumps between (x, y−1) and (x, y) in Case 1 and between (x, y + 1) and (x, y + 2) in Case 3, resulting in mismatch between the phase map and binary pattern. Case 2 is correct since the edges of the phase map and binary pattern are matched between (x, y) and (x, y + 1). Similarly, the mismatch can happen when the binary pattern C changes from 1 to 0.

Fig. 1. Examples of phase map values and their corresponding binary pattern values.

The correction of the binary pattern error is given by the following steps (a code sketch follows):

  • 1. For each pixel (x, y):

    IF φ(x, y) − φ(x, y−1) < π AND φ(x, y + 1) − φ(x, y) < −π AND φ(x, y + 2) − φ(x, y + 1) < π:

    THEN the pixel is an edge point.

  • 2. IF the pixel is an edge point:

    THEN C(x, y) = C(x, y−1) and C(x, y + 1) = C(x, y + 2).
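A sketch of this correction, assuming the fringes vary along the second array axis (so y indexes columns, matching the notation above); the loop form mirrors the listed rule rather than optimizing for speed:

```python
import numpy as np

def correct_binary_pattern(phi, C):
    """Phase-guided binary-pattern correction (Sec. 2.2)."""
    C = C.copy()
    rows, cols = phi.shape
    for x in range(rows):
        for y in range(1, cols - 2):
            d_prev = phi[x, y] - phi[x, y - 1]
            d_here = phi[x, y + 1] - phi[x, y]
            d_next = phi[x, y + 2] - phi[x, y + 1]
            # Edge point: 2*pi wrap between y and y+1, none at neighbours.
            if d_prev < np.pi and d_here < -np.pi and d_next < np.pi:
                C[x, y] = C[x, y - 1]        # fixes the Case-1 mismatch
                C[x, y + 1] = C[x, y + 2]    # fixes the Case-3 mismatch
    return C
```

Note that for a correctly matched edge (Case 2) the two assignments copy values that are already in place, so the correction is harmless where no mismatch exists.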

2.3 Geometry-constraint parameters

The method in this paper was developed for a two-camera single-projector system (projector between cameras), as shown in Fig. 2, where any two of the three devices form a standard stereovision system, since the projector can be regarded as an inverse camera. The camera and projector lenses follow the well-known pinhole model [25]. In a stereo system consisting of a projector and a left camera, the transformation of 3D world coordinates (Xw, Yw, Zw) to 3D camera coordinates (Xc, Yc, Zc) can be performed by:

$$
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
=\left[\,\mathbf{L}\;\;\mathbf{T}_L\,\right]
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
=\begin{bmatrix}
l_{11} & l_{12} & l_{13} & t_1\\
l_{21} & l_{22} & l_{23} & t_2\\
l_{31} & l_{32} & l_{33} & t_3
\end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\tag{4}
$$
where L and TL are the left camera extrinsic-parameter matrices for rotation and translation with l and t matrix elements, respectively. The projection of any point (Xc, Yc, Zc) to the left camera normalized image plane is expressed as:

$$\begin{bmatrix} x_c \\ y_c \end{bmatrix}=\begin{bmatrix} X_c/Z_c \\ Y_c/Z_c \end{bmatrix}. \tag{5}$$

Fig. 2. Diagram of geometry constraints and correspondence (OLC, OP, and ORC represent the left camera, projector, and right camera centers; eP and eRC are the projector and right camera epipolar lines; and Qk−1, …, Qk+2 represent points on the line of sight).

All the points imaged to this normalized projection (xc, yc) lie on a straight line, usually called the line of sight. Thus the 3D world coordinates of points on the line of sight can be expressed by the following:

$$
\begin{aligned}
X_w &= a_1 Z_w + a_0\\
Y_w &= b_1 Z_w + b_0,
\end{aligned}
\tag{6}
$$
where:

$$
\begin{aligned}
a_1 &= \frac{(y_c l_{33}-l_{23})(x_c l_{32}-l_{12})-(y_c l_{32}-l_{22})(x_c l_{33}-l_{13})}{(y_c l_{32}-l_{22})(x_c l_{31}-l_{11})-(y_c l_{31}-l_{21})(x_c l_{32}-l_{12})}\\[4pt]
a_0 &= \frac{(y_c t_{3}-t_{2})(x_c l_{32}-l_{12})-(y_c l_{32}-l_{22})(x_c t_{3}-t_{1})}{(y_c l_{32}-l_{22})(x_c l_{31}-l_{11})-(y_c l_{31}-l_{21})(x_c l_{32}-l_{12})}\\[4pt]
b_1 &= \frac{(y_c l_{33}-l_{23})(x_c l_{31}-l_{11})-(y_c l_{31}-l_{21})(x_c l_{33}-l_{13})}{(y_c l_{31}-l_{21})(x_c l_{32}-l_{12})-(y_c l_{32}-l_{22})(x_c l_{31}-l_{11})}\\[4pt]
b_0 &= \frac{(y_c t_{3}-t_{2})(x_c l_{31}-l_{11})-(y_c l_{31}-l_{21})(x_c t_{3}-t_{1})}{(y_c l_{31}-l_{21})(x_c l_{32}-l_{12})-(y_c l_{32}-l_{22})(x_c l_{31}-l_{11})}.
\end{aligned}
\tag{7}
$$

Similarly, the transformation of 3D world coordinates (Xw, Yw, Zw) to 3D projector coordinates (Xp, Yp, Zp) can be performed by:

$$
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix}
=\left[\,\mathbf{M}\;\;\mathbf{T}_M\,\right]
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
=\begin{bmatrix}
m_{11} & m_{12} & m_{13} & t_4\\
m_{21} & m_{22} & m_{23} & t_5\\
m_{31} & m_{32} & m_{33} & t_6
\end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\tag{8}
$$
where M and TM are projector extrinsic-parameter matrices for rotation and translation with m and t elements, respectively. Thus, the normalized projection on the projector image plane can be computed by:
$$
\begin{bmatrix} x_p \\ y_p \end{bmatrix}
=\begin{bmatrix} X_p/Z_p \\ Y_p/Z_p \end{bmatrix}
=\begin{bmatrix} \dfrac{p_1 Z_w + p_0}{s_1 Z_w + s_0} \\[10pt] \dfrac{q_1 Z_w + q_0}{s_1 Z_w + s_0} \end{bmatrix},
\tag{9}
$$
where:

$$
\begin{aligned}
p_1 &= m_{11}a_1+m_{12}b_1+m_{13} & p_0 &= m_{11}a_0+m_{12}b_0+t_4\\
q_1 &= m_{21}a_1+m_{22}b_1+m_{23} & q_0 &= m_{21}a_0+m_{22}b_0+t_5\\
s_1 &= m_{31}a_1+m_{32}b_1+m_{33} & s_0 &= m_{31}a_0+m_{32}b_0+t_6.
\end{aligned}
\tag{10}
$$

For all points on the line of sight, the normalized projections onto the projector image plane should lie on the projector epipolar line eP. The equation of the projector epipolar line can be expressed by:

$$(q_1 s_0 - q_0 s_1)\,x_p + (p_0 s_1 - p_1 s_0)\,y_p + (p_1 q_0 - p_0 q_1) = 0. \tag{11}$$

After system calibration, the camera and projector extrinsic-parameter matrices are known. Thus, the parameters of the projector epipolar line can be pre-calculated using Eq. (11) for each line of sight before measurement. Similarly, the parameters of the epipolar line on the right camera image plane eRC can also be pre-calculated for every left camera pixel. The pre-calculated parameters of the epipolar lines on the projector and camera image planes are independent of the phase-map values and can be saved in LUTs. During object measurement, these parameters can be retrieved from the LUTs and used directly for the correspondence search on the epipolar line, thus reducing the computational cost during the measurement.
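The pre-calculation might be organized as follows: a sketch that transcribes Eqs. (7), (10), and (11) for a single normalized camera point (xc, yc), assuming the calibration matrices are NumPy arrays (0-based indices in code correspond to the 1-based subscripts above).

```python
def line_of_sight_coeffs(xc, yc, L, t):
    """Eq. (7): coefficients of the line of sight X_w = a1*Z_w + a0,
    Y_w = b1*Z_w + b0 for one normalized left-camera point (xc, yc),
    given the 3x3 rotation L (elements l_ij) and translation t."""
    den = ((yc*L[2,1] - L[1,1])*(xc*L[2,0] - L[0,0])
           - (yc*L[2,0] - L[1,0])*(xc*L[2,1] - L[0,1]))
    a1 = ((yc*L[2,2] - L[1,2])*(xc*L[2,1] - L[0,1])
          - (yc*L[2,1] - L[1,1])*(xc*L[2,2] - L[0,2])) / den
    a0 = ((yc*t[2] - t[1])*(xc*L[2,1] - L[0,1])
          - (yc*L[2,1] - L[1,1])*(xc*t[2] - t[0])) / den
    # The b-denominator in Eq. (7) is the negative of the a-denominator.
    b1 = ((yc*L[2,2] - L[1,2])*(xc*L[2,0] - L[0,0])
          - (yc*L[2,0] - L[1,0])*(xc*L[2,2] - L[0,2])) / -den
    b0 = ((yc*t[2] - t[1])*(xc*L[2,0] - L[0,0])
          - (yc*L[2,0] - L[1,0])*(xc*t[2] - t[0])) / -den
    return a1, a0, b1, b0

def epipolar_line(a1, a0, b1, b0, M, tM):
    """Eqs. (10)-(11): projector-plane coefficients and the epipolar
    line A*xp + B*yp + C = 0 for one camera pixel."""
    p1 = M[0,0]*a1 + M[0,1]*b1 + M[0,2]; p0 = M[0,0]*a0 + M[0,1]*b0 + tM[0]
    q1 = M[1,0]*a1 + M[1,1]*b1 + M[1,2]; q0 = M[1,0]*a0 + M[1,1]*b0 + tM[1]
    s1 = M[2,0]*a1 + M[2,1]*b1 + M[2,2]; s0 = M[2,0]*a0 + M[2,1]*b0 + tM[2]
    return (q1*s0 - q0*s1, p0*s1 - p1*s0, p1*q0 - p0*q1), (p1, p0, q1, q0, s1, s0)
```

Evaluating these two functions once for every camera pixel and storing the results is exactly the LUT strategy described above.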

2.4 Correct corresponding point selection and refinement using geometry constraint and binary pattern

For a fringe pattern with N phase periods, since the phase map obtained by Eq. (2) is wrapped, each line of sight will pierce through N equiphase planes, generating N corresponding candidate points Qi (i = 1, 2, … k, … N), as shown in Fig. 2. (Candidate points for i = k−1, k, k + 1, k + 2 are explained further below). There is only one correct candidate located on the object surface; the other N−1 candidates are false. As the first step in selecting the correct candidate, a measurement volume (the region between two dashed lines, Zmin < Zw < Zmax in Fig. 2) is used to exclude false candidates; all candidates outside the measurement volume are considered false. Qmin and Qmax are the intersections of the line of sight and the maximum and minimum measurement depth planes. If a high-frequency fringe pattern is used and the required measurement volume is large, several candidate points will still be located in the measurement volume.

To further determine the correct candidate in the measurement volume, the binary pattern C(x, y) is used. The method in this paper allows only two candidates Qk, Qk+1 to be located in the measurement volume by restricting the fringe-pattern wavelength (explained further in Sec. 2.5). The binary pattern can uniquely denote each neighbouring odd and even phase period. If C(x, y) = 1, the correct phase period is odd; if C(x, y) = 0, the correct phase period is even. The correct candidate, either Qk or Qk+1, can thus be selected based on the value of C(x, y) (note that C(x, y) was based only on the encoded background intensity). The correct candidate is then projected onto the right camera image plane. The ideal projection onto the right camera image plane should lie on the epipolar line eRC. However, there may be error between the calculated and ideal projections because of phase error.

To correct for phase error, for every pixel in the left camera image (and thus for every line of sight from the left camera), correspondence refinement is applied by searching for a phase value matching that of the left camera image along the epipolar line on the right camera image plane, within a small range around the calculated projection. The phase values at edge points need to be adjusted when searching the epipolar line. Methods of edge-point adjustment and correspondence refinement are detailed in [20]. In the proposed method, however, since the epipolar line eRC is pre-calculated and only one candidate remains, the computational cost of correspondence refinement is reduced to a minimum. After accurate corresponding points are found by this refinement for every pixel, accurate 3D coordinates can be obtained and the object surface reconstructed.

Corresponding-point selection and refinement can be summarized by the following steps (a sketch of the selection logic follows the list):

  • 1. Limit candidate points to within the measurement volume, i.e. Zmin < Zw < Zmax.
  • 2. Select correct candidate Qk or Qk+1, based on C(x, y): Phase period odd if C(x, y) = 1 or even if C(x, y) = 0.
  • 3. Adjust phase values at edge points.
  • 4. Search for the corresponding-point on epipolar line eRC.
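The selection logic of steps 1 and 2 might look as follows. This is a sketch in which the generation of candidate pairs (period index, depth) from the wrapped phase and the pre-calculated line of sight is assumed to happen upstream, and all names are illustrative:

```python
def select_candidate(candidates, C_val, z_min, z_max):
    """Steps 1-2: keep candidates inside the measurement volume, then
    pick the one whose phase-period parity matches the binary pattern.
    `candidates` is a list of (k, Zw) pairs, where k is the 1-based
    phase-period index of the intersection Q_k with an equiphase plane."""
    in_volume = [(k, z) for k, z in candidates if z_min < z < z_max]
    # The wavelength choice of Sec. 2.5 guarantees at most two survive.
    want_odd = (C_val == 1)          # C = 1 -> odd period, C = 0 -> even
    for k, z in in_volume:
        if (k % 2 == 1) == want_odd:
            return k, z
    return None                      # e.g., occlusion: no valid candidate
```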

2.5 Fringe-pattern wavelength selection

High-frequency fringe patterns are desired to reduce phase error [21]. However, high-frequency patterns generate more corresponding-point candidates in the measurement volume than low-frequency patterns, tending to make correct candidate selection unreliable and computationally expensive. As mentioned earlier, to ensure high reliability of corresponding-point selection (with the further benefit of minimal computational cost in finding the correct candidate), the maximum number of candidates allowed in the measurement volume is two. This limits the highest fringe-pattern frequency (smallest wavelength) that can be used, which is now theoretically determined. Let the maximum measurement depth be Zmax = d and the minimum measurement depth Zmin = −d, with vertical fringe-pattern wavelength λ.

The projector-image pixel coordinates (up, vp) can be expressed as:

$$
\begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix}
=\begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix},
\tag{12}
$$
where fu and fv are the effective focal lengths of the projector along the u and v axes, respectively; γ is the skew factor of the u and v axes, usually assumed to be zero; and (u0, v0) is the projector principal point. Since Qmax and Qmin lie on the maximum and minimum measurement depth planes, respectively, the difference in the u (horizontal) pixel coordinate between the projections of Qmin and Qmax on the projector image can be expressed as:

$$
u_{\max}^{p}-u_{\min}^{p}
=f_u\!\left(\frac{p_1 Z_{\max}+p_0}{s_1 Z_{\max}+s_0}-\frac{p_1 Z_{\min}+p_0}{s_1 Z_{\min}+s_0}\right)
=\frac{2 f_u (p_1 s_0 - p_0 s_1)\,d}{s_0^2 - s_1^2 d^2}.
\tag{13}
$$

To ensure that only two corresponding-point candidates are located in the measurement volume, the pixel coordinate difference is constrained by:

$$u_{\max}^{p}-u_{\min}^{p} < 2\lambda. \tag{14}$$
The smallest wavelength that can be used for the fringe patterns is thus:
$$\lambda_{\min}=\frac{f_u (p_1 s_0 - p_0 s_1)\,d}{s_0^2 - s_1^2 d^2}. \tag{15}$$
The smallest wavelength λmin can be calculated at every camera pixel. In this paper, max{ceil[λmin(x, y)]}, the per-pixel smallest wavelength rounded up to the nearest integer and maximized over all pixels (x, y), is used as the pattern wavelength; this gives the highest frequency permitted for the fringe patterns that satisfies the coordinate-difference constraint of Eq. (14) at every pixel. (ceil[ ] is the ceiling operator that gives the nearest upper integer; max{ } yields the maximum value over all pixels (x, y).)
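A sketch of this wavelength selection, reusing the per-pixel coefficients of Sec. 2.3; the absolute value is an added safeguard (an assumption, since the sign of Eq. (15) depends on the calibration's axis conventions):

```python
import numpy as np

def smallest_wavelength(fu, p1, p0, s1, s0, d):
    """Eq. (15) for one line of sight; the inputs may equally be
    per-pixel arrays of the Sec. 2.3 coefficients."""
    return np.abs(fu * (p1 * s0 - p0 * s1) * d / (s0**2 - s1**2 * d**2))

# Field-wide pattern wavelength: round each per-pixel minimum up to an
# integer, then take the maximum so Eq. (14) holds at every pixel:
#   lam = int(np.max(np.ceil(lam_min_map)))
```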

2.6 Summary of method

The 3D surface-shape measurement using high-frequency background-modulation fringe patterns is summarized as follows:

  • 1. Prior to measurement:
    • a) Perform system gamma calibration and correction and stereovision system calibration (standard techniques not described in this paper).
    • b) Determine the smallest fringe wavelength (Sec. 2.5).
    • c) For every camera pixel (usually left camera), compute the corresponding epipolar lines on projector and right camera image planes (Sec. 2.3).
  • 2. During measurement:
    • a) Project the four background modulation fringe patterns (Sec. 2.1) onto the surface. Capture fringe-pattern images and perform lens distortion correction.
    • b) Compute the wrapped phase maps (Sec. 2.1).
    • c) Compute the binary patterns after fringe analysis for both cameras (Sec. 2.1 and 2.2).
  • 3. 3D surface-shape reconstruction:
    • a) For every pixel in one camera, compute the corresponding point candidates using the wrapped phase map and pre-calculated epipolar line (Sec. 2.4).
    • b) Eliminate false corresponding-point candidates using the binary pattern (Sec. 2.4).
    • c) Perform correspondence refinement (Sec. 2.4).
    • d) Compute 3D coordinates at all camera pixels based on stereovision techniques (a minimal triangulation sketch follows this list).
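Step 3. d) uses standard stereovision; one common choice, shown here as a sketch rather than the paper's specific procedure, is linear (DLT-style) triangulation of each refined correspondence from the two normalized image points and the Eq. (4)-style extrinsics:

```python
import numpy as np

def triangulate(xl, yl, xr, yr, R_L, T_L, R_R, T_R):
    """Linear triangulation of one refined correspondence.
    (xl, yl) and (xr, yr) are normalized image coordinates in the left
    and right cameras; R_*, T_* are each camera's rotation matrix and
    translation vector. Returns world coordinates (Xw, Yw, Zw)."""
    rows = []
    for x, y, R, T in ((xl, yl, R_L, T_L), (xr, yr, R_R, T_R)):
        P = np.hstack([R, np.reshape(T, (3, 1))])   # 3x4 matrix [R | T]
        rows.append(x * P[2] - P[0])                # from x = Xc/Zc
        rows.append(y * P[2] - P[1])                # from y = Yc/Zc
    _, _, Vt = np.linalg.svd(np.array(rows))        # 4x4 homogeneous system
    X = Vt[-1]
    return X[:3] / X[3]
```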

3. Experiments and results

To verify the performance of the new method, an experimental system was developed consisting of two monochrome cameras (Basler acA1300-200um) and one DLP projector (Optoma PK301), with resolutions of 640 × 480 and 848 × 480, respectively. The extrinsic and intrinsic parameters of the projector and cameras were obtained by system calibration [26]. The measurement volume depth was set to 200 mm (d = 100 mm). The smallest usable fringe-pattern wavelength was first calculated at every camera pixel using Eq. (15); its maximum over all pixels, and thus the wavelength used for the fringe patterns, was 21 pixels. Four high-frequency phase-shifted background modulation fringe patterns were generated with a 21-pixel wavelength (approximately 40 periods per pattern). The system gamma was corrected prior to fringe pattern projection [27].

The measurement accuracy of the method was first evaluated by measurement of a double hemisphere object, which had true radii of 50.800 ± 0.015 mm and distance between hemisphere centers 120.000 ± 0.005 mm based on manufacturing specification and precision. One of the four high-frequency phase-shifted background modulation fringe patterns is shown in Fig. 3(a). The wrapped phase map obtained by Eq. (2) is shown in Fig. 3(b). The computed binary pattern C(x, y) is shown in Fig. 3(c). The correspondence search and refinement were then performed directly along the pre-calculated epipolar lines for each pixel. Finally, the 3D reconstructed point cloud was obtained, as shown in Fig. 3(d).

Fig. 3. Measurement of the double-hemisphere object: a) one of the captured images, b) wrapped phase map, c) computed binary pattern, and d) 3D reconstructed point cloud.

The measurement accuracy of the new method was determined by least-squares fitting of a sphere to each hemisphere point cloud. The calculated radii of the two hemispheres were 50.741 mm and 50.775 mm, compared to the true radius of 50.800 ± 0.015 mm. The sphere-fitting standard deviations were 0.045 mm and 0.041 mm, respectively. The root-mean-square (RMS) errors, based on differences between the measured points on each hemisphere and the true radius, were 0.063 mm and 0.060 mm, respectively. The distance between the two calculated sphere centers was 120.073 mm, compared to the true distance of 120.000 ± 0.005 mm. The measurement accuracy was high because of the high-frequency fringe patterns and the high SNR of the wrapped phase map.
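The paper does not specify its sphere-fitting algorithm; a standard algebraic least-squares fit such as the following sketch (names illustrative) could be used to reproduce this kind of evaluation:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (N, 3) point array.
    Solves |p|^2 = 2*c.p + k, linear in center c and k = r^2 - |c|^2."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    # Radial residuals give the sphere-fitting standard deviation.
    residuals = np.linalg.norm(points - center, axis=1) - radius
    return center, radius, residuals.std()
```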

For comparison, measurement of the double-hemisphere object by a conventional three-frequency phase-shifting method, using 12-, 13-, and 14-pixel pitches and a total of twelve projected and captured fringe patterns, gave nearly identical calculated radii of 50.741 mm and 50.760 mm, lower sphere-fitting standard deviations of 0.024 mm and 0.027 mm, lower RMS errors of 0.039 mm and 0.040 mm, and an identical sphere-center distance of 120.073 mm. The measurement accuracies of the new method and the conventional multi-wavelength phase-shifting method were both excellent. The new method had slightly higher RMS error and standard deviation, likely because of the limit on the highest fringe-pattern frequency (smallest wavelength) that can be used. However, the new method uses only four fringe patterns compared to the twelve of the three-wavelength method, an advantage when acquisition speed is important, given that the measurement accuracy remains excellent.

Measurement with the new method was also performed on an object with surface discontinuity, a curling stone handle. One of the four high-frequency phase-shifted background modulation fringe patterns is shown in Fig. 4(a), the wrapped phase map and binary pattern in Figs. 4(b) and 4(c), and the 3D reconstructed point cloud in Fig. 4(d). The measurement results show that the new method, which performs all calculations pixel-wise, can handle objects with surface discontinuities.

Fig. 4. 3D measurement of the curling-stone handle: a) one of the captured images, b) computed wrapped phase map, c) computed binary pattern, and d) 3D reconstructed point cloud.

A further measurement with the new method was performed on spatially isolated objects, a mask and cylinder, shown in Fig. 5(a). The 3D reconstructed point cloud in Fig. 5(b) shows that the new method can handle measurement of separate object surfaces. Some small regions on the reconstructed mask were missing due to occlusion from one camera, which is a common problem for geometry-constraint based systems using two cameras [24].

Fig. 5. 3D measurement of spatially isolated objects: a) mask and cylinder, and b) 3D reconstructed point cloud.

4. Conclusion

A novel method of 3D surface-shape measurement was developed that uses high-frequency background-modulation fringe patterns generated based on a new fringe-wavelength geometry-constraint model. The model determines the highest pattern frequency that can be used for the measurement and limits the number of corresponding-point candidates located in the measurement volume to two, thus permitting reliable corresponding-point selection at minimal computational cost. Calculation of the epipolar lines prior to measurement further reduces the computational cost of the measurement. Experiments demonstrated the ability of the method to perform full-field 3D shape measurement with high accuracy, for a surface with geometric discontinuity and for spatially isolated objects, using only four high-frequency fringe patterns. Computation is performed pixel-wise and is thus well suited to parallel computing.

Funding

Natural Sciences and Engineering Research Council of Canada (NSERC); University of Waterloo; China Scholarship Council (CSC).

References and links

1. G. Sansoni and F. Docchio, "Three-dimensional optical measurements and reverse engineering for automotive applications," Robot. Comput.-Integr. Manuf. 20(5), 359–367 (2004).

2. K. Zhong, Z. Li, X. Zhou, Y. Li, Y. Shi, and C. Wang, "Enhanced phase measurement profilometry for industrial 3D inspection automation," Int. J. Adv. Manuf. Technol. 76(9-12), 1563–1574 (2015).

3. B. Wei, J. Liang, J. Li, and M. Ren, "Rapid three-dimensional chromoscan system of body surface based on digital fringe projection," Proc. SPIE 9576, 95760P (2015).

4. C. L. Heike, K. Upson, E. Stuhaug, and S. M. Weinberg, "3D digital stereophotogrammetry: a practical guide to facial image acquisition," Head Face Med. 6(1), 18 (2010).

5. C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou, "FaceWarehouse: A 3D facial expression database for visual computing," IEEE Trans. Vis. Comput. Graph. 20(3), 413–425 (2014).

6. C. Bräuer-Burchardt, M. Möller, C. Munkelt, M. Heinze, P. Kühmstedt, and G. Notni, "On the accuracy of point correspondence methods in three-dimensional measurement systems using fringe projection," Opt. Eng. 52(6), 063601 (2013).

7. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, "High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection," Opt. Lasers Eng. 51(8), 953–960 (2013).

8. A. Asundi and Z. Wensen, "Fast phase-unwrapping algorithm based on a gray-scale mask and flood fill," Appl. Opt. 37(23), 5416–5420 (1998).

9. Y. Xu, S. Jia, Q. Bao, H. Chen, and J. Yang, "Recovery of absolute height from wrapped phase maps for fringe projection profilometry," Opt. Express 22(14), 16819–16828 (2014).

10. Y. An and S. Zhang, "High-resolution, real-time simultaneous 3D surface geometry and temperature measurement," Opt. Express 24(13), 14552–14563 (2016).

11. Z. Wang, D. A. Nguyen, and J. C. Barnes, "Some practical considerations in fringe projection profilometry," Opt. Lasers Eng. 48(2), 218–225 (2010).

12. L. Song, Y. Chang, Z. Li, P. Wang, G. Xing, and J. Xi, "Application of global phase filtering method in multi frequency measurement," Opt. Express 22(11), 13641–13647 (2014).

13. W. H. Su, C. Y. Kuo, and F. J. Kao, "Three-dimensional trace measurements for fast-moving objects using binary-encoded fringe projection techniques," Appl. Opt. 53(24), 5283–5289 (2014).

14. D. Zheng and F. Da, "Self-correction phase unwrapping method based on Gray-code light," Opt. Lasers Eng. 50(8), 1130–1139 (2012).

15. Y. Wang and S. Zhang, "Novel phase-coding method for absolute phase retrieval," Opt. Lett. 37(11), 2067–2069 (2012).

16. S. Gai and F. Da, "Fringe image analysis based on the amplitude modulation method," Opt. Express 18(10), 10704–10719 (2010).

17. X. Liu and J. Kofman, "Background and amplitude encoded fringe patterns for 3D surface-shape measurement," Opt. Lasers Eng. 94, 63–69 (2017).

18. S. Heist, A. Mann, P. Kühmstedt, P. Schreiber, and G. Notni, "Array projection of aperiodic sinusoidal fringes for high-speed three-dimensional shape measurement," Opt. Eng. 53(11), 112208 (2014).

19. T. Tao, Q. Chen, J. Da, S. Feng, Y. Hu, and C. Zuo, "Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system," Opt. Express 24(18), 20253–20269 (2016).

20. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, "Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping," Opt. Lasers Eng. 51(11), 1213–1222 (2013).

21. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, "Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review," Opt. Lasers Eng. 85, 84–103 (2016).

22. W. Lohry, V. Chen, and S. Zhang, "Absolute three-dimensional shape measurement using coded fringe patterns without phase unwrapping or projector calibration," Opt. Express 22(2), 1287–1301 (2014).

23. Y. An, J.-S. Hyun, and S. Zhang, "Pixel-wise absolute phase unwrapping using geometric constraints of structured light system," Opt. Express 24(16), 18445–18459 (2016).

24. C. Jiang, B. Li, and S. Zhang, "Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers," Opt. Lasers Eng. 91, 232–241 (2017).

25. J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.

26. Z. Li, "Accurate calibration method for a structured light system," Opt. Eng. 47(5), 053604 (2008).

27. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, "Gamma model and its analysis for phase measuring profilometry," J. Opt. Soc. Am. A 27(3), 553–562 (2010).
