Optica Publishing Group

Depth recovering method immune to projector errors in fringe projection profilometry by use of cross-ratio invariance

Open Access

Abstract

In fringe projection profilometry, the projector parameters are difficult to calibrate accurately, and the residual errors degrade the measurement results. To solve this problem, this paper analyzes the epipolar geometry of the fringe projection system, revealing that, on an epipolar plane, a depth variation along an incident ray induces a pixel movement along the corresponding epipolar line on the image plane of the camera. The depth variation and the pixel movement are related to each other by projective transformations, under which their cross-ratio remains invariant. By use of this cross-ratio invariance, we suggest a depth recovering method immune to projector errors. To calibrate the measurement system, we shift a reference board perpendicularly to three positions with known depths and measure its phase maps as the reference phase maps. When measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase. Experimental results demonstrate that, with this technique, the errors associated with the projector, including its errors in geometry parameters, its lens distortions, and its luminance nonlinearity, do not affect the measurement results.

© 2017 Optical Society of America

1. Introduction

Fringe projection profilometry, as a whole-field, non-contact, and environment-insensitive three-dimensional (3D) measurement technique, has been extensively developed to meet the demands of various applications [1–4]. This technique is based on the principle of triangulation. In implementation, it uses a projector to cast sinusoidal fringe patterns onto the measured object, and uses a camera at a different angle to capture the patterns deformed by the depth variations of the surface. Analyzing the distorted patterns allows us to recover their phase distribution, and subsequently to reconstruct the depth map of the object surface. With this technique, the projector is difficult to calibrate accurately [5, 6], and its residual errors have therefore been recognized as one of the most crucial factors degrading the measurement accuracy.

In triangulation-based techniques, the binocular geometry serves as a standard model for describing a measurement system. Typically, a binocular stereovision system consists of two cameras separated by a baseline. For calibrating this system, a target having specially designed markers of known sizes (e.g., a flat checkerboard) is used. By capturing images of this target at various positions and angles and extracting their markers, the intrinsic and extrinsic parameters of the two cameras can be accurately determined through the perspective projection geometry [7, 8]. These parameters define the relations between the coordinates of an object point and its corresponding pixels in the two cameras. Although the data processing with this model involves complex nonlinear minimizations, such software is readily available, making the model easy to use in many applications.

When calibrating a fringe projection system, however, much more complicated operations must be implemented. The reason is that the projector, which replaces one of the two cameras in the measurement system, cannot capture images of the calibration board. To adapt the fringe projection system to the binocular model, the projector is treated as an inverse camera. A pattern with markers is projected by the projector onto the target surface and captured by a well-calibrated camera [5]. These markers are extracted from the captured images, and their spatial coordinates are then calculated from the known camera parameters. Subsequently, the projector parameters can be estimated indirectly through coordinate transformations between the projector and the camera. This indirect procedure lengthens the calibration and exposes the results to greater uncertainties. For example, the pattern may be blurred when projected onto the target because of the low-pass property of the projection system, which decreases the accuracy of marker extraction. For suppressing these uncertainties, a two-step strategy is suggested in [6]: estimating coarse parameters by projecting and capturing some markers, and then iteratively searching, within ranges centered at the coarse values just obtained, for more accurate results that minimize the errors in measuring the target.

Based on the binocular model, sinusoidal fringes instead of simple markers are also used for calibration, with the pixel coordinates of the projector coded as fringe phases [9–11]. Using sinusoidal fringe patterns helps improve the calibration accuracy, because they provide sufficiently many points for averaging out the effects of noise. Simultaneously, from these whole-field patterns, one can precisely select the feature points aligned with the printed markers on the target [10, 11], thus matching the camera calibration results well and simplifying the computation in data processing.

Using the same sinusoidal fringes, there exists a different strategy for calibrating the system without explicitly determining its parameters. With this method, a planar calibration board on a motorized track moves, along the direction perpendicular to this board, to a sequence of known depths, thus sampling the measurement space. At each depth, the phase distribution is measured in order to establish a mapping relationship between fringe phases and object depths. This phase-to-depth relation is usually represented by a function derived from the system geometry [12–16], or simply fitted with a polynomial [17–19]. This approach, termed the implicit calibration technique, is popular in measurement practice because of its simplicity in both implementation and data processing. Just as the aforementioned techniques need a precision target object, the implicit calibration technique requires a flat calibration board. Besides, the calibration result of the implicit technique is easily affected by the movement errors of the planar calibration board, including its piston and tilt errors, but this drawback can be effectively overcome by using a precision mechanism.

Another issue related to the projector is its luminance nonlinearity, which makes the fringes nonsinusoidal [20]. To eliminate this effect, the standard approach is to perform a photometric calibration of the projector [21, 22], establishing a look-up table or plotting a luminance curve that characterizes the nonlinearity. Because this calibration result depends not only on the behavior of the projector but also on such factors as the reflectivity and slope of the target plane, the ambient brightness, and the temperature and aging of the projector bulb, one has to interpret and use it carefully. Without a photometric calibration, we can suppress the effects of the projector nonlinearity by using the phase-shifting technique. Increasing the number of phase shifts [23, 24] or including the coefficients of harmonics in the algorithms [25, 26] enables us to eliminate the higher-order harmonics in the fringe patterns caused by the luminance nonlinearity. Besides, some alternative techniques, such as the statistical method [20], the look-up table for phase errors [27], and the iterative correction technique [28], have been developed for compensating for the influence of the luminance nonlinearity. Even so, simpler methods remain desirable.

From the aforementioned facts, we know that it is not easy to accurately calibrate a fringe projection system, and that the errors associated with the projector, including its geometry parameter errors, lens distortions, and luminance nonlinearity, will affect the measurement results. Some attempts have been made to solve this problem. By combining with stereo-photogrammetry, a projector or a structured light source can be used to measure 3D objects without affecting the measurement accuracy [29, 30]. With these methods, the measurement system must be equipped with at least two cameras, and the principle of stereovision, rather than of the fringe projection technique, is used to recover the 3D shapes. Because the projected fringe pattern is just an aid for establishing the point correspondences between the cameras, the projector parameters are excluded from the computation of the object depths.

In this paper, we present, to the best of our knowledge, a novel depth recovering method immune to projector errors in fringe projection profilometry. By analyzing the epipolar geometry of the fringe projection system, we know that, on an epipolar plane, the depth variation along an incident ray induces a pixel movement along the epipolar line on the image plane of the camera [31], and that the depth variation and the pixel movement can be connected by projective transformations. Under this condition, their cross-ratio [32] remains invariant and can be used for calculating the object depths. Firstly, we shift the reference board perpendicularly to three positions with known depths, and measure their phase maps as the reference phase maps; secondly, when measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase. In so doing, the errors associated with the projector, including its errors in geometry parameters, its lens distortions, and its luminance nonlinearity, will not affect the measurement result in theory. The validity of this method is verified by experimental results.

2. Phase measuring in fringe projection profilometry

The measurement system of fringe projection profilometry, as schemed in Fig. 1, mainly consists of a projector for casting fringe patterns onto the measured object, and a camera for recording the distorted fringe patterns. In addition, we add a reference board mounted on a track for calibrating this system.

Fig. 1 Measurement system.

Generally, the fringe projection technique measures the depth map of an object surface by recovering the phase distribution from the distorted fringe patterns, and many methods have been developed for performing this task. Among them, we presume for the moment that the phase-shifting technique is used. K sinusoidal fringe patterns, with a constant phase increment of 2π/K radians between consecutive frames, are cast in sequence onto the object surface using the projector. The kth (k = 0, 1, ..., K−1) distorted fringe pattern captured by the camera is represented with

$$I_k(u,v) = a(u,v) + b(u,v)\cos[\phi(u,v) + 2k\pi/K],$$
where (u, v) are pixel coordinates on the image plane of the camera. Ik(u, v), a(u, v), and b(u, v) denote the recorded intensity, the background intensity, and the modulation at the point (u, v), respectively. ϕ(u, v) is the phase to be measured. By utilizing the phase-shifting algorithm [4], the wrapped phase map is calculated as
$$\phi_{\mathrm{wrapped}}(u,v) = \arctan\!\left[\frac{-\sum_{k=0}^{K-1} I_k(u,v)\sin(2k\pi/K)}{\sum_{k=0}^{K-1} I_k(u,v)\cos(2k\pi/K)}\right].$$
Here, we have to calculate the absolute phases ϕ(u, v) from the wrapped phase ϕwrapped(u, v), because the phase-to-depth relation of the fringe projection technique is nonlinear in most cases [15]. In this work, we get the absolute phase map ϕ(u, v) by projecting multifrequency fringe patterns and employing the temporal phase unwrapping algorithm [33, 34]. Note that other methods (e.g., the Gray code method) can also be used for performing the same task.
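As an illustration, the K-step phase-shifting calculation of Eq. (2) can be sketched as below. This is a minimal NumPy sketch of our own, with the function name and the array layout being assumptions rather than taken from the paper; the minus sign in the numerator follows from the +2kπ/K shift direction of Eq. (1).

```python
import numpy as np

def wrapped_phase(patterns):
    """Recover the wrapped phase from K phase-shifted fringe patterns.

    patterns: array of shape (K, H, W), where the k-th frame carries a
    phase shift of 2*k*pi/K as in Eq. (1).  Returns the wrapped phase
    map in (-pi, pi], following Eq. (2).
    """
    K = patterns.shape[0]
    k = np.arange(K).reshape(-1, 1, 1)
    # Quadrature sums over the K frames; the minus sign compensates for
    # the +2k*pi/K sign convention of the phase shifts.
    num = -np.sum(patterns * np.sin(2 * k * np.pi / K), axis=0)
    den = np.sum(patterns * np.cos(2 * k * np.pi / K), axis=0)
    return np.arctan2(num, den)
```

Using `arctan2` instead of `arctan` keeps the full (−π, π] range of the wrapped phase; temporal unwrapping then lifts it to the absolute phase.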

In fringe projection profilometry, the pixel positions in the fringe patterns of the projector are coded as fringe phases. By matching the pixels between the camera and the projector through the calculated phases, the depth map is recovered using the principle of triangulation. We know from this procedure that the errors associated with the projector, e.g., its geometry parameter errors, lens distortions, and luminance nonlinearity, will affect the measurement results. In the next section, we shall provide a method for calculating the depth map by use of the cross-ratio invariance. This method provides a possibility of precluding the errors of the projector from affecting the measurement results.

3. Depth map recovering by use of the cross-ratio invariance

3.1 Epipolar geometry of the fringe projection system

Figure 2(a) depicts the epipolar geometry of a fringe projection system. In it, the points Oc and Op denote the centers of lenses of the camera and the projector, respectively. The line OcOp is the baseline of the system. The planes πc and πp denote the image plane of the camera and the fringe pattern plane of the projector, respectively. The baseline OcOp crosses πc at ec, and crosses πp at ep. These two points are the epipoles on the image plane of the camera and on the fringe pattern plane of the projector, respectively. Assuming that Q is a point in the object field, the three points, Q, Oc, and Op, form a plane called the epipolar plane. Its intersecting lines with πc and πp, i.e., lc and lp, are the epipolar lines on πc and πp, respectively. The intersecting line between the epipolar plane and the reference plane is denoted as lR.

Fig. 2 Geometry of cross-ratio invariance for fringe projection profilometry. (a) shows the epipolar geometry of the measurement system. (b) shows the 2D plot of the epipolar plane in (a), on which the cross-ratio of the object depths, the cross-ratio of the phases, and the cross-ratio of the pixel shifts on πc are equal to one another.

In Fig. 2(a), (u, v) and (s, t) are pixel coordinates on the image plane of the camera and on the fringe pattern plane of the projector, respectively, with their origins at their image centers. We set the camera coordinate system, Ocxcyczc, with its origin located at Oc and its zc-axis coinciding with the optical axis of the camera. Similarly, the projector coordinate system, Opxpypzp, has its origin at Op and its zp-axis coinciding with the optical axis of the projector. In the world coordinate system Oxyz, the z-axis is perpendicular to the reference plane.

Along the incident light ray OpQ, the object point Q is illuminated by the light source point q’ on the fringe pattern plane of the projector. Extended beyond Q, this incident ray crosses the reference plane at M. Because M is illuminated by the same point q’, the two points Q and M have the same phase, equal to that of q’. On the plane πc, Q and M produce their images at q and m, respectively, which lie on the same epipolar line lc. When the point Q moves along the incident ray OpQ, its image point q shifts along lc.

Similarly, the reflected ray QOc crosses the reference plane at N, meaning that the two points Q and N produce their images at the same point q on the plane πc. These two points Q and N are illuminated by the light source points q’ and n on the plane πp, respectively. The points q’ and n lie on the same epipolar line lp. When Q moves along the reflected ray QOc, its light source point q’ will shift along lp.

If we shift the reference plane, along the direction perpendicular to this plane, to three positions with known depths and measure its phase maps as reference phase maps, the object depth can be calculated by use of cross-ratio invariance on the epipolar plane. Here, we have two choices. The first is to fix the pixel coordinates and calculate the depth by use of the cross-ratio of phases. Like other conventional techniques, this method is sensitive to the errors related to the projector model. The second is to fix the phase value of a pixel and calculate the depth by use of the cross-ratio of pixel shifts. This method, in principle, is immune to the errors related to the projector model. In the following subsections, we introduce both methods. The first is included because it belongs to the conventional techniques and thus provides a comparison, helping us gain more insight into the proposed method.

3.2 Depth calculation by use of the cross-ratio of fringe phases

In the conventional techniques of fringe projection profilometry, the mapping relationship between the fringe phase and the object depth is established by calibrating the measurement system. In principle, this relationship can also be derived by using the cross-ratio invariance of the fringe phases.

We shift the reference plane perpendicularly (i.e., along z-axis) to three positions with known depths denoted as H1, H2, and H3. Here, we define the depth to be a value along the negative direction of the z-axis. Accordingly, its phase maps at these depth positions are measured and denoted as Φ1(u, v), Φ2(u, v), and Φ3(u, v), respectively. In order to show the geometry more clearly, we select an epipolar plane from Fig. 2(a), and show its 2D plot in Fig. 2(b). In it, the intersecting lines of the epipolar plane with the reference plane at three depths are denoted as lR1, lR2, and lR3.

In Fig. 2(b), when the reference plane is at the depths H1, H2, and H3, the reflected ray QOc crosses it at the points N1, N2, and N3, respectively. These points are collinear with Q, and produce their images at the same point q on the image plane of the camera. The points, N1, N2, and N3, are illuminated by the light source points, n1, n2, and n3, respectively, on the fringe pattern plane of the projector. The four points Q, N1, N2, N3 and the four points q’, n1, n2, n3, can be related by a projective transformation, so that their cross ratios (Q, N1; N2, N3) and (q’, n1; n2, n3) are equal to each other [32]. Assuming that the object point has a depth h(u, v) and a phase ϕ(u, v), we have

$$(Q, N_1; N_2, N_3) = \frac{QN_2 \cdot N_1N_3}{N_1N_2 \cdot QN_3} = \frac{[H_2 - h(u,v)](H_3 - H_1)}{(H_2 - H_1)[H_3 - h(u,v)]} = (q', n_1; n_2, n_3) = \frac{q'n_2 \cdot n_1n_3}{n_1n_2 \cdot q'n_3} = \frac{[\Phi_2(u,v) - \phi(u,v)][\Phi_3(u,v) - \Phi_1(u,v)]}{[\Phi_2(u,v) - \Phi_1(u,v)][\Phi_3(u,v) - \phi(u,v)]}$$
where all the distances, e.g., QN2 or n1n2, should carry signs keeping their direction information. In Eq. (3), the depths and phases of the reference plane are known, and the object phase has been measured. Therefore, the object depth can be calculated by solving this equation. The result is
$$h(u,v) = \frac{A(u,v) + B(u,v)\,\phi(u,v)}{C(u,v) + D(u,v)\,\phi(u,v)}$$
with its coefficients, A(u, v), B(u, v), C(u, v), and D(u, v), being functions of (u, v) and depending on the depths and phases of the reference plane. By omitting the coordinate notations for shortening the expression, these coefficients are given as follows:

$$\begin{cases} A = \Phi_1\Phi_2 H_3(H_1 - H_2) + \Phi_2\Phi_3 H_1(H_2 - H_3) + \Phi_3\Phi_1 H_2(H_3 - H_1) \\ B = \Phi_3 H_3(H_1 - H_2) + \Phi_1 H_1(H_2 - H_3) + \Phi_2 H_2(H_3 - H_1) \\ C = \Phi_1\Phi_2(H_1 - H_2) + \Phi_2\Phi_3(H_2 - H_3) + \Phi_3\Phi_1(H_3 - H_1) \\ D = \Phi_3(H_1 - H_2) + \Phi_1(H_2 - H_3) + \Phi_2(H_3 - H_1) \end{cases}$$

Equation (4) represents the phase-to-depth relation, whose coefficients are obtained by perpendicularly shifting the reference plane to three depth positions and measuring the corresponding phase maps. Its implementation is the same as that of some existing techniques [12–19]. When using phase differences instead of absolute phases, it is easy to deduce from Eq. (4) an equation having the same form as that in Ref. [15]. In this view, the method depending on the cross-ratio of fringe phases essentially belongs to the conventional techniques. The presence of the phase in its equations makes the calculated depths easily affected by the errors associated with the projector.
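To make the conversion of Eqs. (4) and (5) concrete, the following sketch evaluates the coefficients A, B, C, and D and returns the depth. The function name is our own; the arithmetic works per pixel, on scalars or arrays alike. Any projective (Möbius) relation between phase and depth is recovered exactly, which is precisely the content of the cross-ratio invariance.

```python
def phase_to_depth(phi, Phi1, Phi2, Phi3, H1, H2, H3):
    """Depth from the cross-ratio of fringe phases, Eqs. (4)-(5).

    Phi1..Phi3: reference phases measured at the known depths H1..H3;
    phi: the measured object phase at the same pixel.
    """
    # Coefficients of Eq. (5), built from the three reference positions.
    A = Phi1*Phi2*H3*(H1 - H2) + Phi2*Phi3*H1*(H2 - H3) + Phi3*Phi1*H2*(H3 - H1)
    B = Phi3*H3*(H1 - H2) + Phi1*H1*(H2 - H3) + Phi2*H2*(H3 - H1)
    C = Phi1*Phi2*(H1 - H2) + Phi2*Phi3*(H2 - H3) + Phi3*Phi1*(H3 - H1)
    D = Phi3*(H1 - H2) + Phi1*(H2 - H3) + Phi2*(H3 - H1)
    # Eq. (4): the phase-to-depth relation as a ratio of affine terms.
    return (A + B*phi) / (C + D*phi)
```

Because the phase enters this expression directly, any phase error caused by the projector propagates into the depth, which is the weakness discussed above.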

In the next subsection, we shall suggest a different solution immune to the projector errors.

3.3 Depth calculation by use of the cross-ratio of pixel shifts

In Fig. 2(b), when the reference plane is at the depths H1, H2, and H3, the incident ray OpQ crosses it at the points M1, M2, and M3, respectively. These points are collinear with Q, and illuminated by the same point q’ on the fringe pattern plane of the projector. Therefore, they have the same phase as ϕ(u, v). These points produce their images at m1, m2, and m3, respectively, on the image plane of the camera. We denote the pixel coordinates of m1, m2, and m3 as (um1, vm1), (um2, vm2), and (um3, vm3), respectively, satisfying

$$\Phi_1(u_{m1}, v_{m1}) = \Phi_2(u_{m2}, v_{m2}) = \Phi_3(u_{m3}, v_{m3}) = \phi(u,v).$$
The four points Q, M1, M2, M3 and the four points q, m1, m2, m3 can be related by a projective transformation, so that their cross ratios (Q, M1; M2, M3) and (q, m1; m2, m3) are equal to each other [32]. We have
$$(Q, M_1; M_2, M_3) = \frac{QM_2 \cdot M_1M_3}{M_1M_2 \cdot QM_3} = \frac{[H_2 - h(u,v)](H_3 - H_1)}{(H_2 - H_1)[H_3 - h(u,v)]} = (q, m_1; m_2, m_3) = \frac{qm_2 \cdot m_1m_3}{m_1m_2 \cdot qm_3}$$
$$= \frac{[(u_{m2} + jv_{m2}) - (u + jv)][(u_{m3} + jv_{m3}) - (u_{m1} + jv_{m1})]}{[(u_{m2} + jv_{m2}) - (u_{m1} + jv_{m1})][(u_{m3} + jv_{m3}) - (u + jv)]} = \frac{(u_{m2} - u)(u_{m3} - u_{m1})}{(u_{m2} - u_{m1})(u_{m3} - u)} = \frac{(v_{m2} - v)(v_{m3} - v_{m1})}{(v_{m2} - v_{m1})(v_{m3} - v)}$$
where j is the square root of −1, by which we conveniently denote the pixel coordinates as complex numbers, thus keeping the direction information in their differences. In the last line of Eq. (7), the cross-ratio (q, m1; m2, m3) can be calculated by using the coordinate components along the u or the v axis alone, because these four points are collinear.
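The complex-number bookkeeping of Eq. (7) can be written directly; a small sketch (the function name is ours), where each collinear image point enters as u + jv so that the signed differences preserve direction:

```python
def cross_ratio(p0, p1, p2, p3):
    """Signed cross-ratio (p0, p1; p2, p3) of four collinear points.

    Each point is a complex number u + 1j*v, and the ordering follows
    Eq. (7): (p0->p2 times p1->p3) over (p1->p2 times p0->p3).
    """
    return ((p2 - p0) * (p3 - p1)) / ((p2 - p1) * (p3 - p0))
```

For four collinear points the result is real up to rounding, and it equals the cross-ratio of either coordinate component taken alone, as the last line of Eq. (7) states.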

Equations (6) and (7) allow us to derive a depth calculating method immune to the errors associated with the projector. Its procedure is summarized in the following two steps.

STEP 1. Measure the reference phase maps and determine the epipole on the image plane of the camera.

Shift the reference plane in its perpendicular direction to three positions with known depths H1, H2, and H3, project both the horizontal and the vertical fringe patterns onto it, and then capture the deformed patterns and calculate their phase maps. Use the temporal phase-unwrapping technique based on multifrequency fringe patterns [34] to get the absolute phase maps. As a result, we have three phase maps corresponding to the horizontal fringes and three phase maps corresponding to the vertical fringes. For distinguishing them, in this step, we denote the phases of the horizontal and the vertical fringes, at the κth depth with κ = 1, 2, 3, as ΦHκ(u, v) and ΦVκ(u, v), respectively.

Because the next step of this method involves a searching procedure along an epipolar line on the image plane of the camera, we must determine the epipole coordinates first. To do so, the phase maps of both fringe directions have to be used. A direct method is to search, in the phase maps just obtained, for the pixels having the same phases in both directions. By fitting these pixels, one can determine the functions of the equal-phase lines, i.e., the functions of the epipolar lines on the image plane of the camera. Furthermore, the epipole can be determined by calculating the intersection of these equal-phase lines in the least squares sense. This method is somewhat cumbersome in data processing. In this work, we use a simpler and more accurate method. According to [35], the phases of straight fringes on a flat plane are theoretically distributed as a rational function, so we simultaneously fit ΦHκ(u, v) and ΦVκ(u, v) using

$$\begin{cases} \Phi_{H\kappa}(u,v) \approx \dfrac{d_{\kappa 3} + d_{\kappa 4}u + d_{\kappa 5}v}{1 + d_{\kappa 1}u + d_{\kappa 2}v} \\[2ex] \Phi_{V\kappa}(u,v) \approx \dfrac{d_{\kappa 6} + d_{\kappa 7}u + d_{\kappa 8}v}{1 + d_{\kappa 1}u + d_{\kappa 2}v} \end{cases}$$
Note that the denominators in these two equations have the same coefficients, because they are related to the depth and are independent of the fringe directions [35]. Substituting the measured phases into Eq. (8) yields a system of equations. Solving it in the least squares sense gives the coefficients dκ1 through dκ8 for each depth position (κ = 1, 2, 3).
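Cross-multiplying Eq. (8) by the common denominator makes the fit linear in the eight coefficients, so no iterative optimization is needed. The sketch below assumes our own function name and data layout (flattened arrays of pixel coordinates and phases for one depth position):

```python
import numpy as np

def fit_rational_phase(u, v, phi_h, phi_v):
    """Fit the rational phase model of Eq. (8) at one depth position.

    u, v: 1-D arrays of pixel coordinates; phi_h, phi_v: the measured
    horizontal- and vertical-fringe phases there.  Returns d1..d8, where
    d1, d2 are shared by the two denominators.
    """
    n = u.size
    A = np.zeros((2 * n, 8))
    b = np.concatenate([phi_h, phi_v])
    # Phi_H * (1 + d1*u + d2*v) = d3 + d4*u + d5*v, rearranged so that
    # the unknowns d1..d8 appear linearly on the left-hand side.
    A[:n, 0], A[:n, 1] = -u * phi_h, -v * phi_h      # shared denominator
    A[:n, 2], A[:n, 3], A[:n, 4] = 1.0, u, v          # Phi_H numerator
    A[n:, 0], A[n:, 1] = -u * phi_v, -v * phi_v
    A[n:, 5], A[n:, 6], A[n:, 7] = 1.0, u, v          # Phi_V numerator
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d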

At the epipole on the image plane of the camera, namely (ue, ve), the depth variation does not induce any changes in phase. Therefore, all the phase maps denoted by Eq. (8) certainly intersect with one another at this epipole, viz.

$$\begin{cases} \dfrac{d_{\kappa 3} + d_{\kappa 4}u_e + d_{\kappa 5}v_e}{1 + d_{\kappa 1}u_e + d_{\kappa 2}v_e} - \dfrac{d_{\lambda 3} + d_{\lambda 4}u_e + d_{\lambda 5}v_e}{1 + d_{\lambda 1}u_e + d_{\lambda 2}v_e} = 0 \\[2ex] \dfrac{d_{\kappa 6} + d_{\kappa 7}u_e + d_{\kappa 8}v_e}{1 + d_{\kappa 1}u_e + d_{\kappa 2}v_e} - \dfrac{d_{\lambda 6} + d_{\lambda 7}u_e + d_{\lambda 8}v_e}{1 + d_{\lambda 1}u_e + d_{\lambda 2}v_e} = 0 \end{cases}$$
with κ, λ = 1, 2, 3 and κ ≠ λ. In this step, we have six phase maps corresponding to three different depths. By fixing λ = 1 and letting κ = 2, 3, we have four equations based on Eq. (9). Solving them in the least squares sense, the epipole (ue, ve) is calculated.
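One possible way to solve the four equations of Eq. (9) for (ue, ve) in the least squares sense is a short Gauss–Newton iteration. The solver below is our own sketch (the paper does not prescribe a particular solver), using a forward-difference Jacobian for brevity:

```python
import numpy as np

def find_epipole(d, n_iter=50):
    """Estimate the epipole (ue, ve) from the coefficients of Eq. (8).

    d has shape (3, 8): d[k] = (d_{k1}, ..., d_{k8}) for the three depth
    positions.  Drives the phase differences of Eq. (9) to zero by
    Gauss-Newton iteration with a numerical Jacobian.
    """
    def phases(uv, row):
        u, v = uv
        den = 1.0 + row[0] * u + row[1] * v
        return np.array([(row[2] + row[3] * u + row[4] * v) / den,
                         (row[5] + row[6] * u + row[7] * v) / den])

    def residuals(uv):
        # Eq. (9) with lambda = 1 and kappa = 2, 3: four equations in (ue, ve).
        return np.concatenate([phases(uv, d[k]) - phases(uv, d[0])
                               for k in (1, 2)])

    uv = np.zeros(2)
    for _ in range(n_iter):
        r = residuals(uv)
        eps = 1e-6
        J = np.column_stack([(residuals(uv + eps * e) - r) / eps
                             for e in np.eye(2)])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        uv = uv + step
    return uv
```

Because all six phase surfaces pass through the epipole exactly, the residual vanishes at the solution and the iteration converges there from a modest starting point.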

STEP 2. Measure the depth map of an object.

Equation (7) shows that the cross-ratio can be calculated by using the u or the v coordinate, implying that we can measure the object depth by using fringes of one direction. We presume for the moment that the vertical fringes are used, so that ΦV1(u, v), ΦV2(u, v), and ΦV3(u, v) are the reference phase maps, and the horizontal coordinate u is used for calculating the cross-ratio. By projecting the same vertical fringe patterns onto the measured object, we measure the object phase map ϕ(u, v). Connecting each pixel (u, v) with (ue, ve) gives the function of the epipolar line passing through (u, v). Along this epipolar line, we search, in the reference phase maps, for the points having the same phase as ϕ(u, v). They are denoted as (um1, vm1), (um2, vm2), and (um3, vm3). In general, a searched point does not fall exactly on a pixel; in this case, we calculate its coordinates from its four closest pixels using bilinear interpolation. Using the searching results, the object depth at the pixel (u, v), according to Eq. (7), is calculated as

$$h(u,v) = \frac{(u_{m1}u_{m2} + u_{m3}u)H_3(H_1 - H_2) + (u_{m2}u_{m3} + u_{m1}u)H_1(H_2 - H_3) + (u_{m3}u_{m1} + u_{m2}u)H_2(H_3 - H_1)}{(u_{m1}u_{m2} + u_{m3}u)(H_1 - H_2) + (u_{m2}u_{m3} + u_{m1}u)(H_2 - H_3) + (u_{m3}u_{m1} + u_{m2}u)(H_3 - H_1)}.$$
Similarly, when using the horizontal fringes, we should select ΦH1(u, v), ΦH2(u, v), and ΦH3(u, v) as the reference phase maps, and use the coordinate v instead of u for calculating the depths.
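Once the equal-phase points have been located, Eq. (10) is a closed-form expression. A direct transcription follows (the function name is ours), written for the vertical-fringe case where the u-coordinates are used:

```python
def depth_from_pixel_shifts(u, um1, um2, um3, H1, H2, H3):
    """Depth at a camera pixel from the cross-ratio of pixel shifts, Eq. (10).

    u: the pixel's horizontal coordinate; um1..um3: the u-coordinates of
    the equal-phase points found along the epipolar line in the three
    reference phase maps measured at depths H1..H3.
    """
    # Numerator and denominator of Eq. (10); no phase value appears.
    num = ((um1*um2 + um3*u) * H3 * (H1 - H2)
           + (um2*um3 + um1*u) * H1 * (H2 - H3)
           + (um3*um1 + um2*u) * H2 * (H3 - H1))
    den = ((um1*um2 + um3*u) * (H1 - H2)
           + (um2*um3 + um1*u) * (H2 - H3)
           + (um3*um1 + um2*u) * (H3 - H1))
    return num / den
```

For horizontal fringes, the v-coordinates are passed instead. Note that no phase value appears among the arguments, which is the root of the immunity to projector errors.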

In Eq. (10), the phase is absent, making the depth calculation insensitive to the errors of the projector. Consider Fig. 2(b) once again, where the three reference points M1, M2, and M3 are involved in calculating the depth of Q. All four points are illuminated by the same light ray from the single point q’, so the geometric parameters of the projector are not important. At these four points, the luminance nonlinearity induces the same phase errors, and therefore does not affect the search for the pixels having the same phase. In the next section, we verify the validity of this method experimentally.
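The equal-phase search of STEP 2 can be sketched as follows. The step size, search span, and bracketing strategy are our own choices (the paper specifies only the bilinear interpolation), and the phase is assumed monotonic along the epipolar line within the search span:

```python
import numpy as np

def bilinear(phase, x, y):
    """Bilinearly interpolate the phase map at a sub-pixel point, with x
    the column (u) coordinate and y the row (v) coordinate."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * phase[y0, x0]
            + dx * (1 - dy) * phase[y0, x0 + 1]
            + (1 - dx) * dy * phase[y0 + 1, x0]
            + dx * dy * phase[y0 + 1, x0 + 1])

def search_equal_phase(phase, target, u, v, epipole, step=0.1, span=50.0):
    """Walk along the epipolar line through (u, v), defined by the epipole,
    until the interpolated reference phase crosses `target`; then refine
    the crossing by linear interpolation.  Returns the sub-pixel (u, v)
    of the equal-phase point, or None if no crossing is found."""
    ue, ve = epipole
    d = np.array([u - ue, v - ve], float)
    d /= np.linalg.norm(d)                      # unit direction of the line
    prev_t = prev_f = None
    for t in np.arange(-span, span, step):
        x, y = u + t * d[0], v + t * d[1]
        if not (0 <= x < phase.shape[1] - 1 and 0 <= y < phase.shape[0] - 1):
            continue
        f = bilinear(phase, x, y) - target
        if prev_f is not None and prev_f * f <= 0:   # sign change: crossing
            t_star = prev_t + (t - prev_t) * prev_f / (prev_f - f)
            return u + t_star * d[0], v + t_star * d[1]
        prev_t, prev_f = t, f
    return None
```

Running this search in each of the three reference phase maps yields the points (um1, vm1), (um2, vm2), and (um3, vm3) needed by Eq. (10).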

4. Experiment

In this section, we experimentally verify the validity of the proposed method by measuring practical objects. The measurement system, as schemed in Fig. 1, mainly consists of a DLP projector (PHILIPS PPX4010, 854 × 480 pixels) and a digital camera (AVT Stingray F-125B) with a lens (KOWA LM12JC) having a focal length of 12 mm. Using this system, we perform experiments under different conditions of projector errors. In each situation, for calibrating the system, we perpendicularly shift the reference plane to three positions having different depths, and project both the horizontal and the vertical fringe patterns onto this plane. The captured patterns allow us to measure the reference phase maps and determine the epipole on the image plane of the camera. The phase-shifting technique is used to recover the fringe phases, with three phase shifts and a phase increment of 2π/3 radians between consecutive frames. Sampling theory shows that the N-step phase-shifting algorithm can remove the influence of harmonics induced by the projector nonlinearity up to the (N−2)th order [24], meaning that the three-step phase-shifting algorithm used in this experiment cannot restrain any harmonics induced by the projector nonlinearity. The temporal phase unwrapping technique based on multifrequency fringe patterns [34] is employed to unwrap the phases and get the absolute phase maps. With the origin of the pixel coordinates at the center of the image, the epipole coordinates are calculated to be (−1950.1, 2173.9), lying to the south-west of the image. We select the horizontal fringe patterns for measurement because these patterns, according to the epipole position, have slightly larger angles with the epipolar lines than the vertical ones.

Figure 3 illustrates, in its top row, some captured fringe patterns of the reference plane. In each set, the three fringe patterns have the same phase shift of 0, and correspond, from front to back, to the depth positions H1 = 0 mm, H2 = 30 mm, and H3 = 60 mm, respectively. Comparing them, the fringe movements caused by the depth variations are observable. Below the fringe patterns are their corresponding phase maps. In Fig. 3, the columns correspond to different error conditions. The first column, i.e., (1), shows the results of using “good” fringe patterns, whose fringes are straight with no deformations and whose projector nonlinearity has been corrected in advance through a photometric calibration. In the second column, i.e., (2), the same straight fringe patterns are used, but the projector nonlinearity is not corrected, meaning that the fringes are not perfectly sinusoidal but contain high-order harmonics. In the third column [Column (3)], the projector nonlinearity has been corrected, but the fringes are slightly curved to emulate unknown distortions. We do this because the lens distortions of the projector can deform the fringes, and at the same time the errors in the geometric parameters of the system are, in their effects on the measurement results, equivalent to fringe shape distortions. The rightmost column [Column (4)] illustrates the worst situation, in which the fringes have distortions in both their geometric shapes and their intensity profiles.

Fig. 3 Calibration results of the reference phase maps. Row (a) shows the captured fringe patterns on the reference plane. The three fringe patterns in each set have the same phase shift of 0, and correspond, from front to back, to the depth positions H1 = 0 mm, H2 = 30 mm, and H3 = 60 mm, respectively. Row (b) shows their corresponding phase maps in radians, which will be used as the reference phase maps in measurement. The columns correspond to different error conditions. (1) The fringes are straight, and the projector nonlinearity has been corrected. (2) The fringes are straight, but the projector nonlinearity is not corrected. (3) The fringes are slightly curved, but the projector nonlinearity has been corrected. (4) The fringes are slightly curved, and the projector nonlinearity is not corrected.

By using the phase maps shown in Fig. 3 as the reference phase maps, and correspondingly projecting the same fringe patterns onto a measured object, we continue the experiment. Figure 4 shows the measurement results for an object having a small height. In it, the columns, (1) through (4), are aligned with those in Fig. 3, corresponding to different situations of the projector errors. The top row [i.e., Row (a)] shows the captured fringe patterns having a phase shift of 0, and the second row [i.e., Row (b)] shows the absolute phase maps recovered from the fringe patterns.

Fig. 4 Measurement results of an object having a height smaller than 30 mm. The columns, (1) through (4), aligned with those in Fig. 3, correspond to different situations of errors of the projector. The rows, from (a) to (d), show the fringe patterns having a phase shift of 0, the unwrapped phase maps in radians, the depth maps in millimeters calculated by use of the cross-ratio of fringe phases, and the depth maps in millimeters calculated by use of the cross-ratio of pixel shifts, respectively.

When the fringe phases are measured, the depth maps can be recovered. In Fig. 4, the third row [i.e., (c)] gives the depth maps calculated by using the method introduced in Section 3.2, which is based on the cross-ratio of fringe phases. Because this method directly converts the phases into depths through Eq. (4), the phase errors cause measurement artifacts in the depth maps. Taking the result in Fig. 4(c1) as an example, although straight fringes are used and the projector nonlinearity has been corrected, the residual errors still induce visible artifacts in the calculated depth map. On its right side, Fig. 4(c2) is obtained by using straight fringes without correcting the projector nonlinearity. In it, the artifacts are parallel to, and three times denser than, the fringes. Such artifacts are typically induced by the projector nonlinearity: theoretical analysis has demonstrated that, when using the N-step phase-shifting algorithm, the residual errors induced by the projector nonlinearity, in the calculated phase maps and further in the depth maps, have frequencies N times higher than the fringes [36]. Figure 4(c3) shows the depth map obtained when the fringes are slightly distorted but the projector nonlinearity has been corrected. From this depth map, it is evident that the distortions in fringe shapes affect the measurement result, though the errors caused by them are not as significant as those caused by the projector nonlinearity. In Fig. 4(c4), the depth map is ruined by the fringe shape distortions and the projector nonlinearity simultaneously. Below them, the bottom row [Row (d)] of Fig. 4 shows the results of the method proposed in Section 3.3, which calculates the depth maps by use of the cross-ratio of pixel shifts. In these recovered depth maps, the artifacts induced by the errors of the projector have been suppressed significantly, demonstrating that the proposed method is valid.

Note that, when calibrating the system in this experiment, we shifted the reference board from 0 to 60 mm in depth. The object measured in Fig. 4 has a height smaller than 30 mm, lying within this range. Beyond this range, however, the measurement is subject to greater uncertainty. To investigate the performance of the proposed method in such a case, a second object with a height over 80 mm is measured. The fringe patterns and the measurement results are shown in Fig. 5, which has the same layout as Fig. 4. Similar phenomena are observed in Fig. 5: the artifacts in the third row, induced by the projector errors, are eliminated in the bottom row, demonstrating that the proposed method still works well in this case.


Fig. 5 Measurement results of an object having a height over 80 mm. The columns, (1) to (4), aligned with those in Fig. 3, correspond to different situations of errors of the projector. The rows, from (a) to (d), show the fringe patterns having a phase shift of 0, the unwrapped phase maps in radians, the depth maps in millimeters calculated by use of the cross-ratio of fringe phases, and the depth maps in millimeters calculated by use of the cross-ratio of pixel shifts, respectively.


In fringe projection profilometry, the conventional methods calculate object depths from fringe phases. Their underlying principle is to determine the depth of an object point by fixing its pixel in the camera and measuring the variation of its corresponding pixel in the projector. With them, the errors associated with the projector, including the fringe shape distortions and the intensity nonlinearity, affect the measurement results by introducing uncertainties into the determination of the pixel variation in the projector, as seen in the third rows of Figs. 4 and 5. With the proposed technique, we do the same task by fixing the projector pixel and measuring the variation of its corresponding pixel in the camera. In principle, only a single projector pixel, the one illuminating the object point, is involved, so its exact position and intensity are unimportant for determining the depth of the object point. The bottom rows of Figs. 4 and 5 show that, with this method, the artifacts induced by the errors of the projector are suppressed significantly.
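The pixel-shift route of Eq. (10) can be sketched in the same style. Here um1..um3 are the u-coordinates of the pixels in the three reference phase maps that share the phase of the object pixel at u; the names are ours, and the sketch handles only the u-coordinate form of Eq. (10).

```python
def depth_from_pixel_shift(u, um1, um2, um3, H1, H2, H3):
    """Eq. (10): depth from the cross-ratio of pixel shifts along an
    epipolar line. um1..um3 are the u-coordinates in the reference
    phase maps having the same phase as the object pixel at u."""
    t1 = um1*um2 + um3*u
    t2 = um2*um3 + um1*u
    t3 = um3*um1 + um2*u
    num = t1*H3*(H1 - H2) + t2*H1*(H2 - H3) + t3*H2*(H3 - H1)
    den = t1*(H1 - H2) + t2*(H2 - H3) + t3*(H3 - H1)
    return num / den
```

Note that only camera-side pixel coordinates and the calibrated depths enter this formula; no projector parameter appears, which is the source of the method's immunity to projector errors.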

To quantitatively examine the accuracy of this technique, we implement an additional experiment in which a plane positioned at the depth of 45 mm is measured. Figure 6 shows the measurement results along the same cross-section of the measured surface. In this figure, the columns are still aligned with those in Fig. 3, corresponding to different situations of the errors of the projector. The first row is obtained by using the method based on the cross-ratio of fringe phases. In this row, the ripple-like artifacts are typical of the errors of the projector. The amplitudes of the fluctuations show that, in this experiment, the errors induced by the projector nonlinearity are more remarkable than those induced by the fringe shape distortions. The reason is that the small distortions in fringe shapes induce slowly varying phase errors, which can be approximated as linearly varying errors, and the cross-ratio of phases in Eq. (3) is insensitive to such linear phase errors. Even so, the nonlinear components of these errors still degrade the measurement accuracy. Unlike the fringe shape distortions, the phase errors caused by the projector nonlinearity usually have high frequencies, thus inducing relatively large errors in the phase cross-ratio. To suppress such errors, the method proposed in Section 3.3 is used for recovering the depth map. As a result, the measurement errors become small, as shown by the bottom row of Fig. 6. These residual errors are mainly caused by random noise rather than by the projector errors. Table 1 lists the root-mean-square (RMS) values of the deviations of the measurement results from the nominal depth value. These experimental results reveal that, in comparison with a conventional method that calculates object depths from fringe phases, the proposed method makes the measurement immune to the errors associated with the projector.
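The RMS figures of Table 1 follow the usual definition of a root-mean-square deviation from the nominal plane depth. A small helper, with hypothetical sample data, shows the computation:

```python
import numpy as np

def rms_deviation(depth_map, nominal_depth):
    """Root-mean-square deviation of measured depths from the nominal
    plane depth, in the same units (mm), as tabulated in Table 1."""
    d = np.asarray(depth_map, dtype=float) - nominal_depth
    return float(np.sqrt(np.mean(d * d)))
```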


Fig. 6 Cross-sections of the measured profiles of a flat plane positioned at the depth of 45 mm. The columns, (1) through (4), are aligned with those in Fig. 3, corresponding to different situations of errors of the projector. The top row (a) is obtained by using the method based on the cross-ratio of fringe phases, and the bottom row (b) is obtained by using the method based on the cross-ratio of pixel shifts.



Table 1. The RMS values (mm) of the deviations of the measurement results.

With fringe projection profilometry, there are some issues worth discussing. When measuring the reference phase maps, the flatness error and the deformation of the reference board may decrease the accuracy. In practice, commercially available calibration boards can have a flatness of 0.05 mm or better, meeting the demands from computer vision and other applications. The rigidity of a calibration board depends on its material and thickness. In this work, we use a calibration board (without printed markers) as the reference board. This board is made of aluminum alloy and has a thickness of 8 mm. The phase errors related to this reference board are much smaller than those induced by random noise. In addition, the movement errors of this board, e.g., its piston errors and tilts, also decrease the accuracy. Using a precision translation mechanism overcomes this problem.

Random noise is a main factor spoiling the fringe profile. It affects the measurement results by increasing the uncertainty of the phase measuring results. If the noise is additive and independent of the fringe signal, the variance of the phase errors induced by the noise is proportional to the noise variance, and inversely proportional to the number of phase shifts and to the square of the modulation [37]. Therefore, increasing the number of phase shifts helps restrain the effect of noise. Alternatively, smoothing the fringe patterns with a low-pass spatial filter suppresses the noise level, but doing so may blur the image. Illumination fluctuations during image capturing also induce ripple-like phase errors having the same frequencies as the fringes. These fluctuations affect our proposed technique because of their time-variant property. Using fringe histograms allows us to correct such illumination fluctuations [38].
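The K-step phase retrieval of Eqs. (1) and (2) that this noise analysis refers to can be sketched as below; simulating it with additive noise at several values of K is an easy way to observe the 1/K variance behavior reported in [37]. This is our illustration, not the authors' code; the minus sign follows from the +2kπ/K shift convention of Eq. (1).

```python
import numpy as np

def wrapped_phase(intensities):
    """Eq. (2): wrapped phase from K samples I_k = a + b*cos(phi + 2k*pi/K).
    arctan2(-s, c) is used because the +2k*pi/K shift makes the sine
    correlation sum proportional to -sin(phi)."""
    I = np.asarray(intensities, dtype=float)
    K = len(I)
    k = np.arange(K)
    s = np.sum(I * np.sin(2 * np.pi * k / K))
    c = np.sum(I * np.cos(2 * np.pi * k / K))
    return np.arctan2(-s, c)
```

For noise-free samples the phase is recovered exactly; with additive noise, averaging over more phase steps K shrinks the phase-error variance in proportion to 1/K.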

Phase sensitivity is a key factor determining the accuracy and the resolution of fringe projection profilometry. It mainly depends on the system geometry and is generally not uniformly distributed over the fringe patterns. Theoretical analysis demonstrates that, at a certain pixel in the projected fringe pattern, the phase sensitivity is directly proportional not only to the fringe frequency, but also to the cosine of the angle between the direction of the fringe frequency and the epipolar line [31]. With the method proposed in this paper, the depths are calculated by use of the cross-ratio of pixel shifts, rather than directly converted from the phases; the phase sensitivity thus affects the measurement results indirectly, by determining the accuracy in searching for the pixels having the same phases. A higher phase sensitivity can be achieved by using finer fringes or by generating fringes oriented perpendicularly to the epipolar lines on the projector plane. In this experiment, the fringes on the captured images have a wide pitch of over 60 pixels. Table 1 shows that, using the proposed technique, the RMS values of the deviations are greater than 0.1 mm. If we used finer fringe patterns with a higher frequency, these RMS values could be effectively suppressed under the same noise condition.
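The search for pixels having the same phase comes down to locating a sub-pixel position where a reference phase profile crosses the object phase. A minimal sketch, assuming the unwrapped phase increases monotonically along the search direction (the names and the linear-interpolation choice are ours):

```python
import numpy as np

def equal_phase_position(phase_row, target_phase):
    """Sub-pixel coordinate where a monotonically increasing unwrapped
    phase profile along one image row equals target_phase, found by
    linear interpolation between the two bracketing pixels."""
    p = np.asarray(phase_row, dtype=float)
    i = int(np.searchsorted(p, target_phase))
    if i <= 0 or i >= len(p):
        raise ValueError("target phase lies outside this row")
    return (i - 1) + (target_phase - p[i - 1]) / (p[i] - p[i - 1])
```

Finer fringes give a steeper phase slope along the row, so the same phase noise maps to a smaller positional error, consistent with the remark above on suppressing the RMS values.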

Another issue regards efficiency. In comparison with the conventional techniques, the new method is somewhat less efficient: for measuring the depth of a point, it needs extra time to search for the corresponding points in the reference phase maps. Even so, we can complete a measurement, including the calibration procedure, within several minutes by shifting the reference board manually.

5. Conclusion

In conclusion, we have proposed in theory and demonstrated by experiments that, in fringe projection profilometry, the cross-ratio invariance in the system geometry allows us to derive a depth map recovering technique immune to the errors associated with the projector. To this end, we shift the reference board in its perpendicular direction to three positions with known depths, and measure their phase maps as the reference phase maps. By calculating the cross-ratio of points along each incident light ray, the depth map of the measured object is recovered from the pixel shifts induced by the depth variations. This method is immune to the errors originating from the projector, including the distortions in the geometric shapes or in the intensity profiles of the projected fringe patterns, at the expense of extra time for searching for the corresponding points between the object and reference phase maps.

Funding

Jiangsu Provincial Department of Education (16KJB460030); National Natural Science Foundation of China (NSFC) (61433016, 21231004); Jiangsu Overseas Research & Training Program for University Prominent Young & Middle-aged Teachers and Presidents.

References and links

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48, 133–140 (2010).

2. Z. Wang, D. Nguyen, and J. Barnes, “Recent advances in 3D shape measurement and imaging using fringe projection technique,” in Proceedings of the SEM Annual Congress and Exposition on Experimental and Applied Mechanics 2009 (Society for Experimental Mechanics, 2009), pp. 2644–2653.

3. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [PubMed]  

4. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [PubMed]  

5. W. Gao, L. Wang, and Z. Hu, “Flexible method for structured light system calibration,” Opt. Eng. 47, 083602 (2008).

6. Q. Hu, P. S. Huang, Q. Fu, and F.-P. Chiang, “Calibration of a three-dimensional shape measurement system,” Opt. Eng. 42, 487–493 (2003).

7. R. Y. Tsai, “An efficient and accurate camera calibration technique for 3D machine vision,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1986), pp. 364–374.

8. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

9. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45, 083601 (2006).

10. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47, 053604 (2008).

11. R. Legarda-Sáenz, T. Bothe, and W. P. Jüptner, "Accurate procedure for the calibration of a structured light system," Opt. Eng. 43, 464–471 (2004).

12. W. Zhou and X. Y. Su, “A direct mapping algorithm for phase measuring profilometry,” J. Mod. Opt. 41, 89–94 (1994).

13. Y. Y. Hung, L. Lin, H. M. Shang, and B. G. Park, “Practical three dimensional computer vision techniques for full-field surface measurement,” Opt. Eng. 39, 143–149 (2000).

14. H. Liu, W. Su, and K. Reichard, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. 216, 65–80 (2003).

15. H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. 44, 033603 (2005).

16. L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49(9), 1539–1548 (2010). [PubMed]  

17. R. Sitnik, M. Kujawińska, and J. Woźnicki, “Digital fringe projection system for large-volume 360-deg shape measurement,” Opt. Eng. 41, 443–449 (2002).

18. P. J. Tavares and M. A. Vaz, “Linear calibration procedure for the phase-to-height relationship in phase measurement profilometry,” Opt. Commun. 274, 307–314 (2007).

19. I. Léandry, C. Bréque, and V. Valle, "Calibration of a structured-light projection system: development to large dimension objects," Opt. Laser Technol. 50, 373–378 (2012).

20. H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. 43(14), 2906–2914 (2004). [PubMed]  

21. C. R. Coggrave and J. M. Huntley, “High-speed surface profilometer based on a spatial light modulator and pipeline image processor,” Opt. Eng. 38, 1573–1581 (1999).

22. S. Kakunai, T. Sakamoto, and K. Iwata, “Profile measurement taken with liquid-crystal gratings,” Appl. Opt. 38(13), 2824–2828 (1999). [PubMed]  

23. K. A. Stetson and W. R. Brohinsky, “Electro-optic holography and its application to hologram interferometry,” Appl. Opt. 24(21), 3631–3637 (1985). [PubMed]  

24. H. Guo and M. Chen, “Fourier analysis of the sampling characteristics of the phase-shifting algorithm,” Proc. SPIE 5180, 437–444 (2003).

25. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Gamma model and its analysis for phase measuring profilometry,” J. Opt. Soc. Am. A 27(3), 553–562 (2010). [PubMed]  

26. T. Hoang, B. Pan, D. Nguyen, and Z. Wang, “Generic gamma correction for accuracy enhancement in fringe-projection profilometry,” Opt. Lett. 35(12), 1992–1994 (2010). [PubMed]  

27. S. Zhang and S. T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007). [PubMed]  

28. B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. 34(4), 416–418 (2009). [PubMed]  

29. J. Guehring, “Dense 3D surface acquisition by structured light using off-the-shelf components,” Proc. SPIE 4309, 220–231 (2000).

30. R. Yang, S. Cheng, and Y. Chen, “Flexible and accurate implementation of a binocular structured light system,” Opt. Lasers Eng. 46, 373–379 (2008).

31. R. Zhang, H. Guo, and A. K. Asundi, “Geometric analysis of influence of fringe directions on phase sensitivities in fringe projection profilometry,” Appl. Opt. 55(27), 7675–7687 (2016). [PubMed]  

32. https://en.wikipedia.org/wiki/Cross-ratio.

33. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

34. H. Zhao, W. Chen, and Y. Tan, “Phase-unwrapping algorithm for the measurement of three-dimensional object shapes,” Appl. Opt. 33(20), 4497–4500 (1994). [PubMed]  

35. H. Guo, M. Chen, and P. Zheng, “Least-squares fitting of carrier phase distribution by using a rational function in fringe projection profilometry [corrected],” Opt. Lett. 31(24), 3588–3590 (2006). [PubMed]  

36. F. Lü, S. Xing, and H. Guo, “Self-correction of projector nonlinearity in phase-shifting fringe projection profilometry,” Appl. Opt. 56(25), 7204–7216 (2017). [PubMed]  

37. S. Xing and H. Guo, “Temporal phase unwrapping for fringe projection profilometry aided by recursion of Chebyshev polynomials,” Appl. Opt. 56(6), 1591–1602 (2017). [PubMed]  

38. Y. Lu, R. Zhang, and H. Guo, “Correction of illumination fluctuations in phase-shifting technique by use of fringe histograms,” Appl. Opt. 55(1), 184–197 (2016). [PubMed]  



Figures (6)

Fig. 1 Measurement system.
Fig. 2 Geometry of cross-ratio invariance for fringe projection profilometry. (a) shows the epipolar geometry of the measurement system. (b) shows the 2D plot of the epipolar plane in (a), on which the cross-ratio of the object depths, the cross-ratio of the phases, and the cross-ratio of the pixel shifts on πc are equal to one another.
Fig. 3 Calibration results of the reference phase maps. Row (a) shows the captured fringe patterns on the reference plane. The three fringe patterns in each set have the same phase shift of 0, and correspond, from front to back, to the depth positions H1 = 0 mm, H2 = 30 mm, and H3 = 60 mm, respectively. Row (b) shows their corresponding phase maps in radians, which will be used as the reference phase maps in measurement. The columns correspond to different error conditions. (1) The fringes are straight, and the projector nonlinearity has been corrected. (2) The fringes are straight, but the projector nonlinearity is not corrected. (3) The fringes are slightly curved, but the projector nonlinearity has been corrected. (4) The fringes are slightly curved, and the projector nonlinearity is not corrected.


Equations (10)


$$I_k(u,v)=a(u,v)+b(u,v)\cos\!\left[\phi(u,v)+2k\pi/K\right],\tag{1}$$

$$\phi_{\mathrm{wrapped}}(u,v)=\arctan\!\left[\frac{-\sum_{k=0}^{K-1}I_k(u,v)\sin\left(2k\pi/K\right)}{\sum_{k=0}^{K-1}I_k(u,v)\cos\left(2k\pi/K\right)}\right].\tag{2}$$

$$(Q,N_1;N_2,N_3)=\frac{\overline{QN_2}\,\overline{N_1N_3}}{\overline{N_1N_2}\,\overline{QN_3}}=\frac{[H_2-h(u,v)](H_3-H_1)}{(H_2-H_1)[H_3-h(u,v)]}=(q,n_1;n_2,n_3)=\frac{\overline{qn_2}\,\overline{n_1n_3}}{\overline{n_1n_2}\,\overline{qn_3}}=\frac{[\Phi_2(u,v)-\phi(u,v)][\Phi_3(u,v)-\Phi_1(u,v)]}{[\Phi_2(u,v)-\Phi_1(u,v)][\Phi_3(u,v)-\phi(u,v)]}\tag{3}$$

$$h(u,v)=\frac{A(u,v)+B(u,v)\,\phi(u,v)}{C(u,v)+D(u,v)\,\phi(u,v)}\tag{4}$$

$$\begin{cases}A=\Phi_1\Phi_2H_3(H_1-H_2)+\Phi_2\Phi_3H_1(H_2-H_3)+\Phi_3\Phi_1H_2(H_3-H_1)\\B=\Phi_3H_3(H_1-H_2)+\Phi_1H_1(H_2-H_3)+\Phi_2H_2(H_3-H_1)\\C=\Phi_1\Phi_2(H_1-H_2)+\Phi_2\Phi_3(H_2-H_3)+\Phi_3\Phi_1(H_3-H_1)\\D=\Phi_3(H_1-H_2)+\Phi_1(H_2-H_3)+\Phi_2(H_3-H_1)\end{cases}\tag{5}$$

$$\Phi_1(u_{m1},v_{m1})=\Phi_2(u_{m2},v_{m2})=\Phi_3(u_{m3},v_{m3})=\phi(u,v).\tag{6}$$

$$\begin{aligned}(Q,M_1;M_2,M_3)&=\frac{\overline{QM_2}\,\overline{M_1M_3}}{\overline{M_1M_2}\,\overline{QM_3}}=\frac{[H_2-h(u,v)](H_3-H_1)}{(H_2-H_1)[H_3-h(u,v)]}=(q,m_1;m_2,m_3)=\frac{\overline{qm_2}\,\overline{m_1m_3}}{\overline{m_1m_2}\,\overline{qm_3}}\\&=\frac{[(u_{m2}+jv_{m2})-(u+jv)][(u_{m3}+jv_{m3})-(u_{m1}+jv_{m1})]}{[(u_{m2}+jv_{m2})-(u_{m1}+jv_{m1})][(u_{m3}+jv_{m3})-(u+jv)]}\\&=\frac{(u_{m2}-u)(u_{m3}-u_{m1})}{(u_{m2}-u_{m1})(u_{m3}-u)}=\frac{(v_{m2}-v)(v_{m3}-v_{m1})}{(v_{m2}-v_{m1})(v_{m3}-v)}\end{aligned}\tag{7}$$

$$\begin{cases}\Phi_{H\kappa}(u,v)\approx\dfrac{d_{\kappa3}+d_{\kappa4}u+d_{\kappa5}v}{1+d_{\kappa1}u+d_{\kappa2}v}\\[2ex]\Phi_{V\kappa}(u,v)\approx\dfrac{d_{\kappa6}+d_{\kappa7}u+d_{\kappa8}v}{1+d_{\kappa1}u+d_{\kappa2}v}\end{cases}\tag{8}$$

$$\begin{cases}\dfrac{d_{\kappa3}+d_{\kappa4}u_e+d_{\kappa5}v_e}{1+d_{\kappa1}u_e+d_{\kappa2}v_e}-\dfrac{d_{\lambda3}+d_{\lambda4}u_e+d_{\lambda5}v_e}{1+d_{\lambda1}u_e+d_{\lambda2}v_e}=0\\[2ex]\dfrac{d_{\kappa6}+d_{\kappa7}u_e+d_{\kappa8}v_e}{1+d_{\kappa1}u_e+d_{\kappa2}v_e}-\dfrac{d_{\lambda6}+d_{\lambda7}u_e+d_{\lambda8}v_e}{1+d_{\lambda1}u_e+d_{\lambda2}v_e}=0\end{cases}\tag{9}$$

$$h(u,v)=\frac{(u_{m1}u_{m2}+u_{m3}u)H_3(H_1-H_2)+(u_{m2}u_{m3}+u_{m1}u)H_1(H_2-H_3)+(u_{m3}u_{m1}+u_{m2}u)H_2(H_3-H_1)}{(u_{m1}u_{m2}+u_{m3}u)(H_1-H_2)+(u_{m2}u_{m3}+u_{m1}u)(H_2-H_3)+(u_{m3}u_{m1}+u_{m2}u)(H_3-H_1)}.\tag{10}$$