
Phase error model and compensation method for reflectivity and distance discontinuities in fringe projection profilometry

Open Access

Abstract

Phase shifting fringe projection profilometry is a widely used optical 3D surface measurement method due to its high resolution, high speed, and full-field inspection. However, the measurement accuracy decreases in regions with a reflectivity or distance discontinuity. To this end, first, a general continuous quasi-one-dimensional phase error model is proposed for discontinuity representation. Second, the discontinuities are further divided into degenerate and nondegenerate discontinuities to improve the computational speed. Third, a phase error compensation algorithm is proposed with a parameter estimation method to improve the measurement accuracy. Simulations and experiments demonstrate that the proposed methods are effective.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry is a noncontact 3D surface measurement technique with high accuracy and flexibility [1] for both textured and texture-less surfaces [2]. Hence it is widely used in reverse engineering [3], medical engineering [4], quality inspection [5], smart manufacturing [6] and other fields. A typical fringe projection profilometry system includes at least a projector and a camera. Among different fringe projection profilometry methods, the phase shifting method is widely used for its high resolution, high speed, and full-field inspection [7]. In the phase shifting method, the pixel coordinate of the projector is encoded into the phase of a series of sinusoidal patterns [8]. For each pixel of the camera, the corresponding pixel coordinate of the projector can be calculated through the captured images [9] and then the 3D point cloud is constructed with a stereo vision algorithm [10].

Because the projector and the camera are optical systems, defocus appears when the measured object is not at the focal distance, which couples information among neighboring pixels. Specifically, the projector commonly uses a large aperture to obtain high light intensity, so its depth of field is usually short [11]; that is, the projected image is sharp only within a short distance range. Similarly, the image captured by the camera is sharp only when the object is near the focal plane. Defocus therefore blurs both the projected and the captured images, and defocus, which can be modeled as a point spread function (PSF) [12], should be taken into account for high-accuracy measurement [13].

For the projector, the defocus caused by the measurement distance change can reduce the amplitude of the sinusoidal pattern, resulting in reduced measurement accuracy [14]. To reduce the effect of projector defocus, some methods have been proposed, such as frequency modulation according to the measurement distance, to improve the measurement accuracy [15]. Moreover, by virtue of the defocus of the projector, binary defocusing techniques have been developed to obtain sinusoidal fringe patterns for high-speed measurement [16]. Subsequently, dithering technology has been used to improve the quality of the patterns, resulting in improved measurement accuracy [17,18].

For the camera, an ideal captured sinusoidal pattern is immune to an isotropic PSF because it contains a single frequency [7]. A reflectivity or distance discontinuity on the measured object, however, produces an imperfect sinusoidal pattern, which leads to phase error in the phase shifting decoding algorithm. In previous studies, phase error near discontinuities was observed, and a method to detect and discard the erroneous data was proposed [19]. However, in some cases the information near the discontinuity is needed, so methods to correct the erroneous phase have been developed. In [20], the erroneous phase was modeled as a weighting with the light intensity and the defocus, computed using a first-order Taylor series, and the error was compensated; however, the model is based on an unverified hypothesis. In [21], a discrete phase error model was formulated, and for each pixel the phase error was calculated with parameters calibrated from nearby pixels; however, the accuracy was not high, and the computation was time-consuming. In [22], deconvolution with a 2D Fourier transform or the Lucy-Richardson algorithm was applied to the captured images to recover the light intensity directly; however, this method is not pixel-by-pixel and can introduce new errors at neighboring pixels.

To solve these problems in phase error compensation for discontinuities, we propose a continuous phase error model based on the effect of defocus on light intensity, which can be computed pixel by pixel. First, the phase is calculated from the light intensity blurred by the defocus, and a general continuous phase error model is deduced from the difference between the calculated phase and the theoretical phase. A quasi-one-dimensional phase error model is then established to increase the computation speed, based on the fact that the discontinuity can be considered a one-dimensional curve whose length is far larger than the scale of the defocus area for a given pixel. Second, discontinuities are divided into degenerate and nondegenerate discontinuities, where the quasi-one-dimensional phase error model for degenerate discontinuities has fewer parameters, further improving the computation speed. Third, a parameter estimation method for the error model is proposed, and the phase error compensation algorithm is constructed. Finally, simulations and experiments are carried out to verify the proposed methods, and the results show that they are effective.

Our contributions are as follows.

  • (1) A general continuous quasi-one-dimensional phase error model, based on the effect of defocus on light intensity and the principle of phase shifting, is proposed for reflectivity and distance discontinuities in fringe projection profilometry.
  • (2) The discontinuities are divided into degenerate and nondegenerate discontinuities to simplify the model.
  • (3) A phase error compensation algorithm is constructed with a corresponding parameter estimation method.

Section 2 describes the principle of our phase error model for reflectivity and distance discontinuities. Section 3 includes some simulations and experiments to verify the proposed model. Finally, a summary is given in Section 4.

2. Principle

2.1 Quasi-one-dimensional phase error model

A fringe projection profilometry system usually contains a projector and a camera. The projector projects a coded pattern with phase information onto the measured object surface, while the camera captures the deformed pattern. The phase is then decoded from the camera images, and the correspondence between the camera and the projector, calibrated before measurement, is used to determine the 3D point cloud by triangulation. Among coding schemes, the phase shifting method is the most widely used. The intensity $I_{p,k}\left ( u_p, v_p\right )$ of the sinusoidal fringe patterns for N-step phase shifting can be described by:

$$I_{p,k}\left( u_p, v_p\right) = I' + I^{\prime\prime} \cos \left(\varphi_p \left( u_p, v_p \right) + \frac{2 \pi k}{N}\right)$$
where $k = 0, 1, \ldots, N-1$ is the index number of the phase shifting pattern, $\left ( u_p, v_p \right )$ is the pixel coordinate in the projector, $I'$ is the average light intensity, $I''$ is the light intensity amplitude and $\varphi _p \left ( u_p, v_p \right )=\omega _p u_p$ is the encoded phase with the angular frequency $\omega _p$ of the sinusoidal fringe pattern.

As shown in Fig. 1(a), in fringe projection profilometry, the correspondence problem is to find the projector pixel $\left ( u_p, v_p \right )$ corresponding to each camera pixel $\left ( u_c, v_c \right )$. The intensity of camera pixel $\left ( u_c, v_c \right )$ in the captured images is:

$$I_{c,k}\left( u_c, v_c\right) = K_r\left( u_c, v_c\right) \left( I' + I^{\prime\prime} \cos \left( \varphi_c \left( u_c, v_c\right) + \frac{2 \pi k}{N}\right)\right)$$
where $K_r\left ( u_c, v_c\right )$ is the reflectivity. Assuming that the reflectivity is uniform and the measured smooth surface is focused within the depth of field, we can calculate the phase $\varphi _c \left ( u_c, v_c\right )$ at pixel $\left ( u_c, v_c\right )$ in the captured image with the least-squares method [23].
$$\varphi_c \left( u_c, v_c\right) = \mathrm{atan2} \left( I_{sin} \left( u_c, v_c\right), I_{cos} \left( u_c, v_c\right) \right)$$
where:
$$\begin{aligned} I_{cos} \left( u_c, v_c\right) &= \frac{2}{N} \sum_{k=0}^{N-1} I_{c,k} \left( u_c, v_c\right) \cos \frac{2 \pi k}{N} \\ I_{sin} \left( u_c, v_c\right) &={-}\frac{2}{N} \sum_{k=0}^{N-1} I_{c,k} \left( u_c, v_c\right) \sin \frac{2 \pi k}{N} \end{aligned}$$
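For concreteness, a minimal sketch of this decode step follows (our own illustration, not code from the paper), assuming `images` is a list of the $N$ captured frames as floating-point NumPy arrays:

```python
import numpy as np

def decode_phase(images):
    """Least-squares N-step phase decode, Eq. (3); all names are illustrative."""
    N = len(images)
    I_cos = (2.0 / N) * sum(I * np.cos(2 * np.pi * k / N) for k, I in enumerate(images))
    I_sin = -(2.0 / N) * sum(I * np.sin(2 * np.pi * k / N) for k, I in enumerate(images))
    phi = np.arctan2(I_sin, I_cos)   # wrapped phase in (-pi, pi]
    amp = np.hypot(I_sin, I_cos)     # modulation amplitude, cf. Eq. (4) below
    return phi, amp
```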

Fig. 1. (a) Principle of fringe projection profilometry. (b) The relationship of point $\left ( u_c, v_c \right )$, the PSF domain of $\left ( u_c, v_c \right )$ and point $\left ( u_{cp}, v_{cp} \right )$ in the PSF domain. (c) The calculation of $\hat {\varphi }\left ( u_{cp}, v_{cp}; u_c, v_c\right )$. (d) The local coordinate system on the discontinuity curve.

As shown in Fig. 1(a), in one measuring task, each valid camera point $\left ( u_c, v_c \right )$ is expected to correspond to a certain projector point $\left ( u_p, v_p \right )$ along the same light path. Such a pair of points has the same wrapped phase, that is, $\varphi _c \left ( u_c, v_c \right )=\varphi _p \left ( u_p, v_p \right )$; their unwrapped phases are also the same.

Moreover, we can obtain the light intensity amplitude $I_c''$ of the captured images as:

$${I}_c^{\prime\prime} \left( u_c, v_c\right) = \sqrt{I_{sin}^2 \left( u_c, v_c\right) + I_{cos}^2 \left( u_c, v_c\right)}$$

In summary, in the ideal case, we have ${I}_c^{\prime \prime }\propto I^{\prime \prime }$ and $\varphi _c \left ( u_c, v_c \right )=\varphi _p \left ( u_p, v_p \right )$.

To simplify the representation in what follows, ${I}_c^{\prime \prime }$ and $\varphi _c \left ( u_c, v_c \right )$ denote the ideal, desired values, whereas $\tilde {I}_c^{\prime \prime }$ and $\tilde {\varphi }_c \left ( u_c, v_c\right )$ with tildes denote the values calculated under real conditions. In the ideal case, we have $\tilde {\varphi }_c = \varphi _c$ and $\tilde {I}_c^{\prime \prime }= {I}_c^{\prime \prime }$, which means that there is no phase error.

However, when defocus appears, the phase calculated from Eq. (3) is no longer accurate, so the phase error is analyzed first. Projector defocus is generally described by the convolution of the original illumination pattern with a PSF [11]. In most cases, the PSF can be regarded as isotropic for a given point and depends on the depth, so the projected pattern $I_{p, k}\left ( u_p, v_p\right )$ under the projector defocus $g_p \left ( u_p, v_p\right )$ can be described by:

$$\begin{aligned} I_{p, k}\left( u_p, v_p\right) &= \left( I' + I^{\prime\prime} \cos \left( \varphi_p \left( u_p, v_p \right) + \frac{2 \pi k}{N}\right)\right) \otimes g_p \left( u_p, v_p\right)\\ &= I' + K_d \left( u_p, v_p\right) I^{\prime\prime} \cos \left( \varphi_p \left( u_p, v_p \right) + \frac{2 \pi k}{N}\right), \end{aligned}$$
where $K_d<1$ is the reduction of the contrast ratio caused by projector defocus, leading to a decreased amplitude of the sinusoidal pattern and a lower signal-to-noise ratio (SNR) [15].
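This attenuation can be checked numerically. The sketch below (our own illustration; the paper leaves $g_p$ general) blurs a fringe with a Gaussian kernel and compares the measured contrast reduction with the standard Fourier-domain attenuation $e^{-(\omega\sigma)^2/2}$ of a Gaussian PSF at frequency $\omega$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

L, sigma = 4096, 2.0
omega = 2 * np.pi * 200 / L              # integer number of periods so 'wrap' is exact
u = np.arange(L, dtype=float)
pattern = 100.0 + 80.0 * np.cos(omega * u)           # I' + I'' cos(omega u)
blurred = gaussian_filter1d(pattern, sigma, mode='wrap')
K_d = (blurred.max() - blurred.min()) / (pattern.max() - pattern.min())
print(K_d, np.exp(-(omega * sigma) ** 2 / 2))        # both approximately 0.83
```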

Camera defocus can also be described by the PSF. Similar to Eqs. (2) and (5), the captured light intensity with camera defocus can also be described by:

$$I_{c,k}\left( u_c, v_c\right) = \left( K_r\left( u_c, v_c\right) I_{c0,k}\left( u_c, v_c\right) \right) \otimes g_c \left( u_c, v_c\right)$$
where:
$$I_{c0,k}\left( u_c, v_c\right) = \left( I' + K_d \left( u_c, v_c\right) I^{\prime\prime} \cos \left(\varphi_c \left( u_c, v_c\right) + \frac{2 \pi k}{N}\right)\right),$$
where $g_c \left ( u_c, v_c\right )$ is the PSF of the camera and $\varphi _c \left ( u_c, v_c\right )$ is the ideal, desired phase.

Obviously, the phase calculated by Eq. (3) is inaccurate because of the inhomogeneous coupling of the phase in the local area. For one point $\left ( u_c, v_c\right )$ in the captured image, the phase $\tilde {\varphi }_c\left ( u_c, v_c\right )$ calculated by Eq. (3) has a phase error $\Delta \varphi _c\left ( u_c, v_c\right )$:

$$\tilde{\varphi}_c\left( u_c, v_c\right) = \varphi_c\left( u_c, v_c\right) + \Delta\varphi_c\left( u_c, v_c\right)$$

The phase error $\Delta \varphi _c \left ( u_c, v_c\right )$ can be represented with the light intensity and phase of all the points $\left ( u_{cp}, v_{cp}\right )$ in the PSF domain $\mathbb {P}\left ( u_c, v_c\right )$ of $\left ( u_c, v_c\right )$, as shown in Fig. 1(b):

$$\Delta\varphi_c\left( u_c, v_c\right) = \mathrm{atan2} \left( S_m\left( u_c, v_c\right), C_m\left( u_c, v_c\right) \right)$$
where:
$$\begin{aligned} & S_m\left( u_c, v_c\right) = \iint \limits_{\mathbb{P}\left( u_c, v_c\right)} K \left( u_{cp}, v_{cp}\right) \sin \hat{\varphi}\left( u_{cp}, v_{cp}; u_c, v_c\right) g_c\left( u_c - u_{cp}, v_c - v_{cp}\right) \mathrm{d} u_{cp}\mathrm{d} v_{cp} \\ & C_m\left( u_c, v_c\right) = \iint \limits_{\mathbb{P}\left( u_c, v_c\right)} K \left( u_{cp}, v_{cp}\right) \cos \hat{\varphi}\left( u_{cp}, v_{cp}; u_c, v_c\right) g_c\left( u_c - u_{cp}, v_c - v_{cp}\right) \mathrm{d} u_{cp}\mathrm{d} v_{cp} \\ & \hat{\varphi}\left( u_{cp}, v_{cp}; u_c, v_c\right) = \varphi_c\left( u_{cp}, v_{cp}\right) - \varphi_c\left( u_c, v_c\right) \\ & K \left( u_{cp}, v_{cp}\right) = I^{\prime\prime} K_d \left( u_{cp}, v_{cp}\right) K_r \left( u_{cp}, v_{cp}\right) \end{aligned}$$
where $\hat {\varphi }\left ( u_{cp}, v_{cp}; u_c, v_c\right )$ is a function of the points $\left ( u_{cp}, v_{cp}\right )$ in the PSF domain $\mathbb {P}\left ( u_c, v_c\right )$ of point $\left ( u_c, v_c\right )$, giving the phase change between any point $\left ( u_{cp}, v_{cp}\right )$ and point $\left (u_c, v_c\right )$, as shown in Fig. 1(c). Therefore, if $\Delta \varphi _c\left ( u_c, v_c\right )$ is calculated with Eq. (8), the desired phase $\varphi _c$ can be recovered from the calculated phase by $\varphi _c\left ( u_c, v_c\right ) = \tilde {\varphi }_c\left ( u_c, v_c\right ) - \Delta \varphi _c\left ( u_c, v_c\right )$. Once the desired phase is obtained, high-accuracy measurement can be achieved, which is the objective of this paper.

Equation (8) shows that the phase error is related to the camera PSF, the local reflectivity, and the local phase change (caused by measurement distance discontinuity) (see Supplement 1). The camera PSF $g_c$ can be seen as a convolution term, and the local reflectivity $K_r$ can be seen as a weight on the local phase change $\hat {\varphi }$. When the reflectivity is almost uniform and the phase change is approximately linear, $\Delta \varphi \approx 0$ because of its symmetry. However, in some special cases, for example, when there is a reflectivity discontinuity, such as a color discontinuity, or there is a distance discontinuity, such as a step on the measured surface, the phase error cannot be ignored.
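For readers who wish to evaluate Eq. (8) directly, a discretized sketch follows, assuming the modulation map $K$ and the true phase map $\varphi_c$ are known on the pixel grid, the camera PSF is an isotropic Gaussian, and the PSF window lies inside the image; all names are illustrative:

```python
import numpy as np

def phase_error_2d(K, phi, uc, vc, sigma, r):
    """Discretized Eq. (8) at pixel (uc, vc); arrays indexed [u, v]."""
    d = np.arange(-r, r + 1)
    U, V = np.meshgrid(d, d, indexing='ij')
    g = np.exp(-(U ** 2 + V ** 2) / (2 * sigma ** 2))   # isotropic Gaussian PSF
    win = (slice(uc - r, uc + r + 1), slice(vc - r, vc + r + 1))
    dphi = phi[win] - phi[uc, vc]                       # local phase change
    w = K[win] * g                                      # intensity-weighted kernel
    return np.arctan2(np.sum(w * np.sin(dphi)), np.sum(w * np.cos(dphi)))
```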

Equation (8) can be used to calculate the phase error $\Delta \varphi _c\left ( u_c, v_c\right )$ and thereby obtain the desired phase; however, two-dimensional convolution requires complex calculation and programming, especially when the domain of definition is irregular, and a 2D integral over an irregular domain is difficult to compute directly. To overcome this, a quasi-one-dimensional phase error model is proposed in this paper. As shown in Fig. 1(d), a discontinuity in the captured images can be seen as a curve, which we call the discontinuity curve below. The global coordinates are $u$ and $v$. At a point $P$ on the discontinuity curve, we set a local curvilinear coordinate system $Pst$, whose $s$ and $t$ directions are parallel to the normal and tangential directions of the discontinuity curve. $M\left ( s_m, 0 \right )$ is a point on the local $s$ axis. Generally, the PSF domain for a given pixel spans only a few pixels, whereas the discontinuity curve is far longer. Therefore, the following can be assumed in the local area affected by defocus:

  • (1) The curvature radius of the discontinuity curve is far larger than the radius of the PSF domain, so the local curve can be seen as a straight line in the PSF domain.
  • (2) For a given point, within its PSF domain, the reflectivity change along the local $t$ direction is negligible, that is, $K \left ( s, t \right ) = K_s \left ( s \right )$.
  • (3) For a given point, within its PSF domain, the discontinuity curve can be approximated as the line $s = 0$.
  • (4) For a given point, within its PSF domain, the phase is separable along the two directions, and the phase along the local $t$ direction is linear: $\hat {\varphi } \left ( s, t; s_m, 0 \right ) = \hat {\varphi }_s \left ( s; s_m \right ) + \omega _t t$, where $\hat {\varphi }_s\left ( s ; s_m\right )$ is a function of coordinate $s$.
  • (5) The PSF is isotropic, so the definition domain is a circle with radius $r$.

With the assumptions above, Eq. (8) can be simplified into a one-dimensional form for each point $M$ with local coordinate $\left ( s_m, 0 \right )$. We call this a general quasi-one-dimensional phase error model in this paper:

$$\begin{aligned}\Delta\varphi_c\left( s_m \right) &= \mathrm{atan2} \left( S_m\left( s_m \right), C_m\left( s_m \right) \right) \\ S_m\left( s_m\right) &= \int_{s_m-r}^{s_m+r} K_s \left( s\right) \sin \hat{\varphi}_s\left( s; s_m\right) g_s\left( s_m - s \right) \mathrm{d} s \\ C_m\left( s_m \right) &= \int_{s_m-r}^{s_m+r} K_s \left( s\right) \cos \hat{\varphi}_s\left( s; s_m\right) g_s\left( s_m - s \right) \mathrm{d} s \end{aligned}$$
where:
$$g_s\left( s_m - s \right) = \int_{-\sqrt{r^2 - \left( s - s_m \right)^2}}^{\sqrt{r^2 - \left( s - s_m \right)^2}} \cos \left( \omega_t t \right) g_c \left(s_m - s, - t\right) \mathrm{d} t$$

The proposed quasi-one-dimensional phase error model makes the required computation feasible and fast for both reflectivity and distance discontinuities (see Supplement 1).
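A numerical sketch of Eq. (9) and the $t$-integrated kernel $g_s$ follows (our own illustration; `K_s` and `phi_s` are supplied as callables, the PSF is an isotropic Gaussian, and the constant normalization of $g_c$ is omitted because it cancels in the arctangent):

```python
import numpy as np
from scipy.integrate import quad

def g_s(d, omega_t, sigma, r):
    """Integrate cos(omega_t t) times a Gaussian PSF along the chord of the PSF disk."""
    half = np.sqrt(max(r ** 2 - d ** 2, 0.0))
    f = lambda t: np.cos(omega_t * t) * np.exp(-(d ** 2 + t ** 2) / (2 * sigma ** 2))
    return quad(f, -half, half)[0]

def delta_phi_1d(s_m, K_s, phi_s, omega_t, sigma, r):
    """Quasi-one-dimensional phase error of Eq. (9)."""
    S = quad(lambda s: K_s(s) * np.sin(phi_s(s, s_m)) * g_s(s_m - s, omega_t, sigma, r),
             s_m - r, s_m + r)[0]
    C = quad(lambda s: K_s(s) * np.cos(phi_s(s, s_m)) * g_s(s_m - s, omega_t, sigma, r),
             s_m - r, s_m + r)[0]
    return np.arctan2(S, C)
```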

2.2 Phase error models for degenerate and nondegenerate discontinuities

Discontinuities can be divided into degenerate and nondegenerate discontinuities according to whether the phase distribution is continuous. (A discontinuity with no phase on one side is treated as having a continuous phase.) For degenerate discontinuities with continuous phase distributions, the proposed quasi-one-dimensional phase error model can be simplified with fewer parameters to improve the computation speed. Since the defocus area for a given pixel is small, the higher-order terms in the reflectivity and phase change can be ignored. Thus, we can further assume:

  • (1) On each side of the discontinuity curve, the reflectivity and the SNR are approximately constant, which can be written as:
    $$K_s \left( s \right) = \begin{cases} \eta_1, & s \le 0 \\ \eta_2, & s > 0 \end{cases}$$
  • (2) On each side of the discontinuity curve, the local phase is approximately a linear function. When $s_m < 0$:
    $$\hat{\varphi}_s \left( s; s_m \right) = \begin{cases} \omega_1 \left( s - s_m \right), & s \le 0 \\ \omega_2 \left( s - s_m \right) + \delta -(\omega_1-\omega_2)\;s_m,\;\; & s > 0 \end{cases}$$
    and when $s_m > 0$:
    $$\hat{\varphi}_s \left( s; s_m \right) = \begin{cases} \omega_1 \left( s - s_m \right) - \delta +(\omega_1-\omega_2)\;s_m,\;\; & s \le 0 \\ \omega_2 \left( s - s_m \right), & s > 0 \end{cases}$$
    where $\omega _1$ and $\omega _2$ are the angular frequencies on each side and $\delta$ is the phase jump at the discontinuity.
  • (3) The PSF is a Gaussian function with zero mean and standard deviation $\sigma$. For a Gaussian PSF, the radius of the PSF domain can be taken as $3\sigma$ [20,21]. Although the Gaussian function is defined on $\left ( -\infty, \infty \right )$, its effect is limited to a local area, so we can set $r = \infty$ in the calculation for convenience without affecting the result, and we have:
    $$g_s\left( s \right) = \frac{e^{-\left( \omega_t \sigma \right)^2 / 2}}{\sqrt{2\pi} \sigma} e^{-\frac{s^2}{2\sigma^2}}$$
Substituting Eqs. (10)–(13) into Eq. (9), we can obtain:
$$\begin{aligned} \Delta\varphi_c\left( s_m \right) &= \mathrm{atan2} \left( S_m\left( s_m \right), C_m\left( s_m \right) \right)\\ S_m\left( s_m \right) &= \begin{cases} \frac{\eta_1}{\lambda_1} \left( S_1 \cos\alpha + C_1 \sin\alpha \right) + \frac{\eta_2}{\lambda_2} S_2, & s_m \le 0 \\ \frac{\eta_1}{\lambda_1} S_1 + \frac{\eta_2}{\lambda_2} \left( S_2 \cos\alpha - C_2 \sin\alpha \right), & s_m > 0 \end{cases}\\ C_m\left( s_m \right) &= \begin{cases} \frac{\eta_1}{\lambda_1} \left( C_1 \cos\alpha - S_1 \sin\alpha \right) + \frac{\eta_2}{\lambda_2} C_2, & s_m \le 0 \\ \frac{\eta_1}{\lambda_1} C_1 + \frac{\eta_2}{\lambda_2} \left( C_2 \cos\alpha + S_2 \sin\alpha \right), & s_m > 0 \end{cases} \end{aligned}$$
where
$$\begin{aligned} \lambda_1 &= \omega_1 \sigma, \lambda_2 = \omega_2 \sigma, \alpha = \left( \omega_1 - \omega_2 \right) s_m - \delta \\ S_1 &= \int_{-\infty}^{-\omega_1 s_m} e^{-\frac{s^2}{2\lambda_1^2}} \sin s \mathrm{d} s, C_1 = \int_{-\infty}^{-\omega_1 s_m} e^{-\frac{s^2}{2\lambda_1^2}} \cos s \mathrm{d} s \\ S_2 &= \int_{-\omega_2 s_m}^{\infty} e^{-\frac{s^2}{2\lambda_2^2}} \sin s \mathrm{d} s, C_2 = \int_{-\omega_2 s_m}^{\infty} e^{-\frac{s^2}{2\lambda_2^2}} \cos s \mathrm{d} s \end{aligned}$$

From Eq. (14), we obtain two dimensionless parameters $\lambda _1$ and $\lambda _2$, which indicate the magnitude of defocus: the larger the angular frequency and the standard deviation are, the greater the defocus. Moreover, two dimensionless coordinates $\omega _1 s_m$ and $\omega _2 s_m$ reflect the distance from the affected point to the discontinuity curve: the smaller the dimensionless coordinates are, the stronger the effect of defocus at that point.
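A direct numerical evaluation of Eq. (14) is sketched below (illustrative naming; the Gaussian-weighted integrals $S_1, C_1, S_2, C_2$ are computed by adaptive quadrature, and per-side defocus is obtained by simply forming $\lambda_1 = \omega_1\sigma_1$ and $\lambda_2 = \omega_2\sigma_2$ as in the text):

```python
import numpy as np
from scipy.integrate import quad

def delta_phi_eq14(s_m, omega1, omega2, sigma, eta1, eta2, delta):
    """Phase error of Eq. (14) for a piecewise-linear phase with jump delta."""
    lam1, lam2 = omega1 * sigma, omega2 * sigma
    alpha = (omega1 - omega2) * s_m - delta
    S1 = quad(lambda s: np.exp(-s**2 / (2 * lam1**2)) * np.sin(s), -np.inf, -omega1 * s_m)[0]
    C1 = quad(lambda s: np.exp(-s**2 / (2 * lam1**2)) * np.cos(s), -np.inf, -omega1 * s_m)[0]
    S2 = quad(lambda s: np.exp(-s**2 / (2 * lam2**2)) * np.sin(s), -omega2 * s_m, np.inf)[0]
    C2 = quad(lambda s: np.exp(-s**2 / (2 * lam2**2)) * np.cos(s), -omega2 * s_m, np.inf)[0]
    a1, a2 = eta1 / lam1, eta2 / lam2
    if s_m <= 0:
        S = a1 * (S1 * np.cos(alpha) + C1 * np.sin(alpha)) + a2 * S2
        C = a1 * (C1 * np.cos(alpha) - S1 * np.sin(alpha)) + a2 * C2
    else:
        S = a1 * S1 + a2 * (S2 * np.cos(alpha) - C2 * np.sin(alpha))
        C = a1 * C1 + a2 * (C2 * np.cos(alpha) + S2 * np.sin(alpha))
    return np.arctan2(S, C)
```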

For a given structured light system, $\omega _c$ is related to the location of the measured part and the local normal direction, and $\sigma$ is related to the distance to the focal plane. The parameter $\delta$ is related to the distance between the two sides and the size of the obscured part; Eq. (14) shows that $\delta$ leads to a periodic change in the phase error.

However, the distance between the object and the camera differs on the two sides of the discontinuity, so the parameter $\sigma$ also varies. For cases with different values $\sigma _1, \sigma _2$ on the two sides, Eq. (13) should be revised as:

$$g_s\left( s_m - s \right) = \begin{cases} \frac{e^{-\left( \omega_t \sigma_1 \right)^2 / 2}}{\sqrt{2\pi} \sigma_1} e^{-\frac{\left( s - s_m\right) ^2}{2\sigma_1^2}}, & s \le 0 \\ \frac{e^{-\left( \omega_t \sigma_2 \right)^2 / 2}}{\sqrt{2\pi} \sigma_2} e^{-\frac{\left( s - s_m\right) ^2}{2\sigma_2^2}}, & s > 0 \end{cases}$$

It is important to note that $g_s\left ( s_m - s \right )$ no longer represents the PSF at the point $\left ( s_m, t_m \right )$; it represents the effect of the PSF at the point $\left ( s, t \right )$ on the point $\left ( s_m, t_m \right )$ (see Supplement 1). In Eq. (14), the only change required is:

$$\lambda_1 = \omega_1 \sigma_1, \lambda_2 = \omega_2 \sigma_2$$

As shown in Fig. 2, there are several typical discontinuities. A reflectivity discontinuity can be regarded as a degenerate discontinuity, as shown in Fig. 2(a). A distance discontinuity (such as a step) can be further divided into the two categories shown in Figs. 2(b) and 2(c), according to the relative position of the projector and the measured object. Specifically, when the step blocks some light rays from the projector, an area without a projected pattern appears in the captured image, leading to a degenerate discontinuity, as shown in Fig. 2(b). On the other hand, when the step blocks some light rays reflected toward the camera, a phase discontinuity appears in the captured images, leading to a nondegenerate discontinuity, as shown in Fig. 2(c).

Fig. 2. Degenerate and nondegenerate discontinuities. (a) Degenerate discontinuity on a plane. (b) The degenerate discontinuity on a step. (c) The nondegenerate discontinuity on a step.

For a degenerate discontinuity, the phase is approximately linear in the whole defocus-affected area, as shown in Fig. 2(a), or there is a phase blank on one side, as shown in Fig. 2(b). Thus, the approximate phase can be written as:

$$\hat{\varphi}_s \left( s; s_m \right) = \omega \left( s - s_m \right)$$

Notice that:

$$\int _{-\infty}^{\infty} e^{-\frac{s^2}{2\lambda^2}} \sin s \mathrm{d} s = 0, \int _{-\infty}^{\infty} e^{-\frac{s^2}{2\lambda^2}} \cos s \mathrm{d} s = \sqrt{2\pi} \lambda e^{-\frac{\lambda^2}{2}}$$

We can simplify Eq. (14) as:

$$\begin{aligned} \Delta\varphi_c\left( s_m \right) &= \mathrm{atan2} \left( S_m\left( s_m \right), C_m\left( s_m \right) \right)\\ S_m\left( s_m \right) &= \left( \eta_2 - \eta_1 \right) \int_{-\omega s_m}^{\infty} e^{-\frac{s^2}{2\lambda^2}} \sin s \mathrm{d} s\\ C_m\left( s_m \right) &= \sqrt{2\pi} \eta_1 \lambda e^{-\frac{\lambda^2}{2}} + \left( \eta_2 - \eta_1 \right) \int_{-\omega s_m}^{\infty} e^{-\frac{s^2}{2\lambda^2}} \cos s \mathrm{d} s \end{aligned}$$
where $\lambda = \omega \sigma$. It is easy to see that when $\eta _1 = \eta _2$, $\Delta \varphi \left ( s_m \right ) = 0$. However, this does not hold for a nondegenerate discontinuity with a phase jump $\delta \neq 0$.
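A sketch of this degenerate-case error of Eq. (17) (illustrative names; the closing check records the property stated above):

```python
import numpy as np
from scipy.integrate import quad

def delta_phi_eq17(s_m, omega, sigma, eta1, eta2):
    """Phase error of Eq. (17): reflectivity step eta1 -> eta2 under Gaussian defocus."""
    lam = omega * sigma
    w = lambda s: np.exp(-s ** 2 / (2 * lam ** 2))
    S = (eta2 - eta1) * quad(lambda s: w(s) * np.sin(s), -omega * s_m, np.inf)[0]
    C = (np.sqrt(2 * np.pi) * eta1 * lam * np.exp(-lam ** 2 / 2)
         + (eta2 - eta1) * quad(lambda s: w(s) * np.cos(s), -omega * s_m, np.inf)[0])
    return np.arctan2(S, C)

# Sanity check: equal reflectivities give zero phase error, as stated in the text.
assert abs(delta_phi_eq17(0.5, 0.3, 2.0, 50.0, 50.0)) < 1e-12
```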

To simplify the calculation, an approximate method is adopted to transform the pixel coordinate to the local discontinuity coordinate.

As Fig. 1(d) shows, for a given point $P$ on the discontinuity and its local curvilinear coordinate system $Pst$, the integral should be evaluated along the local $s$ axis, where there may be no points with integer pixel coordinates. Conversely, for a given point $M$ on one side, it is difficult to find the point $P$ whose local $s$ axis passes through $M$, so the integral is difficult to evaluate directly. The curvature radius of the discontinuity curve is far larger than the radius of the PSF domain, which means that the normal lines of different points on the discontinuity curve do not intersect within the region affected by the discontinuity, so each point in that region corresponds to a unique point on the discontinuity curve. The local curve can thus be treated as a straight line in the PSF domain, and we replace the local coordinate system at point $P$ with one at point $Q$, the point on the discontinuity curve with the same $v$ coordinate as point $M$. Then $s_m$ and $\omega$ are expressed in the local coordinate system $Qs_q t_q$:

$$s_m \approx \left( u_q - u_p \right) \cos \theta$$
$$\omega \approx \omega_{cu} \cos \theta - \omega_{cv} \sin \theta$$
where $\omega _{cu}$ and $\omega _{cv}$ are the components of the angular frequency $\omega _c$ in the captured images, and $\theta$ is the angle between the normal at $Q$ and the global $u$ axis. In this way, the quasi-one-dimensional phase error model is further simplified to speed up the calculation in the degenerate discontinuity case.

2.3 Parameter estimation

The proposed error model involves the following parameters: the angular frequency $\omega _c$, the light intensity $\eta$, the defocus parameter $\sigma$ (which determines the PSF domain), the phase discontinuity $\delta$, the discontinuity location $\{ \left ( u_b, v_b \right ) \}$ and the normal direction of the discontinuity $\theta \left ( u_b, v_b \right )$. To compensate for the discontinuity error, these parameters must be estimated first [20,21].

2.3.1 Pixel discontinuity detection

Some parameters, such as the light intensity and the phase, are not continuous across a discontinuity, so discontinuities can be detected with edge detection operators, such as the Canny operator, applied to the gray-value image $I_w$ captured without a projected pattern and to the original unwrapped phase map. In practice, the gray-value image can reveal all kinds of discontinuities, although some nondegenerate discontinuities with $\delta \neq 0$ may be hard to detect when the reflectivity is similar on the two sides; the phase map can detect the distance discontinuities. Because the discontinuities are located to subpixel accuracy in the next step, either detection method can be chosen. In our experiments, we use the gray-value image for all degenerate discontinuities and the unwrapped phase map for all nondegenerate discontinuities.
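A minimal detection sketch with OpenCV follows (the file name and Canny thresholds are illustrative assumptions, not values from the paper):

```python
import cv2
import numpy as np

I_w = cv2.imread('gray_no_pattern.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(I_w, 50, 150)      # binary edge map, illustrative thresholds
vs, us = np.nonzero(edges)           # pixel-level discontinuity points
```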

The differing continuity of these parameters helps distinguish the kinds of discontinuities. In the phase map, for a nondegenerate discontinuity, the phase is discontinuous across the discontinuity; for a degenerate distance discontinuity, the phase on one side is missing; and for a reflectivity discontinuity, the phase is continuous across the discontinuity.

2.3.2 Subpixel discontinuity location and discontinuity direction

For each pixel discontinuity point detected in Section 2.3.1, we refine its $u$ coordinate to subpixel accuracy. For a pixel discontinuity point, we fit the norm of the gradient $\| \nabla I_w \|$ at its neighboring points with the same $v$ coordinate with a quadratic function. The extremum of the quadratic function gives the refined $u$ coordinate, and the normal direction is that of the gradient $\nabla I_w$ at the extremum, $\theta = \mathrm {atan2} \left ( \nabla I_{wv}, \nabla I_{wu} \right )$. Although a nondegenerate discontinuity can be hard to detect in the gray-value image, once it has been detected in the phase map, its subpixel location can be found with the same method as for degenerate discontinuities.
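A sketch of this quadratic refinement for one detected pixel (illustrative; `grad_u`, `grad_v`, and `grad_norm` are assumed to be precomputed gradient maps of $I_w$, e.g. from a Sobel operator):

```python
import numpy as np

def refine_subpixel(grad_norm, grad_u, grad_v, u, v):
    """Vertex of the parabola through three horizontal samples of ||grad I_w||."""
    g0, g1, g2 = grad_norm[v, u - 1], grad_norm[v, u], grad_norm[v, u + 1]
    du = 0.5 * (g0 - g2) / (g0 - 2.0 * g1 + g2)      # subpixel offset along u
    theta = np.arctan2(grad_v[v, u], grad_u[v, u])   # normal direction of the edge
    return u + du, theta
```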

2.3.3 Area division and PSF calibration

Each side of the discontinuity can be divided into two areas: the area near the discontinuity, whose phase is affected by the defocus, called the area to compensate, and the area away from the discontinuity, whose phase is unaffected by the defocus, called the area for fitting. The PSF is a Gaussian function with standard deviation $\sigma$, which is calibrated with the re-blur method in [24]. The two areas are separated by comparing the distance to the discontinuity with $3\sigma$.

For our error model, only the $u$ coordinates are needed to calculate the phase error. Therefore, for each point on the discontinuity, we divide the two areas along the line parallel to the $u$ axis passing through that point.

2.3.4 Other parameters by fitting

The angular frequency $\omega _c$, the light intensities $\eta _1, \eta _2$ and the phase discontinuity $\delta$ can be fitted from the area for fitting. The light intensities $\eta _1, \eta _2$ are estimated from the light intensity amplitude $\tilde {I}_c^{\prime \prime }$ calculated by Eq. (4) in the area for fitting. The angular frequency $\omega _c$ is estimated by a linear fit of the phase in the area for fitting. The phase discontinuity $\delta$ is estimated by linearly extrapolating the phase to the discontinuity from both sides. Note that Eq. (8), being built from trigonometric functions, handles periodicity automatically, which means the unwrapped phase can be used to compensate the wrapped phase. Specifically, the angular frequency $\omega _c$ is fitted from the unwrapped phase, while the compensation is applied to the wrapped phase to avoid potential unwrapping errors.
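These fits can be sketched for a single scan line as follows (our own helper; `u` holds the pixel coordinates along the line, `u_q` is the subpixel discontinuity location, and the $3\sigma$ rule of Section 2.3.3 selects the fitting areas; the sign convention of `delta` is an assumption):

```python
import numpy as np

def fit_line_params(u, phi_unwrapped, amplitude, u_q, sigma):
    """Per-line estimates of eta, omega and delta from the two fitting areas."""
    left = u < u_q - 3 * sigma                  # area for fitting, side 1
    right = u > u_q + 3 * sigma                 # area for fitting, side 2
    eta1, eta2 = amplitude[left].mean(), amplitude[right].mean()
    p1 = np.polyfit(u[left], phi_unwrapped[left], 1)    # slope gives omega_1
    p2 = np.polyfit(u[right], phi_unwrapped[right], 1)  # slope gives omega_2
    delta = np.polyval(p2, u_q) - np.polyval(p1, u_q)   # extrapolated phase jump
    return eta1, eta2, p1[0], p2[0], delta
```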

2.4 Phase error compensation algorithm

Based on the proposed quasi-one-dimensional phase error model, the phase error compensation algorithm consists of the following steps:

  • Step 1: Detect and classify the discontinuities. Discontinuities can be detected by edge detection in the gray-value map and the phase map; the different kinds of discontinuities are then distinguished by their different continuity in the two maps.
  • Step 2: Parameter estimation. Estimate the parameters of the proposed quasi-one-dimensional phase error model with the methods in Section 2.3, including the angular frequency $\omega _c$, the light intensity $\eta$, the defocus parameter $\sigma$, the phase discontinuity $\delta$, the discontinuity location $\{ \left ( u_b, v_b \right ) \}$ and the normal direction of the discontinuity $\theta \left ( u_b, v_b \right )$.
  • Step 3: Phase correction. Substitute the parameters into Eq. (17) for degenerate discontinuities or Eq. (14) for nondegenerate discontinuities and calculate the phase error $\Delta \varphi _c$ for each pixel. The desired phase for accurate 3D surface measurement is then obtained by $\varphi _c = \tilde {\varphi }_c - \Delta \varphi _c$.
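A compact sketch tying these steps together for one scan line, reusing the illustrative helpers sketched above (`fit_line_params`, `delta_phi_eq17`, `delta_phi_eq14`); the sign convention of the local coordinate follows our reading of $s_m \approx \left( u_q - u_p \right) \cos \theta$:

```python
import numpy as np

def compensate_line(u, phi_wrapped, phi_unwrapped, amplitude, u_q, theta, sigma,
                    degenerate=True):
    """Steps 2-3 for one scan line; detection of u_q and theta is assumed done."""
    eta1, eta2, omega1, omega2, delta = fit_line_params(
        u, phi_unwrapped, amplitude, u_q, sigma)
    s = (u - u_q) * np.cos(theta)       # local normal coordinate (sign assumed)
    corrected = phi_wrapped.copy()
    for i in np.nonzero(np.abs(s) < 3 * sigma)[0]:      # area to compensate
        if degenerate:
            dphi = delta_phi_eq17(s[i], 0.5 * (omega1 + omega2), sigma, eta1, eta2)
        else:
            dphi = delta_phi_eq14(s[i], omega1, omega2, sigma, eta1, eta2, delta)
        corrected[i] -= dphi            # phi_c = phi_tilde_c - delta_phi_c
    return corrected
```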

3. Simulations and experiments

3.1 Simulation on dimension reduction

Simulations were performed to verify the feasibility of our error model, which reduces the dimension of the general two-dimensional phase error model.

In the simulations, to reduce quantization error, we used images 10 times larger than the final size to simulate the defocus and then recovered the desired size by downsampling. In the large images, we set the original angular frequency $\omega _0 = 0.03$, and the phase was coded with a four-step phase shifting method. Then, we added a circular reflectivity discontinuity curve with radius $R_0 = 2000$, where the light intensity changes from $\eta _1 = 25$ to $\eta _2 = 100$, as shown in Fig. 3(a). To simulate the image capture procedure of the camera, a Gaussian filter with $\sigma _0 = 20$, a $10\times$ downsampling, and a rounding function were applied sequentially to the image. Therefore, in the final image, the angular frequency is $\omega _c = 0.3$, the standard deviation of the Gaussian filter is $\sigma = 2$, and the radius of the discontinuity curve is $R = 200$.
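A one-dimensional sketch of this supersample-blur-downsample procedure (our own illustration; the 2D simulation in the paper additionally includes the circular discontinuity geometry):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

N, scale = 4, 10
omega0, sigma0 = 0.03, 20.0                  # large-image values from this section
u = np.arange(250 * scale, dtype=float)
K = np.where(u < u.size / 2, 25.0, 100.0)    # reflectivity step eta_1 -> eta_2
frames = []
for k in range(N):
    I = K * (1.0 + np.cos(omega0 * u + 2 * np.pi * k / N))
    I = gaussian_filter1d(I, sigma0, mode='nearest')   # camera defocus
    frames.append(np.round(I[::scale]))                # 10x downsampling + rounding
phi, amp = decode_phase(frames)              # decode sketch from Section 2.1
```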

Fig. 3. The result of the simulation on dimensionality reduction. (a) Reflectivity distribution and pixels with phase error. (b) Error without compensation. (c) Error after compensation with the two-dimensional model. (d) Error after compensation with the quasi-one-dimensional model.

The phase error map without compensation is shown in Fig. 3(b). The phase error maps after compensation with the two-dimensional and quasi-one-dimensional models using accurate parameters are shown in Figs. 3(c) and 3(d). The maximum error and the root mean square error (RMSE) of the pixels with discontinuity phase error (red area in Fig. 3(a)) are listed in Table 1.

Table 1. The maximum and RMSE of phase error

To quantify the phase error, the mean error over a $40 \times 40$ pixel area of the phase error caused by light intensity quantization in the lower light intensity region (LLIR) was selected as the baseline. The results show that both the two-dimensional and quasi-one-dimensional phase error models reduce the error by more than 90% after compensation. Moreover, both errors after compensation are smaller than the selected baseline, which indicates that both models are effective. Since the parameter estimation and integral operation of the quasi-one-dimensional phase error model are much easier than those of the two-dimensional model, the proposed quasi-one-dimensional phase error model is a feasible simplification.

3.2 Simulations on robustness

In this section, we report a simulation carried out to verify the robustness of the proposed phase error model with respect to errors in the parameters estimated as in Section 2.3; such errors can be caused by quantization, model approximation, and noise.

In the simulations, we used the same settings as in Section 3.1, except that the images were one-dimensional. The final image was 25 pixels long, with the discontinuity located at the middle pixel.

First, we tested the proposed model on a degenerate discontinuity with $\delta = 0$. We varied the estimates of the parameters $\omega _c, \eta _2, \sigma$ within $\pm 20 \%$ of the true values and the discontinuity location $u_q$ within $\pm 1 \, \text {pixel}$ of the true value. As shown in Fig. 4 and Table 2, the calculated phase is higher than the desired value near the discontinuity when $\eta _1 < \eta _2$, and lower when $\eta _1 > \eta _2$. When the parameters $\omega _c, \eta _2, \sigma$ agree with the theoretical values, the error is almost eliminated. When the parameters deviate from the theoretical values, some error remains but is still reduced compared with the original calculated phase, provided a single parameter deviation is less than $20\%$ or the discontinuity location deviation is within 1 pixel. For $\omega _c, \eta _2, \sigma$ with $\eta _1 < \eta _2$, overestimated parameters make the corrected phase larger than the desired phase, and underestimated parameters make it smaller. With a discontinuity location error, the corrected phase has opposite error directions on the two sides of the discontinuity.

Fig. 4. The phase and phase error with different parameter estimation errors. (a) $\omega _c$. (b) $\eta _2$. (c) $\sigma$. (d) $u_q$.

Table 2. The RMSE with different single parameter estimation error

For the nondegenerate discontinuity, we chose $\delta = 90^{\circ }, 180^{\circ }, 270^{\circ }$; the results are shown in Fig. 5. To simplify the presentation, the points on the discontinuity are deleted because the phase is singular there. The results show that when the parameters agree with the true values, the phase after compensation almost coincides with the desired phase. We then added a $30^{\circ }$ error to the estimate of $\delta$; the error is still reduced compared with the original calculated phase, especially on the side with higher light intensity. In the results, the error on one side is larger than on the other, and on the side with the larger error there is an unwrapping error near the discontinuity. The error for $\delta =180^{\circ }$ is larger than that for $\delta =90^{\circ }$ or $\delta =270^{\circ }$.

Fig. 5. The phase and phase error with different $\delta$ estimation errors. (a) $\delta = 90 ^{\circ }$. (b) $\delta = 180^{\circ }$. (c) $\delta = 270^{\circ }$.

3.3 Experiments

To verify the performance of the proposed phase compensation model, a fringe projection profilometry system was developed, composed of a CMOS camera (JAI GO5000M-USB) with a 12.5 mm focal length lens and a digital light processing (DLP) projector (DLP Lightcrafter 6500). The resolutions of the projector and the camera were $1920 \times 1080$ and $2560 \times 2048$, respectively. The measurement distance of the projector was approximately 500 mm. In all the experiments, $\sigma$ was calibrated to between $1.4$ and $1.9$ pixels with the method in [24]. We chose the four typical situations shown in Fig. 6. To show the effect of defocus, we calculate only the point clouds near the discontinuities, within the red squares shown in Fig. 6.

Fig. 6. Some typical discontinuities. (a) A plane with a linear reflectivity discontinuity. (b) A plane with a curvilinear reflectivity discontinuity. (c) A step with a degenerate discontinuity. (d) A step with a nondegenerate discontinuity.

3.3.1 Plane with linear reflectivity discontinuity

The simplest discontinuity is the linear reflectivity discontinuity on a plane shown in Fig. 6(a), a common situation when measuring a plane made of different materials. The defocus parameter $\sigma$ was estimated online with a white projection. The proposed method was compared with the methods developed in [20–22]. The measured point clouds are shown in Fig. 7. In the original point cloud in Fig. 7(a), there is a pulse-shaped artifact at the discontinuity. The peak of the artifact is reduced by all the methods and almost disappears after compensation with the proposed phase error model (Fig. 7(b)–(e)). We fitted the plane in the region unaffected by the discontinuity and calculated the RMSE of the plane fit over the pixels with phase error. The results, given in Table 3, show that the proposed method achieved the highest accuracy.

Fig. 7. The point cloud of a plane with a linear reflectivity discontinuity. (a) Origin. (b) Ours. (c) Ref. [20]. (d) Ref. [21]. (e) Ref. [22].

Table 3. The phase RMSE on the linear reflectivity discontinuity

To compare with the simulation, we also calculated the phase RMSE in the LLIR, which happens to be similar to the phase RMSE after compensation with our method.

3.3.2 Plane with curvilinear reflectivity discontinuity

The reflectivity discontinuity on a plane is not always linear when there are painted patterns, rivets, or similar features. We tested our method on a plane with the solid circle pattern shown in Fig. 6(b). The methods developed in [20–22] were also tested for comparison. The settings follow Section 3.3.1, except that the normal of the discontinuity curve was estimated with the method in Section 2.3. The point cloud results are shown in Fig. 8. The maximum error on the discontinuity occurs at the points with the maximum and minimum $u$ coordinates, where $\theta = 0$, and the error directions are opposite at these two points. The error at the points with the maximum and minimum $v$ coordinates, where $\theta = \frac {\pi }{2}$, is almost zero. The results in Fig. 8 and Table 4 show that the proposed method performed best.

Fig. 8. The point cloud of a plane with a curvilinear reflectivity discontinuity. (a) Origin. (b) Ours. (c) Ref. [20]. (d) Ref. [21]. (e) Ref. [22].

Table 4. Phase RMSE on the curvilinear reflectivity discontinuity

However, in this case the phase RMSE in the LLIR is less than the phase RMSE after compensation with our method, which does not match the simulation. One possible reason is that the simulation used accurate parameters whereas the experiment used fitted parameters. We believe the error could be reduced further with more accurate parameter estimates.

3.3.3 Step with a degenerate discontinuity

A step with a degenerate discontinuity is shown in Fig. 6(c). The settings are the same as those in Section 3.3.1, except that the defocus parameter $\sigma$ was estimated by offline calibration. The results are shown in Fig. 9 and Table 5. All four methods reduce the phase error at the discontinuity; our method and the method in [20] performed better.

Fig. 9. The point clouds on the step with a degenerate discontinuity. (a) Origin. (b) Ours. (c) Ref. [20]. (d) Ref. [21]. (e) Ref. [22].

Table 5. The phase RMSE on the step with a degenerate discontinuity

To validate the experiments, we compared our results with a coordinate measuring machine (CMM), measuring 50 points on each plane. The plane-fitting RMSEs of Plane 1 and Plane 2 with the CMM are $0.0081$ mm and $0.0036$ mm, respectively, while the plane-fitting RMSEs of the structured light method in the region unaffected by the discontinuity are $0.0378$ mm and $0.0312$ mm. The plane-fitting error of the optical measurement is far larger than that of the CMM, so almost all of the error is attributable to the optical measurement. For the region affected by the discontinuity, the error would ideally be similar to that of the unaffected region; however, the RMSE increases to $1.002$ mm and $1.245$ mm, about 30 times larger than expected. With our method, the RMSE is reduced by about $90\%$, to $0.0931$ mm and $0.124$ mm.

3.3.4 Step with a nondegenerate discontinuity

A step with a nondegenerate discontinuity is shown in Fig. 6(d). As mentioned in Section 2.3, we used the phase map to detect the pixel-level discontinuity and located the subpixel discontinuity in the gray image. The other settings follow Section 3.3.3. The results are shown in Fig. 10 and Table 6. In the point clouds both before and after compensation, points less than one pixel from the discontinuity are dropped. In this case, the method in [20] failed (performing worse than the original point cloud), while our method performed best. We attribute the failure of [20] to its phase weighting model, which is inaccurate when the phase changes greatly at the discontinuity.

Fig. 10. The point clouds on the step with a nondegenerate discontinuity. (a) Origin. (b) Ours. (c) Ref. [20]. (d) Ref. [21]. (e) Ref. [22].

Table 6. The phase RMSE on the step with a nondegenerate discontinuity

4. Conclusion and discussion

This paper proposes a phase error model for discontinuities caused by camera defocus in fringe projection profilometry. We first propose a general two-dimensional phase error model. Then, with some additional assumptions, the model is simplified into a general quasi-one-dimensional model. Next, we classify discontinuities into two classes, degenerate and nondegenerate, and propose a formula for each. A method for estimating the model parameters is also proposed. To show the effectiveness and robustness of the proposed model, simulations were performed under different parameters. Finally, experiments on several typical discontinuities show that the proposed method effectively compensates the phase error caused by defocus.

In the experiments, four typical cases show the performance of the proposed method and three other methods. The first two cases involve reflectivity discontinuities, for which all four methods reduce more than 50% of the error. The third case involves a degenerate distance discontinuity; the method in [21] performs much worse here than on reflectivity discontinuities, because the number of pixels available to the discrete method is halved at a degenerate distance discontinuity. The last case involves a nondegenerate distance discontinuity; the method in [20] fails in this case because it cannot handle the phase step at a nondegenerate distance discontinuity, although it performs well on degenerate discontinuities. In summary, our method shows good accuracy and adaptability in all four typical cases.

However, some problems remain to be resolved. First, the method ignores the depth dependence of the defocus, which actually exists at nondegenerate discontinuities. For a more detailed model, the asymmetry of the PSF caused by lens aberration also needs to be considered. Better parameter estimation methods are also needed for more accurate results.

Funding

National Natural Science Foundation of China (51935010, 62173198); State Key Laboratory of Tribology of China (SKLT2022C17); Beijing Municipal Natural Science Foundation (L192001).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for detailed derivation for Eqs. (8), (9), and (15).

References

1. J. Geng, “Structured-light 3d surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

2. R. Chen, J. Xu, and S. Zhang, “Comparative study on 3d optical sensors for short range applications,” Opt. Lasers Eng. 149, 106763 (2022). [CrossRef]  

3. J. Burke, T. Bothe, W. Osten, and C. F. Hess, “Reverse engineering by fringe projection,” in Interferometry XI: Applications, vol. 4778 (SPIE, 2002), pp. 312–324.

4. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

5. C. Rocchini, P. Cignoni, C. Montani, P. Pingi, and R. Scopigno, “A low cost 3d scanner based on structured light,” in computer graphics forum, vol. 20 (Wiley Online Library, 2001), pp. 299–308.

6. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020). [CrossRef]  

7. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

8. E. Li, X. Peng, J. Xi, J. F. Chicharo, J. Yao, and D. Zhang, “Multi-frequency and multiple phase-shift sinusoidal fringe projection for 3d profilometry,” Opt. Express 13(5), 1561–1569 (2005). [CrossRef]  

9. S. Zhang and S.-T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007). [CrossRef]  

10. S. Van der Jeught and J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016). [CrossRef]  

11. L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” in ACM SIGGRAPH 2006 Papers, (2006), pp. 907–915.

12. G. A. Ayubi, J. A. Ayubi, J. M. Di Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35(21), 3682–3684 (2010). [CrossRef]  

13. M. Subbarao and G. Surya, “Depth from defocus: A spatial domain approach,” Int. J. Comput. Vision 13(3), 271–294 (1994). [CrossRef]  

14. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

15. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018). [CrossRef]  

16. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]  

17. J. Dai and S. Zhang, “Phase-optimized dithering technique for high-quality 3d shape measurement,” Opt. Lasers Eng. 51(6), 790–795 (2013). [CrossRef]  

18. J. Zhu, C. Zhu, P. Zhou, Z. He, and D. You, “An optimizing diffusion kernel-based binary encoding strategy with genetic algorithm for fringe projection profilometry,” IEEE Trans. Instrum. Meas. 71, 1–8 (2022). [CrossRef]  

19. P. Brakhage, M. Heinze, G. Notni, and R. Kowarschik, “Influence of the pixel size of the camera on 3d measurements with fringe projection,” in Optical Measurement Systems for Industrial Inspection III, vol. 5144 (International Society for Optics and Photonics, 2003), pp. 478–483.

20. H. Yue, H. G. Dantanarayana, Y. Wu, and J. M. Huntley, “Reduction of systematic errors in structured light metrology at discontinuities in surface reflectivity,” Opt. Lasers Eng. 112, 68–76 (2019). [CrossRef]  

21. L. Rao and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018). [CrossRef]  

22. Y. Wu, X. Cai, J. Zhu, H. Yue, and X. Shao, “Analysis and reduction of the phase error caused by the non-impulse system psf in fringe projection profilometry,” Opt. Lasers Eng. 127, 105987 (2020). [CrossRef]  

23. S. Lee and L. Q. Bui, “Accurate estimation of the boundaries of a structured light pattern,” J. Opt. Soc. Am. A 28(6), 954–961 (2011). [CrossRef]  

24. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recognit. 44(9), 1852–1858 (2011). [CrossRef]  

Supplementary Material (1)

Supplement 1: Appendix
