Structured light 3-D sensing for scenes with discontinuous reflectivity: error removal based on scene reconstruction and normalization

Open Access

Abstract

Structured light-based 3-D sensing reconstructs the 3-D shape from the disparity given by the pixel correspondence between two sensors. However, for scene surfaces containing discontinuous reflectivity (DR), the captured intensity deviates from its actual value because of the non-ideal camera point spread function (PSF), which generates 3-D measurement error. First, we construct the error model of fringe projection profilometry (FPP), from which we conclude that the DR error of FPP is related to both the camera PSF and the scene reflectivity. Because the scene reflectivity is unknown, the DR error of FPP is hard to alleviate. Second, we introduce single-pixel imaging (SI) to reconstruct the scene reflectivity and normalize the scene with the scene reflectivity "captured" by the projector. From the normalized scene reflectivity, a pixel correspondence with error opposite to that obtained from the original reflectivity is calculated for DR error removal. Third, we propose an accurate 3-D reconstruction method under discontinuous reflectivity, in which the pixel correspondence is first established by FPP and then refined by SI with reflectivity normalization. Both the analysis and the measurement accuracy are verified in experiments on scenes with different reflectivity distributions. As a result, the DR error is effectively alleviated within an acceptable measurement time.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The structured light-based technique plays an important role in fast, flexible and accurate three-dimensional (3-D) measurement [1–3], reconstructing the 3-D shape of the measured object by triangulation [4]. Once the image correspondence between the camera and the projector is established, the 3-D surface shape can be reconstructed by combining it with the system parameters [4]. This image correspondence is usually established by following feature [1,5,6] or phase [4,7–9] consistency. Both the feature and the phase remain unchanged during the projection and acquisition processes, so they can be easily encoded in the projected light and decoded from the captured light. The feature-based method reconstructs the 3-D shape with relatively low resolution [10,11]. The phase-based method is preferable for high-speed and high-resolution 3-D measurement [4].

Fringe projection profilometry (FPP), a typical phase-based method, can establish pixel-level image correspondence and sense 3-D data at thousands of frames per second [4,8]. In FPP, the system point spread function (PSF) becomes non-ideal with a finite size because of lens aberration and the diffraction limit of the camera. The captured intensity is the integral of the intensity inside the PSF of each pixel, so it deviates from the actual intensity in surface areas containing discontinuous reflectivity [12]. For example, the captured intensity of a low reflectivity area is averaged towards that of the adjacent high reflectivity area, which generates error around the discontinuity in the calculated phase. The phase error leads to pixel correspondence errors, thus generating non-negligible discontinuous reflectivity (DR) error.

FPP can reduce the DR error by first estimating the PSF distribution and then compensating the error area with its nearest correct area [13–17]. This PSF-estimation method reduces the root mean square (RMS) height error to one-third of the original, but it is difficult to determine the correct area for objects containing sharp depth changes. A deconvolution-based error reduction method has recently been proposed, which first calibrates the camera PSF and then retrieves the desired patterns for phase retrieval [12]. The deconvolution-based method reduces the RMS height error by up to four times, but the ringing effect of its deconvolution introduces ripple errors in areas containing feature details and sharp depth changes [18].

Generally, the phase error is less than 0.3 rad when the PSF standard deviation is less than 8 pixels [12]. If the fringe frequency is larger than 0.05, the pixel correspondence error is usually less than one pixel in the measurement range [19]. The DR error is proportional to the size of the camera PSF, so it can be reduced by alleviating the defocus level or increasing the acquisition resolution of the scene [20]. Thus, we can directly reduce the DR error by selecting a high-resolution camera, but this heavily increases the system cost [21]. In addition, the desired patterns could be retrieved if the discontinuous reflectivity of adjacent pixels were normalized before the scene is modulated by the projected patterns, but the scene reflectivity is difficult to obtain in FPP.

Different from feature- and phase-based 3-D reconstruction techniques, single-pixel imaging metrology (SIM) establishes the image correspondence by following the scene intensity consistency between the camera and the projector [22,23]. By treating each camera pixel as a single-pixel detector, SIM projects patterns covering the full range of image spatial frequencies and acquires the corresponding spectrum components in a transform domain, from which the scene reflectivity can be reconstructed through the inverse transform [24–33]. The reconstructed scene is the convolution of the camera PSF and the reflected light, and the desired pixel coordinate is the grayscale center of the reconstructed scene. When the area covered by the camera PSF contains discontinuous reflectivity, the grayscale center is biased towards the area with high reflectivity; the resulting image correspondence error causes DR error.

As aforementioned, the DR error can be alleviated by normalizing the discontinuous reflectivity of the reconstructed scene before calculating the corresponding pixel. The sum of the reconstructed scene reflectivity over all camera pixels is the modulated projector light intensity, which is proportional to the scene reflectivity. Thus, the scene reflectivity captured by each camera pixel can be normalized to alleviate the DR error. However, traditional SIM is very time-consuming because of the large number of encoding patterns, i.e., thousands of frames.

In summary, DR error is introduced at discontinuous reflectivity edges in structured light-based 3-D reconstruction methods. Traditional FPP is fast, but its DR error reduction methods need to calibrate scene-dependent parameters, which is inflexible and limits the measurable scenes. SIM shows potential for accurate 3-D measurement of objects with discontinuous reflectivity owing to its scene reconstruction ability, but suffers from a long data acquisition time.

In this paper, we first construct the DR error models of both FPP and SIM, from which we conclude that the DR error of SIM can be compensated through reflectivity normalization. Based on this conclusion, a reflectivity-based accurate 3-D reconstruction method under discontinuous reflectivity is proposed to alleviate the DR error. Furthermore, because of the extremely long acquisition time of SI, we adopt the parallel single-pixel imaging technique [22,23], and FPP is used for pixel-level correspondence establishment to accelerate the data acquisition. Both the error model and the experimental results verify that the proposed method can reconstruct accurate 3-D shape under discontinuous reflectivity.

The rest of the paper is organized as follows. Section 2 constructs the pixel correspondence error model of FPP and SIM. Section 3 presents the principle of the proposed reflectivity-based accurate 3-D reconstruction method under discontinuous reflectivity. Section 4 exhibits and discusses the experimental results. Section 5 concludes this paper.

2. Analysis of DR error

As aforementioned, the measurement error is induced by the spatial averaging effect of the camera PSF at discontinuous reflectivity areas. The DR error models of both FPP and SIM are constructed for error compensation. From the constructed error models, we analyze the pixel correspondence error and conclude that the DR error of SIM can be compensated. The error models of FPP and SIM are detailed as follows.

2.1 Error analysis of FPP

In FPP, the projector projects a set of phase-shifted sinusoidal fringe patterns onto the object, and the camera captures these patterns reflected from the object’s surface [27]. Each pixel of the camera will uniquely correspond to one projector pixel with the same phase. These patterns are designed as

$$I_{n}^{Fp}(u,v)=a(u,v)+b(u,v)\cos [2\pi fu-{2\pi n}/{N}],\quad n=1,2,3,\ldots,N, \tag{1}$$
where $(u,v)$ represents the projector coordinate, $a$ represents the average intensity, $b$ represents the amplitude, $f$ represents the fringe frequency, and $N$ represents the total number of the used fringe patterns.
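As an illustration of Eq. (1), the following minimal sketch (Python/NumPy) generates one such set of phase-shifted fringe patterns; the projector resolution and the values of $a$, $b$, $f$ and $N$ are illustrative assumptions rather than values prescribed by the paper.

```python
# Minimal sketch of Eq. (1): generate N phase-shifted sinusoidal fringe patterns.
# The resolution and the values of a, b, f and N below are illustrative assumptions.
import numpy as np

def fringe_patterns(U=1920, V=1080, a=127.5, b=127.5, f=1/20, N=4):
    """Return an array of shape (N, V, U) holding the projected patterns I_n^Fp."""
    u = np.arange(U)                              # projector column coordinate
    patterns = np.empty((N, V, U))
    for n in range(1, N + 1):
        row = a + b * np.cos(2 * np.pi * f * u - 2 * np.pi * n / N)
        patterns[n - 1] = np.tile(row, (V, 1))    # fringes vary along u only
    return patterns

patterns = fringe_patterns()                      # (4, 1080, 1920) array in [0, 255]
```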

We assume that when the light emitted from the projector pixel $(u,v)$ reaches the camera pixel $(x,y)$, the scene reflectivity captured by that camera pixel can be expressed as $r(u,v;x,y)$. Then, the image captured by the camera can be expressed as

$$I_{n}^{Fc}(x,y)=\sum_{u=1}^{U}{\sum_{v=1}^{V}{r(u,v;x,y)I_{n}^{Fp}(u,v)}}, \tag{2}$$
where $U\times V$ denotes the projector resolution.

For clarity, we illustrate the light transmission process in Fig. 1. As shown in Fig. 1 (a), when the system PSF is ideal, the camera pixel $(x,y)$ receives the light reflected from the object surface illuminated by the single projector pixel $(u,v)$. However, when the system PSF is non-ideal, the camera pixel $(x,y)$ receives the light reflected from the object's surface illuminated by several adjacent projector pixels, e.g., $(u_0,v_0)$, $(u_1,v_1)$ and $(u_2,v_2)$, which have discontinuous surface reflectivity $r(u_0,v_0)$, $r(u_1,v_1)$ and $r(u_2,v_2)$, respectively.

Fig. 1. Light transmission of 3-D shape measurement system for object with discontinuous reflectivity under conditions that (a) the system PSF is ideal and (b) the system PSF is non-ideal.

For any projector-camera system with an ideal PSF, a pixel of the camera only receives the light emitted from one single pixel of the projector. In that situation, the wrapped phase can be expressed as

$$\phi (x,y)={{\tan }^{-1}}\frac{r({{u}_{0}},{{v}_{0}};x,y)b\sin (2\pi f{{u}_{0}})}{r({{u}_{0}},{{v}_{0}};x,y)b\cos (2\pi f{{u}_{0}})}={{\tan }^{-1}}\frac{\sin (2\pi f{{u}_{0}})}{\cos (2\pi f{{u}_{0}})}, \tag{3}$$
where $(u_0,v_0)$ denotes the projector pixel corresponding to the camera pixel $(x,y)$, and $r(u_0,v_0;x,y)$ denotes the scene reflectivity of the captured scene point.

Due to the diffraction limits of the camera and the projector, the non-ideal PSF generates a blur effect. Each camera pixel receives light originating from a small area of the projector sensor, and the wrapped phase becomes

$$\phi '(x,y) = \tan^{-1}\frac{\sum_{u = 1}^U \sum_{v = 1}^V r(u,v)G(u,v;x,y)\sin (2\pi fu)}{\sum_{u = 1}^U \sum_{v = 1}^V r(u,v)G(u,v;x,y)\cos (2\pi fu)}. \tag{4}$$

The phase error caused by the diffraction limit of the projector-camera system can be obtained by

$$\begin{aligned} \Delta \phi (x,y) & = \phi '(x,y) - \phi (x,y)\\ & = \tan ^{-1}\left[ \frac{r({u_2},{v_2}) - r({u_1},{v_1})}{r({u_1},{v_1}) + r({u_2},{v_2})}\,\frac{\sum\sum G(u,v)\sin (2\pi fu)}{\sum\sum G(u,v)\cos (2\pi fu)} \right]\\ & = \tan ^{-1}\left[\frac{r({u_2},{v_2}) - r({u_1},{v_1})}{r({u_1},{v_1}) + r({u_2},{v_2})}\,\mathrm{erf}(\sqrt 2 \pi f\sigma )\right], \end{aligned} \tag{5}$$
where $\mathrm{erf}$ is the Gaussian error function and $\sigma$ is the standard deviation of the Gaussian PSF $G$. From this expression, $\Delta \phi$ increases with $\sigma$, $f$ and the reflectivity difference $r({u_2},{v_2}) - r({u_1},{v_1})$. Because the camera is focused, $\sigma <1$ is satisfied; since the reflectivity ratio term is at most $1$, $\tan^{-1}(x)\le x$ and $\mathrm{erf}(x)\le 1.2x$ for $x\ge 0$, it can be concluded that $\Delta \phi <1.2 \sqrt 2 \pi f$.

The absolute phase can be transferred to the corresponding projector pixel through [34]

$$u = \frac{\Phi (x,y)}{2\pi f}. \tag{6}$$

Thus, the pixel correspondence error of FPP can be expressed as

$$\begin{aligned} \Delta u(x,y) & = \frac{\Delta \phi (x,y)}{2\pi f}\\ & = \tan ^{-1}\left[\frac{r({u_2},{v_2}) - r({u_1},{v_1})}{r({u_1},{v_1}) + r({u_2},{v_2})}\,\mathrm{erf}(\sqrt 2 \pi f\sigma )\right] \Big/ (2\pi f). \end{aligned} \tag{7}$$

From Eq. (7), we can conclude the following points:

  • (i) $\Delta u$ is inversely proportional to $f$. In FPP, a high fringe frequency results in high 3-D sensing accuracy. However, $\Delta \phi$ is also proportional to $f$, so it is difficult to reduce $\Delta u$ by simply increasing $f$.
  • (ii) $\Delta u$ is proportional to $\sigma$, and $\sigma$ (expressed in projector pixels) decreases as the captured scene resolution increases. Thus $\Delta u$ can be reduced by increasing the captured scene resolution or by estimating an accurate spatially varying PSF. However, accurate PSF estimation is difficult.
  • (iii) $\Delta u$ is proportional to $r({u_2},{v_2}) - r({u_1},{v_1})$. However, $r({u_2},{v_2}) - r({u_1},{v_1})$ depends on the measured scene and cannot be measured in FPP.
  • (iv) From $\Delta \phi <1.2 \sqrt 2 \pi f$ analyzed above, $\Delta u < 0.6 \sqrt 2$ can be concluded. Because $\Delta u < 1$, FPP can achieve an accurate coarse pixel correspondence with different fringe frequencies.

In summary, in FPP, the DR error is related to the system PSF, the fringe frequency and the difference between the two reflectivity values. These influencing factors are difficult to change, so the DR error is hard to alleviate in FPP.
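As a numerical illustration of Eqs. (5) and (7), the sketch below evaluates the predicted phase and pixel correspondence errors for a Gaussian PSF; the reflectivity values and the $(f,\sigma )$ grid are assumptions chosen only for illustration.

```python
# Numerical sketch of Eqs. (5) and (7) for a Gaussian PSF of standard deviation sigma.
# The reflectivity values r1, r2 and the (f, sigma) grid below are illustrative only.
import math

def fpp_dr_error(r1, r2, f, sigma):
    """Return the phase error (rad) and the pixel correspondence error (pixels)."""
    dphi = math.atan((r2 - r1) / (r1 + r2) * math.erf(math.sqrt(2) * math.pi * f * sigma))
    du = dphi / (2 * math.pi * f)
    return dphi, du

for f in (1 / 20, 1 / 40, 1 / 60):
    for sigma in (0.5, 1.0):
        dphi, du = fpp_dr_error(r1=0.2, r2=0.9, f=f, sigma=sigma)
        print(f"f={f:.4f}, sigma={sigma}: dphi={dphi:.3f} rad, du={du:.3f} px")
```

Consistent with conclusions (i) and (iv), the printed $\Delta u$ stays well below one pixel and changes little when only $f$ is varied.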

2.2 Error analysis of SIM

In SIM, a series of basis patterns are projected onto the measured scene. The captured intensity can be expressed as [27]

$$I_n^{Hc}(x,y) = \sum_{u = 1}^U {\sum_{v = 1}^V {{r_c}(u,v;x,y) \cdot I_n^{Hp}(u,v)} }, \tag{8}$$
where $r_{c}(u,v;x,y)$ denotes the reflectivity of the scene point illuminated by the projector pixel $(u,v)$ and captured by the camera pixel $(x,y)$, and $I_{n}^{Hp}$ denotes the intensity of the projected pattern. Because of the system PSF, $r_{c}(u,v;x,y)$ can be expressed as ${{r}_{c}}(u,v;x,y)=r(u,v)G(u,v;x,y)$, where $G$ denotes the system PSF and $r$ denotes the actual reflectivity of the scene illuminated by the projected light. Fourier basis patterns and Hadamard basis patterns are two representative deterministic basis patterns used in SI; Hadamard basis patterns are chosen here for their higher data acquisition efficiency.

Here, Hadamard single-pixel imaging (HSI) reconstructs the captured scene by [26]

$${r_c}(u,v;x,y) = {H^{-1}}[{I^{Hc}}(x,y)], \tag{9}$$
where $H^{-1}$ denotes the inverse Hadamard transform and ${{I}^{Hc}}(x,y)=\left [ I_{1}^{Hc},I_{2}^{Hc},\ldots,I_{n}^{Hc} \right ]$ is the vector of intensities captured at the camera pixel $(x,y)$.
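To make Eqs. (8) and (9) concrete, the sketch below simulates differential Hadamard measurements of a synthetic $16\times 16$ reflectivity patch seen by one camera pixel and recovers the patch by the inverse transform. SciPy's Sylvester-ordered Hadamard matrix is used here instead of the sign-change (Walsh) ordering adopted in the paper, and the patch contents are assumptions of the sketch.

```python
# Sketch of Eqs. (8)-(9) for one camera pixel: simulate differential Hadamard
# single-pixel measurements of a 16x16 reflectivity patch and recover the patch by
# the inverse Hadamard transform. The patch contents are synthetic.
import numpy as np
from scipy.linalg import hadamard

K = 16
H = hadamard(K * K)                            # 256x256 Hadamard matrix (+/-1 entries)

r_true = np.ones((K, K))
r_true[:, K // 2:] = 0.2                       # a reflectivity step edge
r_vec = r_true.ravel()

# Differential projection: P+ = (1 + H) / 2 and P- = (1 - H) / 2, so I+ - I- = H @ r.
I_plus = (1 + H) / 2 @ r_vec
I_minus = (1 - H) / 2 @ r_vec
measurements = I_plus - I_minus                # 2 x 256 = 512 projected frames in total

# Inverse Hadamard transform: H is symmetric and H @ H = (K*K) * identity.
r_rec = (H @ measurements) / (K * K)
print(np.allclose(r_rec.reshape(K, K), r_true))   # True
```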

When the object has uniform surface reflectivity, the correspondence between $(x,y)$ and $(u,v)$ can be established by calculating the grayscale center of the captured reflectivity [22]. Taking the $u$ axis as an example, we have the coordinate

$${u_0}(x,y) = \frac{\sum_{(u,v) \in \Omega } u{r_c}(u,v;x,y)}{\sum_{(u,v) \in \Omega } {r_c}(u,v;x,y)} = \frac{\sum_{(u,v) \in \Omega } ur(u,v)G(u,v;x,y)}{\sum_{(u,v) \in \Omega } r(u,v)G(u,v;x,y)}, \tag{10}$$
where $\Omega$ denotes the imaging spot in HSI.

The grayscale center at the edge of discontinuous reflectivity becomes

$$\begin{aligned} {u_s}^\prime (x,y) & = \frac{\sum_{(u,v) \in \Omega } ur(u,v)G(u,v;x,y)}{\sum_{(u,v) \in \Omega } r(u,v)G(u,v;x,y)}\\ & = \frac{\sum_{(u,v) \in {\Omega _1}} ur({u_1},{v_1})G(u,v;x,y) + \sum_{(u,v) \in {\Omega _2}} ur({u_2},{v_2})G(u,v;x,y)}{\sum_{(u,v) \in {\Omega _1}} r({u_1},{v_1})G(u,v;x,y) + \sum_{(u,v) \in {\Omega _2}} r({u_2},{v_2})G(u,v;x,y)}, \end{aligned} \tag{11}$$
where $\Omega _1$ and $\Omega _2$ denote local surface areas with different reflectivity $r({u_1},{v_1})$ and $r({u_2},{v_2})$, respectively.

The pixel correspondence error can be obtained by

$$\begin{aligned} \Delta {u_s}^\prime (x,y) & = {u_s}^\prime (x,y) - {u_0}(x,y)\\ & = \sum_{(u,v) \in {\Omega _1}} \frac{r({u_1},{v_1}) - {r_c}(x,y)}{{r_c}(x,y)}uG(u,v;x,y) + \sum_{(u,v) \in {\Omega _2}} \frac{r({u_2},{v_2}) - {r_c}(x,y)}{{r_c}(x,y)} uG(u,v;x,y), \end{aligned} \tag{12}$$
where ${{r}_{c}}(x,y)=\sum _{u=1}^{U}{\sum _{v=1}^{V}{{{r}_{c}}(u,v;x,y)}}$ denotes the total scene reflectivity captured by the camera pixel $(x,y)$. From this expression, it can be concluded that $\Delta u_s^\prime$ is inversely proportional to $abs(\Omega _1-\Omega _2)$ and proportional to $r({{u}_{1}},{{v}_{1}})-r({{u}_{2}},{{v}_{2}})$.

From Eq. (12), we can conclude the following points:

  • (i) $\Delta u_s^\prime$ is proportional to the extent of $G$, which (in projector pixels) shrinks as the captured scene resolution increases. Thus, the influence of $G$ can only be alleviated by increasing the captured scene resolution.
  • (ii) $\Delta u_s^\prime$ is inversely proportional to $abs(\Omega _1-\Omega _2)$. $abs(\Omega _1-\Omega _2)$ depends on both the camera sensor and the measured scene and cannot be changed.
  • (iii) $\Delta u_s^\prime$ is proportional to $r({{u}_{1}},{{v}_{1}})-r({{u}_{2}},{{v}_{2}})$, so $\Delta u_s^\prime$ can be alleviated by reducing $r({{u}_{1}},{{v}_{1}})-r({{u}_{2}},{{v}_{2}})$, which can be realized by normalizing $r({{u}_{1}},{{v}_{1}})$ and $r({{u}_{2}},{{v}_{2}})$ to the same value.

The measured scene is illuminated by the projector and captured by the camera. The reflectivity captured by each camera pixel, ${{r}_{c}}(u,v;x,y)$, is calculated by SI. ${{r}_{c}}(u,v;x,y)$ can be transformed to the DMD (projector) plane and regarded as the scene reflectivity "captured" by the projector, ${{r}_{p}}(u,v)$. By dividing $r({{u}_{1}},{{v}_{1}})$ and $r({{u}_{2}},{{v}_{2}})$ by ${r}_{p}$, i.e., normalizing the captured reflectivity, $r({{u}_{1}},{{v}_{1}})$ and $r({{u}_{2}},{{v}_{2}})$ both become $1$ in Eq. (12), and $\Delta u_s^\prime$ becomes $0$ if $G$ is an ideal PSF.

In summary, in SIM, the DR error is related to the system PSF, the reflectivity distribution and the difference between the two reflectivity values. The former two factors cannot be changed in SIM. Thus, the DR error of SIM can be alleviated by normalizing the captured reflectivity.

3. Principle of reflectivity-based accurate 3-D reconstruction method under discontinuous reflectivity

As aforementioned, SIM can obtain an accurate 3-D shape by normalizing the captured reflectivity. The error model of SIM with normalized reflectivity is analyzed, and the resulting correspondence error is opposite to that calculated from the original reflectivity. Based on this, a reflectivity-based accurate 3-D reconstruction method is proposed: it first acquires a coarse pixel correspondence by FPP, and then uses SIM to achieve an accurate pixel correspondence. From the established error model, we analyze the pixel correspondence error and verify the feasibility of the proposed method. Both the constructed model and the proposed method are detailed as follows.

3.1 DR error removal based on reflectivity normalization

In SIM, the scene reflectivity captured by each camera pixel is calculated. From that, the modulated projector light intensity can be calculated to normalize the captured reflectivity and compensate the pixel correspondence error.

The modulated projector light intensity can be calculated by [23]

$${r_p}(u,v) = \sum_{x = 1}^X {\sum_{y = 1}^Y {{r_c}(u,v;x,y)} }, \tag{13}$$
where $X\times Y$ denotes the camera resolution.
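As a short illustration of Eq. (13), the projector-side reflectivity can be obtained by summing the per-pixel reconstructions over all camera pixels; the 4-D array layout and the sizes used below are assumptions of the sketch, not prescribed by the paper.

```python
# Sketch of Eq. (13): sum the reflectivity reconstructed for every camera pixel to
# obtain the scene reflectivity "captured" by the projector. The array layout
# r_c_all[x, y, u, v] and the sizes are assumptions of this sketch.
import numpy as np

X, Y, U, V = 4, 4, 16, 16                      # illustrative sizes
r_c_all = np.random.rand(X, Y, U, V)           # stand-in for the HSI reconstructions
r_p = r_c_all.sum(axis=(0, 1))                 # r_p(u, v) = sum over (x, y) of r_c
```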

To eliminate the DR error, the captured reflectivity is normalized by ${{r}_{c}}/{{r}_{p}}$, and the pixel correspondence is changed into

$${u_s}^{\prime \prime }(x,y) = \frac{\sum_{(u,v) \in \Omega } u\,{r_c}(u,v;x,y)/{r_p}(u,v)}{\sum_{(u,v) \in \Omega } {r_c}(u,v;x,y)/{r_p}(u,v)}. \tag{14}$$

As mentioned above, ${u_s}^{\prime \prime }=u_0$ if the system PSF $G$ is ideal. However, if $G$ is not an ideal PSF, the calculated corresponding projector pixel ${u_s}^{\prime \prime }$ can be expressed as

$${u_s}^{\prime \prime }(x,y) = \frac{\sum_{(u,v) \in \Omega } uG(u,v;x,y) + \sum_{(u,v) \in {\Omega _1}} u\,\Delta {r_2}\,G(u,v;x,y)/{r_p}(u,v) + \sum_{(u,v) \in {\Omega _2}} u\,\Delta {r_1}\,G(u,v;x,y)/{r_p}(u,v)}{1 + \Delta r}, \tag{15}$$
where $\Delta r=\sum_{(u,v)\in \Omega }{[r(u,v)-{{r}_{p}}(u,v)]G(u,v;x,y)}/{{{r}_{p}}(u,v)}$, $\Delta {{r}_{1}}={{r}_{p}}(u,v)-{r}({{u}_{1}},{{v}_{1}})$, and $\Delta {{r}_{2}}={{r}_{p}}(u,v)-{r}({{u}_{2}},{{v}_{2}})$.

Thus, the pixel correspondence error can be expressed by

$$\begin{aligned} \Delta {u_s}^{\prime \prime } & = {u_s}^{\prime \prime } - {u_0}\\ & = \frac{1}{1 + \Delta r}\left[ \sum_{(u,v) \in {\Omega _1}} \frac{{r_p}(u,v) - r({u_1},{v_1})}{{r_p}(u,v)} uG(u,v;x,y) + \sum_{(u,v) \in {\Omega _2}} \frac{{r_p}(u,v) - r({u_2},{v_2})}{{r_p}(u,v)} uG(u,v;x,y) \right]\\ & = \frac{1}{1 + \Delta r}\left[ \sum_{(u,v) \in \Omega } uG(u,v;x,y) - \sum_{(u,v) \in {\Omega _1}} \frac{r({u_1},{v_1})}{{r_p}(u,v)}uG(u,v;x,y) - \sum_{(u,v) \in {\Omega _2}} \frac{r({u_2},{v_2})}{{r_p}(u,v)}uG(u,v;x,y) \right], \end{aligned} \tag{16}$$
where $abs(\Delta r)\le 1/2$ and the maximum value is obtained when ${{r}_{p}}=r({{u}_{1}},{{v}_{1}})/2$ and ${{\Omega }_{1}}/{{\Omega }_{2}}=1/3$.

It can be concluded that, by normalizing $r_c$ with $r_p$, a pixel correspondence with an error opposite to the result of Eq. (12) is obtained, which enables error compensation for the pixel correspondence.

Thus, the accurate corresponding projector pixel coordinate can be calculated by

$${u_s} = \frac{{u_s}^\prime + {u_s}^{\prime \prime }}{2}. \tag{17}$$

The pixel correspondence error after compensation is

$$\Delta {u_s} = \frac{\Delta {u_s}^\prime + \Delta {u_s}^{\prime \prime }}{2} = \frac{\Delta r}{2(1 + \Delta r)}\Delta {u_s}^\prime . \tag{18}$$

Considering $abs(\Delta r)\le 1/2$, it can be concluded that $abs(\Delta {u_s})\le abs(\Delta {u_s}^\prime )/6$, i.e., the residual correspondence error after compensation is at most one sixth of the uncompensated error.
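The refinement step can be summarized by the short sketch below, which computes the biased centroid of Eq. (10), the normalized centroid of Eq. (14) and their mean per Eq. (17) for a synthetic $16\times 16$ patch; the Gaussian spot, the reflectivity step and the stand-in $r_p=r$ are assumptions of the sketch, whereas the actual method obtains $r_p$ from Eq. (13).

```python
# Sketch of the compensation of Eqs. (10), (14) and (17): compute the centroid of the
# captured reflectivity, the centroid of the normalized reflectivity, and their mean.
# The synthetic patch below (Gaussian spot over a reflectivity step, r_p = r) is an
# assumption of the sketch; in the actual method r_p comes from Eq. (13).
import numpy as np

def refined_correspondence(r_c, r_p, u_grid, v_grid, eps=1e-12):
    """Return (u_s', v_s'), (u_s'', v_s'') and the refined (u_s, v_s)."""
    w1 = r_c                                    # weights of Eq. (10)/(11)
    w2 = r_c / (r_p + eps)                      # normalized weights of Eq. (14)
    def centroid(w):
        s = np.sum(w)
        return np.sum(u_grid * w) / s, np.sum(v_grid * w) / s
    c1, c2 = centroid(w1), centroid(w2)
    refined = tuple(0.5 * (a + b) for a, b in zip(c1, c2))   # Eq. (17)
    return c1, c2, refined

K = 16
u_grid, v_grid = np.meshgrid(np.arange(K), np.arange(K), indexing="xy")
G = np.exp(-((u_grid - 7.3) ** 2 + (v_grid - 7.8) ** 2) / (2 * 2.0 ** 2))
r = np.where(u_grid < 8, 1.0, 0.2)              # discontinuous reflectivity
r_c = r * G                                     # captured reflectivity, Eq. (8) model
r_p = r                                         # toy stand-in for Eq. (13)
print(refined_correspondence(r_c, r_p, u_grid, v_grid))
```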

Because of the extremely long data acquisition time of SI, we adopt parallel single-pixel imaging technique [23], and FPP is used for the pixel-level correspondence establishment. Based on the above, we propose a reflectivity-based accurate 3-D reconstruction method.

3.2 Procedure of the reflectivity-based accurate 3-D reconstruction method

According to the conclusion drawn above, namely that the pixel correspondence errors obtained from the captured scene reflectivity and from the normalized scene reflectivity are opposite, the DR error can be removed through reflectivity normalization. Combining the high speed of FPP and the high accuracy of SIM, we propose a reflectivity-based accurate 3-D reconstruction method, whose schematic is illustrated in Fig. 2.

Fig. 2. Schematic of the proposed reflectivity-based accurate 3-D reconstruction method.

A scene point $(X,Y)$ is illuminated by a projector pixel $(u_0,v_0)$ and captured by the camera pixel $(x,y)$. The intensity of the captured patterns for camera pixel $(x,y)$ is

$$I_n^c(x,y) = A(x,y) + B(x,y)\cos \left[ \phi (x,y) - 2\pi n/N \right],\quad n = 1,2,\ldots,N, \tag{19}$$
where $\phi$ denotes the phase to be resolved, $A$ and $B$ represent the fringe background and amplitude, respectively.

The wrapped phase $\phi$ of camera pixel $(x,y)$ can be resolved by

$$\phi (x,y) = \tan ^{-1}\frac{\sum\limits_{n = 1}^N I_n^c(x,y)\sin (2\pi n/N)}{\sum\limits_{n = 1}^N I_n^c(x,y)\cos (2\pi n/N)}, \tag{20}$$
which is wrapped in $(-\pi,\pi ]$. Then, a temporal phase unwrapping algorithm is used to remove the phase discontinuities [4,8,9]. The desired absolute phase can be obtained by
$$\Phi (x,y) = \phi (x,y) + 2\pi K(x,y), \tag{21}$$
where $K$ is the fringe order to be determined.

Because phase error is introduced at the edges of discontinuous reflectivity, the coarse projector pixel $(u_c,v_c)$ corresponding to $(x,y)$ is established by

$$\begin{array}{l} {u_c} = round\left[{\Phi_{u}(x,y)}/{(2\pi f)}\right]\\ {v_c} = round\left[{\Phi_{v}(x,y)}/{(2\pi f)}\right], \end{array} \tag{22}$$
where $round$ denotes the rounding operator, $\Phi _{u}$ and $\Phi _{v}$ denote the absolute phase along the horizontal and vertical axis, respectively.
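The coarse correspondence stage of Eqs. (20)-(22) can be sketched as follows; the array shapes, the synthetic test phase and the unwrapping input $K$ are assumptions of the sketch (in practice $K$ comes from the temporal phase unwrapping step).

```python
# Sketch of Eqs. (20)-(22): recover the wrapped phase from N phase-shifted images,
# form the absolute phase with a given fringe order K, and round to a coarse
# projector coordinate. The synthetic inputs below are assumptions of the sketch.
import numpy as np

def wrapped_phase(images):
    """images: (N, H, W) array of I_n^c. Returns the wrapped phase in (-pi, pi]."""
    N = images.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(num, den)

def coarse_correspondence(phi, K, f):
    """Absolute phase Phi = phi + 2*pi*K (Eq. (21)), then u_c = round(Phi / (2*pi*f))."""
    Phi = phi + 2 * np.pi * K
    return np.round(Phi / (2 * np.pi * f)).astype(int)

# Tiny usage demo with synthetic data.
N, Hh, W, f = 4, 8, 8, 1 / 20
phi_true = np.full((Hh, W), 0.5)
imgs = np.stack([120 + 100 * np.cos(phi_true - 2 * np.pi * n / N) for n in range(1, N + 1)])
u_c = coarse_correspondence(wrapped_phase(imgs), K=np.zeros((Hh, W)), f=f)
```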

Then, the periodical extension patterns of SIM are used for the scene reflectivity reconstruction. Specifically, HSI is used due to its high speed and noise resistance. The Hadamard basis pattern $P_H (u,v)$ can be obtained by applying an inverse Hadamard transform to a delta function $\delta _H (m,n)$

$${P_H}(u,v) = \frac{1}{2}\left[ 1 + {H^{-1}}\left\{ {\delta _H}(m,n) \right\} \right], \tag{23}$$
where $(m,n)$ is the coordinate in the Hadamard domain representing the number of sign changes along $u$ and $v$ directions, respectively, and
$${\delta _H}(m,n) = \left\{ \begin{array}{l} 1,\;m = {m_0},\,n = {n_0}\\ 0,\;otherwise \end{array} \right.. \tag{24}$$

Specifically, a Hadamard matrix with a size of $16\times 16$ pixels is chosen to cover the visible region of each camera pixel. Thus, a series of periodical extension patterns with a period of $16$ pixels is used to accelerate the data acquisition [22], and the number of Hadamard basis patterns required for the measurement is $512$ frames. The periodical extension patterns are designed by

$$I_{n}^{H}(u,v)=\sum_{{{r}_{1}}=0}^{U/16}{\sum_{{{r}_{2}}=0}^{V/16}{{{P}_{H}}(u-16{{r}_{1}},v-16{{r}_{2}})}}, \tag{25}$$
where ${r}_{1}$ and ${r}_{2}$ are integers.
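A minimal sketch of Eqs. (23)-(25) is given below: one $16\times 16$ Hadamard basis pattern is built from a delta in the Hadamard domain and then tiled over the projector. SciPy's Sylvester-ordered Hadamard matrix is used instead of the sign-change ordering of Eq. (23) (the two differ only by a permutation of the basis), and the chosen indices and projector resolution are illustrative assumptions.

```python
# Sketch of Eqs. (23)-(25): build one 16x16 Hadamard basis pattern from a delta at
# (m0, n0) in the Hadamard domain and tile it periodically over the projector.
# Sylvester ordering and the example indices are assumptions of this sketch.
import numpy as np
from scipy.linalg import hadamard

def hadamard_basis_pattern(m0, n0, K=16):
    """Binary (0/1) basis pattern P_H of size K x K, following Eq. (23)."""
    H = hadamard(K)                              # K x K matrix with +/-1 entries
    basis = np.outer(H[m0], H[n0])               # separable 2-D Hadamard basis function
    return 0.5 * (1 + basis)                     # map {-1, +1} to {0, 1}

def periodic_extension(P, U=1920, V=1080):
    """Tile the K x K pattern over the full projector resolution, as in Eq. (25)."""
    K = P.shape[0]
    reps_v, reps_u = -(-V // K), -(-U // K)      # ceiling division
    return np.tile(P, (reps_v, reps_u))[:V, :U]

pattern = periodic_extension(hadamard_basis_pattern(3, 5))   # shape (1080, 1920)
```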

Then, by applying the inverse Hadamard transform as in Eq. (9), the scene reflectivity captured by one camera pixel, $r_c (u,v;x,y)$, is obtained, and the subpixel coordinate $({u_s}^{\prime },{v_s}^{\prime })$, with error caused by discontinuous reflectivity, is calculated by the grayscale centroid method.

Then, the object reflectivity captured by the projector, $r_p (u,v)$, is calculated by Eq. (13), and the subpixel coordinate $({u_s}^{\prime \prime },{v_s}^{\prime \prime })$, whose subpixel error is opposite to that of $({u_s}^{\prime },{v_s}^{\prime })$, is calculated. Thus, the accurate subpixel correspondence is established.

To sum up, the detailed reflectivity-based accurate 3-D reconstruction method is given as follows:

  • (1) Get a coarse pixel correspondence by projecting the fringe patterns designed by Eq. (1).
  • (2) Reconstruct the scene reflectivity $r_c(u,v;x,y)$ and calculate the initial subpixel coordinate $({u_s}^{\prime },{v_s}^{\prime })$ by projecting the Hadamard basis patterns designed by Eqs. (23)–(25).
  • (3) Calculate the modulated projector light intensity $r_p (u,v)$ by Eq. (13), and calculate the coordinate with the opposite error $({u_s}^{\prime \prime },{v_s}^{\prime \prime })$ by Eq. (14).
  • (4) Calculate the refined coordinate $({u_s},{v_s})$ through Eq. (17).

4. Experiment

We further test the performance of the proposed reflectivity-based accurate 3-D reconstruction method through the following experiments. A calibration board and a complex object with discontinuous reflectivity are first measured to evaluate the measurement accuracy. Then a bottle with discontinuous reflectivity and a colored toy are measured to verify the performance on complex surfaces.

The experimental configuration is illustrated in Fig. 3. The 3-D measurement system mainly includes a DLP6500 projector with a resolution of $1920\times 1080$ pixels, a Basler acA800-510um CMOS camera with a resolution of $800\times 600$ pixels, and a camera lens with a 16 mm focal length. The measured object is placed around 0.5 m away from the measurement system. For FPP, according to Eq. (1), $25$ phase-shifted fringe patterns are generated by selecting $f=1/20$ and $N=25$, and each measurement takes 2 seconds. For SIM, Fourier slice patterns and periodical extension patterns with a period of 16 pixels are used; $1536$ frames of Fourier slice patterns and $512$ frames of Hadamard basis patterns are projected, and each measurement takes $102$ seconds. For the proposed method, the fringe patterns of FPP and the periodical extension patterns with a period of 16 pixels are projected, and each measurement takes $8$ seconds.

Fig. 3. Experimental setup. The digital projector projects patterns onto the measured object, and the camera captures light reflected by the object’s surface.

4.1 Accuracy evaluation

We first verify the 3-D measurement accuracy of the proposed reflectivity-based accurate 3-D reconstruction method. The plane fitting error of the measured calibration board and the measurement error of a complex surface are both used to evaluate the measurement accuracy.

4.1.1 Accuracy evaluation by a calibration board

We first verify the influence of image resolution and fringe frequency on the 3-D reconstruction accuracy of FPP by measuring a dot calibration board. The details of the 3-D shapes reconstructed with different resolutions are compared, and the plane fitting error is compared for the 3-D shapes reconstructed using fringes with four different frequencies. The experimental results are shown in Fig. 4.

Fig. 4. The 3-D reconstruction result with different captured image resolution and different fringe frequencies. (a) the measured calibration board, (b)-(d) the calibration board reconstructed with the captured image resolution of $200\times 200$ pixels, $520\times 520$ pixels, and $1800\times 1800$ pixels, (e)-(h) the plane fitting error distribution of 3-D shape measured by fringe with frequency of $f=1/20$, $f=1/30$, $f=1/40$ and $f=1/60$.

In Fig. 4 (b), the 3-D shape reconstructed from the low resolution fringe images has obvious errors around the discontinuous reflectivity edges. In Fig. 4 (c) and (d), these errors become smaller as the fringe image resolution increases. From Fig. 4 (e)-(h), it can be concluded that the plane fitting error decreases as the fringe frequency increases. However, the error at the discontinuous reflectivity edges cannot be eliminated by adjusting the fringe frequency. This is consistent with the conclusion drawn from Eq. (7).

Then, the measurement accuracy of the proposed method is verified. First, sinusoidal fringe patterns are projected to calculate the coarse pixel correspondence; the vertical and horizontal coarse pixel correspondences are shown in Fig. 5 (a) and (b), respectively. Figure 5 (c) and (d) show the errors of the vertical and horizontal coarse pixel correspondences, respectively. It can be concluded that the pixel correspondence error is less than 1 pixel, which provides an accurate coarse pixel correspondence. Then, periodically extended Hadamard basis patterns are projected to calculate the reflectivity captured by each camera pixel. The calculated captured reflectivity is periodically distributed, as shown in Fig. 5 (e). With the help of the coarse pixel correspondence established by FPP, the needed reflectivity can be extracted, which is inside the yellow block in Fig. 5 (e). Then, the subpixel correspondence is established, as shown in Fig. 5 (f). From the calculated reflectivity, the scene reflectivity is calculated, as shown in Fig. 5 (g). The scene reflectivity is used to normalize the calculated reflectivity, and the DR error can be compensated through the normalized reflectivity. The pixel correspondence after DR error compensation is shown in Fig. 5 (h).

Fig. 5. Subpixel correspondence establishment result for the measured calibration board. (a) the vertical coarse pixel correspondence established by FPP, (b) the horizontal coarse pixel correspondence established by FPP, (c) error distribution of (a), (d) error distribution of (b), (e) the captured reflectivity calculated by HSI, (f) the calculated subpixel correspondence, (g) the scene reflectivity map, and (h) the accurate subpixel correspondence after DR error removal.

The error distributions of the 3-D shapes reconstructed by the three methods are shown in Fig. 6. As shown in Fig. 6 (d) and (e), the plane fitting error of the 3-D shape reconstructed by FPP, i.e., 0.071 mm, is higher than that of SIM, i.e., 0.043 mm. A possible reason is that SIM performs better in low reflectivity regions. Besides, there are significant errors at the discontinuous reflectivity edges. As shown in Fig. 6 (f), the plane fitting error at the discontinuous reflectivity edges is reduced significantly, and the plane fitting error of the 3-D shape reconstructed by the proposed reflectivity-based accurate 3-D reconstruction method is reduced to 0.033 mm. The last row of Fig. 6 shows the plane fitting error along the blue line drawn in the second row. It is clear that the plane fitting error of the 3-D shape reconstructed by the proposed method is uniformly distributed, while those of the 3-D shapes reconstructed by traditional FPP and SIM have significant errors at the discontinuous reflectivity edges.

Fig. 6. Accuracy evaluation. The 3-D shape reconstructed by (a) FPP, (b) traditional SIM and (c) the proposed reflectivity-based accurate 3-D reconstruction method, the error distribution of 3-D shape reconstructed by (d) FPP, (e) traditional SIM and (f) the reflectivity-based accurate 3-D reconstruction method.

4.1.2 Accuracy evaluation of a complex surface

Then, a complex object with discontinuous reflectivity is measured to compare the reconstruction accuracy and efficiency of traditional FPP, SIM and the proposed method. To verify the measurement accuracy, a white object is used in this experiment, as shown in Fig. 7 (b). The white object is first measured by traditional FPP to obtain the ground truth of the object's 3-D shape, as shown in Fig. 7 (a). Then the white object is painted with black paint to create discontinuous reflectivity, as shown in Fig. 7 (c). The object with discontinuous reflectivity is measured by traditional FPP, SIM and the proposed method. The experimental results are shown in Fig. 7.

Fig. 7. Measurement result of a white object painted by black paint. (a) 3-D shape of the white object, (b) captured image of the white object, (c) captured image of the painted object, (d)-(f) 3-D shape of the painted object measured by FPP, traditional SIM, and the proposed method, (h) measurement error of (d), (i) measurement error of (e) and (j) measurement error of (f).

Figure 7 (d) and (h) show the measurement result and the measurement error of traditional FPP. It can be concluded that discontinuous reflectivity causes measurement errors at the discontinuous reflectivity edges. Figure 7 (e) and (i) show the measurement result and the measurement error of SIM; the result is similar to that of FPP, with significant errors at the discontinuous reflectivity edges. Figure 7 (f) and (j) show the measurement result and the measurement error of the proposed method. Compared with the other two methods, the measurement error is significantly reduced by the proposed method.

The measurement error and the measurement time of traditional FPP, SIM and the proposed method are shown in Table 1. The measurement error is calculated from the painted area, which is inside the blue block in Fig. 7 (c). From Table 1, it can be concluded that FPP is fast but has significant error for objects with discontinuous reflectivity; traditional SIM slightly reduces the measurement error but suffers from a very long data acquisition time. The proposed method achieves high-accuracy measurement for objects with discontinuous reflectivity and takes much less time than traditional SIM.

Table 1. Comparison of time and accuracy of the traditional FPP, SIM and the proposed method.

4.2 3-D measurement of object with complex surface

A black bottle with white words on it and a colored toy with details are measured to verify the performance of the proposed reflectivity-based accurate 3-D reconstruction method. Traditional FPP is also used to measure the objects to compare with the proposed method. The experimental results are exhibited in Fig. 8.

Fig. 8. 3-D shape of an object with complex surface. (a) captured scene, (b) the 3-D shape reconstructed by FPP and (c) the 3-D shape reconstructed by the proposed reflectivity-based accurate 3-D reconstruction method.

In the first row of Fig. 8, a black bottle with white words is measured. In the red block, it is clear that at the border between black and white there are errors in the 3-D shape reconstructed by traditional FPP, while the errors are smaller in the 3-D shape reconstructed by the proposed method. In the second row of Fig. 8, a colored toy is measured. In the orange block, at the black line, there is a concave artifact in the 3-D shape reconstructed by FPP, while the 3-D shape reconstructed by the proposed method is smooth. In the yellow block, the black area is raised in the 3-D shape reconstructed by FPP, while the 3-D shape reconstructed by the proposed method is smooth.

5. Conclusion

In this paper, the error models of FPP and SIM are constructed. From the constructed error models, it can be concluded that the DR errors of both FPP and SIM are influenced by the system PSF and the scene reflectivity. Because the scene reflectivity is unknown, it is hard to alleviate the DR error in FPP. Meanwhile, the scene reflectivity can be reconstructed by SIM. As analyzed in the error model of SIM, the pixel correspondence calculated from the normalized reflectivity has an error opposite to that calculated from the original reflectivity. Based on this, a DR error removal method based on reflectivity normalization is proposed, which first reconstructs the scene reflectivity by traditional SIM and then normalizes the captured reflectivity for error compensation. To accelerate the data acquisition of SIM, FPP is introduced to obtain the coarse pixel correspondence. As shown in the experiments, compared with traditional SIM and FPP, the measurement error is reduced to $60\%$ and $46\%$ of theirs, respectively, and the measurement time is reduced to $8\%$ of that of traditional SIM.

Funding

National Natural Science Foundation of China (62031018, 62201261); Jiangsu Provincial Key Research and Development Program (BE2022391); China Postdoctoral Science Foundation (2022M721620); Jiangsu Funding Program for Excellent Postdoctoral Talent (2022ZB256).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognition 43(8), 2666–2680 (2010). [CrossRef]  

2. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Laser Eng. 48(2), 149–158 (2010). [CrossRef]  

3. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Laser Eng. 48(2), 191–204 (2010). [CrossRef]  

4. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Laser Eng. 106, 119–131 (2018). [CrossRef]  

5. T. Miyasaka, K. Kuroda, M. Hirose, and K. Araki, “High speed 3-d measurement system using incoherent light source for human performance analysis,” International Archives of Photogrammetry and Remote Sensing 33, 547–551 (2000).

6. D. Zheng, Q. Kemao, F. Da, and H. S. Seah, “Ternary gray code-based phase unwrapping for 3d measurement using binary patterns with projector defocusing,” Appl. Opt. 56(13), 3660–3665 (2017). [CrossRef]  

7. J. Gühring, “Dense 3d surface acquisition by structured light using off-the-shelf components,” in Videometrics and Optical Methods for 3D Shape Measurement, vol. 4309 (SPIE, 2000), pp. 220–231.

8. C. Zuo, Q. Chen, G. Gu, S. Feng, and F. Feng, “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express 20(17), 19493–19510 (2012). [CrossRef]  

9. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016). [CrossRef]  

10. J. Salvi, J. Pages, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognition 37(4), 827–849 (2004). [CrossRef]  

11. J. Batlle, E. Mouaddib, and J. Salvi, “Recent progress in coded structured light as a technique to solve the correspondence problem: a survey,” Pattern Recognition 31(7), 963–982 (1998). [CrossRef]  

12. Y. Wu, X. Cai, J. Zhu, H. Yue, and X. Shao, “Analysis and reduction of the phase error caused by the non-impulse system psf in fringe projection profilometry,” Opt. Laser Eng. 127, 105987 (2020). [CrossRef]  

13. H. Yue, H. G. Dantanarayana, Y. Wu, and J. M. Huntley, “Reduction of systematic errors in structured light metrology at discontinuities in surface reflectivity,” Opt. Laser Eng. 112, 68–76 (2019). [CrossRef]  

14. Y. Wu, H. G. Dantanarayana, H. Yue, and J. M. Huntley, “Accurate characterisation of hole size and location by projected fringe profilometry,” Meas. Sci. Technol. 29(6), 065010 (2018). [CrossRef]  

15. J. Burke and L. Zhong, “Suppression of contrast-related artefacts in phase-measuring structured light techniques,” in Optical Measurement Systems for Industrial Inspection X, vol. 10329 (SPIE, 2017), pp. 196–207.

16. L. Rao and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018). [CrossRef]  

17. C. Hu, S. Liu, D. Wu, and J. Xu, “Phase error model and compensation method for reflectivity and distance discontinuities in fringe projection profilometry,” Opt. Express 31(3), 4405–4422 (2023). [CrossRef]  

18. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007). [CrossRef]  

19. V. Srinivasan, H.-C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-d diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

20. N. Joshi, R. Szeliski, and D. J. Kriegman, “Psf estimation using sharp edge prediction,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2008), pp. 1–8.

21. A. G. Marrugo, F. Gao, and S. Zhang, “State-of-the-art active optical techniques for three-dimensional surface metrology: a review,” J. Opt. Soc. Am. A 37(9), B60–B77 (2020). [CrossRef]  

22. H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, “Parallel single-pixel imaging: A general method for direct–global separation and 3d shape reconstruction under strong global illumination,” Int. J. Comput. Vis. 129(4), 1060–1086 (2021). [CrossRef]  

23. Y. Wang, H. Zhao, H. Jiang, X. Li, Y. Li, and Y. Xu, “Paraxial 3d shape measurement using parallel single-pixel imaging,” Opt. Express 29(19), 30543–30557 (2021). [CrossRef]  

24. X. Yang, Y. Liu, X. Mou, T. Hu, F. Yuan, and E. Cheng, “Imaging in turbid water based on a hadamard single-pixel imaging system,” Opt. Express 29(8), 12010–12023 (2021). [CrossRef]  

25. C. Zhuoran, Z. Honglin, J. Min, W. Gang, and S. Jingshi, “An improved hadamard measurement matrix based on walsh code for compressive sensing,” in 2013 9th International Conference on Information, Communications & Signal Processing, (IEEE, 2013), pp. 1–4.

26. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

27. Z. Qiu, Z. Zhang, J. Zhong, et al., “Comprehensive comparison of single-pixel imaging methods,” Opt. Laser Eng. 134, 106301 (2020). [CrossRef]  

28. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

29. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

30. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]  

31. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

32. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Opt. Lett. 41(11), 2497–2500 (2016). [CrossRef]  

33. Y. Ma, Y. Yin, S. Jiang, X. Li, F. Huang, and B. Sun, “Single pixel 3d imaging with phase-shifting fringe projection,” Opt. Laser Eng. 140, 106532 (2021). [CrossRef]  

34. J. Li, Y. Zheng, L. Liu, and B. Li, “4d line-scan hyperspectral imaging,” Opt. Express 29(21), 34835–34849 (2021). [CrossRef]  
