
High dynamic reflection surface 3D reconstruction with sharing phase demodulation mechanism and multi-indicators guided phase domain fusion

Open Access

Abstract

Accurate and complete 3D measurement of complex high dynamic range (HDR) surfaces has long been challenging for structured light projection techniques. Spraying a layer of diffuse reflection material inevitably incurs additional thickness, while existing methods that rely on additional facilities increase the cost of the hardware system. Algorithm-based methods are cost-effective and nondestructive, but they generally require redundant patterns for image fusion or model training, which makes them unsuitable for automated 3D measurement of complex HDR surfaces. In this paper, an HDR surface 3D reconstruction method based on a sharing phase demodulation mechanism and a multi-indicators guided phase fusion strategy is proposed. The division of the exposure interval is optimized via the image entropy to generate an optimal exposure sequence. The combination of temporal-spatial binary (TSB) encoded fringe patterns with a time-integration strategy and the variable exposure mode of a digital micromirror device (DMD)-based projector, with a minimum projection exposure time of 235 μs, enables the proposed approach to adapt broadly to complex HDR surfaces. We propose an efficient phase analysis solution, called the sharing mechanism, in which the wrapped phase sequences computed from fringe images captured at different intensities are unwrapped by sharing the same group of misaligned Gray code (MGC) decoding results. Finally, a phase sequence fusion model guided by multiple indicators, including exposure quality, phase gradient smoothness and pixel effectiveness, is established to obtain an optimum phase map for final 3D reconstruction. Comparative experiments indicate that the proposed method can completely restore the 3D topography of HDR surfaces with a reduction of at least 65% in the number of images, and that the measurement integrity is maintained above 98% while the measurement accuracy is preserved and outliers are excluded.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Structured light 3D measurement technology is widely used in industrial design, reverse engineering, cultural relic protection and other fields [1,2] because of its non-contact nature, low cost and fast speed. However, 3D topography measurement of HDR surfaces remains a very challenging task. Existing structured light methods are generally suited to topography measurement of diffusely reflecting surfaces. When measuring an HDR surface, the limited intensity response range of the camera makes it difficult to balance highly reflective and low-response areas under a single exposure. Large areas of data loss may appear in the 3D reconstruction results, which seriously affects the measurement accuracy and coverage. One way to solve this problem is to spray a layer of diffuse reflection material (for example, TiO2 nanoscale powder) onto the measured surface to eliminate the saturation effect of the mirror-like reflection areas and enhance the low reflection areas, ensuring effective modulation of the projected patterns. However, because the diffuse material itself introduces additional thickness and possible corrosion, and because the spraying procedure is generally time-consuming and laborious, this measure is not suitable for certain application scenarios, such as cultural relic protection and high-precision online device processing.

In recent years, many optical HDR surface 3D measurement methods have emerged that can effectively recover the 3D topography without making contact with the measured surface. Hardware-based methods introduce additional hardware facilities into the measurement process to obtain high-quality reflected demodulated images, typically including polarization filter (PF) methods [3–5] and photometric stereo (PS) methods [6,7]. The polarization filtering method adds a PF in front of the camera to mitigate saturation in the captured reflected images; however, the intensity of the non-saturated areas is simultaneously weakened. PS technology places light sources in different directions around the tested target and integrates multiple images from a fixed viewpoint to obtain high-quality reflected images. Both PF and PS require additional special devices in the measurement process, which inevitably increases the cost of the hardware system to some extent.

Obtaining high-quality reflected fringe images without introducing additional hardware has become an urgent problem in the field of HDR surface 3D measurement. Some scholars have adjusted the projection intensity to cope with HDR surfaces. Li et al. [8] proposed an adaptive fringe projection intensity adjustment (AFPIA) method that establishes the mapping between the projection image coordinate system (projector) and the reflected image coordinate system (camera) according to the local reflectivity of the tested surface, and then optimally adjusts the intensity of the projected patterns pixel by pixel. Subsequent researchers improved AFPIA methods in two aspects: coordinate mapping and optimal projection intensity calculation. Coordinate mapping methods include those based on orthogonal sinusoidal fringes [9–12], homography matrices [13], and Gray code plus phase shifting [14]. Optimal intensity calculation methods include uniform partition of gray levels [12,15], polynomial fitting [16], and camera response function fitting [17]. AFPIA methods can effectively recover the 3D topography of an HDR surface, but the coordinate mapping has to be re-calculated whenever the tested target changes, making them difficult and infeasible for automated measurement.

Multi-exposure fusion is another effective solution for HDR 3D topography measurement. Compared with AFPIA methods, it obtains high-quality HDR images by integrating or fusing reflected images captured at different intensities by varying the exposure of the projector or the camera. It is simple to operate, and no additional structured patterns are projected during the measurement, which makes it more suitable for automated measurement without manual intervention. One key question for this method is how to fuse image sequences of different intensities so as to maintain the reconstruction accuracy and coverage as far as possible. Following the principle of selecting the brightest but unsaturated intensity at each pixel, Zhang et al. [18] fused image sequences (23 groups) collected under different exposures into a group of self-defined “well-exposed” HDR images for the final 3D reconstruction. However, this requires subjective adjustment of the camera exposure time and a large number of images, two factors that call for further optimization and investigation. In particular, only the selection of the optimal intensity or modulation, rather than the phase extraction itself, is considered during the image fusion procedure, which may leave the demodulated unwrapped phase suboptimal.

To reduce the number of projection patterns during measurement, researchers have proposed a series of strategies to reduce the number of exposure groups. Some have tried to divide the exposure interval more reasonably by making use of the reflective properties of the object surface. Feng et al. [19] proposed predicting the optimal exposure time and the number of exposure groups from the histogram distribution. Rao et al. [20] reported a fully automated exposure technique that iteratively calculates the optimal exposure time from the histogram of the captured image. However, both methods are only applicable to objects with a distinct histogram distribution. Additionally, some researchers [21,22] adopted a color camera to separate one color image into three monochrome images of different intensities. All of these methods effectively reduce the number of projection patterns, but most use a single indicator to fuse the captured images, which is vulnerable to the nonlinear response of the camera, image noise and other factors. Therefore, Zhang et al. [23] established a hybrid quality-guided phase fusion framework by weighting the initial phase maps obtained under different camera exposure times. However, this method uses the three-frequency four-step phase-shifting algorithm to analyze the phase, which may fail to guarantee reliable phase extraction in underexposed and overexposed reflection regions when the number of exposure groups is small. Furthermore, substantial image redundancy arises because additional low- and middle-frequency sinusoidal fringe patterns must be re-projected at each exposure for phase demodulation, resulting in low 3D measurement efficiency.

Recently, some scholars have also tried to introduce deep learning into 3D topography measurement of HDR surfaces [24–27]. With the help of a convolutional neural network (CNN), Zhang et al. [24] could accurately extract phase information at low signal-to-noise ratio (SNR) and in the presence of image saturation. Yang et al. [25] designed a modified HDR image network and introduced a U-net based low-key region detection module to predict the phase information of the high and low reflection regions across the captured reflected images. Liu et al. [26] proposed a method to recover the 3D topography of an HDR surface from a single exposure image based on SP-CAN, a network with a skip pyramid structure that enhances the single exposure image while effectively preserving the phase information at corners during image enhancement. These methods do not require additional hardware facilities, but a large amount of training data generally has to be collected by a given real or virtual 3D system for model training; the performance and adaptability rely heavily on the input data, and the generalization remains to be observed when the 3D system and the tested scenario vary completely.

In summary, spraying diffuse reflection material inevitably adds thickness and risks corroding the tested object, and existing methods requiring additional facilities increase the cost of the hardware system. The cost-effective and nondestructive algorithm-based methods generally either manually adjust the optimal-intensity projection patterns, require substantial projected patterns to implement image fusion, or collect massive image sets for deep learning model training. Meanwhile, the traditional step-by-step temporal phase unwrapping algorithm requires extra projection of low-frequency and intermediate-frequency phase-shifting fringes for each exposure group. This demands a large number of projection images during measurement and introduces image redundancy, which fails to meet the needs of automated 3D measurement for complex HDR surfaces. The purpose of this paper is to propose an HDR surface 3D reconstruction method based on a sharing phase demodulation mechanism and a multi-indicators guided phase fusion strategy. The sharing phase demodulation mechanism is designed to eliminate the redundancy mentioned above: we add a set of MGC patterns to the middle group of the whole exposure sequence, obtain the decoding result of the MGC patterns, and then use it to unwrap the wrapped phase of all exposure groups. As shown in Fig. 1, the architecture of the proposed approach can be outlined in several aspects:

  • (1) The temporal-spatial binary (TSB) coding method [28] is adopted, which encodes a sinusoidal fringe pattern into multiple binary (0/255) patterns; the sinusoidal fringe images are then yielded through slightly defocused projection and a time-integration strategy, maintaining almost the same depth measurement range as standard sinusoidal fringes. To generate an optimal exposure sequence for TSB pattern projection, the image entropy is introduced as the evaluation criterion to guide the division of the exposure interval.
  • (2) The projection exposure sequence is generated by the variable exposure mode of the digital micromirror device (DMD)-based projector with a minimum projection exposure time of $T_p$ = 235 $\mu$s for TSB patterns. The camera can capture image sequences $\{I_{n,k}\}$ spanning a broader intensity range without frequently changing the camera exposure and projection intensity, which gives the system extremely broad adaptability to HDR surfaces. Meanwhile, the MGC images are captured under an appropriate exposure to assist the sharing phase demodulation and unwrapping process;
  • (3) Through the introduction of the MGC-based temporal phase unwrapping algorithm, we put forward a phase analysis strategy based on a sharing mechanism: the wrapped phase sequences from fringe images captured at different intensities are unwrapped using the same set of MGC decoding results obtained at a carefully selected exposure, significantly improving the phase extraction reliability in underexposed and overexposed areas and effectively reducing the image redundancy for 3D reconstruction;
  • (4) We propose a phase sequence fusion model guided by multiple indicators, including exposure quality, phase gradient smoothness and pixel effectiveness. The pixel selection strategy involved in the fusion model combines the modulation and the intensity. After fusion, a phase error mask is applied to the fused phase to eliminate the phase outliers caused by the phase demodulation process.

Fig. 1. The overall architecture of the proposed HDR surface 3D reconstruction method based on sharing phase demodulation mechanism and multi-indicators guided phase fusion strategy.

Comparative experiments indicate that the proposed method can restore the 3D topography of HDR surfaces and significantly improve the measurement efficiency and the reconstruction coverage of HDR areas while preserving the measurement accuracy.

2. Principles

2.1 Generation of projection patterns sequence

Most traditional multiple exposure methods obtain reflected fringe images of different intensities by adjusting the exposure time of the camera during measurement, which in effect frequently switches the acquisition rate of the camera. Additionally, subjective adjustment of the exposure time tends to produce image redundancy, which is not conducive to improving measurement efficiency. For HDR workpieces characterized by both low and high reflection areas, it is extremely difficult for projected sinusoidal fringe patterns and available off-the-shelf cameras to acquire an HDR image sequence that meets the requirements of 3D reconstruction. To our knowledge, the projection rate of a DMD projector is limited to 120 Hz (DLP4500) for 8-bit sinusoidal fringe patterns, leading to a theoretical minimum projection exposure time of 1/120 s, which may fail to meet the illumination requirements of highly reflective surfaces. However, a minimum projection exposure time of 1/4225 s can be achieved if binary patterns are adopted. Under such conditions, adjusting the exposure of the projector without switching the acquisition rates of the projector and the camera is the wiser option. In this paper, TSB patterns [28] plus MGC patterns are adopted for accurate HDR measurement. Fortunately, the “variable exposure” working mode of the DMD projector is readily available, so this scheme does not add complexity to the 3D system in terms of hardware control or algorithm design.

To improve the 3D reconstruction efficiency from the perspective of collecting fringe images of different intensities, the projection sequence is created on the basis of the image entropy $J$ [29]. As shown in Fig. 2, the image entropy curve first increases and then decreases as the projection exposure time increases. We divide the image entropy at equal intervals into $K$ parts from 0 to the maximum value $J_K$, obtain the corresponding exposure time $T_n$ for each entropy level $J_n$ from the image entropy curve, and thereby complete the generation of the optimal projection exposure sequence. The division of the image exposure interval can thus be optimally determined under the variable exposure mode of the DMD projector, and the reflected fringe image sequence can be acquired without changing the acquisition rate of the camera. To realize the MGC-based sharing phase demodulation mechanism described below, MGC patterns (see Section 2.2) are inserted into the middle group (the 9th group, $T_p$ = 8 ms) of the projection sequence to assist the phase unwrapping of the high frequency fringe images $\{I_{n,k}\}$ captured under different projection exposures. The choice of the 9th group for the MGC patterns is based on prior experience: in the middle part of the exposure sequence, the unwrapped phase obtained with the supplementary MGC projection is almost identical across groups, so we favor the group with the longer exposure time within that middle range.
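As a concrete illustration of this exposure-sequence generation, the following Python sketch (the helper names are ours, and the entropy-versus-exposure curve is assumed to have been sampled beforehand by capturing the scene at a sweep of exposure times) divides the entropy range into $K$ equal levels and inverts the rising branch of the curve to obtain the exposure times:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def exposure_sequence(exposures_ms, entropies, K=15):
    """Divide the entropy range into K equal levels and map each level back
    to an exposure time via the rising branch of the entropy curve J(T)."""
    exposures_ms = np.asarray(exposures_ms, dtype=float)
    entropies = np.asarray(entropies, dtype=float)
    peak = int(np.argmax(entropies))             # entropy first rises, then falls
    t_rise, j_rise = exposures_ms[:peak + 1], entropies[:peak + 1]
    levels = np.linspace(j_rise[0], j_rise[-1], K)
    return np.interp(levels, j_rise, t_rise)     # invert J(T) on the rising branch
```

With $K$ = 15 and a measured curve, this would yield an exposure list analogous to the 2.3–35 ms sequence used in Section 3.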

Fig. 2. The generation of the projection pattern sequence and acquisition of images for 3D measurement of HDR parts. The TSB patterns plus MGC patterns are projected with slight defocus using the “variable exposure” mode of the DMD projector.

In a common measurement process, the projector and the camera need to be triggered simultaneously to accurately capture the variable intensity of the projection. Since off-the-shelf cameras cannot easily work at extremely short exposure times, the automatic acquisition of reflected fringe images of different intensities in this paper is instead realized with the variable exposure mode of the DMD projector and a customized trigger signal. By adjusting the projection exposure time $T_p$ within the constant projection cycle $T$, captured images of variable intensity can be obtained, where the camera exposure time $T_c$ is fixed and slightly shorter than the projection cycle $T$.

2.2 Phase unwrapping via shared mechanism

2.2.1 Phase-shifting algorithm

In the measurement, the captured sinusoidal fringe image distorted by the measured HDR object can be represented by:

$$I_n \left( {{u_c},{v_c}} \right) = A\left( {{u_c},{v_c}} \right) + B\left( {{u_c},{v_c}} \right)\cos \left( {\varphi \left( {{u_c},{v_c}} \right) - {\delta _n}} \right),$$
where $(u_c,v_c)$ denotes the pixel coordinate, $A(u_c,v_c)$ and $B(u_c,v_c)$ are the background intensity and modulation intensity of the captured image, respectively, and $\delta _n = 2\pi (n-1)/Q$ is the shifted reference phase ($n=1,2,\ldots,Q$). $\varphi (u_c,v_c)$ is the wrapped phase to be solved, which can be calculated by the standard $Q$-step phase-shifting algorithm:
$$\varphi \left( {{u_c},{v_c}} \right) = \arctan \left[ {\tfrac{{\sum_{n = 1}^Q {{I_n}({u_c},{v_c})\sin \left( {{\delta _n}} \right)} }}{{\sum_{n = 1}^Q {{I_n}({u_c},{v_c})\cos \left( {{\delta _n}} \right)} }}} \right].$$
where $\varphi (u_c,v_c)$ is wrapped into $[-\pi, \pi ]$ because of the $\arctan[\cdot]$ operation. To obtain the continuous absolute phase distribution of the tested surface, it is necessary to determine the fringe order $p$ of each pixel in the wrapped phase, a process called phase unwrapping. The three-frequency temporal phase unwrapping algorithm [1] and the MGC-assisted temporal phase unwrapping algorithm [30] have both been widely applied; we select the latter for its reliability and efficiency on HDR surfaces, as described below.
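The wrapped phase of Eq. (2) is straightforward to compute. A minimal numpy sketch, assuming the $Q$ phase-shifted images are stacked along the first axis (we use arctan2 rather than a bare arctan so the result covers the full $[-\pi, \pi]$ range):

```python
import numpy as np

def wrapped_phase(images):
    """Q-step phase shifting per Eq. (2); images has shape (Q, H, W) with
    delta_n = 2*pi*(n-1)/Q. Returns the wrapped phase in [-pi, pi]."""
    Q = images.shape[0]
    delta = 2 * np.pi * np.arange(Q) / Q
    num = np.tensordot(np.sin(delta), images, axes=([0], [0]))
    den = np.tensordot(np.cos(delta), images, axes=([0], [0]))
    return np.arctan2(num, den)
```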

2.2.2 MGC-assisted phase unwrapping algorithm

Owing to the discreteness of Gray code (GC), the accuracy of point clouds reconstructed directly from GC often fails to meet the requirements of practical applications. Therefore, GC is usually used to label fringe orders and assist the phase unwrapping process. In actual measurement, however, disturbed by the limited depth of field (DoF) of the measurement system, camera noise and projector defocus, the black-to-white intensity transitions of the captured GC are not sharp (as marked by the dotted circle in Fig. 3), which tends to produce decoding errors, leading to fringe order errors and phase unwrapping errors.

Fig. 3. The process of MGC-assisted phase unwrapping.

To avoid this situation, we adopt an MGC-based decoding strategy in which the MGC patterns are displaced by half a fringe period relative to the traditional GC, so that the wrapping positions of the phase fall exactly at the centers of the MGC sub-intervals. As shown in Fig. 3, the boundaries of the wrapped phase are used to assist the correction of the fringe order during decoding: the red staircase solid line indicates the decoded fringe order $p$, and the green line is the corrected fringe order $p^{\prime }$. Because the MGC patterns are displaced by half a fringe period during encoding, the jump points of the sinusoidal fringe wrapped phase fall exactly at the centers of the GC code intervals, which effectively avoids decoding errors at the boundaries. We verify the reliability of MGC-assisted phase unwrapping for HDR surface 3D measurement on the basis of TSB patterns under nearly focused projection.
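A compact sketch of this fringe order correction, under the assumption (our reading of Fig. 3) that each decoded MGC code interval is centered on a $2\pi$ discontinuity of the wrapped phase; the exact sign convention may differ in the authors' implementation:

```python
import numpy as np

def correct_fringe_order(phi, p):
    """MGC-assisted fringe order correction.
    phi : wrapped phase in [-pi, pi]; p : decoded MGC order (half-period shifted).
    Within one MGC interval, the half before the 2*pi jump carries phi >= 0
    (order p) and the half after the jump carries phi < 0 (order p + 1)."""
    return np.where(phi >= 0, p, p + 1)

def unwrap(phi, p):
    """Absolute phase from the wrapped phase and the corrected fringe order."""
    return phi + 2 * np.pi * correct_fringe_order(phi, p)
```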

2.2.3 MGC-based sharing mechanism for phase unwrapping

Current multi-exposure fusion methods usually adopt the three-frequency phase unwrapping algorithm, which requires additional projection of low- and middle-frequency sinusoidal fringe patterns under each exposure intensity. As the number of exposure groups increases, the number of images required for 3D reconstruction grows significantly. At the same time, because low frequency fringes are easily affected by noise, phase unwrapping errors tend to propagate to the higher frequencies, causing unwrapping errors in the high frequency fringes used for the final 3D reconstruction. Considering that the MGC-based temporal phase unwrapping algorithm decodes through binary judgments, it can adapt to the broad dynamic range of intensity change that typically arises on HDR surfaces. For the sake of reducing the number of images and boosting measurement efficiency, this paper proposes a phase analysis strategy based on a sharing mechanism: the phase unwrapping of all reflected fringe image groups under different exposures shares the same MGC decoding result obtained at a proper projection exposure.

The projection exposure sequence is generated according to the method described in Section 2.1, and a set of MGC patterns corresponding to the high-frequency fringe period is inserted into the 9th group of the projection exposure sequence. The captured MGC images are decoded and corrected as described above. As shown in Fig. 4, the MGC decoding result is obtained once and shared by all captured fringe groups $\{I_{n,k}\}$ under different exposures to unwrap their corresponding wrapped phases $\{\varphi _k\}$. Finally, we acquire the original absolute phase sequence $\{\Phi _k\}$, where $k$ ($k=1,2,\ldots,K$) indexes the exposure group.
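With the helpers sketched above, the sharing mechanism itself reduces to a single loop: one MGC order map, decoded once at the 9th exposure, unwraps every exposure group (the hypothetical `wrapped_phase` and `unwrap` functions are carried over from the earlier sketches):

```python
import numpy as np

def unwrap_all_groups(image_stack, mgc_order):
    """image_stack: (K, Q, H, W) fringe images, K exposure groups of Q shifts.
    mgc_order: (H, W) fringe orders decoded once from the 9th exposure group
    and shared by every group instead of re-projecting unwrapping patterns."""
    return np.stack([unwrap(wrapped_phase(group), mgc_order)
                     for group in image_stack])   # (K, H, W) absolute phases
```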

Fig. 4. MGC-assisted phase unwrapping strategy via the sharing mechanism.

2.3 Phase sequence fusion guided by multi-indicators

After obtaining the unwrapped phase maps $\{\Phi _k\}$ of the high frequency fringe images under different exposures, and differing from traditional intensity-based fusion schemes, we propose a phase fusion model guided by multiple indicators, which jointly takes the modulation, the intensity and the phase gradient (monotonicity) into account in the pixel selection process. As illustrated in Fig. 5, the model couples three weighting factors: exposure quality, phase gradient smoothness and pixel effectiveness. Subsequently, the optimum fused phase map $\Phi ^{opt}$ is obtained after the fused phase map is screened with the Laplacian operator to exclude phase outliers.

Fig. 5. The flowchart of phase sequence fusion guided by multi-indicators.

(1) Exposure quality

The exposure quality $W_k^1\left ( {{u_c},{v_c}} \right )$ of a pixel is characterized by the product of its modulation and the ratio of well-exposed samples (intensity within the range 30–220) among the $Q$ phase-shifting steps. The higher the modulation, the less the current pixel is affected by noise.

$$\textstyle M_k \left( {{u_c},{v_c}} \right) = \tfrac{2}{Q} \sqrt {\left[ {{{\left( {\sum\limits_{n = 1}^Q {{I_{n,k}}\left( {{u_c},{v_c}} \right)\sin \left( {\tfrac{2\pi \left( {n - 1} \right)}{Q}} \right)} } \right)}^2} + {{\left( {\sum\limits_{n = 1}^Q {{I_{n,k}}\left( {{u_c},{v_c}} \right)\cos \left( {\tfrac{2\pi \left( {n - 1} \right)}{Q}} \right)} } \right)}^2}} \right]},$$
$$\textstyle W_k^1( u_c,v_c ) = \exp \left( { - \tfrac{{{{\left( {g/Q} \right)}^2}}}{{2{\sigma ^2}}}} \right) \cdot M_k ( u_c,v_c ).$$
where $Q$ is the number of phase-shifting steps; $I_{n,k}$ denotes the high frequency phase-shifting fringe images under a single projection exposure $T_k^p$; $g$ is the number of poorly-exposed samples among the $Q$ samples involved in the phase-shifting fringe images; and $\sigma$ is a constant, usually set to 0.5 [23].
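A sketch of Eqs. (3)–(4); the image stack layout follows the earlier listings, and the 30–220 well-exposed range and $\sigma$ = 0.5 follow the text:

```python
import numpy as np

def modulation(images):
    """Eq. (3): fringe modulation M_k from Q phase-shifted images (Q, H, W)."""
    Q = images.shape[0]
    delta = 2 * np.pi * np.arange(Q) / Q
    s = np.tensordot(np.sin(delta), images, axes=([0], [0]))
    c = np.tensordot(np.cos(delta), images, axes=([0], [0]))
    return (2.0 / Q) * np.sqrt(s**2 + c**2)

def exposure_quality(images, sigma=0.5):
    """Eq. (4): Gaussian penalty on the fraction g/Q of poorly exposed
    samples (outside 30..220), multiplied by the modulation."""
    Q = images.shape[0]
    g = np.sum((images < 30) | (images > 220), axis=0)  # poorly exposed count
    return np.exp(-((g / Q) ** 2) / (2 * sigma**2)) * modulation(images)
```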

(2) Phase gradient smoothness

When the phase gradient of a pixel differs markedly from that of its surrounding pixels, 3D calculation errors inevitably appear. In this paper, such errors are suppressed through phase gradient smoothing during phase fusion:

$$W_k^2({u_c},{v_c}) = {\Phi _k} \ast L \ast G,$$
$${L_{x,y}} = \left\{ \begin{array}{ll} {S^2} - 1, & x = y = 0\\ - 1, & else\\ \end{array} \right.,$$
$${G_{x,y}} = \frac{1}{{2\pi {\sigma_g ^2}}}\exp ( - \frac{{{x^2} + {y^2}}}{{2{\sigma_g ^2}}}).$$
where $L$ is the Laplacian operator and $x$ and $y$ are the position indices within it; $G$ is the Gaussian operator that suppresses the influence of noise; $S$ is the size of the Laplacian operator, here $S=5$; and $\sigma _g$ is the standard deviation of the Gaussian operator.
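A sketch of Eqs. (5)–(7) using scipy, reading Eq. (5) as a double convolution of the phase map with $L$ and $G$; $\sigma_g$ is left as a free parameter since the paper does not state its value:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def laplacian_kernel(S=5):
    """Eq. (6): S x S kernel with center value S^2 - 1 and -1 elsewhere."""
    L = -np.ones((S, S))
    L[S // 2, S // 2] = S * S - 1
    return L

def gradient_smoothness(phi_k, S=5, sigma_g=1.0):
    """Eqs. (5)-(7): W2 = Phi_k * L * G. Large values flag pixels whose
    phase gradient deviates sharply from their neighborhood."""
    lap = convolve(phi_k, laplacian_kernel(S), mode='nearest')
    return gaussian_filter(lap, sigma=sigma_g)   # Gaussian G suppresses noise
```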

(3) The pixel effectiveness

To ensure the effectiveness of the fused results, this paper introduces the concept of effective pixels. A pixel is effective if the following conditions are met: 1) the number of gray values of the pixel across the reflected fringe images that fall within the set range (0–248) is greater than $Q/2$ (see Eq. (8)); and 2) the modulation of the pixel is greater than $\overline {{M_k}} \times c$, where $c$ is a scaling parameter controlling the modulation threshold of the fringe images under a single exposure (here $c$ = 0.1 in our experiments). As a guideline, for objects with small changes in surface reflectivity, $c$ can be chosen around 0.5; for objects with large changes in surface reflectivity, a more conservative choice around 0.1 is advisable.

$$Mask_k = \left\{ \begin{array}{ll} 0, & {M_k}({u_c},{v_c}) \le \overline {{M_k}} \times c \quad or \quad q \le \frac{Q}{2}\\ 1, & else \\ \end{array} \right.,$$
where $Mask_k$ is the pixel effectiveness mask for the fused phase; $M_k$ and $\overline {{M_k}}$ are, respectively, the modulation of the $Q$-step phase-shifting fringe images and its average under the $k$th exposure group; and $q$ is the number of pixel gray values within the given range (0–248 for 8-bit images).
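A sketch of Eq. (8), reusing the hypothetical `modulation` helper from the exposure quality listing:

```python
import numpy as np

def effectiveness_mask(images, c=0.1):
    """Eq. (8): a pixel is effective when more than Q/2 of its Q samples lie
    within 0..248 and its modulation exceeds c times the mean modulation."""
    Q = images.shape[0]
    q = np.sum(images <= 248, axis=0)            # unsaturated sample count
    M = modulation(images)
    return ((q > Q / 2) & (M > M.mean() * c)).astype(np.uint8)
```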

$W_k^1\left ( {{u_c},{v_c}} \right )$, $W_k^2\left ( {{u_c},{v_c}} \right )$ and $Mask_k({u_c},{v_c})$ are calculated from the captured image sequence $I_{n,k}$, the corresponding modulation $M_k$ and the unwrapped phase map $\Phi _k$. The coupled weight $W_k$ is then established to guide the phase fusion [23]:

$${W_k}\left( {{u_c},{v_c}} \right) = {\left[ {W_k^1\left( {{u_c},{v_c}} \right)} \right]^{{\lambda _1}}}{\left[ {W_k^2\left( {{u_c},{v_c}} \right)} \right]^{{\lambda _2}}}Mas{k_k}({u_c},{v_c}),$$
where $\lambda _1$ and $\lambda _2$ are the exponents controlling the proportion of the corresponding weights in the phase fusion. Since $W_k^1\left ( {{u_c},{v_c}} \right )$ is positively correlated with phase quality (the larger it is, the better the phase quality), while $W_k^2\left ( {{u_c},{v_c}} \right )$ is negatively correlated with phase quality (the larger it is, the larger the phase gradient and the worse the phase quality), $\lambda _1$ is generally set positive and $\lambda _2$ negative. $Mask_k({u_c},{v_c})$ labels the effective pixels and excludes the invalid ones (outliers).

With the normalized coupling weights applied to the original phase map sequence $\{\Phi _k\}$, we can obtain the fused phase:

$$\Phi \left( {{u_c},{v_c}} \right) = \sum {\left( {{W_k}\left( {{u_c},{v_c}} \right){\Phi _k}\left( {{u_c},{v_c}} \right)} \right)} /\sum {{W_k}\left( {{u_c},{v_c}} \right)},$$
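Eqs. (9)–(10) then combine per pixel. A sketch with $\lambda_1$ = 1 and $\lambda_2$ = -1 as placeholder exponents (the paper fixes only their signs), plus a small epsilon guard of our own since $W_k^2$ may be zero:

```python
import numpy as np

def fuse_phases(Phi, W1, W2, Mask, lam1=1.0, lam2=-1.0, eps=1e-12):
    """Eqs. (9)-(10): Phi, W1, W2, Mask are (K, H, W) stacks over the K
    exposure groups. Returns the per-pixel weighted fusion of the phases."""
    W = (W1 ** lam1) * ((np.abs(W2) + eps) ** lam2) * Mask  # coupled weight, Eq. (9)
    num = np.sum(W * Phi, axis=0)
    den = np.sum(W, axis=0)
    # NaN where no exposure contributed a valid pixel
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), np.nan)
```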

However, for pixels that undergo phase unwrapping errors throughout the entire exposure sequence, the fused phase will undoubtedly also be wrong; such pixels need to be detected and removed as outliers. We apply the Laplacian operator to the fused phase $\Phi$ to obtain its gradient distribution. Ideally, the calculated phase should be a continuous slope with a constant gradient, but in practice some outliers appear that lie far from the ideal phase slope, and their height changes sharply compared with the surrounding normal points. The gradient ${L \ast \Phi }$ at an outlier is therefore significantly higher than at its neighboring pixels, so these outliers can be detected and the corresponding positions in the fused phase set to NaN. On the basis of this assumption, we use $Mask_{Err}$ to exclude the outliers, and the final fused phase can be expressed as:

$${\Phi ^{opt}}\left( {{u_c},{v_c}} \right) = \Phi \left( {{u_c},{v_c}} \right) \cdot Mas{k_{Err}}({u_c},{v_c}).$$
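A sketch of this outlier screening (Eq. (11)); the threshold rule below, a multiple of the median Laplacian response, is our assumption, as the paper does not state one (`laplacian_kernel` is reused from the smoothness listing):

```python
import numpy as np
from scipy.ndimage import convolve

def remove_outliers(Phi_fused, tau=3.0, S=5):
    """Eq. (11): flag pixels whose Laplacian response is far above the
    typical level, then null them out of the fused phase (Mask_Err)."""
    lap = np.abs(convolve(np.nan_to_num(Phi_fused), laplacian_kernel(S),
                          mode='nearest'))
    mask_err = lap < tau * np.nanmedian(lap)     # 1 = keep, 0 = outlier
    return np.where(mask_err, Phi_fused, np.nan)
```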

3. Experiments and analysis

As shown in Fig. 6, we use a well-constructed optical 3D measurement system [31] consisting of a CMOS bi-telecentric camera (Model: MER2-502-79U3M, Resolution: 2448 x 2048) and a Scheimpflug projector (Model: DLP LightCrafter 4500, Resolution: 912 x 1140) to verify the effectiveness and feasibility of the proposed method. The projector is equipped with a pinhole-based lens, while the camera is equipped with a bi-telecentric lens (Model: DTCA230-37C, Working distance: 110 mm, Magnification: 0.3x). The angle between the optical axes of the camera and the projector is 22.5$^{\circ }$. The measurement range of the system is 24 mm (W) x 28 mm (L) x 6 mm (H). The projector has a frame rate limit of 120 Hz, corresponding to a minimum projection exposure time of 8333 $\mu$s, when projecting 8-bit sinusoidal fringe patterns, while 1-bit binary patterns can be projected at 4225 Hz with a corresponding minimum projection exposure time of 235 $\mu$s, which makes the choice of camera exposure time more flexible. Therefore, in all experiments we adopt the TSB encoding method [28] to encode the three-frequency four-step phase-shifting sinusoidal fringe patterns (twelve 8-bit patterns in total) into 24 1-bit binary patterns; approximate sinusoidal fringe images are achieved by slightly defocused projection, giving the camera the ability to capture an HDR image sequence, theoretically starting from the minimum projection exposure time of 235 $\mu$s. Making full use of this merit, the coverage and accuracy of measurement can be guaranteed when faced with low-reflection, high-reflection or HDR areas. We use a watch part with typical HDR surfaces to confirm this idea.

Fig. 6. The experimental system consisting of a bi-telecentric camera and a Scheimpflug projector, which enjoys the maximum common focus area between them for high accuracy full-field measurement.

The period of the high frequency sinusoidal fringe pattern encoded by TSB during the measurement is 24 pixels, and the wrapped phase is calculated using the four-step phase-shifting algorithm. Before measuring an HDR part, $Exp\_{End}$ = 35 ms and $Exp\_{Start}$ = 2.3 ms are determined (see Section 2.1) for the projection exposure. The minimum exposure time is set to 2.3 ms because the theoretical minimum projection exposure time of 235 $\mu$s produces a very dark image of low overall brightness, causing the fringe information to be submerged in noise. In order to make full comparisons with existing methods, the number of exposure groups is set to 15. The projection exposure sequence is generated according to the method described in Section 2.1: the actual projection exposure times $T_p$ are 2.3 ms, 2.8 ms, 3.5 ms, 3.9 ms, 4.5 ms, 5.1 ms, 5.6 ms, 6.7 ms, 8 ms, 11.6 ms, 14.5 ms, 18 ms, 21 ms, 25 ms and 35 ms, the projection period $T$ is set to 40 ms, and the camera exposure time is fixed to $T_c$ = 38 ms, slightly less than the projection period $T$. The slightly defocused projector projects TSB patterns under the 15 projection exposure times, and MGC patterns under the median of the projection exposure time sequence, onto the measured surface, while the fringe images distorted by the HDR object surface are captured by the camera with time-integral imaging [28] and the designed trigger signal. Subsequently, these captured fringe images undergo phase unwrapping by sharing the same MGC decoding result (see Section 2.2), and the finally fused absolute phase is converted into height information with Xiao's calibration method [31].

3.1 Feasibility test of sharing mechanism-based phase unwrapping via MGC

To verify the effectiveness of the proposed sharing mechanism-based phase analysis strategy, the watch part shown in Fig. 7(a) is selected as the tested object in this section. It presents a typical HDR area but is relatively simple in geometric structure, so that complex geometry does not disturb the phase unwrapping. Its surface reflectivity varies widely, so intensity saturation easily appears in the reflected image of its relatively smooth geometry. MGC patterns and TSB patterns of low, medium and high frequencies are projected under the 15 groups of projection exposure times, and the distorted fringe images are captured via integral imaging. Figure 7(b) shows one of the high frequency reflected sinusoidal fringe images under the 9th group projection exposure ($T_p$ = 8 ms); some areas in the captured fringe image are saturated. The corresponding wrapped phase is given in Fig. 7(c). Figures 7(d)-(f) respectively show one of the reflected MGC images, its binarized result and the decoding result under the 9th group projection exposure. Although both under-saturation and over-saturation exist in the phase-shifted fringe images and the MGC reflected image, the MGC still successfully yields a correct decoding result.

Fig. 7. The images captured under the 9th group projection exposure ($T_p$ = 8 ms) and the corresponding decoding results. (a) Photo of a watch part under uniform white light illumination; (b) one of the captured high frequency sinusoidal images; (c) wrapped phase; (d) one of the captured MGC images; (e) binarized image of (d); (f) decoding result of the MGC images.

To further compare the performance of the three-frequency temporal phase unwrapping algorithm and the MGC-based temporal phase unwrapping algorithm on high and low reflection (over-saturated and under-saturated) regions under different projection exposure levels, we construct the following contrast groups:

(a) Three-frequency temporal phase unwrapping algorithm: the phase of the high frequency fringe image is unwrapped separately under each exposure group using the low and medium frequency reflected fringe images.

(b) MGC temporal phase unwrapping algorithm: the phase of the high frequency fringe image is unwrapped separately under each exposure group using that group's MGC decoding result.

(c) MGC phase unwrapping algorithm with sharing mechanism: all high frequency fringe images under different exposures are unwrapped by sharing the same MGC decoding result obtained under the 9th group projection exposure ($T_p$ = 8 ms).

As shown in Fig. 8, we select three typical groups of fringe images ($T_p$ = 3.5 ms, 8 ms and 25 ms) for comparison. The phase unwrapping results of the different methods are shown in Figs. 8(b)-(d).

Fig. 8. Comparison of phase unwrapping results via different algorithms under different projection exposures. (a) One of the fringe images under projection exposures $T_p$ = 3.5 ms, 8 ms, 25 ms; (b) three-frequency temporal phase unwrapping; (c) MGC-based phase unwrapping; (d) MGC-based unwrapping with sharing mechanism.

As can be seen from Fig. 8, when the projection exposure time is $T_p$ = 3.5 ms, serious under-saturation exists in the reflected fringe image, while the high reflection area does not reach over-saturation. When the projection exposure time is switched to 25 ms, most of the measured surface is properly structure-illuminated and the low reflection area is clearly visible, so that the intensity range of the reflected fringe image meets the measurement requirement to some degree. Comparing the phase unwrapping results of the three methods under the low exposure condition ($T_p$ = 3.5 ms), both the three-frequency temporal phase unwrapping algorithm and the MGC temporal phase unwrapping algorithm yield some errors in the low reflection area, the latter slightly fewer than the former, whereas the MGC phase unwrapping algorithm with sharing mechanism correctly unwraps the wrapped phase of the low reflection area. Meanwhile, all three methods implement phase unwrapping correctly under the moderate exposure condition ($T_p$ = 8 ms). Under the high projection exposure condition ($T_p$ = 25 ms), the high reflection area of the measured surface is saturated, which deteriorates the phase unwrapping process: the three-frequency temporal phase unwrapping algorithm produces unwrapping errors in the high reflection area, while the MGC temporal phase unwrapping algorithm is immune to the saturation. Overall, the MGC temporal phase unwrapping algorithm with sharing mechanism adapts to a broader range of reflectivity variation and correctly unwraps the wrapped phase of the high frequency fringe images under all three projection exposure conditions, confirming the feasibility and reliability of the proposed phase unwrapping strategy.

3.2 Comparisons of different methods to measure HDR objects

We choose the following five methods to measure a mobile phone part with an HDR surface, and compare their performance in terms of measurement integrity (coverage), accuracy and efficiency to verify the advantages of the proposed method:

  • Single optimal exposure fringe projection (SFP). A single optimal exposure is calculated according to [32], and the absolute phase is obtained using the three-frequency temporal phase algorithm.
  • Multi-exposure fringe intensity fusion (MIF). The maximum unsaturated intensity is selected according to [18] for multi-exposure image fusion in the intensity domain, and the absolute phase is calculated via the three-frequency temporal phase algorithm.
  • Multi-exposure fringe modulation fusion (MEF). The maximum unsaturated modulation is selected according to [33] for multi-exposure image fusion, and the absolute phase is calculated via the three-frequency temporal phase algorithm.
  • Hybrid phase domain fusion (HPF). Hybrid phase domain fusion under multiple exposures is performed according to the method described in [23], and the absolute phase is calculated via the three-frequency temporal phase algorithm.
  • The proposed method (Ours). We obtain the absolute phase sequence under different projection exposures through the proposed MGC-assisted temporal phase unwrapping algorithm with sharing mechanism; the phases are then fused into the final phase map for 3D reconstruction using the phase domain fusion model described in Section 2.3.

Figure 9 shows the intermediate results of measuring a mobile phone part surface under the projection exposure time $T_p$ = 8 ms. Figures 9(a)-(e) respectively illustrate one of the captured high frequency fringe images, the MGC decoding result, the absolute phase map unwrapped with Fig. 9(b), the coupled weight map $W_9$, and the unsaturated intensity mask $Mask_9$. Pixels where $Mask_9$ is 0 do not participate in the final phase fusion process. Figure 9(f) shows the final fused phase map $\Phi ^{opt}$ over the 15 projection exposure groups obtained by the proposed method.

Fig. 9. Measuring the surface of a mobile phone part using the proposed method. (a) One of the captured fringe images $I_{4,9}$ (projection exposure time $T_p$ = 8 ms); (b) MGC decoding result; (c) absolute phase map unwrapped with (b); (d) coupled weight map $W_9$ (see Eq. (9)) and (e) unsaturated $Mask_9$ (see Eq. (8)) when $T_p$ = 8 ms; (f) the final fused phase map $\Phi ^{opt}$ over the 15 projection exposure groups.

3.3 Integrity evaluation

Figure 10 illustrates the high frequency fringe images of the tested mobile phone part under the 15 different exposure groups, while Fig. 11 presents the corresponding 3D point clouds reconstructed by the five methods (SFP, MIF, MEF, HPF and Ours). The single optimal projection exposure time for SFP is 21 ms, and the corresponding reconstruction result is shown in Fig. 11(a). It can be seen that the reconstruction quality of the high and low reflection areas cannot be simultaneously ensured under the single exposure condition: while the low reflection area on the left is completely reconstructed, the 3D data of the high reflection area on the right is partly missing due to image over-saturation. The nonlinearity caused by image over-saturation also produces ripple fluctuations in the reconstructed point clouds of the highly reflective areas (clearly observable in the red circle of Fig. 11(a) and in Figs. 11(a1)-(a3)), which severely impairs the reconstruction accuracy. In contrast, the multi-exposure fusion methods shown in Figs. 11(b)-(d) fuse the fringe image sequences captured under different exposures, making full use of the complementarity between the high and low exposure images; they adapt better to the reflectance change between the bright and dark areas of the tested surface and restore the complete topography of the measured object without obvious data loss in the reconstructed point cloud. Nevertheless, more or fewer outliers appear in Figs. 11(a)-(d) for SFP, MIF, MEF and HPF compared with Ours, typically in the area marked with the red circle in Fig. 11(a).

Fig. 10. Captured high frequency fringe images of testing a mobile phone part under 15 group different projection exposures. Only one of four-step phase-shifting fringe images is demonstrated under each exposure.

Fig. 11. 3D point clouds under 15 groups of projection exposures using different methods. (a) SFP; (a1)-(a3) enlarged views of the regions labeled by the red circle in (a); (b) MIF; (c) MEF; (d) HPF; (e) Ours.

To show that the proposed method also achieves high measurement integrity in the low reflection area when the number of projection exposure groups decreases, in this section the starting point of the projection exposure is fixed to $Exp\_Start$ = 2.3 ms, the reflected fringe images under different numbers of projection exposure groups are selected for fusion, and the reconstructed point clouds of the multi-exposure fusion methods in the low reflection area (red rectangle in Fig. 12) are compared. The captured reflected fringe images at the shortest ($T_p$ = 2.3 ms) and longest ($T_p$ = 35 ms) projection exposure times are shown in Fig. 12(a).

Fig. 12. Reconstructed point clouds of low reflection areas under different groups of exposure. (a) Two captured high frequency fringe images under projection exposures $T_p$ = 2.3 ms and 35 ms; reconstructed point clouds of the red rectangle: (b) MIF; (c) MEF; (d) HPF; (e) Ours. The labeled percentages represent the reconstruction coverage of the different methods.

It can be seen that when the projection exposure time is $T_p$ = 35 ms, most of the measured surface is seriously overexposed, while the low reflection area (red rectangle) remains seriously under-exposed because, owing to its geometry and reflection distribution, the light it reflects deviates from the camera lens. The reconstruction details of the four image fusion based methods (MIF, MEF, HPF and Ours) under the first 5 projection exposure groups ($T_p$ = 2.3 ms, 2.8 ms, 3.5 ms, 3.9 ms, 4.6 ms) are shown in the first column of Figs. 12(b)-(e). We can clearly identify that MIF, MEF and HPF fail to fully restore the surface topography of the low reflection area, although HPF achieves slightly higher measurement integrity there than MIF and MEF. The reason is that the three-frequency temporal phase algorithm fails to perform correct phase unwrapping under this condition, as confirmed by the comparison experiment in Fig. 8. On the contrary, in the proposed method the same MGC decoding result obtained under a good exposure is shared to unwrap the wrapped phases of the different groups of high frequency fringe images, which completely restores the surface topography of the low reflection area even when the number of exposure groups is only 5, as illustrated in Fig. 12(e).

The reconstruction details of the four methods (MIF, MEF, HPF and Ours) under the first 10 projection exposure groups ($T_p$ = 2.3 ms, 2.8 ms, 3.5 ms, 3.9 ms, 4.6 ms, 5.1 ms, 5.6 ms, 6.7 ms, 8 ms, 11.6 ms) are shown in the second column of Figs. 12(b)-(e). It can be seen that the difference between HPF and MEF is small. As the number of projection exposure groups grows from 5 to 10, the reconstruction performance of MEF and HPF in the low reflection area is markedly improved, but the reconstructed point cloud is still partly missing, meaning that the maximum projection exposure (11.6 ms) has not yet reached the requirement of three-frequency temporal phase unwrapping.

Under all 15 projection exposure groups, the four methods (MIF, MEF, HPF and Ours) can fully recover the 3D topography of the low reflection area, which is consistent with the validation results shown in Fig. 8. As the number of projection exposure groups decreases, the proposed method achieves higher measurement integrity than the other multi-exposure fusion methods (MEF and HPF), which is especially evident in the low reflection area. When the projection exposure groups are reduced to 33% (the first 5 groups, $T_p$ = 2.3 ms, 2.8 ms, 3.5 ms, 3.9 ms, 4.6 ms), the 3D topography of the low reflection area can still be completely and correctly restored by our method.

To quantitatively evaluate the reconstruction integrity of the different multi-exposure fusion methods in the low reflection area, we project a pure white pattern onto the object and use it to extract the region of interest (ROI) of the object surface. The total number of measurable points within the red rectangle of Fig. 12 (low reflection area) is denoted $NO$, the number of effective points reconstructed by each method in the low reflection area is denoted $NP$, and the measurement integrity ratio ($MIR$) of the low reflection area is calculated according to Eq. (12). The statistical results are summarized in Fig. 12. As the number of projection exposure groups decreases, the reconstruction integrity of MEF and HPF gradually decreases, while that of the proposed method is maintained at around 98%. When the projection exposure groups are reduced to 33% (the first 5 projection exposures), MEF and HPF can only recover approximately 40-60% of the low reflection area, while the proposed method still reaches 98% reconstruction integrity.

$$MIR = \frac{NP}{NO} \times 100{\%} .$$
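Eq. (12) amounts to a ratio of point counts. A one-line sketch, assuming boolean masks for the ROI and the validly reconstructed points:

```python
import numpy as np

def mir(valid_points, roi):
    """Eq. (12): measurement integrity ratio, i.e. effective points NP over
    measurable points NO in the low reflection area, in percent."""
    return 100.0 * np.count_nonzero(valid_points & roi) / np.count_nonzero(roi)
```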

3.4 Precision evaluation

In the accuracy evaluation experiment, two ceramic standard blocks of height 1$\pm$0.00006 mm and 3$\pm$0.00008 mm are tested. Figure 13 shows some of the fringe images distorted by the surfaces of the ceramic blocks under the low ($T_p$ = 2.3 ms), medium ($T_p$ = 8 ms) and high ($T_p$ = 35 ms) projection exposure conditions. The surface reflectivity of the ceramic blocks is distributed relatively uniformly. Under the high projection exposure time $T_p$ = 35 ms, a large saturated area appears in the reflected fringe image and the phase information is annihilated; only the few pixels that are not yet saturated can provide effective phase information for fusion.

Fig. 13. One of the captured high-frequency fringe images under different exposure conditions when measuring the ceramic standard blocks. (a) $T_p$ = 2.3 ms; (b) $T_p$ = 8 ms; (c) $T_p$ = 35 ms.

We compare the different methods on the ceramic standard blocks and evaluate their strengths and weaknesses in terms of the plane reconstruction results, as shown in Fig. 14. The SFP method can obtain a good plane reconstruction by using a proper exposure time. Since the SNR and modulation maps may differ considerably across exposures, the fusion based methods do not necessarily surpass SFP. For example, MIF selects pixels based on optimal intensity but does not consider the sinusoidal characteristic of the pixel gray values in phase retrieval, leading to large local fluctuations in the final phase map. MEF selects pixels with optimal modulation, but optimal modulation does not imply the highest phase quality: the modulation of overexposed points is usually large, so the final phase map has periodic fluctuation errors similar to MIF. This is because, when MEF performs the fusion, the original HDR images with the maximum unsaturated intensity are involved; each pixel is regarded as an independent unit and its phase is solved separately, without considering the influence of surrounding pixels or the relationship among the pixels involved in the phase-shifting computation. The HPF method considers weight coefficients based on modulation quality but does not effectively consider intensity and modulation simultaneously, leading to errors in some regions. Compared with these counterparts, the proposed method introduces the phase domain fusion model, which considers the temporal and spatial influence of surrounding pixels, phase gradient smoothness, pixel effectiveness, and the outlier removal mask, yielding smoother results.

Fig. 14. Measurement results of the ceramic standard blocks via different methods: (a) STD error diagram of the 3 mm ceramic standard block; (b) STD error diagram of the 1 mm ceramic standard block; (c) statistical error map of the height difference between the 1 mm and 3 mm blocks.

To quantify the performance of the different methods in terms of measurement accuracy, the standard deviation (STD) is used to characterize the error distribution. The STDs in Fig. 14 show only small differences in accuracy in the absence of HDR disturbance, with the phase domain fusion methods slightly better than intensity (modulation) domain fusion. The plane reconstruction STDs of the proposed method for the 3 mm and 1 mm ceramic standard blocks are 0.00497 mm and 0.00435 mm; compared with the SFP method and the best current HPF method, the average improvements are 12.28% and 2.64%, respectively. The proposed method thus achieves accuracy comparable to the other multi-exposure fusion methods while maintaining higher reconstruction integrity and measurement efficiency.

3.5 Efficiency evaluation

Table 1 summarizes the number of images required by the different methods to measure a mobile phone part with an HDR surface. Taking the four-step phase shifting used in this paper as an example, the fewest images are required by SFP at a single exposure: since SFP solves the absolute phase using the three-frequency temporal phase unwrapping algorithm, the low and middle frequency patterns are also required, and the number of images for the measurement is 12. However, SFP can hardly obtain a complete point cloud on the tested HDR surface. In contrast, although the multi-exposure fusion methods need to project patterns under different projection exposure conditions, they can fully recover the 3D topography of the HDR surface, all showing good measurement results. In terms of measurement efficiency, MIF, MEF and HPF all calculate the absolute phase with the same number of images, namely 12 x 15 = 180. The proposed method is based on the shared MGC, which avoids extra projection of low and middle frequency sinusoidal fringe patterns under each projection exposure, reducing the number of images by at least 65% in comparison with the three multi-exposure fusion approaches (MIF, MEF and HPF).


Table 1. Comparison of the number of images projected in different methods

The process of phase fusion is a weighted summation of the phase results under multiple exposures, calculated independently for each pixel. Because the surface is HDR, the phase result of each exposure contains both good-quality and poor-quality areas. Calculating the fusion weights is equivalent to scoring the phase result of each exposure, and the high-quality phase results of each exposure are retained through the weighted fusion. Even if the phase quality of some region is poor under every group of the exposure sequence, our method will find the relatively least bad group among these poor results and give it a greater weight, so that the quality of the fused phase is at least comparable to that of the best single exposure.

4. Summary and future work

In this paper, we propose an HDR surface 3D reconstruction method based on a sharing phase demodulation mechanism and a multi-indicators guided phase fusion strategy. The projection exposure sequence is created with the projector's variable exposure mode, so that the reflected fringe image sequence under different intensities can be obtained without changing the exposure time of the camera. Temporal phase unwrapping based on MGC is adopted to improve the unwrapping quality in the high and low reflection regions of tested HDR parts, and a phase unwrapping strategy based on the sharing mechanism is proposed to effectively reduce image redundancy and simplify the measurement process. Differing from the currently widespread modulation domain multi-exposure fusion methods, we introduce a mixed weight-based phase domain fusion model: normalized weights coupling multiple indicators, including exposure quality, phase gradient smoothness and pixel effectiveness, together with a mask excluding phase outliers, guide the fusion of the phase map sequence, and the reconstructed 3D point cloud is smoother and free of outliers. The experimental results show that the proposed method can completely reconstruct the 3D topography of complex HDR surfaces. Compared with traditional multi-exposure fusion methods, the number of projection images required in the measurement process is reduced by at least 65%, and the measurement integrity remains up to 98% even when the number of exposure groups is reduced to 33%. Since the proposed approach does not depend on massive images collected by a given 3D system for model training, it is promising as an efficient and reliable approach for automatic 3D measurement of complex HDR surfaces.

Next, we plan to theoretically establish a generalized 3D reconstruction (fusion) framework making use of different information such as modulation, intensity, phase or their combinations, to experimentally investigate their influence on the 3D reconstruction accuracy and integrity of HDR surfaces, and to derive the application scope of each.

Funding

National Natural Science Foundation of China (62101364); Central Government Guided Local Funds for Science and Technology Development (22ZYD0111); Key Research and Development Project of Sichuan Province (2021YFG0195, 2022YFG0053); China Postdoctoral Science Foundation (2021M692260).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (14)

Fig. 1. The overall architecture of the proposed HDR surface 3D reconstruction method based on sharing phase demodulation mechanism and multi-indicators guided phase fusion strategy.
Fig. 2. The generation of the projection pattern sequence and the acquisition of images for 3D measurement of HDR parts. The TSB patterns plus MGC are projected with slight defocus using the "variable exposure" mode of the DMD-based projector.
Fig. 3. The process of MGC-assisted phase unwrapping.
Fig. 4. MGC-assisted phase unwrapping strategy via the sharing mechanism.
Fig. 5. The flowchart of phase sequence fusion guided by multi-indicators.
Fig. 6. The experimental system, consisting of a bi-telecentric camera and a Scheimpflug projector, which enjoys the maximum common focus area between them for high-accuracy full-field measurement.
Fig. 7. The images captured under the 9th group of projection exposure ($T_p$ = 8 ms) and the corresponding decoding results. (a) Photo of a watch part under uniform white light illumination; (b) one of the captured high-frequency sinusoidal images; (c) wrapped phase; (d) one of the captured MGC images; (e) binarized image of (d); (f) decoding result of the MGC images.
Fig. 8. Comparison of phase unwrapping results via different algorithms under different projection exposures. (a) One of the fringe images under projection exposures $T_p$ = 3.5 ms, 8 ms and 25 ms; (b) three-frequency temporal phase unwrapping; (c) MGC-based phase unwrapping; (d) MGC-based unwrapping with the sharing mechanism.
Fig. 9. Measuring the surface of a mobile phone part using the proposed method. (a) One of the captured fringe images $I_{4,9}$ (projection exposure time $T_p$ = 8 ms); (b) MGC decoding result; (c) absolute phase map unwrapped using (b); (d) coupled weight map $W_9$ (see Eq. (9)) and (e) unsaturated $Mask_9$ (see Eq. (8)) when $T_p$ = 8 ms; (f) the final fused phase map $\Phi^{opt}$ over the 15 groups of projection exposure.
Fig. 10. Captured high-frequency fringe images of a tested mobile phone part under 15 different groups of projection exposure. Only one of the four-step phase-shifting fringe images is shown for each exposure.
Fig. 11. 3D point clouds under 15 groups of projection exposures using different methods. (a) SFP; (a1)-(a3) enlarged views of the regions labeled by the red circles in (a); (b) MIF; (c) MEF; (d) HPF; (e) ours.
Fig. 12. Reconstructed point clouds of low-reflection areas under different groups of exposure. (a) Two captured high-frequency fringe images under projection exposures $T_p$ = 2.3 ms and 35 ms; reconstructed point clouds of the red rectangle: (b) MIF; (c) MEF; (d) HPF; (e) ours. The labeled percentages represent the reconstruction coverage of the different methods.
Fig. 13. One of the captured high-frequency fringe images under different exposure conditions when measuring ceramic standard blocks. (a) $T_p$ = 2.3 ms; (b) $T_p$ = 8 ms; (c) $T_p$ = 35 ms.
Fig. 14. Measurement results of ceramic standard blocks via different methods: (a) STD error diagram of the 3 mm ceramic standard block; (b) STD error diagram of the 1 mm ceramic standard block; (c) statistical error map of the height difference between the 1 mm and 3 mm blocks.

Tables (1)

Table 1. Comparison of the number of images projected in different methods

Equations (12)

$I_n(u^c,v^c) = A(u^c,v^c) + B(u^c,v^c)\cos\left(\varphi(u^c,v^c) - \delta_n\right),$ (1)
$\varphi(u^c,v^c) = \arctan\left[\dfrac{\sum_{n=1}^{Q} I_n(u^c,v^c)\sin(\delta_n)}{\sum_{n=1}^{Q} I_n(u^c,v^c)\cos(\delta_n)}\right].$ (2)
$M_k(u^c,v^c) = \dfrac{2}{Q}\sqrt{\left[\sum_{n=1}^{Q} I_{n,k}(u^c,v^c)\sin\left(\dfrac{2\pi(n-1)}{Q}\right)\right]^2 + \left[\sum_{n=1}^{Q} I_{n,k}(u^c,v^c)\cos\left(\dfrac{2\pi(n-1)}{Q}\right)\right]^2},$ (3)
$W_k^1(u^c,v^c) = \exp\left(-\dfrac{(g/Q)^2}{2\sigma^2}\right) M_k(u^c,v^c).$ (4)
$W_k^2(u^c,v^c) = \Phi_k \ast L \ast G,$ (5)
$L_{x,y} = \begin{cases} S^2 - 1, & x = y = 0 \\ -1, & \text{else} \end{cases},$ (6)
$G_{x,y} = \dfrac{1}{2\pi\sigma_g^2}\exp\left(-\dfrac{x^2 + y^2}{2\sigma_g^2}\right).$ (7)
$Mask_k = \begin{cases} 0, & M_k(u^c,v^c) \le \overline{M_k}\times cor \ \text{or} \ q \ge Q/2 \\ 1, & \text{else} \end{cases},$ (8)
$W_k(u^c,v^c) = \left[W_k^1(u^c,v^c)\right]^{\lambda_1}\left[W_k^2(u^c,v^c)\right]^{\lambda_2} Mask_k(u^c,v^c),$ (9)
$\Phi(u^c,v^c) = \sum_k \left(W_k(u^c,v^c)\,\Phi_k(u^c,v^c)\right) \Big/ \sum_k W_k(u^c,v^c),$ (10)
$\Phi^{opt}(u^c,v^c) = \Phi(u^c,v^c)\,Mask_{Err}(u^c,v^c).$ (11)
$MIR = \dfrac{N_P}{N_O}\times 100\%.$ (12)
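As a concrete illustration of the smoothness indicator of Eqs. (5)-(7) as reconstructed above, here is a minimal sketch: the kernel size `S`, the standard deviation `sigma_g` and the function name `smoothness_weight` are hypothetical choices, not the paper's settings.

```python
# Minimal sketch of the phase-gradient smoothness indicator of
# Eqs. (5)-(7): W2 = Phi * L * G, where * denotes 2D convolution.
# Kernel size S and sigma_g are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def smoothness_weight(phase_map, S=3, sigma_g=1.0):
    # Eq. (6): Laplacian-like kernel, center S^2 - 1, elsewhere -1;
    # it responds to local deviations of the phase from its neighborhood.
    L = -np.ones((S, S))
    L[S // 2, S // 2] = S * S - 1
    # Eq. (7): normalized 2D Gaussian kernel used to smooth the response.
    ax = np.arange(S) - S // 2
    xx, yy = np.meshgrid(ax, ax)
    G = np.exp(-(xx**2 + yy**2) / (2 * sigma_g**2)) / (2 * np.pi * sigma_g**2)
    # Eq. (5): cascade the two convolutions over the unwrapped phase map.
    return convolve(convolve(phase_map, L), G)
```

How this response is coupled with the exposure-quality weight and the saturation mask into the final per-pixel weight is given by Eq. (9).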