High dynamic range 3D shape measurement based on crosstalk characteristics of a color camera


Abstract

Fringe projection profilometry (FPP) has been widely used in many fields due to its fast speed, high accuracy and full-field characteristics. However, it remains challenging for traditional FPP, which uses a single exposure time or a single projection intensity, to handle high dynamic range (HDR) objects. Overexposure occurs in areas with high reflectivity, where the incident light exceeds the maximum capacity of the camera sensor, so accurate intensity, absolute phase and three-dimensional (3D) data cannot be obtained. In this paper, a uniform blue image is projected to divide the object surface into three areas with different reflectivity, exploiting the different intensity responses of the RGB channels of color images. A crosstalk coefficient function is applied to obtain the intensity of the overexposed areas, and the optimal exposure time of each area is then calculated from the linear photometric response of the camera. Finally, three sets of blue fringe patterns captured at the optimal exposure times are synthesized into fused HDR images to calculate the absolute phase. Experimental results confirm that the proposed method can accurately measure HDR objects with a large range of reflectivity.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) has been widely used in industrial manufacturing, cultural relic conservation, medical engineering, and other fields due to its advantages of fast speed, high accuracy, and full-field characteristics [1–3]. FPP is an intensity-based coding strategy, so the measurement is affected by surface texture and reflectivity [4]. Therefore, measuring complex optical objects with a large range of surface reflectivity, such as high dynamic range (HDR) objects (e.g., metallic and ceramic surfaces), remains an open issue. The brightness of the fringe patterns captured on HDR objects is not uniform: areas with high reflectivity are overexposed, while areas with low reflectivity are underexposed. It is difficult for traditional cameras with limited dynamic range to obtain high-quality data of HDR objects within a single exposure.

Thus, the measurement of HDR objects is one of the major challenges in the optical field [5], and a variety of studies have proposed solutions [6]. One type of approach is the multiple-exposure technique. Zhang et al. [7] proposed an HDR scanning technique that captures fringe image sequences at different exposure times and selects, for each pixel, the highest unsaturated gray level from the sequence to obtain high-quality fringe images. The high-reflectivity and low-reflectivity areas are extracted from images with a short exposure time and a long exposure time, respectively. This method effectively improves the dynamic range of HDR measurement. However, since the effect of exposure time cannot be quantified, it needs to capture a large number of images at different exposure times when measuring complex HDR objects, which is time-consuming and produces redundant image data. Ekstrand et al. [8] proposed a method to automatically predict the required exposure time according to the surface reflectivity of the measured object; the exposure time is predicted by a tradeoff between the brightest overexposed areas and the darkest shadowed areas. This method improves the automation of 3D shape measurement and avoids manual intervention, but a single exposure time cannot cover objects with a large range of surface reflectivity. Feng et al. [9] proposed an automatic HDR projection technique that predicts the optimal exposure times according to the histogram of the surface reflectivity distribution. Tang et al. [10] proposed a structured-light technique based on automatic exposure, which calculates the surface reflectivity of the measured target and divides it into several groups. By analyzing the relationship between the phase error caused by noise and the intensity modulation, a modulation threshold is obtained, and the exposure time for each group is automatically determined. This method achieves good reconstruction results, but the time efficiency of the exposures is not considered. Wang et al. [11] established the relationship between exposure time and captured patterns, estimated a reasonable exposure time interval using two sets of additional fringe patterns obtained in advance, and selected four widely spaced exposure times to ensure the effectiveness of the patterns and avoid redundant pattern sequences. Zhang et al. [12] automatically determined the global optimal exposure time from a single properly exposed texture image, achieving high-quality 3D measurement. The method works on static complex scenes and is also applicable to real-time applications, but the time efficiency of the exposures is still not considered. Li et al. [13] proposed an exposure-image fusion method for 3D reconstruction of HDR surfaces, which fuses multiple exposure images through the principal frequency component to reduce quantization error and random noise, improving the 3D reconstruction accuracy of HDR surfaces. However, its computational complexity increases the processing time, and the use of two cameras increases the complexity of the equipment.

Adjusting the intensity of the fringes is another solution for handling HDR objects. Waddington et al. [14] adaptively adjusted the maximum input gray value of the projected fringe image and used composite images captured at different intensities to avoid image saturation. However, due to the influence of ambient light, this method reduces the measurement accuracy in dark areas. Babaie et al. [15] proposed a technique to enhance the dynamic range of FPP, which recursively controls the intensity of the projected fringes at the pixel level according to feedback from the images captured by the camera, obtaining HDR fringes. Lin et al. [16] proposed a method to adaptively adjust the projection pixel intensity for saturated pixels: the saturated areas in the image are marked, and a low-intensity fringe image is projected onto these marked areas to avoid pixel saturation. Jiang et al. [17] used a hybrid method combining adjusted exposure times and projected fringe intensities, where fringe images of high-reflectivity areas are captured at different exposure times and projection intensities. Liu et al. [18] avoided pixel saturation by projecting low-intensity fringe patterns onto bright areas at the pixel level; only one set of orthogonal fringe patterns and two sets of uniform grayscale patterns are needed to generate the adaptive projected fringe patterns. Cai et al. [19] adopted an improved mapping formula, used a uniform image to fit the grayscale values of the saturated area, and obtained the coordinates of the saturated area by calculating horizontal and vertical patterns to generate adaptive patterns. Xu et al. [20] used a speckle pattern for pixel matching to achieve adaptive fringe projection measurement; only three additional patterns need to be projected to measure HDR objects. References [16,18–20] are adaptive fringe projection (AFP) methods, in which the intensity of the projected fringe pattern is adapted to the reflectivity of the HDR surface. However, accurately establishing the matching relationship between camera and projector is a challenge in these works.

There are other solutions for measuring HDR objects, such as using polarized light [21] or multiple cameras [22]. However, these methods all require additional auxiliary hardware to improve system performance, which increases cost or computational complexity.

This paper proposes an HDR method based on the crosstalk characteristics of a color camera. Blue fringe images are projected, and the captured blue fringe patterns are used to calculate the phase and reconstruct the 3D shape. Firstly, a uniform blue image with intensity 255 is projected onto the object to mark the surface into three areas $Z_B$, $Z_G$ and $Z_R$ with different reflectivity, using the different intensity responses of the red (R), green (G), and blue (B) channels. Among them, $Z_B$ is the unsaturated area on channel B, while $Z_G$ and $Z_R$ are saturated areas on channel B, whose intensities on channel B cannot be obtained directly. The crosstalk coefficient function of the color camera is then calibrated to calculate the intensities of the saturated areas $Z_G$ and $Z_R$ on channel B from the intensity of area $Z_G$ on channel G and the intensity of area $Z_R$ on channel R. The optimal exposure time of each of the three areas is then obtained from the linear photometric response of the camera. Finally, three sets of blue fringe patterns captured at the optimal exposure times are synthesized into a new fused HDR image to calculate the absolute phase and reconstruct the 3D shape of the HDR object.

The remainder of this paper is organized as follows. The principle of the proposed method is given in Section 2. Section 3 presents experimental findings pertinent to the proposed technique. Section 4 concludes the paper.

2. Principle of HDR measurement method

2.1 Mark areas with different reflectivity

Fringe projection measurement requires good scattering from the surface, and light of short wavelength scatters better on the surface of HDR objects. Therefore, a uniform blue image is selected for projection onto the measured object in the proposed method. The measured surface is then marked into three areas with different reflectivity by taking advantage of the different responses of the RGB channels [23,24]. The distorted color image is captured by a color camera with exposure time $t_0$. The initial exposure time $t_0$ should be as long as possible while keeping the intensity of the red channel of the captured color image below 250. Three images are then separated from the RGB channels of the captured color image, with the intensity relationship $I_B > I_G > I_R$. The measured surface is marked into three areas $Z_B$, $Z_G$ and $Z_R$ with different reflectivity by Eq. (1); the saturation threshold is set to 250 to account for camera noise. $Z_B$ represents the area where channel B is unsaturated, $Z_G$ the area where channel B is saturated but channel G is unsaturated, and $Z_R$ the area where channels B and G are saturated but channel R is unsaturated. When blue fringe patterns are projected, areas $Z_G$ and $Z_R$ are saturated on channel B and the phase cannot be calculated accurately there. The proposed method uses the crosstalk characteristics of the camera to obtain the intensity of the overexposed areas, achieving measurement of HDR objects.

$$\begin{aligned} \mathrm{mask}_{Z_B}(x,y) &= \begin{cases} 0, & I_B(x,y) = 250\\ 1, & I_B(x,y) < 250 \end{cases}\\ \mathrm{mask}_{Z_G}(x,y) &= \begin{cases} 1, & I_B(x,y) = 250 \ \text{and}\ I_G(x,y) < 250\\ 0, & \text{else} \end{cases}\\ \mathrm{mask}_{Z_R}(x,y) &= \begin{cases} 1, & I_G(x,y) = 250 \ \text{and}\ I_R(x,y) < 250\\ 0, & \text{else} \end{cases} \end{aligned}$$
where $\mathrm{mask}_{Z_i}(x,y)$ $(i = B, G, R)$ denotes the mask of areas $Z_B$, $Z_G$ and $Z_R$, $(x,y)$ denotes the pixel coordinate in the camera plane, and $I_i(x,y)$ $(i = R, G, B)$ represents the intensity of the separated red, green, and blue channels of the color image.

According to $\mathrm{mask}_{Z_i}$, the intensities of the areas with different reflectivity can be produced as

$$I_i^z(x,y) = I_i(x,y) \times \mathrm{mask}_{Z_i}(x,y),\quad i = B, G, R$$
where $I_i^z(x,y)$ represents the intensity of the three areas $Z_B$, $Z_G$ and $Z_R$.
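As a concrete illustration (not the authors' code), the area marking of Eqs. (1) and (2) can be sketched in a few lines of NumPy. The sketch assumes an 8-bit capture in OpenCV's B, G, R channel order and uses the paper's saturation threshold of 250; the function name and array layout are hypothetical.

```python
import numpy as np

def mark_areas(img_bgr, sat=250):
    """Mark the surface into areas Z_B, Z_G, Z_R per Eq. (1).

    img_bgr: HxWx3 uint8 capture of the uniform blue projection,
    assumed to be in OpenCV's B, G, R channel order.
    """
    B = img_bgr[..., 0].astype(np.int32)
    G = img_bgr[..., 1].astype(np.int32)
    R = img_bgr[..., 2].astype(np.int32)

    mask_ZB = B < sat                 # channel B unsaturated
    mask_ZG = (B >= sat) & (G < sat)  # B saturated, G unsaturated
    mask_ZR = (G >= sat) & (R < sat)  # B and G saturated, R unsaturated

    # Eq. (2): intensities of the three areas
    I_ZB, I_ZG, I_ZR = B * mask_ZB, G * mask_ZG, R * mask_ZR
    return (mask_ZB, mask_ZG, mask_ZR), (I_ZB, I_ZG, I_ZR)
```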

2.2 Calibrate crosstalk coefficient function

Most color cameras and projectors are designed with spectral overlap to avoid color-blind gaps in the spectrum, which results in crosstalk between color channels, as shown in Fig. 1. Here, blue lighting is taken as a typical case. Under ideal conditions, when a uniform blue image is projected, only blue light reaches the camera chip, and the intensities of channel R and channel G of the captured image should be zero. However, for most color cameras they are not zero because of crosstalk, or color coupling [25]. Crosstalk is determined mostly by the spectral response properties of the color camera: when imaging a white surface, the responses of the red and green channels to blue light are fixed. Different initial exposure times are chosen to measure objects with different reflectivity.

Fig. 1. The spectral response curve of the camera.

When projecting a uniform blue image, the crosstalk coefficient function can be expressed as

$$\left\{ \begin{aligned} f_R(I_B) &= \frac{1}{n_{I_B}}\sum_{(i,j)\in S} I_{ij}^R\\ f_G(I_B) &= \frac{1}{n_{I_B}}\sum_{(i,j)\in S} I_{ij}^G\\ f_B(I_B) &= \frac{1}{n_{I_B}}\sum_{(i,j)\in S} I_{ij}^B = I_B \end{aligned} \right. ,\quad S = \{ (i,j) \mid I_{ij}^B = I_B \}$$
where the crosstalk coefficient functions $f_R(I_B)$, $f_G(I_B)$ and $f_B(I_B)$ represent the intensities of the R, G and B channels when the intensity on channel B is $I_B$. The intensities of the separated R, G and B channels under blue light are denoted $I_R$, $I_G$ and $I_B$. $n_{I_B}$ is the number of pixels in the set $S = \{(i,j) \mid I_{ij}^B = I_B\}$, with $1 \le i \le M$, $1 \le j \le N$ (the image resolution is $M$ pixels × $N$ pixels). For each blue-channel level $I_B$, the average of the channel G (or channel R) values over these pixels is taken as the contribution of blue light to channel G (or R). Therefore, the relationship between the intensity on channel G and that on channel B can be obtained, as well as that between the intensity on channel R and the intensity on channel B.

The expression of the calibration function is produced by fitting a curve relating the blue-channel intensity under blue light to the intensities of the other channels R and G. A linear function can be used to fit the crosstalk coefficient function; $f_B(I_B)$ is the linear function with a slope of one.

$$\left\{ {\begin{array}{c} {{f_G}({I_B}) = {m_G}{I_B} + {n_G}}\\ {{f_R}({I_B}) = {m_R}{I_B} + {n_R}}\\ {{f_B}({I_B}) = {I_B}} \end{array}} \right.$$
where ${m_G}$, ${n_G}$, ${m_R}$ and ${n_R}$ are fitting parameters.

Theoretically, following Eq. (4), the crosstalk coefficient functions $f_G(I_B)$ and $f_R(I_B)$ each have two unknown parameters, so two images are sufficient to compute them. Four images are used to increase measurement accuracy. Four uniform blue images with intensity 50, 100, 150 and 200 are projected onto a flat whiteboard and captured by the color camera. We separate each of the four captured images into its R and G components and take the average intensity of their pixels to obtain the values of $f_R(I_B)$ and $f_G(I_B)$. The least-squares approach is then used to solve the fitting parameters $m_G$, $n_G$, $m_R$ and $n_R$.
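A minimal sketch of this least-squares fit, under the same assumptions as above (NumPy; B, G, R channel order; `captures` is a hypothetical list of the four whiteboard images):

```python
import numpy as np

def fit_crosstalk(captures):
    """Least-squares fit of m_G, n_G, m_R, n_R in Eq. (4) from the four
    captures of uniform blue projections (intensity 50, 100, 150, 200)."""
    I_B, I_G, I_R = [], [], []
    for img in captures:                  # each img: HxWx3 uint8, BGR
        I_B.append(img[..., 0].mean())    # average blue response
        I_G.append(img[..., 1].mean())    # average green response
        I_R.append(img[..., 2].mean())    # average red response
    A = np.column_stack([I_B, np.ones(len(I_B))])
    (m_G, n_G), *_ = np.linalg.lstsq(A, np.asarray(I_G), rcond=None)
    (m_R, n_R), *_ = np.linalg.lstsq(A, np.asarray(I_R), rcond=None)
    return m_G, n_G, m_R, n_R
```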

The surface of the HDR object is marked into three areas $Z_B$, $Z_G$ and $Z_R$ with different reflectivity. Among them, areas $Z_G$ and $Z_R$ are saturated on channel B. By calibrating the crosstalk coefficient function, the relationships between the intensities on channels G and R and that on channel B are obtained. The intensity on channel B of areas $Z_G$ and $Z_R$ can then be inversely determined by Eq. (4), given the intensity on channel G of area $Z_G$ and the intensity on channel R of area $Z_R$.
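Given the fitted parameters, recovering the blue-channel intensity in the saturated areas is a direct inversion of the linear fits in Eq. (4); a hedged sketch (names hypothetical):

```python
def recover_blue(I_G_area, I_R_area, m_G, n_G, m_R, n_R):
    """Invert Eq. (4) to estimate the saturated blue-channel intensity:
    in area Z_G from the (unsaturated) green channel, and in area Z_R
    from the (unsaturated) red channel."""
    I_B_in_ZG = (I_G_area - n_G) / m_G
    I_B_in_ZR = (I_R_area - n_R) / m_R
    return I_B_in_ZG, I_B_in_ZR
```

Note that the recovered values can exceed the 8-bit range of channel B, which is exactly what makes them usable for exposure prediction in the saturated areas.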

2.3 Determine the optimal exposure time

To obtain the optimal exposure time of the three areas, it is necessary to understand how objects are imaged by camera sensors. The light reflected to the camera is shown in Fig. 2.

Fig. 2. The light reflected to the camera.

The captured intensity $I(x,y)$ can be described as [12]

$$I(x,y) = \alpha t({I^a} + \rho (x,y){I^e} + \rho (x,y){I^p}) + {I^n}$$
where $\alpha$ is the camera sensitivity, $t$ is the exposure time, $\rho(x,y)$ is the surface reflectivity, $I^a$ denotes the intensity of ambient light that enters the camera sensor directly, $\rho I^e$ denotes the intensity of ambient light reflected by the object surface, $\rho I^p$ denotes the intensity of projected light reflected by the object surface, and $I^n$ denotes the noise of the camera sensor.

For most digital cameras with a good linear photometric response, the camera sensitivity $\alpha$ is constant, i.e., image intensity is linearly related to scene intensity [26,27]. Assuming that the projected light is bright enough, or the environment dark enough, Eq. (5) can be simplified to

$$I = \alpha \rho {I^p}t$$

The captured light intensity only depends on the exposure time when the sensitivity of the camera, the intensity of the projected light, and the reflectivity of the given object surface are all constant. Equation (6) can be simplified as

$$I = Kt$$
where $K$ is the product of the camera sensitivity, the projected light intensity and the surface reflectivity. It can be seen from Eq. (7) that image intensity is linearly related to exposure time.

The largest gray level without saturation is obtained by adjusting the exposure time $t$. For an 8-bit camera, the ideal image intensity $I_{ideal}$ captured by the camera would be 255. Taking into account sensor noise, some gray-level headroom has to be reserved to avoid saturation, so the optimal captured intensity $I_{ideal}$ is set to 250. If $K$ is obtained in advance, the optimal exposure time $t_{opt}$ can be obtained from

$${I_{ideal}} = K{t_{opt}}$$

A reasonable exposure time $t_0$ is preset to ensure that the captured image $I_0$ is not saturated. The product term $K = \alpha \rho I^p$ can then be estimated as

$$K = \frac{{{I_0}}}{{{t_0}}}$$

Substituting Eq. (9) into Eq. (8), optimal exposure time ${t_{opt}}$ can be described as

$${t_{opt}} = \frac{{{I_{ideal}}{t_0}}}{{{I_0}}}$$

For commonly used industrial cameras, it is impractical to set an optimal exposure time for each individual pixel. Experiments show that an object with a large range of reflectivity can be divided into several areas, each with a small range of reflectivity, and the optimal exposure time can be determined per area. With the proposed method, the HDR surface with a large range of reflectivity is divided into the three areas $Z_B$, $Z_G$ and $Z_R$. To predict an exposure time that entirely covers each area, the maximum intensity of each area is used to calculate its exposure time. The maximum intensities of areas $Z_B$, $Z_G$ and $Z_R$ on channel B are denoted $I_{Z_B}^m$, $I_{Z_G}^m$ and $I_{Z_R}^m$. Areas $Z_G$ and $Z_R$ are saturated on channel B, so $I_{Z_G}^m$ and $I_{Z_R}^m$ cannot be obtained directly but are recovered through the crosstalk coefficient function. Substituting $I_{Z_B}^m$, $I_{Z_G}^m$ and $I_{Z_R}^m$ for $I_0$ in Eq. (10), the optimal exposure times of areas $Z_B$, $Z_G$ and $Z_R$ can be expressed as

$$t_{opt}^{Z_B} = \frac{I_{ideal}\, t_0}{I_{Z_B}^m},\quad t_{opt}^{Z_G} = \frac{I_{ideal}\, t_0}{I_{Z_G}^m},\quad t_{opt}^{Z_R} = \frac{I_{ideal}\, t_0}{I_{Z_R}^m}$$
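A minimal sketch of Eq. (11) (hypothetical names; `I_B_recovered` is assumed to be the blue-channel intensity map with the crosstalk-recovered values substituted into the saturated areas):

```python
def optimal_exposure_times(I_B_recovered, masks, t0, I_ideal=250.0):
    """Eq. (11): one optimal exposure time per area, from the maximum
    (possibly crosstalk-recovered) blue-channel intensity of that area.

    masks: (mask_ZB, mask_ZG, mask_ZR), boolean NumPy arrays."""
    return [I_ideal * t0 / I_B_recovered[m].max() for m in masks]
```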

2.4 Image fusion

Since the optimal exposure time $t_{opt}^i$ for each of the areas $Z_B$, $Z_G$ and $Z_R$ has been predicted, three sets of blue phase-shifting fringe patterns and blue uniform images are captured at the corresponding $t_{opt}^i$. The captured blue fringe patterns $I_{opt}^{Z_B}$, $I_{opt}^{Z_G}$ and $I_{opt}^{Z_R}$ are multiplied by their corresponding masks to select the optimally exposed areas, and the extracted portions of the fringe patterns are added into a new fused image. The captured blue fringe pattern $I_{opt}^i$ and uniform blue image $I_u^i$ are used to create the fused fringe image $I_f$ and the fused uniform image $I_u$, which can be expressed as

$$\begin{aligned} I_f(x,y) &= \sum_i \mathrm{mask}_i(x,y) \times I_{opt}^i(x,y)\\ I_u(x,y) &= \sum_i \mathrm{mask}_i(x,y) \times I_u^i(x,y) \end{aligned} ,\quad i = Z_B, Z_G, Z_R$$
where $I_{opt}^i(x,y)$ represents the intensities of captured blue fringe pattern with exposure time $t_{opt}^i$, and $I_u^i(x,y)$ represents the intensities of captured blue uniform image with exposure time $t_{opt}^i$.

By a background-normalization algorithm, the final fused normalized HDR fringe pattern $I_n$ is generated according to Eq. (13).

$${I_n}(x,y) = \frac{{2{I_f}(x,y) - {I_u}(x,y)}}{{{I_u}(x,y) + \beta }}$$
where $I_n(x,y)$ represents the fused HDR fringe pattern and $\beta$ is a small constant to avoid division by zero, set to 0.005 in the following experiments.
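The fusion of Eq. (12) and the normalization of Eq. (13) reduce to per-pixel masked sums; a minimal sketch under the same assumptions as the earlier snippets:

```python
def fuse_and_normalize(fringes, uniforms, masks, beta=0.005):
    """Eq. (12): mask and sum the three optimally exposed captures;
    Eq. (13): background normalization of the fused fringe pattern.

    fringes, uniforms: float arrays captured at t_opt of each area,
    ordered to match masks = (mask_ZB, mask_ZG, mask_ZR)."""
    I_f = sum(m * f for m, f in zip(masks, fringes))
    I_u = sum(m * u for m, u in zip(masks, uniforms))
    return (2.0 * I_f - I_u) / (I_u + beta)
```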

The four-step phase-shifting algorithm is applied to calculate the wrapped phase of the fused HDR images, which lies between −π and +π. With multiple fringe patterns, a spatial or temporal phase-unwrapping algorithm [28,29] is needed to unwrap the phase and obtain the absolute phase. The optimum three-frequency temporal phase-unwrapping algorithm is selected to remove the phase ambiguity [30]. The absolute phase is then converted to 3D coordinates after system calibration [31]. Zhang [31] used a polynomial-based calibration method to build the relationship between absolute phase and height data:

$$h(x,y) = \sum_{n = 0}^{N} a_n(x,y)\,\Delta\varphi(x,y)^n$$
where $h(x,y)$ is the height relative to the reference plane, $a_n(x,y)$ is a coefficient set containing the systematic parameters, and $\Delta\varphi(x,y)$ is the difference between the absolute phase on the measured object and that on the reference plane.
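For reference, a sketch of the standard four-step phase-shifting formula and the phase-to-height map of Eq. (14). The shift convention (0, π/2, π, 3π/2) is the common one and is an assumption here, since the paper does not spell it out; `coeffs` is a hypothetical list of the calibrated per-pixel maps $a_n(x,y)$.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shifting with assumed shifts 0, pi/2, pi, 3*pi/2:
    returns the wrapped phase in (-pi, pi]."""
    return np.arctan2(I4 - I2, I1 - I3)

def phase_to_height(dphi, coeffs):
    """Eq. (14): height from the unwrapped phase difference dphi,
    using the calibrated per-pixel polynomial coefficients a_n(x, y)."""
    h = np.zeros(dphi.shape, dtype=float)
    for n, a_n in enumerate(coeffs):
        h += a_n * dphi ** n
    return h
```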

Depth calibration refers to determining the coefficient set of the polynomial equation, which can be achieved by placing a circular calibration plate at several different positions in the measurement field. At each calibration position, fringe patterns are projected onto the surface of the calibration board to provide absolute phase information for each pixel. After obtaining the depth of each pixel on the calibration board relative to the reference plane, the relationship between absolute phase and depth can be established through Eq. (14).

A total of 12 fringe patterns (four-step phase shifting at each of the three optimally selected frequencies) and a uniform blue image are projected for each exposure. The proposed method therefore requires three groups of patterns, 39 images in total, to reconstruct the 3D shape of HDR objects, plus one uniform image to calculate the optimal exposure times.

The flowchart of the proposed method is shown in Fig. 3, including the following steps.

Fig. 3. Flow chart of the proposed method.

Step 1: preparation before measurement, consisting of system parameter calibration and crosstalk coefficient function calibration. Zhang's method [31] is applied for system calibration, which involves determining the relationship between absolute phase and depth, as well as the correlation between pixel positions and XY coordinates.

Step 2: image fusion and phase calculation. Firstly, a uniform blue image with intensity 255 is projected and captured by the color camera with exposure time $t_0$, and the object surface is marked into three areas $Z_B$, $Z_G$ and $Z_R$ with different reflectivity by $\mathrm{mask}_{Z_B}$, $\mathrm{mask}_{Z_G}$ and $\mathrm{mask}_{Z_R}$. The calibrated crosstalk coefficient function is then used to calculate the intensity of the fringe pattern in the saturated areas $Z_G$ and $Z_R$ on channel B. The optimal exposure time $t_{opt}^i$ of areas $Z_B$, $Z_G$ and $Z_R$ is obtained from the linear photometric response of the camera. Three sets of blue phase-shifting fringe patterns and blue uniform images are captured at $t_{opt}^i$. Intensities are extracted from each set of images by $\mathrm{mask}_{Z_B}$, $\mathrm{mask}_{Z_G}$ and $\mathrm{mask}_{Z_R}$, and the extracted portions of the fringe images are added into a fused HDR image. Finally, the fused HDR image is used to calculate the absolute phase.

Step 3: reconstruct 3D shape of the HDR object.

3. Experiments

3.1 System hardware setup

To verify the performance of the proposed method, a fringe projection hardware system has been developed, as shown in Fig. 4. The system consists of a digital light processing (DLP) projector (TI, LightCrafter 4500, with resolution of 912 × 1140) and a complementary metal oxide semiconductor (CMOS) camera fitted with a 16 mm lens (XIMEA, MQ042CG-CM, with a resolution of 2048 × 2048).

Fig. 4. System setup.

3.2 System and crosstalk coefficient function calibration

The study employs Zhang's method [31] to ascertain the relationship between absolute phase and depth, as well as the association between pixel position and X, Y coordinates, to accomplish system calibration. To verify the accuracy of the measurement system, we measured the standard ceramic spheres shown in Fig. 5(a). A representative blue fringe pattern is shown in Fig. 5(b); the absolute phase map and reconstructed 3D result are shown in Fig. 5(c) and Fig. 5(d). We used the measured data shown in Fig. 5(d) to fit the radii of sphere A and sphere B. The radii of sphere A and sphere B measured by a coordinate measuring machine (CMM) are 19.0502 mm and 19.0479 mm, which serve as ground truth. The differences between the measured data and the ground truth are shown in Table 1. The maximum absolute error is 0.0350 mm. The calibrated fringe projection system can therefore accurately convert absolute phase into 3D data.

Fig. 5. Standard ceramic spheres. (a) Photograph of the pair of standard ceramic spheres, (b) representative fringe pattern, (c) absolute phase map, (d) 3D result.

Table 1. Accuracy results of the measurement system

To calibrate the crosstalk coefficient function, four uniform blue images with intensity 50, 100, 150 and 200 were projected onto a flat whiteboard and captured by the color camera. We separated each of the four images into its R and G components and took the average intensity of their pixels to obtain the values of $f_R(I_B)$ and $f_G(I_B)$. The least-squares approach was then used to solve the fitting parameters $m_G$, $n_G$, $m_R$ and $n_R$ of Eq. (4). The crosstalk coefficient function of the camera was computed as shown in Eq. (15). The intensity responses of channels G and R to channel B are shown in Fig. 6(a) and 6(b).

$$\left\{ {\begin{array}{c} {{f_R}({I_B}) = 0.028{I_B} + 3.260}\\ {{f_G}({I_B}) = 0.191{I_B} + 6.803}\\ {{f_B}({I_B}) = {I_B}} \end{array}} \right.$$

Fig. 6. (a) The intensity response of channel G to channel B, (b) the intensity response of channel R to channel B.

3.3 HDR measurement

In this experiment, the metal part of a spherical cover, shown in Fig. 7(a), was measured. Firstly, blue fringe patterns were projected onto its surface, and it was measured by the traditional method with a single exposure time $t_0 = 40000\,\mu s$. The four-step phase-shifting algorithm was used to calculate the wrapped phase, and the optimum three-frequency selection method with optimum fringe numbers of 100, 99 and 90 was applied to calculate the absolute phase. Figure 8 shows the measurement results. One of the captured blue fringe patterns with an exposure time of $t_0 = 40000\,\mu s$ is shown in Fig. 8(a). The absolute phase map and 3D result of the traditional method are shown in Fig. 8(b) and 8(c): pixel saturation occurred in the bright area, causing obvious errors in the reconstructed 3D result. With the proposed method, a uniform blue image with intensity 255 was projected onto the object, and the camera captured the image at the initial exposure time $t_0 = 40000\,\mu s$ to mark the surface into three areas $Z_B$, $Z_G$ and $Z_R$ with different reflectivity. Area $Z_B$ is the unsaturated area on the blue channel, while areas $Z_G$ and $Z_R$ are the saturated areas on the blue channel. By Eq. (11), the corresponding exposure times of areas $Z_B$, $Z_G$ and $Z_R$ were computed as $t_{opt}^{Z_B} = 40000\,\mu s$, $t_{opt}^{Z_G} = 8304\,\mu s$ and $t_{opt}^{Z_R} = 1394\,\mu s$. Three sets of blue fringe patterns $I_{opt}^{Z_B}$, $I_{opt}^{Z_G}$ and $I_{opt}^{Z_R}$ were then captured with exposure times $t_{opt}^{Z_B}$, $t_{opt}^{Z_G}$ and $t_{opt}^{Z_R}$. $I_{opt}^{Z_B}$ was multiplied by $\mathrm{mask}_{Z_B}$ to obtain the optimally exposed fringe pattern of area $Z_B$; similarly, the optimally exposed fringe patterns of areas $Z_G$ and $Z_R$ were obtained. The fully extracted portions of the fringe patterns were then added into a new fused image. Figure 9 shows the process of fusing the blue fringe patterns captured at the three exposure times into the fused HDR fringe pattern. The fused HDR fringe pattern of the proposed method is shown in Fig. 8(d), and the corresponding phase map and 3D result are shown in Fig. 8(e) and Fig. 8(f). The proposed method provides the 3D shape without the measurement errors caused by pixel saturation.

Fig. 7. (a) Photograph of the metal part of a spherical cover and (b) photograph of the metal flat part.

Fig. 8. Measurement results for the metal part of a spherical cover. (a) Fringe pattern of the traditional method with a single exposure time, (b) absolute phase map by the traditional method, (c) corresponding 3D result of (a), (d) fused HDR fringe pattern of the proposed method, (e) absolute phase map by the proposed method, (f) corresponding 3D result of (d).

Fig. 9. Fusion process of the fringe pattern.

To further demonstrate the performance of the proposed method, a metal flat part with a specular surface, shown in Fig. 7(b), was tested. One of the blue fringe patterns captured by the traditional method at exposure time $t_0 = 72000\,\mu s$ is shown in Fig. 10(a). The overexposed areas result in a poor absolute phase map and 3D result, as shown in Fig. 10(b) and 10(c). Then the proposed method was implemented. By Eq. (11), the exposure times of the proposed method were obtained as $t_{opt}^{Z_B} = 72000\,\mu s$, $t_{opt}^{Z_G} = 15074\,\mu s$ and $t_{opt}^{Z_R} = 3144\,\mu s$. The fused HDR fringe pattern of the proposed method is shown in Fig. 10(d), and the corresponding phase map and 3D result are shown in Fig. 10(e) and Fig. 10(f). The experimental results confirm that the proposed method can reconstruct the 3D shape of HDR objects.

Fig. 10. Measurement results for the metal flat part. (a) Fringe pattern of the traditional method with a single exposure time, (b) absolute phase map by the traditional method, (c) corresponding 3D result of (a), (d) fused HDR fringe pattern by the proposed method, (e) absolute phase map by the proposed method, (f) corresponding 3D result of (d).

To evaluate the proposed method quantitatively, metal artificial standard steps, shown in Fig. 11(a), were tested. To better describe the effectiveness and accuracy of the proposed method, we also measured them using Zhang's method [7] for comparison. The exposure time of the traditional method was set to $t_0 = 35000\,\mu s$; the exposure times of Zhang's method were set to $t_1 = 50000\,\mu s$, $t_2 = 40000\,\mu s$, $t_3 = 30000\,\mu s$, $t_4 = 20000\,\mu s$, $t_5 = 7000\,\mu s$, $t_6 = 4000\,\mu s$, $t_7 = 3000\,\mu s$ and $t_8 = 1000\,\mu s$; and the corresponding exposure times of the proposed method were computed as $t_{opt}^{Z_B} = 35000\,\mu s$, $t_{opt}^{Z_G} = 7266\,\mu s$ and $t_{opt}^{Z_R} = 1150\,\mu s$. The corresponding fringe patterns are shown in Fig. 11(b)-11(d). The absolute phase maps and reconstructed 3D results are shown in Fig. 12(a)-12(c) and Fig. 12(d)-12(f), respectively. It can be seen from Fig. 12 that there is obvious measurement error with the traditional method, while Zhang's method and the proposed method both reconstruct high-quality 3D data.

Fig. 11. Metal artificial standard steps. (a) Photograph of the metal artificial standard steps, (b) representative fringe pattern of the traditional method with a single exposure time, (c) fringe pattern of the metal artificial standard steps by Zhang’s method, (d) fused HDR fringe pattern of the metal artificial standard steps by the proposed method.

Fig. 12. Measurement results for the metal artificial standard steps. (a) Absolute phase map by the traditional method, (b) absolute phase map by Zhang’s method, (c) absolute phase map by the proposed method, (d) corresponding 3D result by the traditional method, (e) corresponding 3D result by Zhang’s method, (f) corresponding 3D result by the proposed method.

To quantitatively evaluate the accuracy of the proposed method, the 3D data of each step surface were fitted into step planes, and the average distance from the points on one step plane to the adjacent step plane was used as the step height. The height of each step surface measured by the CMM was used as the reference value. The comparison of the proposed method with Zhang’s method is shown in Table 2. Zhang’s method needs eight groups of fringe patterns, 96 images in total, to obtain a highly accurate 3D result of the metal artificial standard steps, while the proposed method is more efficient, requiring three groups of fringe patterns, 39 images in total, to reconstruct the 3D shape, plus one uniform image to calculate the optimal exposure times. Moreover, the proposed method has slightly higher accuracy than Zhang’s method.

Table 2. Comparison of the proposed method with Zhang’s method

The accuracy of the proposed method has been verified by measuring the metal artificial standard steps. To evaluate its effectiveness, it was compared with the latest multi-exposure methods [10,11,13], as shown in Table 3. Tang et al. [10] estimated the surface reflectivity of the measured target and used five to six exposure times to achieve automatic exposure. Wang et al. [11] chose four widely spaced exposure times to ensure the effectiveness of the fringes, requiring four different exposure times and two additional four-step phase-shifting fringe patterns. Li et al. [13] used two cameras to fuse multiple exposure images through the principal frequency component, requiring five to nine exposure times. The proposed method requires one uniform blue image to calculate the optimal exposure times and three exposure times to reconstruct the 3D data. Because [10] and [13] do not give specific phase-calculation methods, the proposed method was compared with them only in terms of the number of exposure times. The numbers of captured images for [11] and the proposed method are 12 × 4 + 4 × 2 = 56 and 13 × 3 + 1 = 40, respectively. The proposed method therefore has the following advantages: compared with [10], it does not require calculating the surface reflectivity of the measured object and needs fewer exposure times; compared with [11], it needs fewer exposure times and fewer fringe patterns; and compared with [13], it requires less hardware and has lower computational complexity.

Table 3. Comparison of the proposed method with the latest multi-exposure methods

4. Conclusions

In this paper, an HDR 3D shape measurement method based on the crosstalk characteristics of a color camera has been proposed. For HDR objects with unknown surface reflectivity, the surface is marked into three areas with different reflectivity by using the different intensity responses of the red, green and blue channels of color images. The crosstalk coefficient function of the color camera, which exploits the fixed response of the red and green channels to blue light, is applied to compute the intensity of the saturated areas. The optimal exposure time is then determined per area using the linear photometric response of the camera. The method has the following advantages: it uses fewer images to measure HDR objects and avoids both complex computation and additional hardware. The experimental results show that the proposed method reconstructs the 3D shape of HDR objects with high accuracy.

The proposed method can measure HDR objects such as metal objects with specular reflection properties without calculating the surface reflectivity of the measured object. However, it still needs improvement: 1) it may not work well when the object is in motion and is not suitable for real-time measurement, since multiple sets of images must be projected; 2) because it relies on the crosstalk characteristics of color cameras, it is not suitable for measuring colored HDR objects. Our future work will involve reducing the number of fringe patterns and developing measurement techniques for moving HDR objects and colored HDR objects.

Funding

National Natural Science Foundation of China (52075147), Post-doctoral Funded Scientific Research Projects in Hebei Province (B2021003024), State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology (EERIPD2021003), S & T Program of Hebei (215676146H, 225676163GH).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011).

2. F. Chen, G.M. Brown, and M. Song, “Overview of 3-D shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000).

3. D. Yang, D.Y. Qiao, C.F. Xia, et al., “Adaptive horizontal scaling method for speckle-assisted fringe projection profilometry,” Opt. Express 31(1), 328–343 (2023).

4. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020).

5. S. Zhan, H.L. Jiang, H.B. Lin, et al., “A high dynamic range structured light means for the 3D measurement of specular surface,” Opt. Lasers Eng. 95, 8–16 (2017).

6. J.H. Wang and Y.X. Yang, “A new method for high dynamic range 3D measurement combining adaptive fringe projection and original-inverse fringe projection,” Opt. Lasers Eng. 163, 107490 (2023).

7. S. Zhang and S.T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 033604 (2009).

8. L. Ekstrand and S. Zhang, “Autoexposure for three-dimensional shape measurement using a digital-light-processing projector,” Opt. Eng. 50(12), 123603 (2011).

9. S.J. Feng, Y.Z. Zhang, Q. Chen, et al., “General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique,” Opt. Lasers Eng. 59, 56–71 (2014).

10. S.M. Tang, X. Zhang, C. Li, et al., “High dynamic range three-dimensional shape reconstruction via an auto-exposure-based structured light technique,” Opt. Eng. 58(6), 064108 (2019).

11. J.H. Wang, Y.G. Zhou, and Y.X. Yang, “A novel and fast three-dimensional measurement technology for the objects surface with non-uniform reflection,” Results Phys. 16, 102878 (2020).

12. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Lasers Eng. 128, 106029 (2020).

13. J. Li, J.T. Guan, X.B. Chen, et al., “Exposure map fusion for precise 3-D reconstruction of high dynamic range surfaces,” IEEE Trans. Instrum. Meas. 71, 5022911 (2022).

14. C. Waddington and J. Kofman, “Analysis of measurement sensitivity to illuminance and fringe-pattern gray levels for fringe-pattern projection adaptive to ambient lighting,” Opt. Lasers Eng. 48(2), 251–256 (2010).

15. G. Babaie, M. Abolbashari, and F. Farahi, “Dynamics range enhancement in digital fringe projection technique,” Precis. Eng. 39, 243–251 (2015).

16. H. Lin, J. Gao, Q. Mei, et al., “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24(7), 7703–7718 (2016).

17. H.Z. Jiang, H.J. Zhao, and X.D. Li, “High dynamic range fringe acquisition: a novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012).

18. Y.Z. Liu, Y.J. Fu, X.Q. Cai, et al., “A novel high dynamic range 3D measurement method based on adaptive fringe projection technique,” Opt. Lasers Eng. 128, 106004 (2020).

19. X.X. Cai, R.H. Xu, H. Li, et al., “High-reflective surfaces shape measurement technology based on adaptive fringe projection,” Sens. Actuators, A 347, 113916 (2022).

20. S.Y. Xu, T.Y. Feng, and F.F. Xing, “3D measurement method for high dynamic range surfaces based on adaptive fringe projection,” IEEE Trans. Instrum. Meas. 72, 5013011 (2023).

21. B. Salahieh, Z.Y. Chen, J.J. Rodriguez, et al., “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064–10071 (2014).

22. S.J. Feng, Q. Chen, C. Zuo, et al., “Fast three-dimensional measurements for dynamic scenes with shiny surfaces,” Opt. Commun. 382, 18–27 (2017).

23. Y. Zheng, Y.J. Wang, V. Suresh, et al., “Real-time high-dynamic-range fringe acquisition for 3D shape measurement with a RGB camera,” Meas. Sci. Technol. 30(7), 075202 (2019).

24. Y.Z. Liu, Y.J. Fu, Y.H. Zhuan, et al., “High dynamic range real-time 3D measurement based on Fourier transform profilometry,” Opt. Laser Technol. 138, 106833 (2021).

25. M.K. Yue, J.Y. Wang, J.S. Zhang, et al., “Color crosstalk correction for synchronous measurement of full-field temperature and deformation,” Opt. Lasers Eng. 150, 106878 (2022).

26. Y.K. Yin, Z.W. Cai, H. Jiang, et al., “High dynamic range imaging for fringe projection profilometry with single-shot raw data of the color camera,” Opt. Lasers Eng. 89, 138–144 (2017).

27. L. Zhang, Q. Chen, C. Zuo, et al., “High dynamic range 3D shape measurement based on the intensity response function of a camera,” Appl. Opt. 57(6), 1378–1386 (2018).

28. X.Y. Su and W.J. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004).

29. C. Zuo, S.J. Feng, L. Huang, et al., “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

30. Z.H. Zhang, C.E. Towers, and D.P. Towers, “Time efficient color fringe projection system for simultaneous 3D shape and color using optimum 3-frequency selection,” Opt. Express 14(14), 6444–6455 (2006).

31. Z.H. Zhang, S.J. Huang, S.S. Meng, et al., “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013).
