Suppressing blooming interference in structured light systems by adaptively reducing camera oversaturated energy

Open Access

Abstract

Structured light systems often suffer from blooming, which interferes with the projected fringes when scanning metal objects. Unfortunately, this problem cannot be reliably solved using conventional methods such as the high dynamic range (HDR) method or adaptive projection techniques. This study therefore proposes a method that adaptively suppresses the oversaturated areas that cause blooming as the exposure time increases and then fuses the decoding results obtained at multiple exposure times using a decoding inheritance method. The experimental results demonstrate that the proposed method suppresses blooming interference more effectively than existing methods.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Many systems, such as binocular vision systems [1] and structured light systems [2], use three-dimensional (3D) scanning technology to confirm machining precision. Nevertheless, binocular vision systems have difficulty reconstructing the surface of an object when there is no discernible texture. Structured light systems solve this challenge by projecting particular patterns onto the object; the shape of the object is then determined from the 3D spatial relationship between the projector and the camera. However, the additional light source (i.e., the projector) may cause the camera to oversaturate, thereby resulting in reconstruction errors. Furthermore, structured light systems often encounter the problem of insufficient camera dynamic range when measuring metallic objects. This leads to camera pixel saturation [3] and the generation of blooming, which interferes with the captured fringe patterns.

The saturated-pixel problem can be mitigated by changing the reflectance characteristics of the surface through the use of anti-reflective paints or coatings that suppress the extreme reflection [4,5]. However, the measured surface then exhibits thickness errors with respect to the actual shape. Moreover, the use of paints and coatings is not permitted on certain metal surfaces. Accordingly, various measurement solutions that avoid painting or coating high-reflectance metal surfaces have been proposed. For example, the high dynamic range (HDR) method [6,7] fuses multiple images with different exposure times in order to retain the details of both the bright and the dark areas. However, it also retains much of the blooming around saturated pixels. Multi-view structured light systems [8–10] have also been proposed as a means of compensating for defective areas. The methods in [11,12] determine the best fringe areas and fuse fringe images captured at different exposure times. However, properly adjusting the camera exposure time requires significant user experience, and it is therefore essential to streamline the exposure time adjustment procedure. This study provides a simplified method of controlling the exposure time, together with guidelines (see Sec. 3.2.3). Generally, different wavelengths of light exhibit different reflectance for different metallic materials [13]. With the exception of aluminum, the reflectance of most metals increases with the wavelength of the incident light. Consequently, it is possible to circumvent the pixel saturation problem by projecting light at specific wavelengths. Although polarizing filters [14,15] can effectively remove any excess light that is not polarized, manual adjustment of the polarization is required, and the polarized light received by the camera must be at the same angle to avoid equipment displacement errors. Yamazaki and Xu [16] combined screen-based and projector-based projection to capture the mirrored and non-mirrored areas and merged the two kinds of images. Zhang et al. [17] used a deep learning approach to repair the saturated regions of the image without the need for additional images or hardware adjustments during the measurement process. The adaptive projection techniques in [18,19] project darker grayscale intensities onto more reflective surfaces.

However, none of the methods described above solve the issue of blooming around the saturated area and the interference it produces with the fringe patterns. Blooming occurs when the excessive electric charge of oversaturated pixels flows into the surrounding pixels. Consequently, if the energy of the oversaturated pixels can be decreased, the generation of blooming can also be suppressed. In structured light systems, the most direct way of achieving this is to reduce the intensity of the corresponding projector pixels. To obtain high-quality fringe images, it is necessary to capture images at multiple exposure times and perform fusion processing. However, as the exposure time increases, more serious energy overflows of the oversaturated pixels occur, and the blooming problem thus worsens. To solve this problem, it is necessary to dynamically update, at each exposure time, the area over which the intensity of the affected pixels is suppressed.

In this study, a blooming-suppressing structured light system is presented. Using the decoding results obtained at low exposure times, the system finds the projector pixels that are causing the camera to become oversaturated, and the intensity of those pixels is then decreased. As the exposure time increases, the system updates the areas with decreased intensity and obtains the corresponding decoding results. To fuse these results, the study presents a method called decoding inheritance, in which decoding results from high exposure times compensate for dark regions that cannot be decoded at low exposure times. An exposure time adjustment rule is developed to pinpoint the projector pixels that are oversaturated at high exposure times, ensuring that reliable results from low exposure times can be used. Finally, the decoding inheritance results obtained at the maximum exposure time are reconstructed using a triangulation method [20].

The remainder of this paper is organized as follows. Section 2 describes the light reflection model of metals and the difficulties associated with structured light scanning. Section 3 introduces the architecture of the proposed adaptive blooming suppression structured light system. Section 4 presents and discusses the experimental results. Finally, Section 5 provides some brief concluding remarks.

2. Reflection light model and difficulties associated with structured light systems

2.1 Reflection models of metal

Ideally, the surface reflectivity of objects can be described by the Lambertian diffuse reflection model [21], which assumes that the reflected light is isotropic and uniformly scattered in all directions. However, this ideal model does not hold for surfaces with a metallic luster because these materials have a higher reflectance than others with the same surface roughness. Nayar et al. [22] proposed an arbitrary light reflection model in which the reflected light comprises three components, depending on the reflection direction and energy concentration at the object surface: a diffuse lobe, a specular lobe, and a specular spike. As shown in Fig. 1, the diffuse lobe scatters light randomly over a hemisphere. By contrast, in the specular lobe, most of the light is transmitted in a certain direction, and hence the light is more concentrated. Finally, for the specular spike component, all the light is transmitted in a single direction, and the sensor must therefore be located at a specific position to obtain the reflective information. Unground metal surfaces are typically rougher, which causes the reflected light energy to be more widely distributed. On the other hand, finely ground metal surfaces concentrate more of the reflected light. For most metal surfaces, the reflected light is dominated by the diffuse lobe and specular spike components. However, when only a weak diffuse component is present on the surface, the sensor captures less reflective information. Consequently, the exposure time of the camera must be increased to compensate for this deficiency. However, increasing the exposure time tends to increase the size of the pixel-saturated area, thereby worsening the blooming effect.

Fig. 1. Arbitrary light reflection model proposed by Nayar et al. [22].

2.2 Short- and long-range effects of structured light

In addition to the light reflection modes proposed by Nayar et al. [22], it is necessary to consider the interreflection effects that occur during light transmission. In this situation, the light that is directly reflected from the object to the sensor is called the direct component, whereas that which is reflected to other surfaces of the object before it enters the sensor is called the indirect component. When the intensity of the indirect light is greater than that of the direct light, it interferes with the direct light and results in a phenomenon called interreflection. The closer the geometry of the object approaches a V-shape, the more severe the effect of interreflection becomes. Nayar et al. [23] proposed a method to quickly separate the direct and indirect components. However, this method requires the capture of a large number of images and is therefore impractical for industrial applications. Gupta et al. [24] showed that the projection of low-frequency fringes can effectively suppress the short-range effects of blurring but is susceptible to interreflection. Conversely, the projection of high-frequency fringes can effectively suppress the long-range effects of interreflection but is liable to blurring. This problem can be addressed using the large gap gray code (LGGC) [25]. As shown in Figs. 2(a) and 2(b), the fringe frequency of the LGGC lies between the low and high frequencies and consequently mitigates the interreflection and out-of-focus issues associated with the conventional gray code. In this study, complementary fringes are used for better binarization of the LGGC fringes.
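
For concreteness, the following Python sketch generates conventional binary-reflected gray-code fringe patterns and their complements (an illustrative implementation of ours, not the authors' code; the LGGC of [25] additionally reorders the codewords to optimize run lengths, which is not reproduced here):

```python
import numpy as np

def gray_code_patterns(width, height, bits):
    """Vertical gray-code fringes and their complements for a projector.

    Conventional binary-reflected gray code; the LGGC variant [25]
    would permute the codewords to equalize fringe widths.
    """
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                      # binary-reflected gray code
    patterns = []
    for b in range(bits - 1, -1, -1):              # most significant bit first
        stripe = ((gray >> b) & 1).astype(np.uint8) * 255
        img = np.tile(stripe, (height, 1))         # one fringe image
        patterns.append((img, 255 - img))          # pattern and its inverse
    return patterns
```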

Fig. 2. (a) Conventional gray code contains low-frequency and high-frequency fringes and is susceptible to short-range and long-range effects; (b) large gap gray code (LGGC) [25] can be effective in mitigating the interreflection and out-of-focus problem.

2.3 Camera blooming phenomenon

According to [26], the electric charge of a camera is constrained by the capacity of its individual photodiodes (or pixels), and a pixel is said to be saturated when its saturation charge value is reached. In this situation, excess electric charge overflows from the photodiode into the neighboring pixel structure, resulting in blooming. Sequin [27] designed an anti-blooming camera to overcome this problem. However, this type of camera is not yet widely available. During the structured light scanning process, the saturation of the camera pixels and the resulting blooming effect seriously impair the quality of the fringes, as shown in Fig. 3, and thus degrade the accuracy of the measurement results. The present study addresses this issue by developing an adaptive blooming suppression structured light system that adjusts the region of the oversaturated pixels to be suppressed as the exposure time increases.

Fig. 3. Camera saturation and blooming affect fringe quality in structured light systems.

3. Adaptive blooming suppression structured light system

3.1 Blooming detector

As shown in Fig. 3, blooming seriously interferes with nearby fringes. To alleviate this effect, it is desirable to reduce to zero the intensity of the projector pixels that cause the camera to become oversaturated. The first step in detecting the oversaturated area is to construct a minimum intensity map ${P^{min}}$ from the minimum grayscale value of each pixel across the image P of the projected fringes and the image ${P^c}$ of the inverse fringes, as shown in Fig. 4(a). The minimum intensity map ${P^{min}}$ represents the black areas of the projected fringes. The intensity values of ${P^{min}}$ are usually below 255, since the intersection boundary between the black and white fringes is neither fully dark nor fully light because of the point spread phenomenon [28]. If any pixel of ${P^{min}}$ is saturated, it can be inferred that the fringes are disturbed by the saturation problem. Similarly, the presence of blooming around the oversaturated area of ${P^{min}}$ indicates that the fringes are disturbed by blooming.

Fig. 4. Identification of oversaturated areas ${O_n}$ that cause camera blooming: (a) obtain the minimum intensity map ${P^{min}}$ from the fringe image P and the inverse fringe image ${P^c}$, (b) apply an averaging filter of size 7 × 7 to the minimum intensity map ${P^{min}}$ and compute the average intensity map ${\bar{P}_{min}}$, (c) set the averaging threshold ${T_{Avg}}$ to 250 and roughly locate the oversaturated areas ${O_n}$ that cause camera blooming, (d) perform UNION calculation on $\tilde{R}_n^m$ detected in each bit at the n-th exposure time.

Oversaturated areas of ${P^{min}}$ contain more energy than other areas, so they can be located simply by searching for localized areas of high energy. An averaging filter [29] computes the average energy of each local area in the image; although it blurs the image, this smoothing also reduces noise, and it conveniently yields the local average energy required here. As shown in Fig. 4(b), any area of the filtered image ${\bar{P}_{min}}$ with a high average energy value is deduced to be affected by oversaturation or blooming. Accordingly, an average energy threshold ${T_{Avg}}$ can be set such that any area with an average energy higher than this threshold is crudely approximated as an oversaturated region $\tilde{R}$, as shown in Fig. 4(c). Notably, the size of the averaging filter depends on the range of the blooming effect: if blooming occurs over a larger range, the filter should be assigned a larger size, although small-range blooming may then be overlooked. In practice, the threshold ${T_{Avg}}$ determines the sensitivity of the system in detecting oversaturated areas: if ${T_{Avg}}$ is set too large, the sensitivity will be poor; conversely, if it is set too small, the sensitivity will be excessive. For the experimental environment considered in the present study, the size of the averaging filter was set to 7 × 7 and ${T_{Avg}}$ was set to 250. Since oversaturated regions typically have a grayscale value of 255, a threshold of 254 might seem the natural choice. However, Gaussian noise [30] in the camera image must also be taken into account: random intensity fluctuations can cause oversaturated areas to register grayscale values of 254 or below. The blooming detector is therefore more stable when the threshold is set to a slightly lower value, such as 250. Even with this parameter fixed, the area over which blooming is suppressed still fluctuates with the severity of the blooming, and adjustment according to the material and surface roughness is often advisable. The field of reflected light is, in reality, quite intricate; consequently, only a simple estimation of the oversaturated regions can be performed in this manner.
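
The detection step described above can be sketched as follows (the OpenCV-based implementation and the function name are our assumptions; the 7 × 7 filter and ${T_{Avg}} = 250$ follow this study's settings):

```python
import cv2
import numpy as np

def detect_oversaturated_bit(P, Pc, kernel=7, t_avg=250):
    """Roughly estimate the oversaturated region R~ for one LGGC bit.

    P, Pc : uint8 captures of a fringe pattern and its inverse pattern.
    kernel: averaging-filter size (7 x 7 in this study's experiments).
    t_avg : average energy threshold T_Avg (250 in this study's experiments).
    """
    p_min = np.minimum(P, Pc)                      # minimum intensity map P^min
    p_avg = cv2.blur(p_min.astype(np.float32), (kernel, kernel))
    return p_avg > t_avg                           # boolean map of R~
```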

Since each bit of the LGGC comprises a fringe image and its inverse image, one detection of the oversaturated areas causing blooming is obtained per bit. At any exposure time n, the per-bit detection results are combined by the UNION operation shown in Fig. 4(d) to obtain the oversaturated area ${O_n}$ that causes blooming, as described by the following equation:

$${O_n} = \tilde{R}_n^1 \cup \tilde{R}_n^2 \cup \ldots \cup \tilde{R}_n^M,\quad \textrm{where}\ n = 1,2,3, \ldots, N$$
where $\tilde{R}_n^m$ is a rough estimate of the oversaturated area of the m-th bit at the n-th exposure time, N is the total number of exposure times, and M is the total number of bits projected at a single exposure time.
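
In code, Eq. (1) reduces to a pixel-wise union over the per-bit boolean detection maps; a minimal sketch under the same assumptions as above:

```python
import numpy as np

def union_oversaturated(R_bits):
    """O_n = R~_n^1 ∪ R~_n^2 ∪ ... ∪ R~_n^M (Eq. 1): pixel-wise OR
    over the M per-bit boolean detection maps at one exposure time."""
    return np.logical_or.reduce(R_bits)
```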

The decoding results can be regarded as the correspondence relationship between the camera pixels and projector pixels [19]. Thus, the decoding results can be used to map the estimated ${O_n}$ on the camera image plane to the projector image plane in order to find the projector pixels that cause the camera pixels to saturate and whose intensity should therefore be decreased to zero, as shown in Fig. 5. Since no decoding result can be obtained for pixels that are presently saturated, the previous decoding result ${D_{n - 1}}$, obtained when the position of ${O_n}$ was not yet saturated, must be used instead to find the mapping relationship between the oversaturated area ${O_n}$ in the camera image plane and the projector image plane. After decreasing to zero the intensity of the projector pixels that cause the camera to be oversaturated, a blooming suppression mask is obtained, as shown in Fig. 6(a). This mask can then be applied to the LGGC patterns mentioned in Section 2.2, with the result that the overflow energy from ${O_n}$ is suppressed when capturing the LGGC patterns. In other words, at the n-th exposure time, the oversaturated area ${O_n}$ is determined, the corresponding blooming suppression mask is applied to the projected LGGC, and the corresponding adaptive inhibition blooming pattern (AIBP) is obtained, as shown in Fig. 6(b).
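
One possible realization of this camera-to-projector mapping is sketched below. It assumes that the decoding result ${D_{n-1}}$ is stored as two integer maps (u_prev, v_prev) giving the projector column and row seen by each camera pixel, with -1 marking undecoded pixels; this data layout is our assumption rather than the authors' implementation:

```python
import numpy as np

def blooming_suppression_mask(O_n, u_prev, v_prev, proj_shape):
    """Projector-plane mask that zeros the pixels causing camera saturation.

    O_n            : boolean camera-plane map of the oversaturated area.
    u_prev, v_prev : projector column/row seen by each camera pixel, taken
                     from the previous decoding result D_{n-1} (-1 where
                     undecoded).
    proj_shape     : (height, width) of the projector image.
    """
    mask = np.ones(proj_shape, dtype=np.float32)
    ys, xs = np.nonzero(O_n)
    u, v = u_prev[ys, xs], v_prev[ys, xs]
    valid = (u >= 0) & (v >= 0)
    mask[v[valid], u[valid]] = 0.0   # suppress the offending projector pixels
    return mask

# Multiplying the mask into each LGGC fringe yields the AIBP:
# aibp = (fringe.astype(np.float32) * mask).astype(np.uint8)
```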

Fig. 5. After finding the oversaturated area ${O_n}$ in the image, the decoding results are used to find the projector pixels that cause the camera to be oversaturated.

Fig. 6. At the n-th exposure time, (a) obtain the blooming suppression mask when the intensity of the projector pixels that cause camera oversaturation is reduced to zero, and (b) apply this blooming suppression mask to the LGGC fringes and obtain the AIBP.

3.2 Decoding inheritance

When the camera is set to a higher exposure time, the details in the low-reflectance areas of the image become clearer. Conversely, when the camera is set to a lower exposure time, the details in the high-reflectance areas are more visible. The images obtained at different exposure times are generally fused into a single image using some form of HDR algorithm. However, the resulting image is susceptible to blooming interference. Therefore, in the present study, the AIBP construct described above is added to the structured light scanning system. In the proposed approach, a novel method is employed to find the optimal decoding result region, and a decoding inheritance method is then used to fuse the best decoding results obtained at each exposure time.

3.2.1 Method for detecting decoding error regions

In general, image P of the projected pattern and image ${P^c}$ of the projected inverse pattern should be complementary after individual binarization. This can be verified using an $XOR$ operation, as shown in Fig. 7. However, this relationship does not hold for the oversaturated regions in the image. This feature can be exploited to develop a simple method for checking whether the fringes are eroded as a result of overexposure of the camera (decoding error area). As shown in Fig. 7, the areas for which the Boolean value is equal to 1 after $XOR$ calculation can be regarded as areas of correct fringes (white area), hereinafter referred to as the reliable region, R. By contrast, the areas for which the Boolean value is equal to 0 after $XOR$ calculation can be regarded as areas of erroneous fringes (black area). This method provides a simple approach for detecting underexposed or overexposed areas in the camera images. However, it is susceptible to binarization errors caused by blooming. Thus, blooming suppression should be applied.
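
A minimal sketch of this per-bit reliability check (the fixed mid-gray binarization threshold is an illustrative assumption; the paper only specifies that each image is binarized individually):

```python
import numpy as np

def reliable_region(P, Pc, thresh=127):
    """Reliable region R of one bit (Fig. 7).

    After individual binarization, the pattern capture P and the inverse
    capture P^c should be complementary, so XOR = 1 marks trustworthy
    fringes and XOR = 0 marks damaged ones.
    """
    return np.logical_xor(P > thresh, Pc > thresh)
```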

Fig. 7. Project the fringe pattern and inverse fringe pattern onto the object, and then capture the projected fringe images P and projected inverse fringe images ${P^c}$. After individual binarization, obtain $\tilde{P}$ and ${\tilde{P}^C}$. The reliable region $R$ can then be obtained by performing $XOR$ operation on $\tilde{P}$ and ${\tilde{P}^C}$.

For a given exposure time, the reliable regions obtained by different bit fringes may differ, as shown in Fig. 8. As described above, the areas for which the XOR calculation yields a value of 0 can be regarded as untrustworthy areas, and the fringes falling in the untrustworthy area are regarded by extension as damaged fringes. However, the reconstruction process requires the fringes of all the bits to perform decoding. As a result, any damaged fringes will inevitably degrade the accuracy of the reconstruction results. To reduce the reconstruction errors caused by such fringes, this study first performs an AND operation on the reliable regions of each bit fringe obtained at the n-th exposure time to generate a reliable mask $Mask_n^{reliability}$, and then applies this mask to the decoding result ${D_n}$ at the current exposure time (see Fig. 9). The reliable mask at the n-th exposure time is thus given as

$$Mask_n^{reliability} = R_n^1 \cap R_n^2 \cap \ldots \cap R_n^M,\quad \textrm{where}\ n = 1,2,3, \ldots, N$$
where $R_n^m$ represents the reliable region of the m-th bit at the n-th exposure time, N is the total number of exposure times, and M is the total number of bits projected at each exposure time.
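
Equation (2), together with the masking of Fig. 9, reduces to a pixel-wise AND followed by invalidation of the unmasked pixels; a sketch (the -1 sentinel for excluded pixels is our convention):

```python
import numpy as np

def reliable_mask(R_bits):
    """Mask_n^reliability = R_n^1 ∩ ... ∩ R_n^M (Eq. 2), pixel-wise AND."""
    return np.logical_and.reduce(R_bits)

# Applying the mask to the decoding result at the n-th exposure time
# (with -1 marking pixels excluded as unreliable):
# D_n_masked = np.where(reliable_mask(R_bits), D_n, -1)
```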

Fig. 8. The calculated reliable region differs for the LGGC fringes of each bit.

Fig. 9. Obtain the reliable region of each bit at the $n$-th exposure time and perform AND operation to generate a reliable mask, which is then applied to the decoding result ${D_n}$ at the current exposure time.

3.2.2 Multiple exposure decoding result fusion

The overexposed areas in the camera image increase as the exposure time increases. Thus, the number of reliable regions in the high-reflective areas of the object decreases, while the number of reliable regions in the low-reflective regions increases. Exploiting these tendencies, the decoding results obtained at high exposure times can be used to repair areas with no decoding results at low exposure times due to insufficient reflectance. In the present study, the decoding results acquired at different exposure times are fused using a novel decoding inheritance method. In particular, the decoding results in the unreliable regions at the first exposure time are repaired by the decoding results obtained in the reliable regions at the second exposure time. If the decoding results in the unreliable region cannot be successfully repaired in this way, the decoding results obtained in the reliable region at the third exposure time are used instead. This process is performed continuously in this manner until the decoding results for the entire object are completely repaired, as shown in Fig. 10. Figure 11 presents a typical example of the results obtained using the proposed decoding inheritance technique. To reduce the effects of blooming, the repair process employs the decoding results obtained at higher exposure times to repair the decoding results obtained at lower exposure times. In other words, the decoded inheritance result ($D_n^{inherit}$) can be expressed mathematically as

$$\begin{aligned} D_n^{inherit} &= {D_1} \times Mask_1^{reliability} + {D_2} \times ({Mask_2^{reliability} - Mask_1^{reliability}} )\\&\quad+ {D_3} \times ({Mask_3^{reliability} - Mask_2^{reliability} - Mask_1^{reliability}} )+ \ldots \\ &= {D_1} \times Mask_1^{reliability} + \mathop \sum \limits_{n = 2}^N {D_n} \times \left( {Mask_n^{reliability} - \mathop \sum \limits_{i = 1}^{n - 1} Mask_i^{reliability}} \right) \end{aligned}$$
where ${D_n}$ represents the decoding result at the n-th exposure time, $Mask_n^{reliability}$ represents the reliable mask at the n-th exposure time, and N is the number of exposure times.
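
Reading Eq. (3) as "each pixel keeps the result from the lowest exposure time whose reliable mask covers it, and higher exposure times fill only the still-uncovered pixels" yields the following sketch (our interpretation of the mask arithmetic; the -1 sentinel is assumed):

```python
import numpy as np

def decoding_inheritance(D_list, mask_list):
    """Fuse the decoding results D_1..D_N (Eq. 3) using their reliable masks.

    D_list   : per-exposure decoding results; arrays of shape (H, W) or
               (H, W, 2), with -1 marking undecoded pixels.
    mask_list: per-exposure boolean reliable masks of shape (H, W).
    """
    fused = np.full_like(D_list[0], -1)
    covered = np.zeros(D_list[0].shape[:2], dtype=bool)
    for D_n, m_n in zip(D_list, mask_list):
        new = m_n & ~covered          # reliable here and not yet inherited
        fused[new] = D_n[new]
        covered |= new
    return fused
```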

Fig. 10. In the decoding inheritance method, the decoding result at a higher exposure time is used to supplement the decoding results that cannot be obtained at a lower exposure time.

Fig. 11. Regions repaired by decoding inheritance under different exposure times.

3.2.3 Setting rule of exposure time

Because standards and control mechanisms for exposure adjustment are frequently lacking, setting an appropriate exposure time typically relies on user experience. To systematically control the increase in the exposure time and automatically calculate the most suitable blooming suppression pattern at each exposure step, it is necessary to formulate an appropriate rule for setting the exposure time during the image capture process. To formulate this rule, it is necessary to establish the minimum exposure time ${t_{min}}$, the maximum exposure time ${t_{max}}$, and the interval ${t_{step}}$ between successive exposure times. The exposure time interval ${t_{step}}$ can be calculated as

$${t_{step}} = \frac{{{t_{max}} - {t_{min}}}}{{N - 1}}$$
where N is the total number of exposures. The $n$-th exposure time, denoted as ${\bar{t}_n}$, can then be calculated simply as
$${\bar{t}_n} = {t_{step}} \times (n - 1) + {t_{min}}$$

Ideally, ${t_{min}}$ should be set such that the maximum intensity value of the captured object image is close to (but less than) 255. However, if this ideal value of ${t_{min}}$ is less than the minimum exposure time supported by the equipment, the minimum exposure time of the camera should be used instead. Conversely, ${t_{max}}$ should be set such that the minimum intensity value of the captured object image is greater than 0. If the corresponding value of ${t_{max}}$ exceeds the capability of the camera, the maximum exposure time of the camera should be used instead. From Fig. 5, it can be seen that the ${O_n}$ detected by the blooming detector requires the result of the previous decoding inheritance operation to deduce the projector pixels that cause the camera to be oversaturated. If the exposure time interval ${t_{step}}$ is too large, the oversaturated areas may have no decoding inheritance results from the previous exposure, in which case there is no way to know which projector pixels are responsible for the oversaturation of the camera image. Conversely, if ${t_{step}}$ is set too small, the time required to restore the complete image becomes excessive. Hence, the most suitable value of ${t_{step}}$ must be determined according to the particular experimental environment.
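
Equations (4) and (5) translate directly into code; using the faucet settings of Sec. 4.2 (${t_{min}} = 0.1$ s, ${t_{max}} = 4.1$ s, N = 3) as a worked example:

```python
def exposure_schedule(t_min, t_max, N):
    """Exposure times t_n from Eqs. (4) and (5)."""
    t_step = (t_max - t_min) / (N - 1)
    return [t_step * (n - 1) + t_min for n in range(1, N + 1)]

# t_step = 2 s -> [0.1, 2.1, 4.1] (up to floating-point rounding)
print(exposure_schedule(0.1, 4.1, 3))
```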

Figure 12 presents a flow chart of the entire adaptive blooming suppression system. At ${\bar{t}_1} = {t_{min}}$, the initial decoding result ${D_1}$ is obtained after capturing a set of fringe patterns without blooming suppression. At the next exposure time (${\bar{t}_2} = {t_{step}} \times 1 + {t_{min}}$), the fringe patterns are captured once again, and the oversaturated area ${O_2}$ is obtained using the blooming detector. The projector pixels responsible for causing the oversaturated pixels are determined using the initial decoding result, ${D_1}$, and the AIBP is generated by reducing the identified projector pixels to zero. The AIBP is projected, yielding the second decoding result ${D_2}$, and the decoding inheritance method is applied on ${D_2}$ to obtain $D_2^{inherit}$. If ${\bar{t}_n}$ has not reached ${t_{max}}$, the exposure time is increased to (${\bar{t}_n} = {t_{step}} \times ({n - 1} )+ {t_{min}}$) and the capturing cycle is repeated to obtain ${O_n}.$ The previous decoding inheritance result, $D_{n - 1}^{inherit}$, is used to find the projector pixels that cause camera oversaturation. The corresponding AIBP is regenerated for projection, and decoding inheritance is performed to obtain $D_n^{inherit}$. The process is repeated iteratively in this way until ${\bar{t}_n} = {t_{max}}$, and the final decoding inheritance result $D_n^{inherit}$ is used to complete the three-dimensional reconstruction process [20].
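
Composed from the sketches above, the loop of Fig. 12 might look as follows; capture, project, and decode are placeholders for the actual system I/O, and the whole function is a schematic assumption rather than the authors' implementation:

```python
def adaptive_blooming_suppression(capture, project, decode, t_min, t_max, N,
                                  proj_shape=(800, 1280)):
    """Schematic loop of Fig. 12, composed from the sketches above.

    decode is assumed to return the decoding result D_n as an (H, W, 2) map
    of projector (u, v) coordinates plus the per-bit reliable regions R_bits.
    """
    D_list, mask_list, D_inherit = [], [], None
    for n, t in enumerate(exposure_schedule(t_min, t_max, N), start=1):
        aibp = None                                  # no suppression at n = 1
        if n > 1:
            # Locate O_n from a plain capture, then derive the AIBP from
            # the previous decoding inheritance result D_{n-1}^inherit.
            pairs = capture(project(None), exposure=t)
            O_n = union_oversaturated(
                [detect_oversaturated_bit(P, Pc) for P, Pc in pairs])
            aibp = blooming_suppression_mask(
                O_n, D_inherit[..., 0], D_inherit[..., 1], proj_shape)
        pairs = capture(project(aibp), exposure=t)   # LGGC or AIBP fringes
        D_n, R_bits = decode(pairs)
        D_list.append(D_n)
        mask_list.append(reliable_mask(R_bits))
        D_inherit = decoding_inheritance(D_list, mask_list)
    return D_inherit                                 # input to triangulation [20]
```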

Fig. 12. Flow chart of adaptive blooming suppression structured light system.

4. Experimental results

4.1 Experimental setup

The structured light system consisted of a digital projector (ML1050SL, Optoma) with a resolution of 1280 × 800 pixels and a Basler camera (acA2440-20gc) with a resolution of 2464 × 2056 pixels and a maximum exposure time of 10 s. The camera lens (HF1618-12 M, Fujifilm) had a focal length of 16 mm. To avoid interference from the ambient environment, the experiments were conducted in a completely dark environment, as shown in Fig. 13. According to the resolution of the projector, the experiments were performed using LGGC patterns with 11 bits in the vertical direction and 10 bits in the horizontal direction. A total of 42 images were thus used for each projection process (i.e., 21 pattern images and 21 inverse images).

Fig. 13. Experimental setup.

4.2 Metal workpiece measurement

The ability of the proposed method to suppress blooming was investigated using a metal faucet as the test object. A faucet was deliberately chosen because of its complex geometry and almost mirror-like surface finish, which make the reflection more extreme. The areas with high reflectance required only a short exposure time to capture, but at such exposure times the intensity of the areas with low reflectance fell to almost zero. Thus, to properly capture all the fringes around the metal faucet, it was necessary to increase the exposure time, as shown in Fig. 14(a). However, as the exposure time increased, the fringes became clearer but the blooming problem around the oversaturated areas worsened.

Fig. 14. As the exposure time increases, (a) the intensity around the metal faucet also increases, but the blooming caused by the oversaturated areas becomes more severe. (b) The problem of blooming interference fringes thus also becomes more serious.

In implementing the adaptive blooming suppression system for the metal faucet, the exposure parameters were set as follows: ${t_{min}} = 0.1s$, ${t_{max}} = 4.1\textrm{s}$, and ${t_{step}} = 2s$. Moreover, in the blooming detector, the size of the averaging filter was set to 7 × 7 and ${T_{Avg}}$ was set to 250. Figure 15 shows the image capture steps of the proposed method at exposure times of 0.1 s, 2.1 s, and 4.1 s. Note that Figs. 15(a-1)-(a-3) show images captured without blooming suppression, Figs. 15(b-1)-(b-3) show images captured with blooming suppression, and Figs. 15(c-1)-(c-3) show the adaptive blooming suppression masks. In addition, Figs. 15(d-1)-(d-3) show the decoding results without blooming suppression, and Figs. 15(e-1)-(e-3) show the results of decoding inheritance, where the result at ${\bar{t}_3} = {t_{max}}$ provides the final 3D reconstruction [20].

Fig. 15. (a) Unsuppressed blooming image, (b) suppressed blooming image, (c) adaptive blooming suppression projection mask, (d) decoding results without blooming suppression, and (e) results of the decoding inheritance with blooming suppression. The three rows correspond to exposure times of 0.1 s, 2.1 s, and 4.1 s.

Figures 15(a-1)-(a-3) show that the fringes around the faucet become clearer as the exposure time increases. Consequently, the decoding results around the metal faucet become more complete, as shown in Figs. 15(d-1)-(d-3). However, the higher exposure time results in blooming interference with the fringes. Consequently, the oversaturated area responsible for the blooming is estimated, and the decoding result (or decoding inheritance result) obtained at the previous exposure time is used to generate the corresponding AIBP (Figs. 15(c-1)-(c-3)). The number of areas in which the AIBP intensity is equal to zero increases as the exposure time increases. Nonetheless, the oversaturated areas that cause blooming are effectively suppressed, as shown in Figs. 15(b-1)-(b-3), and an AIBP mask is successfully generated (Figs. 15(c-1)-(c-3)). The decoding inheritance method is then used to fuse the decoding results (with blooming suppressed) at the different exposure times, resulting in increasingly complete decoding results as the exposure time increases (Figs. 15(e-1)-(e-3)).

The performance of the proposed blooming suppression and decoding inheritance method was compared with that of the original method (no overexposure suppression), the HDR method in [6] (fusing multiple exposure time images using a MATLAB 2022b function), and the adaptive projection method proposed by Lin [18]. The exposure time for the adaptive projection method was set as 4.1 s. The exposure time and number of fused images in the HDR method were assigned the same values as those in the proposed method. Figure 16 compares the results obtained using the four methods.

Fig. 16. Comparing the (a) original method, (b) HDR method, (c) Lin’s method, and (d) the proposed method in terms of the blooming interference fringes, the effect of suppressing blooming interference, and the reconstruction results of (e) original method, (f) HDR method, (g) Lin’s method, and (h) the proposed method.

The fringe images and reconstruction results captured at ${t_{max}} = 4.1\textrm{s}$ are shown in Figs. 16(a) and 16(e), respectively. As shown in Fig. 16(b), HDR handles the fringes very well in the low-reflectance areas but suffers blooming in the high-reflectance regions. Therefore, it achieves a good reconstruction performance in the areas around the metal faucet but a poor performance around the oversaturated areas (Fig. 16(f)). It is observed in Fig. 16(c) that the adaptive projection method proposed by Lin has only a limited effect on suppressing blooming. In this method, the intensity of the camera attenuation is mapped to the projector pixels under a homography assumption. However, this assumption is limited to plane-to-plane mapping, so for stronger reflections and non-planar geometric shapes, the mapping relationship between the camera pixels and projector pixels fails, and the method reduces blooming only to a limited extent. By contrast, the method proposed in the present study exhibits a greatly improved suppression performance, as shown in Fig. 16(d). The improved fringes lead in turn to a superior reconstruction performance, as shown in Fig. 16(h).

In addition to the metal faucet, the proposed method was applied to a metal cylinder, a wrench, and a clip. The results obtained using the four methods (original method, HDR method, Lin’s method, and proposed method) for each object are shown in Fig. 17. For the metal cylinder, the best reconstruction results are obtained using the HDR method and the proposed method. For the wrench and metal clip, only the proposed method achieves a reasonable reconstruction performance. Nonetheless, the mirror properties and geometric shape of the metal clip make it difficult to capture the fringes, even at the highest exposure time. Thus, while the proposed system successfully addresses the problem of blooming suppression, it cannot improve the reconstruction performance for regions of the clip with low reflectance or shadows.

Fig. 17. Comparison of the original images, the HDR method, Lin’s method, and the proposed method for a metal cylinder, a wrench, and a clip.

The proposed method is similar to the HDR method in that it fuses images captured at multiple exposure times. However, the proposed method more effectively suppresses the blooming interference caused by pixel oversaturation. Hence, it is better able to improve the quality of the fringes and the 3D reconstruction results. The proposed method is also similar to the adaptive projection method in that it reduces the intensity of the projector to prevent overexposure of the captured image. The adaptive projection method sets the exposure time in such a way as to capture the darkest fringes and then performs projection to suppress the overly bright areas. Since the projector projects an all-black pattern, the high-reflectance areas remain oversaturated. Consequently, although good reconstruction results are obtained in low-reflectance areas, the reconstruction performance remains poor in the high-reflectance areas. In addition, finding the mapping relationship between the projector pixels and the oversaturated area of the camera is problematic in Lin’s adaptive projection method. In the method proposed in this study, the mapping relationship between the saturated camera pixels and the projector pixels is instead established using the decoding inheritance results captured at the previous exposure time. It thus provides a better reconstruction performance than methods based on the homography correspondence assumption.

4.3 Gauge block measurement

The viability of the proposed adaptive blooming suppression structured light system for industrial inspection applications was investigated using 1 mm, 2 mm, 4 mm, and 16 mm metal gauge blocks (Mitutoyo). As shown in Fig. 18, the accuracy of the structured light system was verified by measuring the height differences from the 1 mm gauge block, used as the reference plane, to the 2 mm, 4 mm, and 16 mm gauge blocks, together with the plane-fitting standard deviations of the 1 mm, 2 mm, 4 mm, and 16 mm gauge blocks. As in the previous experiments, the performance of the proposed method was compared to that of the original method, the HDR method [6], and the adaptive projection method [18].

Fig. 18. Metal gauge blocks.

For each gauge block, structured light images were captured ten times and used to reconstruct the object with each of the four considered methods. The average height differences between the 1 mm gauge block and the 2 mm, 4 mm, and 16 mm gauge blocks, respectively, were computed over the ten reconstructions for each method. The computed height differences were then compared to the ideal height differences to estimate the reconstruction error. Figure 19 shows the average error of the computed height difference for each of the four methods, and Fig. 20 shows the average plane standard deviation results for the four gauge blocks.

Fig. 19. Average error of height difference.

Fig. 20. Average plane standard deviation of the gauge blocks when fitting the plane.

As shown in Fig. 19, the original method, with no camera suppression, yields the highest measurement error, while the HDR method yields the lowest. The average height difference errors of the HDR method are 18.1 µm between the 1 mm and 2 mm gauge blocks, 28.3 µm between the 1 mm and 4 mm gauge blocks, and 67.3 µm between the 1 mm and 16 mm gauge blocks. The results thus indicate that the HDR method enhances the measurement accuracy for metal objects more effectively than the adaptive projection method and the proposed method. As shown in Fig. 20, the proposed method generally achieves the best stability of the four methods, with average standard deviations of the fitted planes equal to 40.3 µm for the 1 mm gauge block, 36.3 µm for the 2 mm gauge block, 31.8 µm for the 4 mm gauge block, and 37.5 µm for the 16 mm gauge block. By contrast, the HDR method has the poorest stability of the four methods.

One way to measure the stability of the reconstructed shape is to examine the standard deviation of a plane fitted to the point cloud. By this measure, the proposed approach demonstrates the highest stability, while the HDR method has the lowest. The original method and Lin's method can choose an exposure time that avoids the noise associated with low exposure times, whereas the HDR method combines fringes captured at multiple exposure times. At low exposure times, however, image noise becomes more noticeable [31], resulting in a degraded HDR image. Consequently, even though it is more accurate, the HDR method is the least stable. This is analogous to averaging a large amount of data containing random errors: the accuracy increases at the expense of a larger standard deviation. In contrast, the proposed approach reduces noise interference by eliminating the erroneous decoding areas before combining the decoding results obtained at multiple exposure times (see Fig. 9). As a result, the proposed method is somewhat less accurate but demonstrates the highest stability. Overall, if the camera has a sufficient dynamic range and suffers no overexposure or underexposure issues, the original method is recommended because it saves the most time. If the object image is extremely dark or bright, the HDR method performs best, as it captures the fringe details at both high and low exposure times. As soon as blooming interference appears, however, the proposed technique should be adopted: despite being the slowest, it is the most effective at reducing blooming interference.
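
For reference, the plane-fitting standard deviation used as the stability metric can be computed as the spread of the point-to-plane residuals after a least-squares fit; a sketch (our own implementation, not the authors'):

```python
import numpy as np

def plane_fit_std(points):
    """Std of point-to-plane residuals for a least-squares plane fit.

    points: (N, 3) array of reconstructed gauge-block surface points.
    Fits z = a*x + b*y + c, then measures the spread of the signed
    perpendicular distances to the fitted plane.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    dist = (z - (a * x + b * y + c)) / np.sqrt(a * a + b * b + 1.0)
    return dist.std()
```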

5. Conclusion

This study has proposed an adaptive blooming suppression structured light system that improves the measurement of highly reflective objects by capturing images at multiple exposure times while suppressing the overexposed areas in the camera image. In the proposed approach, a novel decoding-error detection method is used to determine the damaged decoding areas; a blooming detector is employed to identify the oversaturated areas that cause blooming; and a decoding inheritance technique is utilized to fuse the decoding results obtained at different exposure times. The experimental results have shown that the reconstruction performance of the proposed method is better than that of the adaptive projection method in [18] and the HDR method in [6]. The method proposed in this study differs from previous reconstruction methods in that, provided the minimum exposure time, maximum exposure time, and exposure time interval are properly set, the system automatically calculates the optimal blooming-suppressed projection pattern for each exposure time. Thus, the need for human involvement in the measurement process is reduced, thereby reducing the risk of exposure-time judgment errors and of accidental disturbance of the system during capture.

Funding

National Science and Technology Council (112-2221-E-006-142, 112-2218-E-194-007).

Acknowledgments

This research was supported in part by Higher Education Sprout Project, Ministry of Education to the Headquarters of University Advancement at National Cheng Kung University (NCKU).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Zhang, B. Li, F. Ren, et al., “High-precision measurement of binocular telecentric vision system with novel calibration and matching methods,” IEEE Access 7, 54682–54692 (2019). [CrossRef]  

2. F. Chen, G.M. Brown, and M. Song, “Overview of 3-D shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000). [CrossRef]  

3. Z. Qi, Z. Wang, J. Huang, et al., “Error of image saturation in the structured-light method,” Appl. Opt. 57(1), A181–A188 (2018). [CrossRef]  

4. A. Kuş, “Implementation of 3d optical scanning technology for automotive applications,” Sensors 9(3), 1967–1979 (2009). [CrossRef]  

5. D. Palousek, M. Omasta, D. Koutny, et al., “Effect of matte coating on 3D optical measurement accuracy,” Opt. Mater. 40, 1–9 (2015). [CrossRef]  

6. Y. Liu, Y. Fu, Y. Zhuan, et al., “High dynamic range real-time 3D measurement based on Fourier transform profilometry,” Opt. Laser Technol. 138, 106833 (2021). [CrossRef]  

7. K. Wu, J. Tan, H. L. Xia, et al., “An exposure fusion-based structured light approach for the 3D measurement of a specular surface,” IEEE Sens. J. 21(5), 6314–6324 (2020). [CrossRef]  

8. G. H. Liu, X. Y. Liu, and Q. Y. Feng, “3D shape measurement of objects with high dynamic range of surface reflectivity,” Appl. Opt. 50(23), 4557–4565 (2011). [CrossRef]  

9. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: a novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012). [CrossRef]  

10. S. Liu, G. Dong, L. Kong, et al., “Measurement and characterization of feature parameters for assembly parts based on blue structured light,” in Eleventh International Conference on Information Optics and Photonics (CIOP, 2019), Vol. 11209, pp. 316–322.

11. S. Zhang and S. T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 033604 (2009). [CrossRef]  

12. Z. Qi, Z. Wang, J. Huang, et al., “Improving the quality of stripes in structured-light three-dimensional profile measurement,” Opt. Eng. 56(3), 031208 (2017). [CrossRef]  

13. K. Shanks, S. Senthilarasu, and T. K. Mallick, “Optics for concentrating photovoltaics: Trends, limits and opportunities for materials and design,” Renew. Sustain. Energy Rev. 60, 394–407 (2016). [CrossRef]  

14. Y. Yoshinori, M. Hiroyuki, N. Osamu, et al., “Shape measurement of glossy objects by range finder with polarization optical system,” Gazo Denshi Gakkai Kenkyukai Koen Yoko 200, 43–50 (2003).

15. B. Salahieh, Z. Chen, J. J. Rodriguez, et al., “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064–10071 (2014). [CrossRef]  

16. M. Yamazaki and G. Xu, “3D reconstruction of glossy surfaces using stereo cameras and projector-display,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR, 2010), pp. 1213–1220.

17. L. Zhang, Q. Chen, C. Zuo, et al., “High-speed high dynamic range 3D shape measurement based on deep learning,” Opt. Lasers Eng. 134, 106245 (2020). [CrossRef]  

18. H. Lin, J. Gao, Q. Mei, et al., “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24(7), 7703–7718 (2016). [CrossRef]  

19. Y. Liu, Y. Fu, X. Cai, et al., “A novel high dynamic range 3D measurement method based on adaptive fringe projection technique,” Opt. Lasers Eng. 128, 106004 (2020). [CrossRef]  

20. D. Lanman and G. Taubin, “Build Your Own 3D Scanner: 3D Photography for Beginners,” in ACM SIGGRAPH 2009 Courses (ACM, 2009), pp. 8:1–8:94.

21. S. J. Koppal, “Lambertian reflectance,” in Computer Vision: A Reference Guide (Springer, 2014), pp. 441–443.

22. S. K. Nayar, K. Ikeuchi, and T. Kanade, “Surface reflection: physical and geometrical perspectives,” IEEE Trans. Pattern Anal. Machine Intell. 13(7), 611–634 (1991). [CrossRef]  

23. S. K. Nayar, G. Krishnan, M. D. Grossberg, et al., “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006). [CrossRef]  

24. M. Gupta, A. Agrawal, A. Veeraraghavan, et al., “Structured light 3D scanning in the presence of global illumination,” in Proceedings of the computer vision and pattern recognition (CVPR, 2011), pp. 713–720.

25. L. Goddyn, G. M. Lawrence, and E. Nemeth, “Gray codes with optimized run lengths,” Utilitas Math. 34, 179–192 (1988).

26. P. Fiorentin, P. Iacomussi, and G. Rossi, “Characterization and calibration of a CCD detector for light engineering,” IEEE Trans. Instrum. Meas. 54(1), 171–177 (2005). [CrossRef]  

27. C. H. Sequin, “Blooming suppression in charge coupled area imaging devices,” Bell Syst. Tech. J. 51(8), 1923–1926 (1972). [CrossRef]  

28. H. Yue, H. G. Dantanarayana, Y. Wu, et al., “Reduction of systematic errors in structured light metrology at discontinuities in surface reflectivity,” Opt. Lasers Eng. 112, 68–76 (2019). [CrossRef]  

29. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson, 2007), Chap. 3.

30. C. Boncelet, “Image noise models,” in The essential guide to image processing (Academic Press, 2009), pp. 143–167.

31. T. H. Min, R. H. Park, and S. Chang, “Noise reduction in high dynamic range images,” Signal, Image, and Video Processing 5(3), 315–328 (2011). [CrossRef]  
