
Autofocusing method for a digital fringe projection system with dual projectors


Abstract

This paper presents a novel technique to achieve autofocusing for a three-dimensional (3D) profilometry system with dual projectors. The proposed system uses a camera attached to an electronically focus-tunable lens (ETL) that allows dynamic adjustment of the camera’s focal plane so that the camera can focus on the object; the camera captures fringe patterns projected by each projector to establish corresponding points between the two projectors, and the two pre-calibrated projectors form the triangulation for 3D reconstruction. We pre-calibrate the relationship between the depth and the driving current used for each focal plane, perform a 3D shape measurement with an unknown focus level, and calculate the desired current value based on this initial 3D result. We developed a prototype system that can automatically focus on an object positioned between 450 mm and 850 mm.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical three-dimensional (3D) measurement [1,2] based on the digital fringe projection (DFP) technique performs contactless measurement of surfaces, with applications in fields including manufacturing, entertainment, and robotics.

With the continuous innovation and development of science and technology, there is an increasing need for 3D optical measurement techniques with autofocus capability. However, unlike its 2D counterparts, where autofocusing techniques are quite mature and extensively implemented, autofocusing remains understudied in high-accuracy 3D shape measurement. For a typical 3D shape measurement system, the depth of field (DOF) is fixed and often shallow because the focus of the camera cannot be changed after calibration.

It is well known that a DFP system includes at least one projector and one camera, and the triangulation principle is used to reconstruct 3D shapes. It should be noted that defocusing affects the camera and the projector differently. If the projector is defocused, the projected fringe patterns are blurred, but the carried phase remains unchanged, albeit with a lower signal-to-noise ratio (SNR) [3]. In contrast, if the camera is defocused, the captured image is blurred and local surface geometry is lost [4]. One common practice to increase the DOF of the camera is to reduce the aperture of the imaging lens. Unfortunately, the DFP method usually requires the largest aperture in order to maximize the luminous flux from the projector to the camera.

To extend the DOF of the projector, Salvi et al. [5] proposed using fringe patterns with more than one frequency. Even though this method can reduce the projector’s defocusing effect, precisely selecting the optimal frequency is often not possible for a digital video projector. Zhang et al. [6] proposed an approach that continuously updates speckle patterns according to the recovered depth map to extend the DOF. Unfortunately, the achievable resolution and accuracy of a 3D shape measurement system based on speckle pattern projection are typically not high. Achar et al. [7] proposed using multiple projector focus settings to extend the working volume, yet such a method typically requires projecting 25-30 images for each focal length setting. Therefore, for high-speed applications, such a method is not desirable.

To extend the DOF of the camera, there are many strategies such as wavefront coding [8-10], all-in-focus imaging [11,12], and depth-from-defocus [13,14]. In wavefront coding, the incoherent optical system is modified with a cubic or other special phase mask so that the point spread function (PSF) becomes insensitive to defocusing; however, such phase masks must be specially designed and fabricated. Furthermore, since decenter and tilt may degrade imaging performance, the optical recording process requires accurate alignment of these special phase masks. In all-in-focus imaging, images are captured on multiple focal planes and computationally merged into a single image in which all objects are in focus. Depth-from-defocus estimates the depth of a scene based on blur differences between at least two images captured at different depths of field. However, since any DOF-extending method applied to 3D shape measurement has to provide the exact value of the focal length, the latter two methods are difficult to adopt in 3D shape measurement systems.

With the merits of compactness, portability, ease of control, and ease of integration into other optical devices, the electrically tunable lens (ETL) offers an effective alternative for enlarging the DOF of 2D imaging systems [15-18]. Hu et al. [19] presented a method to enlarge the 3D shape measurement range with an ETL and discrete calibration planes, and they [20] later achieved autofocusing through better modeling of the imaging system with an ETL. Both approaches assumed that the relationship between the input electrical signal and the lens parameters (e.g., shape and focal length) is precisely defined. However, factors such as temperature, gravity, and vibration affect liquid lenses, leading to slightly different focal lengths at the same current value. Therefore, the pre-calibrated system parameters cannot perfectly match the system parameters during the measurement process. In fact, we have also found that state-of-the-art ETLs cannot achieve the level of accuracy required for high-accuracy 3D shape measurement. Furthermore, the calibration process of the ETL imaging system is non-trivial. In addition, changing the orientation of the ETL alters its physical parameters (e.g., principal point), making it impossible to pre-calibrate the system accurately when a 3D system is used in practice.

Taking advantage of the inherent merit of projector defocusing, namely that the carrier phase of the fringe pattern does not change, this paper proposes a novel approach that achieves autofocus for 3D shape measurement without a high-accuracy ETL or elaborate camera calibration. Specifically, the proposed technique 1) uses one camera with an ETL to achieve autofocus at various distances to ensure the captured images are in focus; 2) captures the projected fringe patterns to establish the corresponding projector point for each camera point; and 3) reconstructs 3D geometry by triangulating corresponding point pairs between the two pre-calibrated projectors. Since the camera image is not directly used for triangulation, it is not necessary to calibrate the camera lens, allowing the camera imaging system to change its focal plane after system calibration. Furthermore, the ETL offers the opportunity to rapidly focus the image for such a system because even if the camera image is not focused, overall 3D information can still be reconstructed, albeit with a loss of detail. As such, we propose to build the relationship between the focal plane and depth for the ETL imaging system, and then to precisely calculate the desired driving current for focused image acquisition using 3D information acquired at any given current value. We experimentally demonstrated that our proposed method achieves autofocus over a large DOF.

The rest of the paper is organized as follows. The remaining subsections of Section 1 introduce the theoretical background, Section 2 presents the experimental validation, and Section 3 summarizes the paper.

1.1 Phase-shifting algorithm

As point-by-point phase retrieval methods, phase-shifting algorithms are extensively implemented in optical metrology due to their robustness to noise and achievable high measurement accuracy. For an N-step (N$\geq 3$) phase-shifting algorithm with equal phase shifts, the nth fringe pattern can be mathematically represented as [21],

$${I_n}(x,y) = A(x,y) + B(x,y)\cos [\varphi (x,y) + 2n\pi /N],$$
where A(x, y) and B(x, y) respectively denote the average intensity and the intensity modulation. $\varphi$(x, y) is the phase to be solved for, which can be calculated in a least-squares manner,
$$\varphi (x,y) ={-} {\tan ^{ - 1}}\left[ {\frac{{\sum\nolimits_{n = 1}^N {{I_n}\sin (2n\pi /N)} }}{{\sum\nolimits_{n = 1}^N {{I_n}\cos (2n\pi /N)} }}} \right].$$
$\varphi$(x, y) here is often regarded as the wrapped phase since an arctangent function only provides phase values ranging from -$\pi$ to $\pi$ with 2$\pi$ discontinuities. Therefore, a phase unwrapping algorithm is required to remove the $2\pi$ discontinuities by adding or subtracting integer multiples of $2\pi$, i.e.,
$$\Phi (x,y) = \varphi (x,y) + k(x,y) \times 2\pi,$$
where k(x, y) is the fringe order. In this paper, 18 (i.e., N=18) phase-shifted patterns are used to obtain the wrapped phase, and the fringe order k(x, y) is determined by projecting binary coded patterns using the gray-coding method.
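To make the computation concrete, the following minimal sketch (in Python with NumPy) implements Eqs. (1)-(3); it assumes the N phase-shifted fringe images are stacked into a single array and that the fringe order k(x, y) has already been decoded from the gray-coded patterns (the decoding step is not shown). Function and variable names are illustrative only.

import numpy as np

def wrapped_phase(images):
    # Least-squares wrapped phase from N equally shifted fringe images, Eq. (2).
    # images: array of shape (N, H, W); the n-th image carries a phase shift of 2*n*pi/N.
    N = images.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    return -np.arctan2(num, den)          # wrapped phase in (-pi, pi]

def absolute_phase(phi, k):
    # Remove the 2*pi discontinuities with the fringe order k(x, y), Eq. (3).
    return phi + 2.0 * np.pi * k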

1.2 System calibration

As discussed in Sec. 1, the proposed technique uses two projectors for triangulation, and thus only projector calibration is necessary. Figure 1 shows the schematic diagram of the proposed system calibration with dual projectors. Both projectors are assumed to be calibrated under the same coordinate system using a linear pinhole model. The projection from 3D world coordinates (xw, yw, zw) to the 2D projector image coordinates (u, v) can then be mathematically described as,

$$ {s^l}[{u^l},{v^l},1]^t = {{\boldsymbol{P}}^{\textbf{l}}}[{x^w},{y^w},{z^w},1]^t, $$
$$ {s^r}[{u^r},{v^r},1]^t = {{\boldsymbol{P}}^{\textbf{r}}}[{x^w},{y^w},{z^w},1]^t,$$
where s is a scaling factor, P represents the 3$\times$4 projection matrix from 3D world coordinates (xw, yw, zw) to 2D image coordinates (u, v), superscripts l and r respectively denote the left and right projectors, and t represents the matrix transpose operator.

Fig. 1. Schematic diagram of the proposed calibration.

It is well known that projectors cannot be calibrated directly, so we utilize a camera to facilitate the calibration. In this research, we employed the method developed by Li et al. [22]. Briefly, the calibration works by establishing a one-to-one mapping between camera points and projector points. This mapping is established as follows: 1) the projector projects both horizontal and vertical fringe patterns such that each projector point is uniquely defined by the carrier phase; 2) the camera captures the fringe patterns to reconstruct the phase maps; and 3) each camera point is uniquely mapped to the projector point with the same phase value. Once the one-to-one mapping is established, the feature points captured by the camera can be mapped to projector space for projector calibration. During the whole calibration process, both projector lenses remain untouched. We first calibrate the left projector and the camera using the standard structured light system calibration method described by Li et al. [22], and then calibrate the right projector and the camera following the same procedure. For each sub-system calibration, the world coordinate system is defined on the camera lens, and thus the geometric relationship between the two projection systems is also estimated. Note that our proposed system does not directly use camera calibration information for 3D reconstruction; the camera here is only a tool to facilitate the projector calibration.
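The phase-based camera-to-projector mapping at the heart of this calibration can be sketched as follows; this is a simplified illustration, not the exact implementation of Li et al. [22]. It assumes the absolute phase maps of the vertical and horizontal fringe patterns and the corresponding fringe periods are already available, and it uses nearest-pixel sampling for brevity where a practical implementation would interpolate.

import numpy as np

def camera_to_projector(phi_v, phi_h, T_v, T_h):
    # Map every camera pixel to projector coordinates using the absolute phase:
    # u_p = Phi_V * T_V / (2*pi), v_p = Phi_H * T_H / (2*pi).
    u_p = phi_v * T_v / (2.0 * np.pi)
    v_p = phi_h * T_h / (2.0 * np.pi)
    return u_p, v_p

def map_feature_points(points_cam, u_p, v_p):
    # Transfer calibration feature points detected in the camera image (K x 2,
    # as (column, row)) into the projector image plane for projector calibration.
    pts = np.asarray(points_cam)
    rows = np.round(pts[:, 1]).astype(int)
    cols = np.round(pts[:, 0]).astype(int)
    return np.stack([u_p[rows, cols], v_p[rows, cols]], axis=1)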

1.3 3D reconstruction

For a left projector pixel (ul, vl), Eqs. (4) and (5) provide six equations, while there are seven unknowns: sl, sr, xw, yw, zw, ur and vr. Thus only one more constraint equation is required to solve all unknowns uniquely. For the DFP system, the absolute phase $\Phi$(x, y) can provide the necessary constraint equation to calculate (xw, yw, zw) for each pixel. However, unlike the traditional camera-projector system, the point-to-point correspondence between the two projectors is not directly known. To solve this problem, both horizontal and vertical fringe patterns are required for at least one projector to establish the one-to-one mapping between the camera pixel and the projector pixel, i.e., the point-to-point correspondence can be obtained based on two absolute phase maps, $\Phi$l(x, y) and $\Phi$r(x, y),

$${u^l} = \frac{{\Phi _V^l T_V^l}}{{2\pi}}, {v^l} = \frac{{\Phi _H^l T_H^l}}{{2\pi}}$$
for the left projector, and
$${u^r} = \frac{{\Phi _V^r T_V^r}}{{2\pi}},{v^r} = \frac{{\Phi _H^r T_H^r}}{{2\pi}}$$
for the right projector, where $\Phi _V^l$ and $\Phi _H^l$ respectively represent the absolute phase retrieved from the vertical fringe patterns and that from the horizontal fringe patterns projected by the left projector; $T _V^l$ and $T _H^l$ respectively represent the fringe periods of the vertical and horizontal fringe patterns; and $\Phi _V^r$, $\Phi _H^r$, $T _V^r$ and $T _H^r$ are the corresponding quantities for the right projector.

If both projectors project horizontal and vertical phase maps, Eqs. (4) and (5) can be rewritten as

$${s^l}\left[\frac{{\Phi _V^l T_V^l}}{{2\pi}},\frac{{\Phi _H^l T_H^l}}{{2\pi}},1\right]^t = {\textbf{P}^{\textbf{l}}}[{x^w},{y^w},{z^w},1]^t, $$
$${s^r}\left[\frac{{\Phi _V^r T_V^r}}{{2\pi}},\frac{{\Phi _H^r T_H^r}}{{2\pi}},1\right]^t = {\textbf{P}^{\textbf{r}}}[{x^w},{y^w},{z^w},1]^t. $$
Obviously, for system calibration, all these equations are required. However, for 3D reconstruction, these two relationships provide six equations with five unknowns, and thus the $(x^w, y^w, z^w)$ coordinates of each camera pixel can be solved in a least-squares manner. Alternatively, fringe patterns in only one direction from one of the projectors can be used to solve $(x^w, y^w, z^w)$ uniquely. It is important to note that the 3D coordinates can be calculated without knowing the camera imaging parameters.
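A minimal sketch of this least-squares triangulation is given below, assuming the two calibrated 3$\times$4 projection matrices and the phase-derived projector coordinates of the same scene point are available. Eliminating the scale factors from Eqs. (8) and (9) yields four linear equations in the three unknown world coordinates; the function name is illustrative only.

import numpy as np

def triangulate(P_l, P_r, ul, vl, ur, vr):
    # Solve (x, y, z) from the two 3x4 projection matrices and the projector
    # coordinates derived from the absolute phase maps, Eqs. (8)-(9).
    A, b = [], []
    for P, u, v in ((P_l, ul, vl), (P_r, ur, vr)):
        A.append(P[0, :3] - u * P[2, :3])
        b.append(u * P[2, 3] - P[0, 3])
        A.append(P[1, :3] - v * P[2, :3])
        b.append(v * P[2, 3] - P[1, 3])
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz   # least-squares solution of the over-determined 4x3 system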

1.4 DOF limitation for DFP system

Due to the limited depth of field of both the projector and the camera with fixed lenses, a traditional DFP system physically restricts the measurement space and thus hampers its applicability to numerous applications. To illustrate the DOF limitations, we built a standard DFP system composed of a digital light processing (DLP) projector (Model: LightCrafter 4500, 912$\times$1140 pixels) and a complementary metal oxide semiconductor (CMOS) camera (Model: PointGrey Grasshopper GS3-U3-23S6M-C, 1920$\times$1200 pixels) with a 16 mm focal length lens (Model: Computar M1614-MP2). We first fixed the projector lens so that its focal plane was approximately at a depth of 850 mm, let the projector project a square binary pattern, placed a flat white board printed with “Purdue” text of various sizes at three different distances (i.e., 450 mm, 650 mm and 850 mm), and manually adjusted the camera’s focus such that the captured image was sharp and clear at each location. Figure 2 shows the captured images and selected cross-sections. Clearly, all captured images are focused, while the projected patterns are defocused to various degrees, proving that the projector indeed has a limited DOF.

Fig. 2. DOF limitation of the projector when the camera focus can be adjusted. Square binary pattern is projected and captured for a white board at a distance approximately (a) 450 mm, (b) 650 mm, and (c) 850 mm away from the projector.

Similarly, we experimentally verified that the camera has a limited DOF if the camera lens is fixed. Once again, we placed the flat white board at three different distances and captured the corresponding images. Figure 3 shows that the text on the board is blurred to different degrees when the camera focal plane is set to approximately 450 mm. These experiments confirm that the DOF of the camera or the projector is not sufficient to capture focused images over a depth range of 450 mm to 850 mm.

Fig. 3. DOF limitation of the camera. Captured image for a white board at a distance approximately (a) 450 mm, (b) 650 mm, and (c) 850 mm away from the camera.

1.5 Proposed autofocusing method

Taking advantage of the inherent merit of projector defocusing, namely that the carrier phase of the fringe pattern does not change [22], we propose a novel autofocusing method for 3D shape measurement. As shown in Sec. 1.4, the projected fringe pattern contrast changes within a large DOF, yet all details can be captured if the camera remains focused and the fringe quality remains visually high. In other words, despite the large depth range, the fringe contrast, or signal-to-noise ratio (SNR), remains high, and thus a high-quality phase can be retrieved for high-accuracy measurement over such a large DOF. That said, if the depth range is drastically enlarged, the measurement accuracy may noticeably decrease, in which case methods such as the depth-driven variable-frequency method proposed by Rao et al. [23] could be employed. Therefore, instead of using the camera-projector pair for triangulation, the projector-projector pair can be used to achieve high-quality 3D shape measurement within a large DOF, assuming that the camera focus can be adjusted.

In the meantime, the ETL allows electronic control of its focal plane, and the relationship between the input current and the focal plane can be pre-calibrated [20], albeit such a relationship may not be perfectly precise. Fortunately, even if the focal plane is not precisely known, the rough relationship is sufficient for this research because the camera imaging system has a certain depth of focus that can tolerate such uncertainties. We experimentally calibrated an ETL (Model: Optotune EL-16-40-TC) in our research. Note that in addition to the ETL, the camera is also attached with a 16 mm focal length lens (Model: Computar M1614-MP2). Eight positions (D$1$ to D$8$) at intervals of approximately 80 mm, from approximately 455 mm to 1015 mm, were selected. A coated mirror surface with some texture was imaged at each position with different current values and evaluated to determine the corresponding current value for the best-focused image. For example, when the mirror was at the position of 455 mm, 31 current values ranging from 26 mA to 46 mA with an interval of 0.67 mA were applied to change the camera focus, and images were captured at each current value. We adopted the method discussed in [24] to evaluate the level of blur of these collected images; the image blur level ranges from 0 to 1, with 0 being the best-focused image and 1 being the worst (severely blurred) image. Since the camera blur can be modeled as Gaussian blur, curve fitting and interpolation were used to find the current value (the valley of the curve) at which the camera focuses on this position. Once the current was determined, we measured the mirror surface with the dual-projector system at the optimized current value. The depth $Z$ values within a small central area (15 $\times$ 15 pixels) were averaged to determine the corresponding focal plane. The same process was applied to all eight locations. Table 1 lists the corresponding focal plane and current values for all eight positions, and Fig. 4 visualizes the same relationship graphically.
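The per-position search for the in-focus current can be sketched as follows. The blur scores are assumed to come from a no-reference blur metric in the spirit of [24] (not reproduced here), and a local parabola fit around the valley serves as a simple stand-in for the curve fitting and interpolation described above; names are illustrative only.

import numpy as np

def best_focus_current(currents, blur_scores):
    # currents: sampled ETL driving currents (mA), e.g., 31 values over the sweep.
    # blur_scores: blur metric in [0, 1] for each captured image (0 = sharpest).
    currents = np.asarray(currents, dtype=float)
    blur_scores = np.asarray(blur_scores, dtype=float)
    k = int(np.argmin(blur_scores))                   # coarse valley location
    lo, hi = max(k - 3, 0), min(k + 4, len(currents))
    c2, c1, c0 = np.polyfit(currents[lo:hi], blur_scores[lo:hi], 2)
    return -c1 / (2.0 * c2)                           # vertex of the fitted parabola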

Fig. 4. The relationship between focal plane and the current. Dots are the measured data points and the blue smooth curve is the fitted polynomial function.


Table 1. The input current value and the corresponding focal plane

Figure 4 shows that the relationship between the input current $\mathbf {i}$ and the corresponding focal plane $\mathbf {Z}$ is monotonic, and thus a polynomial function can be used to determine the desired current for a given focal plane. In this research, we use a second-order polynomial function,

$$\mathbf{i} = a_0 + a_1 \mathbf{Z} + a_2 \mathbf{Z}^2,$$
where $a_0, a_1$ and $a_2$ are constants. The fitted curve is shown as the smooth blue curve in Fig. 4. By this means, if the desired focal plane is known, the driving current can be calculated using this equation. In this research, we propose to measure the object once with the camera at an unknown focus level. The measurement result is then used to estimate the focal plane, and subsequently the driving current for the best-focused image.
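A minimal sketch of calibrating and applying Eq. (10) is given below, assuming the eight (focal plane, current) pairs of Table 1 are available as arrays and that a central 15 $\times$ 15 pixel patch of the initial depth map is averaged, as described above; the function names are illustrative.

import numpy as np

def fit_current_model(Z_samples, i_samples):
    # Fit i = a0 + a1*Z + a2*Z^2 to the calibration data of Table 1, Eq. (10).
    a2, a1, a0 = np.polyfit(Z_samples, i_samples, 2)
    return a0, a1, a2

def desired_current(coeffs, z_map, patch=15):
    # Estimate the focal plane from the initial (possibly defocused) depth map
    # by averaging a small central patch, then evaluate Eq. (10).
    a0, a1, a2 = coeffs
    h, w = z_map.shape
    r0, c0 = (h - patch) // 2, (w - patch) // 2
    Z = np.nanmean(z_map[r0:r0 + patch, c0:c0 + patch])
    return a0 + a1 * Z + a2 * Z ** 2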

In summary, the autofocusing method uses the following procedures, as illustrated in Fig. 5,

  • 1. Calibrate two projectors under the same world coordinate system using the method discussed in Sec. 1.2.
  • 2. Calibrate the relationship between the focal plane and the desired driving current value of the camera, i.e., $\mathbf {i} = f(\mathbf {Z})$.
  • 3. Capture projected fringe patterns by both projectors using the camera with an unknown focus level when the driving current is $\mathbf {i}$.
  • 4. Reconstruct 3D shape of the object based on triangulation of two projectors following the method described in Sec. 1.3.
  • 5. Determine the desired current value $\mathbf {i}_o$ with the focal plane $\mathbf {Z}_o$ estimated from reconstructed 3D shape information using Eq. (10).
  • 6. Perform 3D shape measurement again using fringe images captured with the current $\mathbf {i}_o$ following the method described in Sec. 1.3.

Fig. 5. Framework of our autofocusing method.

As can be seen, the proposed autofocusing method works because 1) the camera is attached with an ETL that allows dynamic change of the camera’s focal plane such that the camera can focus on the object; 2) the relationship between the focal plane and the current can be pre-calibrated; 3) the projected phase remains unchanged even though the projector is out of focus, and the projector can project good-quality fringe patterns over a large DOF; and 4) 3D reconstruction can be realized using the two projectors without directly knowing the camera imaging parameters.

2. Experiment

We carried out a sequence of experiments to demonstrate the success of the proposed autofocusing technique. The models of the two projectors and the camera were described in Sec. 1.4. The camera is attached with a 16 mm focal length lens (Model: Computar M1614-MP2) and an ETL (Model: Optotune EL-16-40-TC). The ETL offers continuous focus tunability from -10 to +10 diopters. For these experiments, both projector focal planes were adjusted to approximately 850 mm, both projectors were calibrated for a depth range of 450 mm to 850 mm using the calibration approach described in Sec. 1.2, and they remained untouched after calibration.

We first quantitatively evaluated the measurement accuracy of the proposed method by measuring three spheres with diameters of 40 mm, 80 mm and 200 mm. These spheres were positioned at approximately $\mathbf {Z}_1 =$ 450 mm, $\mathbf {Z}_2 =$ 650 mm, and $\mathbf {Z}_3 =$ 850 mm, respectively. The corresponding driving currents for these focal planes are $\mathbf {i}_{1}$ = -41.41 mA, $\mathbf {i}_{2}$ = -48.40 mA and $\mathbf {i}_{3}$ = -54.79 mA, respectively. We measured each sphere with all three current values, i.e., with different amounts of defocusing. For example, for the sphere with a diameter of 40 mm positioned at $\mathbf {Z} = 450$ mm, the captured images are focused when the driving current is $\mathbf {i}_{1}$ = -41.41 mA, and blurred for the other settings. Figure 6(a) shows the 3D result when the camera is focused. Figures 6(c) and 6(e) respectively show the 3D results when the driving current is set to $\mathbf {i}_{2}$ and $\mathbf {i}_{3}$. To quantify the differences, the measured points were fitted to a sphere with a diameter of 40 mm, and an error map was generated by taking the difference between each measured data point and the fitted ideal sphere. Figures 6(b), 6(d) and 6(f) show these error maps. The root-mean-square (RMS) error was calculated for each error map and is presented in Table 2. The measurement quality is best (i.e., the RMS error is smallest) when the camera is focused, as expected.
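The sphere-based error analysis can be sketched as follows; the algebraic center fit and radial residuals used here are one common choice and are not necessarily the authors’ exact procedure, which reports a depth error map against the fitted ideal sphere.

import numpy as np

def sphere_rms_error(points, diameter):
    # points: (K, 3) measured coordinates in mm; diameter: nominal sphere diameter.
    # Estimate the sphere center with an algebraic least-squares fit, then compute
    # the RMS of the radial residuals against the ideal sphere of known diameter.
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    residuals = np.linalg.norm(P - center, axis=1) - diameter / 2.0
    return np.sqrt(np.mean(residuals ** 2))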


Table 2. RMS errors for results shown in Fig. 6. (Unit: mm)

Similarly, we measured the second and third spheres and analyzed the corresponding errors. The results are shown in the remainder of Fig. 6 and Table 2. Once again, these experiments demonstrate that the best-quality 3D shape measurement is achieved when the camera is in focus, as expected.

Fig. 6. Measurement results of spheres at different locations. The first row shows results of the sphere with a diameter of 40 mm positioned at $\mathbf {Z} =$450 mm, the second row shows results of the sphere with a diameter of 80 mm positioned at $\mathbf {Z} =$650 mm, and the third row shows results of the sphere with a diameter of 200 mm positioned at $\mathbf {Z} =$850 mm. The first column shows 3D results when the driving current is $\mathbf {i}_{1}$, the second column shows the corresponding error maps for the results shown in the first column, the third column shows 3D results when the driving current is $\mathbf {i}_{2}$, the fourth column shows the corresponding error maps for the results shown in the third column, the fifth column shows 3D results when the driving current is $\mathbf {i}_{3}$, and the sixth column shows the corresponding error maps for the results shown in the fifth column.

We then plotted one cross-section of each measured sphere under different defocusing levels. Figure 7 shows the corresponding results. One may notice that even though each sphere was measured under different levels of defocusing, the depth differences are rather small in all cases (i.e., the worst case is approximately 2 mm). These experiments confirm that the desired focal plane can be roughly estimated from the reconstructed 3D results and Eq. (10) even if the camera is significantly defocused.

Fig. 7. One cross section of each of the measured spheres under different defocusing levels. (a) The first sphere at $\mathbf {i}_{1}$, $\mathbf {i}_{2}$, $\mathbf {i}_{3}$; (b) the second sphere at $\mathbf {i}_{1}$, $\mathbf {i}_{2}$, $\mathbf {i}_{3}$; and (c) the third sphere at $\mathbf {i}_{1}$, $\mathbf {i}_{2}$, $\mathbf {i}_{3}$.

To further evaluate the proposed method, a small statue with complex surface geometry was measured. The statue was randomly placed within the depth range of 450 mm to 850 mm (i.e., the projector-calibrated depth range), and the ETL current was set such that the statue was clearly blurred. Figure 8(a) shows a photograph of the statue with an initial current of -80 mA; clearly the camera is significantly defocused. We then measured the statue with this setting. Figure 8(b) shows the reconstructed 3D shape, in which details are smoothed out. From this measurement, we selected a small region (marked in Fig. 8(a)) within the measured data, calculated the average depth value $\mathbf {Z} = 481.17$ mm, and then determined the optimal current value for a focused image, $\mathbf {i}_o$ = -38.69 mA, using Eq. (10). We then measured the object again with this current value. Figures 8(c) and 8(d) respectively show the photograph of the statue at the optimal current value and the corresponding reconstruction. To better visualize the differences, close-up views of these results are shown in Figs. 8(e)-8(h). Compared with the initial measurement, the focused image is sharp and the 3D results show detailed structures, proving that the proposed autofocusing method works for this example.

Fig. 8. Experimental results of automatically focusing on a smaller statue within the working range. (a) Photograph of the object with the initial setting; (b) 3D result with the initial setting of -80 mA; (c) photograph of the object with the focused setting of -38.69 mA; (d) 3D result with the focused setting; (e)-(h) close-up views of the results shown in (a)-(d).

In addition, a rather large statue was measured. It was randomly placed within the depth range of 450 mm to 850 mm, but further away from the system than the smaller statue. Once again, the camera was initially set to be defocused, and the proposed method was then used to determine the current that focuses the camera for high-quality 3D shape measurement. Figure 9 shows the measurement results. Once again, the proposed method works well for this case.

Fig. 9. Experimental results of automatically focusing on a larger statue within the working range. (a) Photograph of the object with the initial setting; (b) 3D result with the initial setting of -90 mA; (c) photograph of the object with the focused setting of -46.09 mA; (d) 3D result with the focused setting; (e)-(h) close-up views of the results shown in (a)-(d).

3. Conclusion

This paper has presented an approach to rapidly focus a 3D shape measurement system that employs dual projectors and a camera attached with an ETL. The ETL can be electronically controlled, and the best focal current is determined from the depth value measured by the system at an unknown focus level. 3D reconstruction is realized using the corresponding points of the two projectors, and thus camera calibration is not required. We experimentally demonstrated that our proposed method works well over a calibrated depth range from 450 mm to 850 mm.

Funding

Directorate for Computer and Information Science and Engineering (IIS-1637961); National Natural Science Foundation of China (61801057); Education Department of Sichuan Province (18ZB0124); Chengdu University of Information Technology (S201910621117).

Acknowledgments

This work was carried out at the School of Mechanical Engineering, Purdue University, West Lafayette, Indiana, 47907, USA. Professor Min Zhong was a visiting scholar at Purdue University, and the authors would like to acknowledge the support of the National Natural Science Foundation of China (NSFC) (61801057), Sichuan education department project 18ZB0124, College Students’ innovation and entrepreneurship training program S201910621117, and the National Science Foundation (NSF) (IIS-1637961).

Disclosures

The authors declare no conflicts of interest.

References

1. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Lasers Eng. 128, 106029 (2020). [CrossRef]  

2. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

3. B. Li and S. Zhang, “Microscopic structured light 3d profilometry: Binary defocusing technique vs. sinusoidal fringe projection,” Opt. Lasers Eng. 96, 117–123 (2017). [CrossRef]  

4. L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” in ACM SIGGRAPH 2006 Papers, (2006), pp. 907–915.

5. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43(8), 2666–2680 (2010). [CrossRef]  

6. Y. Zhang, Z. Xiong, P. Cong, and F. Wu, “Robust depth sensing with adaptive structured light illumination,” J. Vis. Commun. Image Represent. 25(4), 649–658 (2014). [CrossRef]  

7. S. Achar and S. G. Narasimhan, “Multi focus structured light for recovering scene shape and global illumination,” in European Conference on Computer Vision, (Springer, 2014), pp. 205–219.

8. A. Sauceda and J. Ojeda-Castañeda, “High focal depth with fractional-power wave fronts,” Opt. Lett. 29(6), 560–562 (2004). [CrossRef]  

9. V. N. Le, S. Chen, and Z. Fan, “Optimized asymmetrical tangent phase mask to obtain defocus invariant modulation transfer function in incoherent imaging systems,” Opt. Lett. 39(7), 2171–2174 (2014). [CrossRef]  

10. Y. Wu, L. Dong, Y. Zhao, M. Liu, X. Chu, W. Jia, X. Guo, and Y. Feng, “Analysis of wavefront coding imaging with cubic phase mask decenter and tilt,” Appl. Opt. 55(25), 7009–7017 (2016). [CrossRef]  

11. J. R. Alonso, A. Fernández, G. A. Ayubi, and J. A. Ferrari, “All-in-focus image reconstruction under severe defocus,” Opt. Lett. 40(8), 1671–1674 (2015). [CrossRef]  

12. O.-J. Kwon, S. Choi, D. Jang, and H.-S. Pang, “All-in-focus imaging using average filter-based relative focus measure,” Digit. Signal Process. 60, 200–210 (2017). [CrossRef]  

13. V. Aslantas and D. Pham, “Depth from automatic defocusing,” Opt. Express 15(3), 1011–1023 (2007). [CrossRef]  

14. S. Murata and M. Kawamura, “Particle depth measurement based on depth-from-defocus,” Opt. Laser Technol. 31(1), 95–102 (1999). [CrossRef]  

15. J. M. Jabbour, B. H. Malik, C. Olsovsky, R. Cuenca, S. Cheng, J. A. Jo, Y.-S. L. Cheng, J. M. Wright, and K. C. Maitland, “Optical axial scanning in confocal microscopy using an electrically tunable lens,” Biomed. Opt. Express 5(2), 645–652 (2014). [CrossRef]  

16. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “High-speed transport-of-intensity phase microscopy with an electrically tunable lens,” Opt. Express 21(20), 24060–24075 (2013). [CrossRef]  

17. H. Li, J. Peng, F. Pan, Y. Wu, Y. Zhang, and X. Xie, “Focal stack camera in all-in-focus imaging via an electrically tunable liquid crystal lens doped with multi-walled carbon nanotubes,” Opt. Express 26(10), 12441–12454 (2018). [CrossRef]  

18. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57(7), B184–B189 (2018). [CrossRef]  

19. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Large depth-of-field 3d shape measurement using an electrically tunable lens,” Opt. Express 27(21), 29697–29709 (2019). [CrossRef]  

20. X. Hu, G. Wang, J.-S. Hyun, Y. Zhang, H. Yang, and S. Zhang, “Autofocusing method for high-resolution three-dimensional profilometry,” Opt. Lett. 45(2), 375–378 (2020). [CrossRef]  

21. D. Malacara, Optical shop testing, vol. 59 (John Wiley & Sons, 2007).

22. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014). [CrossRef]  

23. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018). [CrossRef]  

24. F. Crete, T. Dolmiere, P. Ladret, and M. Nicolas, “The blur effect: perception and estimation with a new no-reference perceptual blur metric,” in Human vision and electronic imaging XII, vol. 6492 (International Society for Optics and Photonics, 2007), p. 64920I.
