Optica Publishing Group

Binocular vision profilometry for large-sized rough optical elements using binarized band-limited pseudo-random patterns

Open Access

Abstract

In this paper, a non-contact binocular vision profilometry method is proposed to measure a rough lens with an aperture of around 300 mm. A series of binarized band-limited pseudo-random patterns (BBPPs) is projected onto the rough lens, and a temporal encoding method is used so that each pixel in the captured images carries a specific code word. Homologous points are matched via a stereo matching procedure, and the surface of the rough lens is then reconstructed by triangulation from the previously acquired calibration data. Compared with the coordinate measuring machine (CMM), this method achieves a fast and inexpensive measurement of the large-sized rough lens, and it may enable fast, full-field measurement of metre-sized rough elements in the future.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, with the development of laser fusion, aerospace remote sensing, extreme ultraviolet lithography, etc., optical systems have become more and more complex. Large numbers of large-sized, high-precision optical elements are required to improve system performance. For example, the National Ignition Facility (NIF) requires a total of 7648 meter-scale optics (~0.5 to 1.0 m) with nanometer-scale error [1], while the Shen Guang II laser facility (SG-II) contains more than 3000 large-sized optical elements with apertures over 280 mm.

Among the many processing methods for optical elements, the most widely used is removal processing, which is suitable for high-precision, large-sized surfaces. Removal processing is divided into three stages: grinding, rough polishing, and precision polishing. The surface accuracy increases at each stage, and each stage requires a different detection method. In the grinding stage, the CMM is a common and important instrument for surface measurement of optical elements. A CMM achieves micrometer or sub-micrometer precision, depending on the specific equipment and the measurement area, but such a point-by-point measuring instrument is inefficient for large-sized rough elements.

Optical three-dimensional (3-D) measurement methods have been widely studied and gradually applied in industrial production in the past few years [2–7]. As a kind of structured light, the band-limited pattern (BLP) was first proposed by A. Wiegmann in 2006 to encode human faces [8]; both the high and low frequencies of the projected pattern were suppressed to improve the accuracy of sub-pixel interpolation. Such BLPs, or binarized BLPs, have also been used in the 3-D reconstruction of plaster heads and human faces [9–11]. Compared with CMMs, optical 3-D measurement methods achieve non-contact, fast measurement of rough optical elements, but usually not with the precision of a CMM. To quantify the measurement accuracy, a rough lens with an aperture of around 300 mm is measured in the experiment. It should be pointed out that the scattering characteristics of the rough lens increase the decoding error of structured light based on spatial coding. Since the large-sized rough lens is static during processing, we use a temporal coding strategy to establish a code word for each point, i.e., several pseudo-random patterns are projected onto the rough lens sequentially to obtain a dense 3-D point cloud.

In this paper, we propose a binocular vision profilometry system combining BBPP projection and temporal coding. The remainder of the paper is organized as follows: first, the setup of our system is introduced. Then the structured light used in the system is elucidated. Next, the temporal coding method and the detailed 3-D reconstruction algorithm are discussed. Finally, the experimental data and their analysis are presented.

2. Principles

2.1 System setup

The principle of binocular vision profilometry is similar to the depth perception of human eyes: two cameras capture images of the object from different perspectives. A point on the object is imaged at different locations in the two cameras; we call these homologous points. The disparity between two homologous points encodes the depth information, so the 3-D profile of the entire object surface can be reconstructed with the help of the calibration data.
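The disparity-to-depth relation behind this principle, for a rectified stereo pair, is Z = fB/d. The sketch below is our illustration, not the authors' code; the focal length and the sample disparity are hypothetical, while the 400 mm baseline and ~1.0 m working distance are the system values quoted in this paper.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d,
# where f is the focal length in pixels, B the baseline, and d the disparity.
def depth_from_disparity(d_pixels, focal_px, baseline_mm):
    """Return the depth (mm) of a point observed with disparity d_pixels."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_mm / d_pixels

# Hypothetical numbers: f = 2000 px, B = 400 mm (the system's baseline),
# d = 800 px  ->  Z = 1000 mm, roughly the working distance of the setup.
z = depth_from_disparity(800, 2000.0, 400.0)
```

Larger disparities correspond to nearer points, which is why disparity maps directly encode the surface profile.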

Figure 1 shows our measuring system. The measuring module contains a digital light processing (DLP) projector and two cameras (1600 × 1200, pixel size = 4.5 μm) located on either side of the DLP. The baseline between the two cameras is about 400 mm, and the angle between them is adjusted until the rough lens is imaged by both cameras simultaneously. To achieve full-field measurement, the rough lens is placed at a distance of about 1.0 m from the DLP, so that the whole surface is covered by the projected patterns. The deformed patterns are captured by the two cameras synchronously.


Fig. 1 Schematic diagram of stereo profilometry system.


The steps of active 3-D measurement comprise calibration, projection of structured light patterns (including BBPPs), collection of the deformed patterns, image pre-processing, stereo matching, and 3-D reconstruction. We focus on the two most important parts: the structured light patterns (including BBPPs) and stereo matching, as shown in Fig. 2. For more information about the whole binocular 3-D measurement procedure, refer to [10].


Fig. 2 Flow chart of binocular vision profilometry.


2.2 Band-limited pseudo-random patterns

According to the number of projected patterns, 3-D measurements can be divided into single-shot [12,13] and multi-shot [14] methods. The single-shot method is suitable for measuring dynamic objects, while the multi-shot technique has advantages for high-precision measurement of static objects. Owing to the strong scattering of the rough lens surface, the spatial coding used in single-shot methods may produce decoding errors at points disturbed by scattered light from their adjacent regions. In our experiment, the object is a static rough lens placed on a platform, so we choose the multi-shot technique and code the patterns in the time domain, which is called the temporal coding method.

The generation of the binarized band-limited (0.05–0.1) pseudo-random pattern used in the experiment is shown in Figs. 3(a)–3(c). First, we generate a pseudo-random 2-D coded pattern, as shown in Fig. 3(a). This pattern is then transformed to the frequency domain by a Fourier transform, and the band-limited pattern is obtained via band-limited filtering; Fig. 3(b) shows the filtered pattern. Finally, the band-limited pattern is binarized to obtain the BBPP, as shown in Fig. 3(c).
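The three generation steps can be sketched in a few lines. This is our illustrative implementation, not the authors' code: the pattern size, the interpretation of the 0.05–0.1 band as normalized radial frequencies, and the median threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 256, 256
# (a) pseudo-random binary pattern
pattern = rng.integers(0, 2, size=(h, w)).astype(float)

# (b) band-limit in the frequency domain: keep radial frequencies in [0.05, 0.1]
F = np.fft.fftshift(np.fft.fft2(pattern))
fy = np.fft.fftshift(np.fft.fftfreq(h))
fx = np.fft.fftshift(np.fft.fftfreq(w))
radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
band = (radius >= 0.05) & (radius <= 0.1)
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * band)))

# (c) binarize around the median to obtain the BBPP
bbpp = (filtered > np.median(filtered)).astype(np.uint8)
```

Thresholding at the median keeps the binarized pattern balanced between black and white, which preserves the contrast of the projected speckle.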


Fig. 3 Generation of binarized band-limited (0.05-0.1) pseudo-random pattern. (a) Original binary pattern; (b) Low-pass filtered pattern; (c) Binarized band-limited pseudo-random pattern.


In the experiment, N mutually different BBPPs are projected onto the rough lens. These patterns are random in the spatial domain, but each point has a unique code along the temporal direction. The combination of BBPP and temporal coding has several advantages over a spatial coding strategy. First, the completely random encoding in the spatial domain diminishes the neighbor-disturbing effect introduced by surface scattering. Second, a system projecting binary patterns is less sensitive to non-uniform ambient illumination than one projecting gray-scale patterns. Finally, the temporal coding strategy ensures that each pixel has its own specific code word, and point-to-point matching yields a dense point cloud. In contrast, the point cloud obtained by a spatial coding strategy, which relies on the gray values of neighboring pixels, may be relatively sparse because scattering from the rough surface leaves some homologous points of the two cameras unmatched.

However, in the acquisition of the deformed patterns, blurring caused by multiple reflections inside the rough lens must be considered. Figure 4(b) shows the blurred image of the rough lens. Multiple reflections decrease the image contrast and increase the noise, which seriously affects the subsequent reconstruction.


Fig. 4 Removal of multiple reflection. (a) Placement of linear polarizers. (b) Image captured before polarizers are placed. (c) Image captured after polarizers are placed.


In the experiment, we found that the propagation of light inside the rough lens is actually a depolarization process, i.e., if polarized light is projected, the reflections from the different surfaces of the rough lens have different polarizations. We therefore placed three linear polarizers in front of the lenses of the projector and the two cameras, as shown in Fig. 4(a). Their polarization directions are adjusted so that only the light reflected from the surface to be measured is collected by the CCDs. The images captured before and after installation of the polarizers are shown in Figs. 4(b) and 4(c), respectively. The latter shows stronger contrast and lower noise, which reduces the difficulty of image pre-processing and improves the accuracy of stereo matching.

2.3 Stereo matching

As mentioned before, N BBPPs are projected onto the object, and the two cameras synchronously capture the deformed patterns reflected from it. A schematic diagram of the captured (and binarized) pattern sequences is shown in Fig. 5. There is a disparity d between a point p(x, y) in the left image and its homologous point p(x+d, y) in the right image.


Fig. 5 Schematic diagram of captured pattern sequences.


It must be mentioned that all images obtained from the different perspectives undergo epipolar rectification in advance, so that each pair of homologous points lies on the same horizontal epipolar line.

For a pair of given homologous points, we establish a pair of N-dimensional code words in the time domain:

$$C_L = \left[\, I_L(x,y,1),\; I_L(x,y,2),\; \ldots,\; I_L(x,y,t),\; \ldots,\; I_L(x,y,N) \,\right], \tag{1}$$
$$C_R = \left[\, I_R(x+d,y,1),\; I_R(x+d,y,2),\; \ldots,\; I_R(x+d,y,t),\; \ldots,\; I_R(x+d,y,N) \,\right], \tag{2}$$
where the subscripts L and R denote points in the left and right images, respectively. The pixel values IL(x, y, t) and IR(x+d, y, t) are 0 or 1 owing to the binarization of the captured images.

Zero mean normalized cross correlation (ZNCC) is one of the most popular similarity evaluation algorithms. To be specific in our temporal correlation technology, ZNCC can be described as follows:

$$\mathrm{ZNCC}(x,y,d) = \frac{\displaystyle\sum_{t=1}^{N} \left[ I_L(x,y,t) - M_L \right] \left[ I_R(x+d,y,t) - M_R \right]}{D_L(x,y)\, D_R(x+d,y)}. \tag{3}$$
The numerator of Eq. (3) represents the correlation between the two N-dimensional code words CL and CR. M and D are the mean intensity and the standard deviation of a code word, respectively; they are given by

$$M = \frac{1}{N} \sum_{t=1}^{N} I(x,y,t), \tag{4}$$
$$D = \sqrt{ \frac{1}{N} \sum_{t=1}^{N} \left( I(x,y,t) - M \right)^2 }. \tag{5}$$

The value of ZNCC measures the degree of correlation between a pair of candidate homologous points. For a point p(x, y) in the left image, the corresponding homologous point is found by searching for the maximum ZNCC along the same epipolar line in the right image.
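This temporal ZNCC search can be sketched as follows. The sketch is our illustration under assumed data layouts (code words stored as an (H, W, N) array); the extra 1/N factor relative to Eq. (3) simply normalizes scores to [-1, 1].

```python
import numpy as np

def zncc(cl, cr):
    """ZNCC of two N-dimensional temporal code words, normalized to [-1, 1]."""
    cl = np.asarray(cl, float)
    cr = np.asarray(cr, float)
    dl, dr = cl.std(), cr.std()
    if dl == 0.0 or dr == 0.0:
        return 0.0  # constant code word: correlation undefined, treat as no match
    return float(np.mean((cl - cl.mean()) * (cr - cr.mean())) / (dl * dr))

def match_along_epipolar(left, right, x, y, d_max):
    """Find the disparity d maximizing ZNCC between left[y, x] and right[y, x + d]."""
    width = right.shape[1]
    best_d, best_score = None, -np.inf
    for d in range(0, min(d_max, width - 1 - x) + 1):
        score = zncc(left[y, x], right[y, x + d])
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score

# Deterministic demo: the code word at column x is the 8-bit expansion of x + 1,
# and the right view is the left view shifted by a true disparity of 5 pixels.
H, W, N = 3, 30, 8
left = np.array([[[(x + 1) >> t & 1 for t in range(N)]
                  for x in range(W)] for _ in range(H)], float)
true_d = 5
right = np.zeros_like(left)
right[:, true_d:, :] = left[:, :W - true_d, :]
d, score = match_along_epipolar(left, right, x=10, y=1, d_max=10)
# d == 5; score is 1.0 up to rounding (exact code-word match)
```

Because rectification confines the search to one image row, the cost of matching grows only with the disparity range, not with the image height.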

3. Experiment and data analysis

The steps of binocular vision profilometry can be elucidated as:

Step 1. System calibration

We adopted the off-line calibration method proposed by Zhang in 2000 [15]. A planar calibration board with a size of 300 mm × 300 mm was fixed on a tripod; each checkerboard square is 20 mm × 20 mm. Images of the checkerboard in various postures were obtained by rotating the ball head. We collected 10-20 pairs of images of the calibration board for the standard calibration procedure by J. Y. Bouguet [16]. The two cameras were calibrated separately, and a stereo calibration was then carried out to obtain the geometric relationship between them. The statistical reprojection errors of the two cameras are around 0.07 pixel in the x and y directions; the error distributions are shown in Fig. 6.


Fig. 6 Reprojection errors (in pixel) of left and right camera.


Step 2. Band-limited pseudo-random patterns projection and collection

N BBPPs were projected onto the rough lens by the DLP sequentially. In the experiment, we verified that the optimum value of N is 20 [8], i.e., the coding length is 20, which ensured that each point had a specific code word. At the same time, the two cameras synchronously captured the deformed patterns reflected from the surface of the rough lens. In addition, an extra white-light pattern was projected onto the rough lens so that the white region of interest (ROI) and the black background could be clearly distinguished; the ROI was then easily extracted via binarization.
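As a sanity check on the coding length (our sketch, not the authors' analysis): with N = 20 binary patterns, two given pixels share the same temporal code word with probability 2^-20, so the code words along one 1600-pixel camera row are almost all distinct, and collisions within a limited disparity search range are rarer still.

```python
import numpy as np

rng = np.random.default_rng(42)
N, W = 20, 1600                             # coding length and camera width used here
row_codes = rng.integers(0, 2, size=(W, N))
packed = row_codes.dot(1 << np.arange(N))   # pack each 20-bit code word into an int
unique_ratio = np.unique(packed).size / W
# unique_ratio is very close to 1: nearly every column carries a distinct code word
```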

Step 3. Image pre-processing

Because the images captured by the cameras cannot be used directly for stereo matching, several image processing steps such as epipolar rectification, ROI extraction, and binarization are required. A pair of rectified captured images is shown in Figs. 7(a) and 7(b), their ROIs in Figs. 7(c) and 7(d), and the binarized left and right images in Figs. 7(e) and 7(f), respectively.
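One simple way to binarize the captured sequences, sketched below on synthetic data (an assumption of ours, not necessarily the authors' procedure), is to threshold each pixel against its own temporal mean, which cancels the pixel-wise albedo of the rough surface:

```python
import numpy as np

rng = np.random.default_rng(3)
albedo = rng.uniform(0.2, 1.0, size=(4, 4, 1))   # per-pixel reflectivity (synthetic)
codes = rng.integers(0, 2, size=(4, 4, 20))      # ground-truth projected bits
codes[..., 0], codes[..., 1] = 0, 1              # ensure each pixel sees both levels
stack = albedo * (0.3 + 0.6 * codes)             # noiseless captured intensities

# Threshold each pixel against its temporal mean: a value above the mean is a 1.
binary = (stack > stack.mean(axis=2, keepdims=True)).astype(np.uint8)
# binary reproduces the projected codes despite the varying albedo
```

A per-pixel threshold of this kind is robust to slowly varying illumination, whereas a single global threshold would fail on a surface with strong reflectivity variations.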


Fig. 7 A pair of captured images and binarization of them. (a) Left image; (b) Right image; (c) ROI of left image; (d) ROI of right image; (e) Binarization of left image; (f) Binarization of right image.


Step 4. Stereo matching

We found 420353 pairs of homologous points by maximizing the matching cost function in Eq. (3). Because the ROI of the rough lens had been extracted accurately, most mismatches were avoided, but some outliers remained in the point cloud. Range and gradient constraints on the disparity are two common methods for removing outliers: since the height of the rough lens varies continuously, homologous points with anomalous disparity values or disparity gradients are regarded as outliers. In the experiment, 9438 pairs of homologous points were removed, and the matching error rate is less than 3%. The disparity map is shown in Fig. 8.
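The range and gradient constraints can be sketched as follows (our illustration; the numeric thresholds are hypothetical, as the paper does not state them):

```python
import numpy as np

def remove_outliers(disp, d_min, d_max, grad_max):
    """Keep disparities within [d_min, d_max] whose local gradient magnitude
    does not exceed grad_max; the remaining points are flagged as outliers."""
    in_range = (disp >= d_min) & (disp <= d_max)
    gy, gx = np.gradient(disp)
    smooth = np.sqrt(gx ** 2 + gy ** 2) <= grad_max
    return in_range & smooth

disp = np.full((50, 50), 100.0)   # a smooth (here constant) disparity map
disp[25, 25] = 500.0              # one mismatched point
mask = remove_outliers(disp, 50.0, 200.0, 5.0)
# the spike is rejected by the range test and its four neighbors by the gradient test
```

On a continuously varying surface such as the lens, both constraints are cheap to evaluate and remove isolated mismatches without touching the smooth bulk of the map.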


Fig. 8 Disparity map of the rough lens.


Step 5. 3-D reconstruction

The 3-D data of the rough lens are reconstructed by combining the disparity map with the calibration data. The 3-D profile of the whole surface is obtained by triangulation, as shown in Fig. 9.
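The back-projection from disparity map to 3-D coordinates can be sketched as below; this is our illustration, with hypothetical intrinsics (focal length in pixels, principal point) standing in for the calibrated values:

```python
import numpy as np

def disparity_to_points(disp, f_px, baseline_mm, cx, cy):
    """Back-project a rectified disparity map (pixels) to 3-D points (mm)."""
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = f_px * baseline_mm / disp        # depth by triangulation
    x = (u - cx) * z / f_px              # lateral coordinates from the pinhole model
    y = (v - cy) * z / f_px
    return np.dstack([x, y, z])

disp = np.full((4, 4), 800.0)            # constant disparity -> fronto-parallel plane
pts = disparity_to_points(disp, 2000.0, 400.0, 2.0, 2.0)
# every point lies at depth z = 2000 * 400 / 800 = 1000 mm
```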


Fig. 9 3-D point cloud of the rough lens.


In the manufacture of spherical lenses, circle fitting of the surface is a common way to establish a coordinate system; it is applied to align the center of the lens with the operating platform and to measure surface roundness, coaxiality, etc. The CMM measurement data were used as the reference in the experiment owing to their high accuracy. As shown in Fig. 10, an arc passing through the spherical vertex was marked at a fixed position on the rough lens. We fitted this arc and obtained its radius from the CMM data and from our system, respectively. By comparing the fitted radii, the accuracy of our method can be quantified.


Fig. 10 Schematic diagram of data acquisition.


A CMM produced by Xi'an High-tech AEH Industrial Metrology Co., Ltd., with a precision of about 3 μm, was used to measure the whole surface. The reference radius of the marked arc is 308.060 mm. To obtain more data for an error range, the rough lens was placed on a nanoscale electric displacement platform and moved along the Z-axis in Fig. 10. We collected four sets of data at positions 0 mm, 5 mm, 15 mm, and 20 mm and applied the same fitting method to each. Table 1 lists the fitted radii at these four positions: 308.080 mm, 308.019 mm, 308.040 mm, and 308.039 mm, respectively. The deviations, also listed in Table 1, were obtained by comparing these radii with the reference value. A more intuitive presentation of the radii is shown in Fig. 11. Note that the disparity map and 3-D profile shown above are the reconstructed data at position 0 mm.


Table 1. Fitting Radii and Deviations at Different Positions


Fig. 11 Fitted radii R at different positions.


Benefiting from the continuous surface of the rough lens and the large arc angle, a large amount of reliable data was obtained. Several factors introduce noise, such as surface scattering, lens aberration, ambient light, and environmental vibration, resulting in local perturbations on the curve, as shown in Fig. 11. However, least-squares fitting is a statistical method, so the fitted curve need not pass through every point; instead, it approximates the underlying relationship of the data, and local noise therefore has little effect on the fitted radius. To illustrate this, as shown in Fig. 12, a random point on the original curve at 0 mm is moved by 1 mm along the positive height axis, yet the difference between the fitted radii of the two curves is only about 0.5 μm. In our experiment, the differences between the fitted radii of our method and the CMM were less than 50 μm, while the point cloud consisted of more than 4.0 × 10^5 points. All the measuring steps of our method were completed within several minutes, achieving fast, full-field measurement of the large-sized rough lens.
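The robustness of the radius fit to a single perturbed point can be reproduced with a least-squares circle fit. The sketch below is our illustration: the algebraic (Kasa) fit, the sampling density, and the arc extent are assumptions, not the authors' exact procedure.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit: minimize
    sum_i (x_i^2 + y_i^2 + a*x_i + b*y_i + c)^2 over a, b, c."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, np.sqrt(cx ** 2 + cy ** 2 - c)

# A hypothetical dense arc of radius 308.06 mm (the CMM reference value)
n = 100000
theta = np.linspace(-0.3, 0.3, n)
x = 308.06 * np.sin(theta)
y = 308.06 * np.cos(theta)
_, _, r0 = fit_circle(x, y)

# Move one sample 1 mm along the height axis; the fitted radius barely moves
y2 = y.copy()
y2[n // 2] += 1.0
_, _, r1 = fit_circle(x, y2)
delta_um = abs(r1 - r0) * 1000.0   # change in fitted radius, in micrometres
```

With this sampling density the radius change stays at the micrometre level; the sparser the sampling, the larger the influence of a single outlier on the fit.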


Fig. 12 Effect of local noise on radius fitting accuracy.


4. Conclusion

In summary, a non-contact binocular vision profilometry method was proposed to reconstruct the surface of a rough lens with an aperture of around 300 mm. To handle the scattering of the rough surface, a method combining BBPPs and temporal coding was proposed. Compared with a spatial coding strategy, which relies on the gray values of adjacent pixels, this method has the following advantages: first, it diminishes the neighbor-disturbing effect caused by surface scattering; second, a system using BBPPs is more robust to non-uniform ambient illumination than one using gray-scale patterns; finally, the point-to-point matching yields a dense point cloud. The whole reconstruction was completed within several minutes. Compared with the CMM, this method achieved a fast and inexpensive measurement with high accuracy.

From a long-term point of view, our method is of great significance for improving the fabrication of optical elements, especially large-sized aspheric elements. A spherical rough lens was used in the experiment in order to determine the measurement accuracy, but aspheric elements are in fact more widely used because of their advantages in correcting aberrations, improving image quality, and reducing system weight. A CMM relies on a hard probe for point-by-point local measurement, and its drawbacks become more prominent as the size of the optical element increases, whereas the non-contact, full-field, fast measurement method presented here becomes more valuable. First, the efficiency of the CMM decreases with size, while our method, benefiting from the excellent extensibility of the system, can measure larger elements in a comparable time. Second, the point cloud generated here has clear 3-D coordinates at each point, enabling rapid localization of defective areas to guide processing, which is very valuable for aspheric elements. Finally, optical elements larger than a meter require a CMM with a correspondingly large mechanical travel; such equipment is extremely expensive, or accurate measurement with it may even be impossible, whereas the system presented here is very cheap.

It should be noted that our method is not precise enough to compete with a CMM; this is a common limitation of vision-based measurement methods. Combining the advantages of this method and of the CMM to achieve fast and precise measurement of large rough optical elements is therefore an interesting direction. Specifically, our method achieves fast reconstruction of the whole surface, which is useful for quickly locating the areas that need further processing; those areas can then be analyzed accurately by the CMM to guide the processing. In this way, a fast, accurate, and low-cost measurement scheme is obtained that should be of great interest for the fabrication of large-sized optical elements.

Funding

The Shanghai Science and Technology Committee (16DZ2290102, 17ZR1448100); Bureau of Frontier Sciences and Education Chinese Academy of Sciences (QYZDJ-SSW-JSC014).

References

1. P. A. Baisden, L. J. Atherton, R. A. Hawley, T. A. Land, J. A. Menapace, P. E. Miller, M. J. Runkel, M. L. Spaeth, C. J. Stolz, T. I. Suratwala, P. J. Wegner, and L. L. Wong, “Large optics for the national ignition facility,” Fus. Sci. Technol. 69(1), 295–351 (2016). [CrossRef]  

2. I. Ishii, K. Yamamoto, K. Doi, and K. Tsuji, “High-speed 3D image acquisition using coded structured light projection,” in Proceedings of IEEE Conference on Intelligent Robots and Systems (IEEE, 2007), pp. 925–930. [CrossRef]  

3. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]   [PubMed]  

4. S. V. D. Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016). [CrossRef]  

5. J. Zhang, C. Zhou, and X. Wang, “Three-dimensional profilometry using a Dammann grating,” Appl. Opt. 48(19), 3709–3715 (2009). [CrossRef]   [PubMed]  

6. S. Wei, C. Zhou, S. Wang, K. Liu, X. Fan, and J. Ma, “Colorful 3-D imaging using an infrared Dammann grating,” IEEE Trans. Industr. Inform. 12(4), 1641–1648 (2016). [CrossRef]  

7. H. Li, C. Zhou, S. Wang, Y. Lu, and X. Xiang, “Two-dimensional gold matrix method for encoding two-dimensional optical arbitrary positions,” Opt. Express 26(10), 12742–12754 (2018). [CrossRef]   [PubMed]  

8. A. Wiegmann, H. Wagner, and R. Kowarschik, “Human face measurement by projecting bandlimited random patterns,” Opt. Express 14(17), 7692–7698 (2006). [CrossRef]   [PubMed]  

9. M. Schaffer, M. Grosse, and R. Kowarschik, “High-speed pattern projection for three-dimensional shape measurement using laser speckles,” Appl. Opt. 49(18), 3622–3629 (2010). [CrossRef]   [PubMed]  

10. K. Liu, C. Zhou, S. Wei, S. Wang, X. Fan, and J. Ma, “Optimized stereo matching in binocular three-dimensional measurement system using structured light,” Appl. Opt. 53(26), 6083–6090 (2014). [CrossRef]   [PubMed]  

11. X. Fan, C. Zhou, S. Wang, C. Li, and B. Yang, “3D human face reconstruction based on band-limited binary patterns,” Chin. Opt. Lett. 14(8), 081101 (2016). [CrossRef]  

12. B. Li, Y. An, and S. Zhang, “Single-shot absolute 3D shape measurement with Fourier transform profilometry,” Appl. Opt. 55(19), 5219–5225 (2016). [CrossRef]   [PubMed]  

13. Y. Tanaka, Y. Mori, and T. Nomura, “Single-shot three-dimensional shape measurement by low-coherent optical path difference digital holography,” Appl. Opt. 53(27), G19–G24 (2014). [CrossRef]   [PubMed]  

14. D. Zheng, F. Da, Q. Kemao, and H. S. Seah, “Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter,” Opt. Express 25(5), 4700–4713 (2017). [CrossRef]   [PubMed]  

15. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

16. J. Y. Bouguet, “Camera calibration toolbox for matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc.
