
Flexible dynamic measurement method of three-dimensional surface profilometry based on multiple vision sensors

Open Access

Abstract

A single vision sensor cannot measure an entire object because of its limited field of view. Meanwhile, multiple rigidly fixed vision sensors for the dynamic vision measurement of three-dimensional (3D) surface profilometry are complex and sensitive to strong environmental vibrations. To overcome these problems, a novel flexible dynamic measurement method for 3D surface profilometry based on multiple vision sensors is presented in this paper. A raster binocular stereo vision sensor is combined with a wide-field camera to form a 3D optical probe. Multiple 3D optical probes are arranged around the object being measured, and several planar targets are set up. These planar targets act as the mediator that integrates the local 3D data measured by the raster binocular stereo vision sensors into the global coordinate system. The proposed method is not sensitive to strong environmental vibrations, and the positions of the 3D optical probes need not be rigidly fixed during the measurement. The validity of the proposed method is verified in a physical experiment with two 3D optical probes. When the measuring range of the raster binocular stereo vision sensor is about 0.5 m × 0.38 m × 0.4 m and the size of the measured object is about 0.7 m, the accuracy of the proposed method reaches 0.12 mm. The effectiveness of the proposed method in dynamic measurement is confirmed by measuring rotating fan blades.

© 2015 Optical Society of America

1. Introduction

Industrial production sites are characterized by strong vibrations, narrow measurement space, and other complex environmental factors. A single vision sensor cannot measure an entire object because of its limited field of view. Therefore, multiple vision sensors need to be developed for the large-scale synchronous measurement of three-dimensional (3D) surface profilometry. However, multiple vision sensors are sensitive to strong vibrations, which reduce their measurement accuracy. Thus, the development of a dynamic measurement method for 3D surface profilometry with a high tolerance to strong field vibrations and other complex environmental conditions is crucial.

The current measurement methods for 3D surface profilometry can be classified as contact and non-contact. Contact methods include manual measurements using fixtures and three-coordinate measuring machines. However, such methods have low measurement efficiency and a limited measurement range, and touch probes can damage the surfaces of some measured objects. Non-contact methods involve the use of laser trackers, 3D laser range finders, theodolites, and vision inspection technology; these methods offer high accuracy and are widely applied.

Vision inspection technology [1–4] has developed rapidly with the constant advancements in computer technology, electronics, optics, image processing, and pattern recognition. It has gradually become the principal measurement method for 3D surface profilometry because of its high degree of automation, large measurement range, high accuracy, and non-contact nature. Commonly used vision inspection technologies include passive and active vision inspection.

Passive vision inspection [5–8] requires relatively simple equipment to obtain 3D surface information under natural lighting. Therefore, this technology has been widely applied to measure, for example, auto-body construction, large-scale antennas, and large aircraft. However, it is sensitive to the light intensity on site and to the texture of the measured object, so auxiliary methods such as adhesive mark points and sprayed speckle patterns must be employed. Without these auxiliary means, passive vision inspection is inappropriate for the dynamic measurement of the 3D surface profilometry of objects with complex curved surfaces because there are not enough features for matching.

By projecting specially controlled beams onto the measured object, active vision technologies can obtain high-density 3D surface data. Active vision technologies include, for example, Fourier-transform profilometry [9,10], phase-measuring profilometry [11–13], and structured light vision inspection [14–19]. Fourier-transform profilometry can determine the high-density 3D surface profilometry of the measured object with only one image. This technology is suitable for dynamic measurements, but it requires a long operation time and has poor automatic performance. Despite its high measurement accuracy, phase-measuring profilometry is unsuitable for dynamic measurements because it requires images with several phase differences to be projected repeatedly onto the same area. Some studies used color cameras to measure 3D surface profilometry with phase-measuring profilometry; however, this approach is influenced by the color of the object itself and is thus unsuitable for industrial fields. The structured light method is the 3D dynamic vision inspection method most commonly used in industrial fields at present because of its simple equipment requirement, high degree of automation, and suitability for dynamic measurements. In this paper, the structured light method is used to obtain the 3D surface profilometry of objects.

A single vision sensor cannot measure the whole 3D surface profilometry of an object because of occlusion. The measured area is therefore generally divided into many subareas, and the 3D data of these subareas are integrated into the global coordinate system to obtain the 3D surface profilometry of the entire object. Vision inspection can be classified as flow vision inspection and multi-sensor vision inspection.

Flow vision inspection can measure the 3D surface profilometry of an entire large-scale mechanical component by moving a single vision sensor around the measured object. The measured data of all subareas are integrated into the global coordinate system by using fiducial marks attached to the component. A typical example is ATOS developed by GOM (US). However, the use of fiducial marks is limited because they cannot be attached to soft objects, liquids, or high-accuracy machinery. In addition, this method is time consuming, and the fiducial marks are easily deformed. On the other hand, flow vision inspection has the advantages of simple equipment requirements and easy operation, and it is suitable for static measurements in industrial fields. However, it is unsuitable for the dynamic measurement of 3D surface profilometry.

Multi-sensor vision inspection [20–22] requires multiple vision sensors to be globally calibrated before measurement; the data of all subareas measured by the vision sensors are then integrated into the global coordinate system based on the calibration results. This method is commonly used in industrial fields, for example, in the auto-body geometry measurement systems from Perceptron (US) and the full-profile rail measurement systems from MERMEC (Italy). Although multi-sensor vision inspection has a simple measurement principle, the global calibration of multiple vision sensors is difficult to conduct on site. Furthermore, the measurement accuracy of the system after global calibration is strongly affected by vibrations on site.

To realize the dynamic measurement of the 3D surface profilometry of an object in the presence of strong vibrations, we combine a raster binocular stereo vision sensor and a wide-field camera to form a 3D optical probe. Then, multiple 3D optical probes and planar targets are arranged to form a flexible multi-sensor vision measurement system. Similar to multi-sensor vision inspection, the proposed measurement system contains multiple vision sensors, but they are not rigidly fixed. The remainder of the paper is organized as follows. Section 2 introduces the structures and mathematical models of the 3D optical probes and describes the basic principle of the algorithm. Section 3 presents a physical experiment, in which rotating fan blades are measured using two 3D optical probes, to verify the algorithm.

2. Basic measurement principle of the system

Three 3D optical probes serve as examples in the following to introduce the proposed method. The structural schematic of the measurement system is shown in Fig. 1. The system mainly consists of three 3D optical probes, three planar targets, a high-speed image acquisition system, a computer, system software, and the corresponding mechanisms. The basic principle of the measurement system is as follows. As required, the 3D optical probes are arranged around the measured object at certain angles to ensure that the measurement range covers the entire measured object. Then, the raster binocular stereo vision sensors of the 3D optical probes measure the surface of the local component in real time, and the wide-field cameras shoot the planar targets around the measurement site. Finally, the planar targets in the common field of view of the wide-field cameras are used as the mediator, and the local 3D data obtained by all 3D optical probes are integrated into the global coordinate system.

Fig. 1 Structural schematic of the dynamic vision measuring system for the 3D surface profilometry of a large-scale component under complex site conditions.

The specific measurement steps of the system are listed below.

Step 1: The coordinate system of each 3D optical probe is established, and the optical probes are calibrated. Specifically, the raster binocular stereo vision sensors, the intrinsic parameters of the wide-field cameras, and the transformation matrices between the coordinate systems of the wide-field cameras and those of the raster binocular stereo vision sensors are calibrated.

Step 2: According to the size and shape of the measured object, the 3D optical probes are arranged around it. The planar targets are set up in the common fields of view of the different wide-field cameras. The global coordinate system is established based on the coordinate system of one of the 3D optical probes.

Step 3: During the actual measurement, the raster binocular stereo vision sensor of each 3D optical probe measures the local 3D profilometry of the measured object. Meanwhile, the wide-field cameras shoot the planar targets. Then, the transformation matrix from the coordinate system of each 3D optical probe to the global coordinate system is calculated. Finally, the local 3D data measured by each 3D optical probe are integrated into the global coordinate system.

Step 4: The 3D optical probes continuously capture images of the dynamically changing object and of the planar targets. Step 3 is repeated to realize the dynamic vision measurement of the 3D surface profilometry of the object.

2.1 Mathematical model of the measurement system

In this paper, the mathematical model of the measurement system principally consists of the mathematical models of the 3D optical probes and the global measurement model. Specifically, the models of the raster binocular stereo vision sensor and the wide-field camera are used.

2.1.1 Mathematical model of a 3D optical probe

In this paper, each 3D optical probe principally consists of a raster binocular stereo vision sensor and a wide-field camera. The raster binocular stereo vision sensor is used to measure local 3D data, whereas the wide-field camera is used to integrate global data. The structural schematic of the 3D optical probe is shown in Fig. 2. The raster binocular stereo vision sensor in each probe is composed of two cameras and a projector, and the wide-field camera is composed of a camera and a mirror with four surfaces. The coordinate system $O_o x_o y_o z_o$ of the 3D optical probe is established based on the coordinate system $O_l x_l y_l z_l$ of the wide-field camera, whereas the coordinate system $O_s x_s y_s z_s$ of the raster binocular stereo vision sensor is established based on the coordinate system $O_{c1} x_{c1} y_{c1} z_{c1}$ of the left camera. $T_{os}$ is the transformation matrix from $O_o x_o y_o z_o$ to $O_s x_s y_s z_s$.

Fig. 2 Structural schematic of the 3D optical probe.

1) Mathematical model of a raster vision sensor

As shown in Fig. 3, the coordinate systems of the left and right cameras are $O_{c1} x_{c1} y_{c1} z_{c1}$ and $O_{c2} x_{c2} y_{c2} z_{c2}$, respectively. The transformation matrix from the coordinate system of the left camera to that of the right camera is $T_{21} = \begin{bmatrix} R_{21} & t_{21} \\ 0 & 1 \end{bmatrix}$, where $R_{21}$ and $t_{21}$ are the rotation matrix and the translation vector, respectively; $r_{21}$ is the Rodrigues representation of the rotation matrix $R_{21}$. The coordinate system $O_s x_s y_s z_s$ of the raster binocular stereo vision sensor is established based on $O_{c1} x_{c1} y_{c1} z_{c1}$. $p_1 = [u_1, v_1, 1]^T$ and $p_2 = [u_2, v_2, 1]^T$ are the homogeneous coordinates of the non-distorted images of the raster stripe point $P$ in the image coordinate systems of the left and right cameras, respectively. $l_1$ is the epipolar line of $p_2$ in the image of the left camera, and $l_2$ is the epipolar line of $p_1$ in the image of the right camera. The raster stripe point $P$ projected by the raster binocular stereo vision sensor is imaged by the left and right cameras, respectively. Then, the binocular stereo vision model is used to calculate the homogeneous 3D coordinates $P_s = [x_s, y_s, z_s, 1]^T$ of $P$ in $O_s x_s y_s z_s$:

$$\begin{cases} \rho_1 p_1 = K_1 \left[ \begin{matrix} I & 0 \end{matrix} \right] P_s \\ \rho_2 p_2 = K_2 \left[ \begin{matrix} R_{21} & t_{21} \end{matrix} \right] P_s \end{cases} \tag{1}$$

where $K_m = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$ $(m = 1, 2)$ are the intrinsic parameter matrices of the left and right cameras, respectively; $u_0$ and $v_0$ are the coordinates of the principal point; $f_x$ and $f_y$ are the scale factors along the image $u$ and $v$ axes; and $\gamma$ is the skew of the two image axes [23].
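To make Eq. (1) concrete, the following minimal Python sketch (a standard linear least-squares triangulation, not the authors' implementation; all function and variable names are our own) shows how the point $P_s$ can be recovered from a matched pair $p_1$, $p_2$ once $K_1$, $K_2$, $R_{21}$, and $t_{21}$ are known.

```python
import numpy as np

def triangulate_point(p1, p2, K1, K2, R21, t21):
    """Linear triangulation of one matched point pair in the sense of Eq. (1).

    p1, p2 : (u, v) undistorted pixel coordinates in the left/right images.
    Returns the 3D point expressed in the left-camera (sensor) frame O_s.
    """
    # Projection matrices: the left camera defines the reference frame.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # K1 [I | 0]
    P2 = K2 @ np.hstack([R21, t21.reshape(3, 1)])        # K2 [R21 | t21]

    # Each view contributes two linear equations in the homogeneous point Ps.
    A = np.array([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    # Solve A Ps = 0 by SVD; the solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    Ps = Vt[-1]
    return Ps[:3] / Ps[3]

# Hypothetical usage with made-up calibration values:
# K = np.array([[2000, 0, 680], [0, 2000, 512], [0, 0, 1]], float)
# X = triangulate_point((700, 500), (650, 505), K, K, np.eye(3), np.array([-0.2, 0.0, 0.0]))
```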

Fig. 3 Binocular stereo vision model.

In the actual measurement, lens distortion occurs in the camera's imaging system. $p_d = (u_d, v_d, 1)^T$ is the homogeneous coordinate of the distorted image point, $p = (u, v, 1)^T$ is the homogeneous coordinate of the non-distorted image point, and $p_n = (x_n, y_n, 1)^T$ is the homogeneous coordinate of the normalized image point. The lens distortion model can be expressed as

$$u_d = u + (u - u_0)(k_1 r^2 + k_2 r^4), \qquad v_d = v + (v - v_0)(k_1 r^2 + k_2 r^4) \tag{2}$$

where $r = \sqrt{x_n^2 + y_n^2}$, and $k_1$ and $k_2$ are the coefficients of radial lens distortion.
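As an illustration only (not from the paper; the helper name and sample coefficients are ours, and skew is neglected when normalizing), the radial distortion model of Eq. (2) can be applied to an undistorted pixel as follows.

```python
import numpy as np

def distort_pixel(u, v, K, k1, k2):
    """Apply the radial distortion model of Eq. (2) to an undistorted pixel (u, v)."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    # Normalized image coordinates (skew gamma neglected for simplicity).
    xn, yn = (u - u0) / fx, (v - v0) / fy
    r2 = xn**2 + yn**2
    factor = k1 * r2 + k2 * r2**2
    ud = u + (u - u0) * factor
    vd = v + (v - v0) * factor
    return ud, vd

# Hypothetical example: focal length 2000 px, principal point (680, 512).
# K = np.array([[2000, 0, 680], [0, 2000, 512], [0, 0, 1]], float)
# print(distort_pixel(900.0, 700.0, K, -0.1, 0.05))
```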

2) Mathematical model of a wide-field camera

Existing wide-field cameras mostly adopt hyperboloid, spherical, or cylindrical mirrors. Such cameras are difficult to model and have low measurement accuracy; hence, they are unsuitable for high-accuracy industrial measurements and are principally used for video monitoring and target recognition. To realize high-accuracy wide-field measurements, we use a wide-field, high-resolution camera with a four-surface mirror, which is suitable for multi-angle measurements, as shown in Fig. 4. A picture of the wide-field camera with a four-surface mirror is shown in Fig. 4(b). The wide-field camera can be designed with flat mirrors according to actual needs.

Fig. 4 Structural schematic of the wide-field camera. (a) Sketch map of the wide-field camera with a four-surface mirror. (b) Physical picture of the wide-field camera with a four-surface mirror.

As shown in Fig. 4(a), $O_{mi} x_{mi} y_{mi} z_{mi}$ $(i = 1, 2, 3, 4)$ are the coordinate systems of the four mirror cameras of the wide-field camera. $T_{m21}$, $T_{m31}$, and $T_{m41}$ are the transformation matrices from the coordinate systems of mirror cameras 2, 3, and 4 to that of mirror camera 1, respectively. The coordinate system $O_l x_l y_l z_l$ of the wide-field camera is established based on the coordinate system $O_{m1} x_{m1} y_{m1} z_{m1}$ of mirror camera 1. According to Eq. (3), the homogeneous 3D coordinates $P_o = [x_o, y_o, z_o, 1]^T$ of the measuring point $P$ of the raster binocular stereo vision sensor can be obtained in the coordinate system of the 3D optical probe:

$$P_o = T_{so} P_s \tag{3}$$

Therefore, using Eqs. (1)–(3), we can obtain the 3D coordinates $P_o$ of the measuring point $P$ of the raster binocular stereo vision sensor in the coordinate system $O_o x_o y_o z_o$ of the 3D optical probe.

2.1.2 Global measurement model of the system

The global integration of the measured data of the system is shown in Fig. 5. In the figure, $T_{o_i, t_j}$ is the transformation matrix from the coordinate system of the $i$-th 3D optical probe $o_i$ to that of the $j$-th planar target $t_j$. $T_{o_i, t_j}$ can be solved as previously described in [23] using the planar target images captured by the mirror cameras of the wide-field camera.

Fig. 5 Schematic of the global integrated model.

The global coordinate system is set up on the basis of the coordinate system of 3D optical probe 1. The local 3D data measured by 3D optical probes 2 and 3 can be integrated into the global coordinate system through the following equation:

$$\begin{cases} P_{G1} = T_N P_{o1} \\ P_{G2} = T_{o_1, t_1}^{-1} T_{o_2, t_1} P_{o2} \\ P_{G3} = T_{o_1, t_3}^{-1} T_{o_3, t_3} P_{o3} \end{cases} \tag{4}$$

where $P_{o1}$, $P_{o2}$, and $P_{o3}$ are the 3D data of the local topographies measured by the raster binocular vision sensors of 3D optical probes 1, 2, and 3, respectively, in the corresponding coordinate systems; $P_{G1}$, $P_{G2}$, and $P_{G3}$ are the coordinates of $P_{o1}$, $P_{o2}$, and $P_{o3}$ in the global coordinate system, respectively; and $T_N$ is the unit matrix.
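The sketch below is our own illustration of Eq. (4) with hypothetical pose values: the transform of a shared planar target, observed by probes 1 and 2, is used to chain the local point cloud of probe 2 into the coordinate system of probe 1.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transformation from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def integrate_to_global(points_o2, T_o1_t1, T_o2_t1):
    """Map points from probe 2's frame into probe 1's (global) frame, as in Eq. (4).

    points_o2 : (N, 3) local 3D data measured by probe 2.
    T_o1_t1   : transform from probe 1's frame to the shared target's frame.
    T_o2_t1   : transform from probe 2's frame to the same target's frame.
    """
    T_global = np.linalg.inv(T_o1_t1) @ T_o2_t1                    # T_{o1,t1}^{-1} T_{o2,t1}
    pts_h = np.hstack([points_o2, np.ones((len(points_o2), 1))])   # homogeneous coordinates
    return (T_global @ pts_h.T).T[:, :3]

# Hypothetical usage with made-up target poses:
# P_G2 = integrate_to_global(np.random.rand(100, 3),
#                            pose_to_matrix(np.eye(3), np.zeros(3)),
#                            pose_to_matrix(np.eye(3), np.array([0.5, 0.0, 0.1])))
```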

2.2 Rapid extraction and recognition of light stripe center

2.2.1 Rapid extraction of light stripe center

In [24], Steger proposed an algorithm for extracting a light stripe center based on the Hessian matrix. In this method, the Hessian matrix at every image point is computed by convolving the image with Gaussian derivative masks. In this paper, a method similar to Steger's method [24] is proposed to realize the automatic extraction of the light stripe centers.

1) Extracting the sub-pixel coordinate of the light stripe center

The ideal light stripe center is the vertex of its grayscale surface. The eigenvalue of the Hessian matrix with the maximum absolute value corresponds to the local maximum of the second-order derivative of the grayscale surface. The Hessian matrix at image point $(u, v)$ is

$$H(u, v) = \begin{bmatrix} r_{uu} & r_{uv} \\ r_{uv} & r_{vv} \end{bmatrix} \tag{5}$$

where $r_{uu}$, $r_{uv}$, and $r_{vv}$ are the second-order partial derivatives of the image intensity function $I(u, v)$. They can be computed by convolving the image with the corresponding second-order derivatives of the Gaussian kernel as follows:

$$r_{uu} = g_{uu}(u, v) * I(u, v), \qquad r_{uv} = g_{uv}(u, v) * I(u, v), \qquad r_{vv} = g_{vv}(u, v) * I(u, v) \tag{6}$$

where $g_{uu}$, $g_{uv}$, and $g_{vv}$ are the second-order partial derivatives of the Gaussian convolution kernel.

The eigenvector of $H(u, v)$ corresponding to the eigenvalue with the maximum absolute value represents the normal direction of the light stripe, denoted by $n(t) = (n_u, n_v)^T$. The grayscale distribution in the neighborhood of the initial point $(u_i, v_i)$ of the $i$-th light stripe center is expanded in a second-order Taylor series; the first-order derivative along the normal direction of the light stripe is set to zero, and the sub-pixel coordinates of this point are calculated:

$$\begin{cases} u_i' = u_i - \dfrac{n_u r_u + n_v r_v}{n_u^2 r_{uu} + 2 n_u n_v r_{uv} + n_v^2 r_{vv}}\, n_u \\[2ex] v_i' = v_i - \dfrac{n_u r_u + n_v r_v}{n_u^2 r_{uu} + 2 n_u n_v r_{uv} + n_v^2 r_{vv}}\, n_v \end{cases} \tag{7}$$

where $r_u$ and $r_v$ are the first-order partial derivatives of $I(u, v)$, computed with the corresponding first-order Gaussian derivative kernels. If $(t n_u, t n_v) \in [-\tfrac{1}{2}, \tfrac{1}{2}] \times [-\tfrac{1}{2}, \tfrac{1}{2}]$, where $t = -\dfrac{n_u r_u + n_v r_v}{n_u^2 r_{uu} + 2 n_u n_v r_{uv} + n_v^2 r_{vv}}$, i.e., if the point with zero first-order derivative lies within the current pixel, then $(u_i', v_i')$ is taken as the sub-pixel coordinate of the light stripe center. The details of the process are introduced in [24].
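A minimal sketch of this extraction step is given below (our own illustration using SciPy's Gaussian derivative filters; the threshold value and function name are hypothetical, and the full unbiased detector of [24] includes further refinements).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stripe_centers(image, sigma=2.0, response_thresh=5.0):
    """Sub-pixel light stripe centers via Hessian eigen-analysis (Eqs. (5)-(7))."""
    I = image.astype(float)
    # Gaussian derivatives r_u, r_v, r_uu, r_uv, r_vv; axis order is (row=v, col=u).
    r_u  = gaussian_filter(I, sigma, order=(0, 1))
    r_v  = gaussian_filter(I, sigma, order=(1, 0))
    r_uu = gaussian_filter(I, sigma, order=(0, 2))
    r_uv = gaussian_filter(I, sigma, order=(1, 1))
    r_vv = gaussian_filter(I, sigma, order=(2, 0))

    centers = []
    H, W = I.shape
    for v in range(1, H - 1):
        for u in range(1, W - 1):
            Hes = np.array([[r_uu[v, u], r_uv[v, u]],
                            [r_uv[v, u], r_vv[v, u]]])
            eigval, eigvec = np.linalg.eigh(Hes)
            k = np.argmax(np.abs(eigval))            # direction of strongest curvature
            if np.abs(eigval[k]) < response_thresh:  # weak response: not a stripe point
                continue
            nu, nv = eigvec[:, k]                    # stripe normal direction
            denom = nu**2 * r_uu[v, u] + 2*nu*nv*r_uv[v, u] + nv**2 * r_vv[v, u]
            if denom == 0:
                continue
            t = -(nu * r_u[v, u] + nv * r_v[v, u]) / denom
            # Accept only if the extremum lies inside the current pixel (Eq. (7) condition).
            if abs(t * nu) <= 0.5 and abs(t * nv) <= 0.5:
                centers.append((u + t * nu, v + t * nv, nu, nv))
    return centers
```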

The extraction result for the sub-pixel coordinates of the light stripe centers is shown in Fig. 6. In addition to the correct light stripe center points, some false center points appear at the ends of the light stripes, as shown in Fig. 6(a).

Fig. 6 Extraction result of the light stripe. (a) Extraction result of the sub-pixel coordinates of the light stripe centers. (b) Extraction result of the light stripe centers after linking.

2) Linking the light stripe centers

In the above, the sub-pixel coordinates of the light stripe center points and the orientation of the light stripe at these points are obtained. A direction constraint is added in the process of linking the light stripe center points in order to eliminate the false sub-pixel center points shown in Fig. 6(a).

The whole image is searched for light stripe center points from top to bottom and from left to right. The first pixel containing a qualified light stripe center point is called the initial pixel of the line to be linked, and this pixel is appointed as the current pixel. The three neighboring pixels compatible with the line orientation at the current pixel are then examined. One of the qualified neighboring pixels is added to the line and becomes the new current pixel. This process is repeated until there is no qualified light stripe center point to add.

There are two criteria for a qualified neighboring pixel: 1) the angle between the line orientation at that pixel and at the current pixel is less than a threshold; 2) the distance between the sub-pixel center point of that pixel and that of the current pixel is less than a threshold. If more than one neighboring point qualifies, the point with the minimum distance to the current point is selected.

The linking process of a light stripe ends when there is no qualified neighboring pixel for the current pixel. Then, the initial point of a new light stripe is searched for, and the linking process is repeated for the new light stripe. This process continues until all the pixels of the image have been examined.
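The linking step can be sketched as follows. This is our own simplified illustration (it searches all remaining center points rather than only the three neighboring pixels, and the thresholds and names are hypothetical); it follows the angle and distance criteria above but is not the authors' exact implementation.

```python
import numpy as np

def link_stripe_centers(centers, angle_thresh=np.deg2rad(25), dist_thresh=1.5, min_len=5):
    """Link sub-pixel stripe center points into line segments using direction
    and distance constraints.

    centers : list of (u, v, nu, nv) tuples from the extraction step; the stripe
              tangent at each point is perpendicular to the normal (nu, nv).
    """
    positions = [np.array(c[:2]) for c in centers]
    tangents = [np.array([-c[3], c[2]]) for c in centers]   # rotate normal by 90 degrees
    unused = list(range(len(centers)))
    segments = []

    while unused:
        current = unused.pop(0)                 # initial point of a new line
        segment = [current]
        while True:
            last = segment[-1]
            best, best_dist = None, dist_thresh
            for idx in unused:
                d = np.linalg.norm(positions[idx] - positions[last])
                # Direction constraint: tangent directions must be nearly parallel.
                cos_ang = abs(np.dot(tangents[idx], tangents[last]))
                cos_ang /= np.linalg.norm(tangents[idx]) * np.linalg.norm(tangents[last])
                if d < best_dist and np.arccos(np.clip(cos_ang, -1.0, 1.0)) < angle_thresh:
                    best, best_dist = idx, d
            if best is None:                    # no qualified neighbor: stripe ends
                break
            segment.append(best)
            unused.remove(best)
        if len(segment) > min_len:              # discard short (false) segments
            segments.append(segment)
    return segments
```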

As shown in Fig. 6(b), the false light stripe center points and the background light stripes are removed after enforcing the direction constraint in the light stripe linking process. The linking method not only eliminates false sub-pixel center points but also links the light stripe centers into multiple line segments, as shown in Fig. 6(b).

2.2.2 Fast matching of raster light strips

In Section 2.2.1, light stripe centers are extracted and connected to obtain multiple light stripe segments. The following content describes a fast method for matching light stripes from the left and right cameras of the raster binocular stereo vision sensor.

Suppose the raster binocular stereo vision sensor has been calibrated; then the plane equations $a_i x + b_i y + c_i z + d_i = 0$ $(i = 1, 2, \ldots, n)$ of all the light planes projected by the raster projector are known in the coordinate system of the raster binocular stereo vision sensor.

According to the principle of the epipolar constraint in binocular stereo vision measurement, the epipolar line $l_2$ in the light stripe image shot by the right camera corresponding to any point $p_1$ on a raster light stripe in the left image can be determined. The point $p_2$ corresponding to $p_1$ must be an intersection point of the epipolar line $l_2$ and a raster light stripe, as shown in Fig. 7.

Fig. 7 Schematic of epipolar constraint.

If the point $p_2$ corresponding to $p_1$ can be found, then $p_1$ and $p_2$ are substituted into Eq. (1) to calculate the 3D coordinates of this point in the coordinate system of the raster binocular stereo vision sensor. Moreover, if the light plane equation of the point is given, then its 3D coordinates $P = [x, y, z]^T$ can be calculated through Eq. (8) in the coordinate system of the raster binocular stereo vision sensor.

$$\begin{cases} \rho p = K_1 \left[ \begin{matrix} I & 0 \end{matrix} \right] P \\ a x + b y + c z + d = 0 \end{cases} \tag{8}$$

where $(a, b, c, d)$ are the coefficients of the light plane equation in the coordinate system of the raster binocular stereo vision sensor, $p$ is the homogeneous coordinate of the raster light stripe point in the image coordinate system of camera 1, and $K_1$ is the intrinsic parameter matrix of camera 1.

Based on the above conditions, the concrete steps for the fast matching of light stripes are as follows. First, a light stripe, denoted as Stripe 1, is selected from the light stripes in the left image, and a point $p_1$ is chosen on Stripe 1. Second, based on the calibrated light plane equations, the set $Q_1$ of all possible 3D coordinate points of $p_1$ is solved by Eq. (8). Third, according to Eq. (1), the set $Q_2$ of 3D coordinate points at which the epipolar line $l_2$ in the right image intersects all the raster light stripes is calculated. Fourth, the pair of points from $Q_1$ and $Q_2$ that are closest to each other is found; this pair identifies the corresponding points of the left and right cameras, and the matching of the light stripes of the left and right images is realized. Fifth, steps 1–4 are repeated to match all the light stripes of the left and right images.
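For illustration only (not the authors' code; the function names and the nearest-pair criterion follow the steps above under our own assumptions), matching one left-image point against the right image can be sketched as:

```python
import numpy as np

def backproject_to_plane(p1, K1, plane):
    """Intersect the viewing ray of pixel p1 (camera 1) with a light plane
    a x + b y + c z + d = 0, i.e. solve Eq. (8). Returns a 3D point or None."""
    a, b, c, d = plane
    ray = np.linalg.inv(K1) @ np.array([p1[0], p1[1], 1.0])   # ray direction in camera 1
    denom = a * ray[0] + b * ray[1] + c * ray[2]
    if abs(denom) < 1e-12:
        return None
    return ray * (-d / denom)

def match_point(p1, K1, planes, right_candidates, triangulate):
    """Match p1 against candidate intersection points on the epipolar line l_2.

    planes           : list of (a, b, c, d) light plane coefficients (set Q1 source).
    right_candidates : pixel points where l_2 crosses the right-image stripes.
    triangulate      : function (p1, p2) -> 3D point via Eq. (1), e.g. the
                       triangulate_point sketch given earlier.
    """
    Q1 = [q for q in (backproject_to_plane(p1, K1, pl) for pl in planes) if q is not None]
    Q2 = [triangulate(p1, p2) for p2 in right_candidates]
    # The closest pair between Q1 and Q2 identifies the true light plane and the match.
    best_dist, best_j = min(((np.linalg.norm(q1 - q2), j)
                             for q1 in Q1 for j, q2 in enumerate(Q2)),
                            key=lambda x: x[0])
    return right_candidates[best_j], best_dist   # matched p2 and the 3D distance
```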

2.3 Calibration of a 3D optical probe

In this paper, the calibration of a 3D optical probe mainly includes the calibration of the raster binocular stereo vision sensor, the calibration of the wide-field camera, and the calibration of the transformation matrix $T_{so}$ between the coordinate system of the wide-field camera and that of the raster binocular stereo vision sensor. The calibration of the wide-field camera includes the calibration of its intrinsic parameters and the calibration of the transformation matrices $T_{m21}$, $T_{m31}$, and $T_{m41}$ between the coordinate systems of the four mirror cameras. In this paper, the raster binocular stereo vision sensor and the intrinsic parameters of the wide-field camera are calibrated using the methods in [23,25]. $T_{m21}$, $T_{m31}$, $T_{m41}$, and $T_{so}$ are calibrated using the method in [26], as shown in Fig. 8.

Fig. 8 Schematic of calibration process of Tm41.

The coordinate system of the wide-field camera is established based on the coordinate system of mirror camera 1, and the calibration process of $T_{m21}$, $T_{m31}$, $T_{m41}$, and $T_{so}$ is as follows.

Step 1: Two planar targets are placed in the fields of view of mirror cameras 1 and 4, and the 3D optical probe is moved at least twice. At each position, each mirror camera captures an image of the corresponding planar target. Using the constraint that the relative position of the two planar targets is invariant, $T_{m41}$ can be computed. By repeating the above process for the other camera pairs, $T_{m21}$, $T_{m31}$, and $T_{so}$ can also be computed.
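This constraint can be written as a small optimization problem. The sketch below is our own illustration (a nonlinear least-squares formulation with hypothetical names and a rough identity initial guess), not the closed-form method of [26]: it estimates $T_{m41}$ by requiring that the target-to-target transform computed through the two mirror cameras be identical at every probe position.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def to_mat(rvec, tvec):
    """Assemble a 4x4 homogeneous transform from a rotation vector and translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rvec).as_matrix()
    T[:3, 3] = tvec
    return T

def estimate_Tm41(poses_cam1, poses_cam4):
    """Estimate the transform from mirror camera 4 to mirror camera 1.

    poses_cam1[k] : 4x4 pose of target A in mirror camera 1 at probe position k.
    poses_cam4[k] : 4x4 pose of target B in mirror camera 4 at the same position.
    Since the relative pose of targets A and B is fixed,
    inv(poses_cam1[k]) @ Tm41 @ poses_cam4[k] must be the same for all k.
    """
    def residuals(x):
        Tm41 = to_mat(x[:3], x[3:])
        rel = [np.linalg.inv(T1) @ Tm41 @ T4
               for T1, T4 in zip(poses_cam1, poses_cam4)]
        ref = rel[0]
        # Differences of the (supposedly constant) target-to-target transform.
        return np.concatenate([(r - ref)[:3, :].ravel() for r in rel[1:]])

    sol = least_squares(residuals, np.zeros(6))   # identity transform as initial guess
    return to_mat(sol.x[:3], sol.x[3:])
```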

The flatness and the coating method of the mirror affect the measurement accuracy. To obtain high measurement accuracy, a high-accuracy first-surface mirror is used in the real measurement system. The flatness of the four-surface first-surface mirror used in our physical experiments is about 1 μm.

3. Physical experiments

Two 3D optical probes are used in this experiment to verify the effectiveness of the proposed algorithm. The setup of the physical experiment is shown in Fig. 9. The configurations of the two 3D optical probes are identical in our experiment. The raster binocular stereo vision sensor of each 3D optical probe consists of two cameras (AVT GC1380H, 17 mm Schneider lenses, with a resolution of 1360 × 1024 and a measuring range of 500 mm × 380 mm × 400 mm) and one projector (Dell M110, with a resolution of 1360 × 768). The wide-field camera of each 3D optical probe consists of one camera (Point Grey, 12 mm Schneider lens, with a resolution of 2448 × 2048) and one mirror with four surfaces. The planar target has 10 × 10 characteristic points, with a machining accuracy of 0.05 mm.

Fig. 9 Layout of physical experiment.

3.1 Result of system calibration

The two 3D optical probes are calibrated according to Section 2.3. The planar target has 10 × 10 characteristic points, with a machining accuracy of 0.05 mm.

Typical images used for the calibration of the 3D optical probes are shown in Figs. 10(a)–10(d). As shown in Fig. 10(a), the planar target images captured by the two cameras of the raster binocular stereo vision sensor of probe 1 are used to calibrate the intrinsic parameters of the cameras and the transformation matrix between the coordinate systems of the two cameras. As shown in Fig. 10(b), the images captured by the wide-field camera are used to calibrate its intrinsic parameters. The target images captured by the wide-field camera, shown in Fig. 10(c), are used to calibrate the transformation matrices between the coordinate systems of the four mirror cameras. The images captured by the raster binocular stereo vision sensor and the wide-field camera, shown in Fig. 10(d), are used to calibrate the transformation matrix between the coordinate systems of the two. The result of the parameter calibration of probe 1 is shown in Table 1.

Fig. 10 Typical images used for the calibration of 3D optical probes. (a) Images captured by the raster binocular stereo vision sensor. (b) and (c) Images captured by the wide-field camera. (d) Images captured by the raster binocular stereo vision sensor and the wide-field camera.


Table 1. Result of parameter calibration of 3D optical probe 1

The typical images used for the calibration of probe 2 are similar to those used for probe 1. The result of the parameter calibration of probe 2 is shown in Table 2.


Table 2. Result of parameter calibration of 3D optical probe 2

The above results are the calibration results of the two 3D optical probes. The following accuracy evaluation test and dynamic measurement test are performed to verify the effectiveness of the proposed algorithm.

3.2 Evaluation of global measurement accuracy

A self-designed method is used to evaluate the global measurement accuracy. The specific experimental procedure is as follows. The characteristic points attached to a mechanical part are first measured by a high-accuracy 3D measurement device (HAMD), whose measurement accuracy is better than 0.01 mm, as shown in Fig. 11(a). The mechanical part is then placed within the measurement range of the two 3D optical probes. 3D optical probe 1 measures the characteristic points attached to the left portion of the mechanical part, and 3D optical probe 2 measures the characteristic points attached to the right portion. Finally, all characteristic points are transformed to the global coordinate system through the planar target, as shown in Fig. 11(b). Fourteen characteristic points are selected to form seven point-to-point distances, each between a point measured by probe 1 and a point measured by probe 2. The distance obtained by the HAMD is taken as the ideal distance $d_t$, and the distance obtained by the proposed method is the measured distance $d_m$. The deviation $\Delta d$ between $d_m$ and $d_t$ and the root-mean-square (RMS) error are calculated to evaluate the global accuracy of the measuring system.
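The evaluation metric can be computed as in the following sketch (our own illustration with hypothetical array names):

```python
import numpy as np

def global_accuracy(points_probe1, points_probe2, points_hamd1, points_hamd2):
    """Compare point-to-point distances measured by the proposed system with the
    reference distances from the HAMD, and return the deviations and RMS error.

    points_probe1/2 : (N, 3) characteristic points measured by probes 1 and 2
                      after integration into the global coordinate system.
    points_hamd1/2  : the same points measured by the HAMD (reference).
    """
    d_m = np.linalg.norm(points_probe1 - points_probe2, axis=1)   # measured distances
    d_t = np.linalg.norm(points_hamd1 - points_hamd2, axis=1)     # ideal distances
    delta = d_m - d_t                                              # deviations
    rms = np.sqrt(np.mean(delta**2))                               # RMS error
    return delta, rms
```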

Fig. 11 (a) Schematic of global measurement accuracy evaluation. (b) Mechanical part.

The images captured by probe 1 are shown in Fig. 12(a). The images captured by probe 2 are shown in Fig. 12(b).

Fig. 12 Images captured by the two probes for the evaluation of global measurement accuracy. (a) Images captured by probe 1. (b) Images captured by probe 2.

The measurement results listed in Table 3 show that the RMS error is about 0.12 mm. All characteristic points measured by the HAMD and by the proposed method are transformed to the same coordinate system by the ICP method [27], and the RMS error of the distances between corresponding characteristic points is 0.11 mm. All characteristic points transformed to the same coordinate system are shown in Fig. 13.


Table 3. Evaluation result of global measurement accuracy
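For reference, the core alignment step when the point correspondences are known can be sketched with a closed-form SVD-based rigid registration, which is also the inner update of the ICP method of [27]; the implementation below is our own illustration, not the authors' code.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points,
    assuming one-to-one correspondences (Kabsch/SVD solution used inside ICP)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Residual RMS after alignment, e.g. between points measured by the proposed
# method (src) and by the HAMD (dst):
# R, t = rigid_align(src, dst)
# rms = np.sqrt(np.mean(np.linalg.norm((src @ R.T + t) - dst, axis=1) ** 2))
```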

Fig. 13 3D coordinates of all characteristic points transformed to the same coordinate system.

3.3 Dynamic measurement test

A physical experiment is designed to verify the effectiveness of the algorithm in dynamic measurement. The specific procedure is as follows. The two 3D optical probes are placed in front of and behind the electric fan, respectively. One planar target is placed within the common field of view of the wide-field cameras of the two probes, as shown in Fig. 9. After the electric fan is turned on, the 3D surface profilometry of its two sides is measured by the two 3D optical probes. Using the planar target in the common field of view of the wide-field cameras, the local 3D data measured by the two probes are integrated into one coordinate system.

Given that our cameras are not high-speed cameras, the fan blades are rotated at a low speed. Images of the fan blades captured by probes 1 and 2 are shown in Fig. 14.

Fig. 14 Images captured by probes 1 and 2. (a) Images captured by probe 1. (b) Images captured by probe 2.

The 3D surface profilometry of the fan blades obtained from the images in Fig. 14 is shown in Fig. 15. The 3D surface profilometry of six groups of continuously rotating fan blades is shown in Fig. 16.

Fig. 15 3D surface profilometry of fan blades. (a) 3D surface profilometry of fan blades measured by probe 1. (b) 3D surface profilometry of fan blades measured by probe 2. (c) Integration result of (a) and (b).

Fig. 16 3D surface profilometry of six groups of continuously rotating fan blades.

4. Conclusion

When the dynamic 3D surface profilometry of an object is measured, the accuracy of a measurement system composed of several rigidly fixed vision sensors is vulnerable to environmental vibration. To improve the accuracy of on-site measurement, a novel dynamic vision measurement system for 3D surface profilometry is introduced in this paper.

Compared with existing methods, the highlight of our method is the use of a 3D optical probe composed of a raster binocular stereo vision sensor and a wide-field camera. On the measurement site, the positions of the 3D optical probes can be adjusted flexibly and need not be rigidly fixed; therefore, the influence of on-site vibration on the measurement accuracy is reduced. Meanwhile, the proposed method is simple and flexible and does not require global calibration on site. The physical experiment confirms that when the size of the measured object is about 0.7 m and the measuring range of the raster binocular stereo vision sensor is about 0.5 m × 0.38 m × 0.4 m, the accuracy of the proposed method reaches 0.12 mm. Moreover, the test with rotating fan blades also confirms the effectiveness of the method in dynamic measurement.

Acknowledgments

The authors acknowledge the support from the National Natural Science Foundation of China under Grant No. 51175027 and the Beijing Natural Science Foundation under Grant No. 3132029.

References and links

1. S. Shirmohammadi and A. Ferrero, “Camera as the instrument: the rising trend of vision based measurement,” IEEE Trans. Instrum. Meas. 17(3), 41–47 (2014).

2. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010).

3. E. N. Malamas, E. G. M. Petrakis, M. Zervakis, L. Petit, and J. D. Legat, “A survey on industrial vision systems, applications and tools,” Image Vis. Comput. 21(2), 171–188 (2003).

4. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000).

5. S. W. Jung, J. Y. Jeong, and S. J. Ko, “Sharpness enhancement of stereo images using binocular just-noticeable difference,” IEEE Trans. Image Process. 21(3), 1191–1199 (2012).

6. M. Ceccarelli, A. Speranza, D. Grimaldi, and F. Lamonaca, “Automatic detection and surface measurements of micronucleus by a computer vision approach,” IEEE Trans. Instrum. Meas. 59(9), 2383–2390 (2010).

7. Z. G. Ren, J. R. Liao, and L. L. Cai, “Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision,” Appl. Opt. 49(10), 1789–1801 (2010).

8. W. M. Li and Y. F. Li, “Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror,” Opt. Express 19(7), 5855–5867 (2011).

9. X. Y. Su, W. J. Chen, Q. C. Zhang, and Y. P. Chao, “Dynamic 3-D shape measurement method based on FTP,” Opt. Lasers Eng. 36(1), 49–64 (2001).

10. E. Zappa and G. Busca, “Static and dynamic features of Fourier transform profilometry: A review,” Opt. Lasers Eng. 50(8), 1140–1151 (2012).

11. L. Lu, J. T. Xi, Y. G. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013).

12. P. S. Huang, C. P. Zhang, and F. P. Chiang, “High-speed 3-D shape measurement based on digital fringe projection,” Opt. Eng. 42(1), 163–168 (2003).

13. Y. J. Fu and Q. Luo, “Fringe projection profilometry based on a novel phase shift method,” Opt. Express 19(22), 21739–21747 (2011).

14. X. Zhang, Y. F. Li, and L. M. Zhu, “Color code identification in coded structured light,” Appl. Opt. 51(22), 5340–5356 (2012).

15. H. R. A. Basevi, J. A. Guggenheim, H. Dehghani, and I. B. Styles, “Simultaneous multiple view high resolution surface geometry acquisition using structured light and mirrors,” Opt. Express 21(6), 7222–7239 (2013).

16. R. Q. Yang, S. Chen, Y. Wei, and Y. Z. Chen, “Robust and accurate surface measurement using structured light,” IEEE Trans. Instrum. Meas. 57(6), 1275–1280 (2008).

17. J. Vargas, M. J. Terrón-López, and J. A. Quiroga, “Flexible calibration procedure for fringe projection profilometry,” Opt. Eng. 46(2), 023601 (2007).

18. J. Vargas, T. Koninckx, J. A. Quiroga, and L. V. Gool, “Three-dimensional measurement of microchips using structured light techniques,” Opt. Eng. 47(5), 053602 (2008).

19. J. Vargas and J. A. Quiroga, “Novel multiresolution approach for an adaptive structured light system,” Opt. Eng. 47(2), 023601 (2008).

20. R. S. Lu, Y. F. Li, and Q. Yu, “On-line measurement of straightness of seamless steel pipe using machine vision technique,” Sens. Actuators A Phys. 94(1), 95–101 (2001).

21. Q. Li and S. Ren, “A real-time visual inspection system for discrete surface defects of rail heads,” IEEE Trans. Instrum. Meas. 61(8), 2189–2199 (2012).

22. Y. Li, Y. F. Li, Q. L. Wang, D. Xu, and M. Tan, “Measurement and defect detection of the weld bead based on online vision inspection,” IEEE Trans. Instrum. Meas. 59(7), 1841–1849 (2010).

23. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

24. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998).

25. J. Y. Bouguet, “Camera calibration toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

26. Z. Liu, G. J. Zhang, Z. Z. Wei, and J. H. Sun, “A global calibration method for multiple vision sensors based on multiple targets,” Meas. Sci. Technol. 22(12), 125102 (2011).

27. P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992).
