Optica Publishing Group

Monocular vision-based low-frequency vibration calibration method with correction of the guideway bending in a long-stroke shaker

Open Access

Abstract

Calibration is required to maximize the sensitivity and measurement accuracy of vibration sensors. In this study, a low-frequency vibration calibration method based on monocular vision is proposed. In this method, a high-accuracy edge extraction technique is employed to extract the edges of sequential images so as to obtain high calibration accuracy. However, the proposed method must rely on a long-stroke shaker to provide vibration excitation to the sensor, and the bending in the guideway caused by the mechanical processing reduces the calibration accuracy, especially at very low frequencies. The proposed setting compensates for the bending using an additional monocular vision measurement to significantly improve the calibration accuracy. To validate the calibration accuracy of the proposed method, a comparison was conducted between the results obtained via laser interferometry, the Earth's gravitation method, and the proposed method when calibrating the sensitivity of a tri-axial acceleration sensor at frequencies between 0.04 and 8 Hz. The comparison showed that the proposed method calibrated the sensor sensitivity with high accuracy and accurately accounted for the bending when the frequency was lower than 0.3 Hz, whereas the calibration accuracy of laser interferometry decreased because of the bending.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Low-frequency vibration sensors are widely used in many engineering fields, including seismic prediction, building monitoring, and mineral exploration [1–3], and their measurement accuracy is directly determined by their sensitivity. Therefore, it is desirable to develop a low-frequency vibration calibration method that achieves high-accuracy sensitivity calibration.

Presently, these vibration sensors are typically calibrated using either the comparison or primary vibration calibration methods. In the comparison method, a reference sensor with known sensitivity and the sensor to be calibrated are installed back-to-back and excited by the same vibration [4,5]. As the accuracy of this method depends on the accuracy of the reference sensor, which is calibrated by the primary calibration method, it cannot be used when a highly accurate calibration is required.

In contrast, the current primary vibration calibration methods are the Earth's gravitation (EG) method [6] and laser interferometry (LI), as described in ISO 16063-11 [7–9]. The EG approach uses a rotator to provide a constant excitation to the sensor equal to the local acceleration of gravity. The problem with this method is that the centrifugal acceleration of the rotator, which increases with frequency, is not negligible compared with the constant excitation at high frequencies; this restricts the maximum calibration frequency to less than approximately 5 Hz [6]. LI requires a laser interferometer to measure the vibration excitation applied to the sensors, which is provided by a long-stroke shaker [10–12]. LI achieves high calibration accuracy at low frequencies (usually > 0.2 Hz), where the extra excitation delivered to the sensors by the bending in the guideway of the shaker, caused by the mechanical processing, is negligible. At very low frequencies (usually ≤ 0.2 Hz), however, this extra excitation is significant relative to the tiny vibration excitation and degrades the calibration accuracy [13].

Monocular vision (MV) methods have been widely applied in precision measurement fields for a number of reasons, including their simplicity, flexibility, and efficiency [14–17]. Considering that the accuracy of the sensitivity calibration of the sensors depends on the precision of their input vibration excitation measurement, we propose a novel MV method to accurately calibrate low-frequency vibration sensors. To obtain high calibration accuracy, the vibration excitation displacement is measured precisely via a method that improves the edge extraction accuracy of sequential images and a reliable camera calibration technique. The proposed vision setting is also used to measure the bending in the guideway of the shaker so that the extra excitation can be removed. When the proposed method is implemented, its calibration accuracy is similar to that of LI and EG, especially at very low frequencies, and it has the advantages of flexibility, efficiency, and low cost.

The rest of this paper is organized as follows. Section 2 describes the proposed MV method for low-frequency vibration calibration. Section 3 describes the setting for correcting the bending in the guideway of the long-stroke shaker. The experimental results and discussion are provided in Section 4, and our conclusions are detailed in Section 5.

2. Low-frequency vibration calibration method

2.1 Principle of the proposed MV method

The sensitivity SV of a vibration sensor is defined as the ratio of its output voltage fitted amplitude value Vp,V to its input excitation acceleration fitted amplitude value ap,V:

$$S_V = \frac{V_{p,V}}{a_{p,V}}. \tag{1}$$

While Vp,V can be measured directly, ap,V is obtained by measuring the displacement from a sequence of monocular images. A sketch of an MV-based low-frequency vibration calibration system is shown in Fig. 1. The sensor to be calibrated is mounted on the working surface of the horizontal long-stroke shaker, and the camera is positioned above the working surface with its optical axis perpendicular to it. The vibration excitation to the sensor is provided by the shaker, and the camera is used to collect sequential images of the moving working surface.

Fig. 1 Sketch of the MV-based low-frequency vibration calibration system.

To improve the measurement accuracy of ap,V, a high-contrast mark with a distinctive feature edge was applied to the working surface, as indicated by the red-dotted line in Fig. 1, so that the mark had a consistent excitation displacement relative to the sensor. Once the camera is calibrated [18], the excitation displacement can be obtained by measuring the distance between the edge positions in any two sequential images. Here, ap,V is the fitted amplitude value of the second derivative of the excitation displacement, and its accuracy depends on the measurement accuracy of the displacement fitted amplitude value.

A collection of sequential images and the corresponding feature edges are shown in Fig. 2, where subscript j is the frame number, lj is the feature edge of the jth frame fj (x, y), N is the number of the collected images, and j = 1, 2, …, N.

Fig. 2 Sketch of the collected sequential images and the feature edges.

2.2 Reliable camera calibration technique

A camera is used to accurately measure the excitation displacement [19–21]. The projection geometry for the camera is shown in Fig. 3, where Oc-XcYcZc is the camera coordinate system, πc is the image plane, Ou-XuYu is the pixel coordinate system, and Ow-XwYwZw is the world coordinate system. Given a world coordinate point Pw (xw, yw, zw), the undistorted image point is Pu (xu, yu). However, as all cameras include inherent distortions, the actual point is represented by Pd (xd, yd), where the subscript d denotes the distorted image point.

Fig. 3 Projection geometry of the camera.

Because the motion of the working surface is one dimensional, a chessboard plate is used to calibrate the camera. In other words, Pw (xw, yw, zw) can be simplified to Pw (xw, yw). The relationship between Pu (xu, yu) and Pw (xw, yw) can be expressed by Eq. (2) as per the pin-hole model [22]:

$$\begin{bmatrix} x_u \\ y_u \\ 1 \end{bmatrix} = H \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}, \tag{2}$$
where H is the homography matrix [23].

As radial distortion is dominant in this type of camera, a second-order radial distortion polynomial model was adopted to characterize the relationship between the undistorted and distorted image points [24]. On that basis, the relationship between Pu (xu, yu) and Pd (xd, yd) can be written as

$$\begin{cases} x_u = x_d\left[1 + k_1\left(x_d^2 + y_d^2\right) + k_2\left(x_d^2 + y_d^2\right)^2\right] \\ y_u = y_d\left[1 + k_1\left(x_d^2 + y_d^2\right) + k_2\left(x_d^2 + y_d^2\right)^2\right], \end{cases} \tag{3}$$

where k1 and k2 are the first- and second-order radial distortion coefficients. A similar relationship can be established between Pd (xd, yd) and Pw (xw, yw) based on Eq. (2) to obtain the re-projected world coordinates (x′w, y′w) of Pd (xd, yd). Then, the values of k1 and k2 can be obtained via Eq. (4), which is solved with the Levenberg-Marquardt algorithm:

$$J = \min \sum_{r,c}\left[\left(x_w' - x_w\right)^2 + \left(y_w' - y_w\right)^2\right], \tag{4}$$

where r and c index the rows and columns of the arrayed calibration feature points on the chessboard image, respectively, and the initial values of k1 and k2 are zero. Finally, Pu (xu, yu) can be computed via Eq. (3) using the computed values of k1 and k2, and H can be calculated as per Eq. (2) using the acquired Pu (xu, yu) and Pw (xw, yw).
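The two geometric building blocks of this calibration, the homography of Eq. (2) and the radial distortion model of Eq. (3), can be sketched as follows. This is a minimal illustration with a synthetic homography and grid, not the paper's implementation; a full version would add the Levenberg-Marquardt refinement of Eq. (4) to estimate k1 and k2.

```python
import numpy as np

def undistort(points_d, k1, k2):
    """Second-order radial distortion model of Eq. (3):
    x_u = x_d * (1 + k1*r^2 + k2*r^4), and likewise for y_u."""
    pts = np.asarray(points_d, dtype=float)
    r2 = (pts ** 2).sum(axis=1)
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)[:, None]

def estimate_homography(world_pts, image_pts):
    """Estimate the 3x3 homography H of Eq. (2) by the direct linear
    transform: two linear equations per correspondence, null-space
    solution via SVD, normalized so that h33 = 1."""
    A = []
    for (xw, yw), (xu, yu) in zip(world_pts, image_pts):
        A.append([xw, yw, 1, 0, 0, 0, -xu * xw, -xu * yw, -xu])
        A.append([0, 0, 0, xw, yw, 1, -yu * xw, -yu * yw, -yu])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check with an assumed homography: project a 4x4 grid of
# "chessboard corners" and recover H from the correspondences.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 1.1, -3.0],
                   [1e-4, 2e-4, 1.0]])
world = [(float(x), float(y)) for x in range(4) for y in range(4)]
image = []
for xw, yw in world:
    u = H_true @ np.array([xw, yw, 1.0])
    image.append((u[0] / u[2], u[1] / u[2]))
H_est = estimate_homography(world, image)
```

In the full method, the distorted corner coordinates would first be corrected with `undistort` (once k1 and k2 are known from minimizing Eq. (4)), and only then used to compute H.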

As mentioned above, a chessboard plate with high-precision corner points was used to calibrate the camera. The world coordinates of all corner points were precisely known, and the corresponding pixel coordinates on the chessboard image were extracted by the automatic corner detection method [25,26].

2.3 Highly accurate edge extraction of sequential images

The image captured by a camera is dependent on the light intensity. Thus, an edge in an image is not an ideal step edge, as indicated by the red-dotted line in Fig. 4(a), but is instead a steep slope comprised of three grayscales, as shown in Fig. 4(b).

Fig. 4 (a) Image with no blurred edge, (b) grayscale distribution in the neighborhood of the non-blurred edge, (c) grayscale gradient of this edge, (d) image with a blurred edge, (e) grayscale distribution in the neighborhood of the blurred edge, and (f) grayscale gradient of this edge.

In the proposed system, the edges of sequential images are blurred by the rapid motion of the working surface [27]. A blurred edge is represented by lP or lN, depending on the direction of motion of the working surface (Fig. 4(d)). The grayscale distribution in the neighborhood of the blurred edge resembles a gradual slope (Fig. 4(e)). To accurately extract the blurred edge, it is essential that the blur be reduced via an image enhancement method.

Figures 4(c) and 4(f) show the grayscale gradients of non-blurred and blurred edges along the horizontal direction of motion, respectively. The following Gaussian equation was used to fit the grayscale gradients:

$$g_j(p) = a_j\, e^{-\frac{\left[x_j(p) - \mu_j\right]^2}{2\sigma_j^2}}, \tag{5}$$
where xj(p) is the selected horizontal pixel coordinate and xj(p) ∈ (xj[c]-Δx, xj[c] + Δx), where Δx is the range of the selected horizontal pixel coordinate. In addition, in Eq. (5), gj(p) is the corresponding grayscale gradient and aj, μj, and σj are the fitted amplitude value, mean value, and standard deviation, respectively. If σj is larger than the σT of the non-blurred edge, then the edge in image fj (x, y) is blurred.
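This blur test can be sketched as a Gaussian fit of the gradient profile via Eq. (5). The profiles, the threshold sigma_T, and the moment-based starting values below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    """Gaussian model of Eq. (5) for the grayscale gradient profile."""
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def fitted_sigma(gradient):
    """Fit Eq. (5) to a gradient profile and return sigma_j; comparing
    it with the sigma_T of a non-blurred edge classifies the edge as
    blurred or sharp."""
    g = np.asarray(gradient, dtype=float)
    x = np.arange(g.size, dtype=float)
    mu0 = (x * g).sum() / g.sum()                 # moment-based start values
    sig0 = np.sqrt(((x - mu0) ** 2 * g).sum() / g.sum())
    (a, mu, sigma), _ = curve_fit(gaussian, x, g, p0=(g.max(), mu0, sig0))
    return abs(sigma)

# Synthetic profiles (assumed values): a sharp edge gives a narrow
# gradient peak, a motion-blurred edge a wide one.
x = np.arange(21, dtype=float)
sharp = gaussian(x, 100.0, 10.0, 1.0)
blurred = gaussian(x, 40.0, 10.0, 4.0)
sigma_T = 1.5                      # assumed non-blurred reference width
```

With these profiles, `fitted_sigma(sharp)` is about 1.0 (below sigma_T) while `fitted_sigma(blurred)` is about 4.0 (above sigma_T), so only the second edge would be routed to the enhancement step.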

In reality, the blur is related to the direction and speed of motion of the working surface. The optical flow method [28,29] was used to detect the direction of motion and the blurred area; the direction of increasing x[c] was defined as positive, and the opposite direction as negative. To simplify the calculations, only the blurred area is enhanced. The gradual grayscales in the neighborhood of a blurred edge increase the difficulty of accurately extracting the edge (Fig. 4(d)). Even though the grayscale distributions for the negative and positive directions of motion are very similar, the actual positions of the corresponding blurred edges are different. To recover the actual edge position, the gradual grayscales are increased on the positive side for a blurred edge moving in the negative direction, and vice versa. Mathematically, if the image fj (x, y) with the blurred edge is moving in the negative direction, it is enhanced by

$$f_E(x, y) = \begin{cases} f_j(x, y)\,\big/\,\big(2\tilde{f}_j(x, y) - \tilde{f}_j^{\,2}(x, y)\big), & \tilde{f}_j(x, y) < T_1 \\ f_{j,\max}(x, y), & \tilde{f}_j(x, y) \ge T_1, \end{cases} \tag{6}$$

otherwise, it is enhanced by

$$f_E(x, y) = \begin{cases} f_{j,\min}(x, y), & \tilde{f}_j(x, y) \le T_2 \\ f_j(x, y)\,\big(1 - \tilde{f}_j^{\,2}(x, y)\big), & \tilde{f}_j(x, y) > T_2, \end{cases} \tag{7}$$

where fj,max (x, y), fj,min (x, y), and f̃j (x, y) are the maximum, minimum, and normalized grayscales of fj (x, y) in the blurred area, respectively, and T1 and T2 are two different thresholds. In this way, a blurred edge moving in either direction can be enhanced.
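A sketch of this piecewise directional enhancement follows. The thresholds t1 and t2 and the small epsilon guarding the zero denominator at the minimum grayscale are illustrative assumptions; the paper does not specify them.

```python
import numpy as np

def enhance_blurred_edge(patch, direction, t1=0.8, t2=0.2, eps=1e-6):
    """Directional enhancement of the blurred area following the two
    piecewise rules above; t1, t2, and eps are assumed values."""
    f = np.asarray(patch, dtype=float)
    f_min, f_max = f.min(), f.max()
    fn = (f - f_min) / max(f_max - f_min, eps)    # normalized grayscale
    out = np.empty_like(f)
    if direction == "negative":                    # Eq. (6)
        low = fn < t1
        denom = np.clip(2.0 * fn - fn ** 2, eps, None)  # guard division
        out[low] = f[low] / denom[low]
        out[~low] = f_max
    else:                                          # Eq. (7)
        high = fn > t2
        out[high] = f[high] * (1.0 - fn[high] ** 2)
        out[~high] = f_min
    return out

# Illustrative grayscale ramp across a blurred edge (0-255)
patch = np.linspace(0.0, 255.0, 11)
neg = enhance_blurred_edge(patch, "negative")
pos = enhance_blurred_edge(patch, "positive")
```

The negative-direction branch pushes the gradual grayscales upward toward the maximum, while the positive-direction branch pulls them down toward the minimum, sharpening the transition from the appropriate side.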

For the edge in fj (x, y) without blur, the Zernike moment sub-pixel edge extraction method [15,30] was used to improve the extraction accuracy. The Zernike moment of fj (x, y) is given by

$$Z_{nm} = \sum_{x}\sum_{y} f_j(x, y)\, V_{nm}^{*}(\rho, \theta), \qquad x^2 + y^2 \le 1, \tag{8}$$

and V*nm is the complex conjugate of Vnm, which has the form [31]:

$$V_{nm}(\rho, \theta) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s}\,(n-s)!\,\rho^{\,n-2s}}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\, e^{im\theta}, \tag{9}$$

where n is a non-negative integer, m is an integer subject to the constraint that n − |m| is a non-negative even number, and i indicates an imaginary number.

Here, the Zernike method with a three-grayscale distribution edge model was used, as shown in Fig. 5. In this figure, L is the ideal edge in the unit circle, d is the distance between origin O (x0, y0) and L, ϕ is the angle between a line perpendicular to L and the X-axis in the clockwise direction, d1 and d2 are the distances between O (x0, y0) and the edges L1 and L2 corresponding to the different Zernike moments, respectively, and b, Δk, and k are the background, transition, and step grayscales, respectively.

Fig. 5 Three-grayscale distribution edge model.

The moment Znm rotated by ϕ is Z′nm, which can be written as

$$Z_{nm}' = Z_{nm}\, e^{im\phi}. \tag{10}$$

The values of Znm and Z′nm can be calculated according to Eqs. (8)–(10), and d1, d2, and ϕ can then be calculated as follows:

$$d_1 = \frac{5Z_{40}' + 3Z_{20}'}{8Z_{20}'}, \qquad d_2 = \frac{5Z_{31}' + Z_{11}'}{6Z_{11}'}, \tag{11}$$

$$\phi = \tan^{-1}\!\left(\frac{\mathrm{Im}\,[Z_{31}]}{\mathrm{Re}\,[Z_{31}]}\right), \tag{12}$$

where Im[Z31] and Re[Z31] are the imaginary and real parts of Z31, respectively. The amplification of the K × K Zernike moment template is suppressed to improve the edge extraction accuracy. The sub-pixel coordinates of O (x0, y0) can be calculated by

$$\begin{bmatrix} x_{sub} \\ y_{sub} \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + \frac{K\,(d_1 + d_2)}{4} \begin{bmatrix} \cos\phi \\ \sin\phi \end{bmatrix}. \tag{13}$$

2.4 Measurement of the displacement of the working surface

All the sub-pixel edge points of the jth image extracted via the proposed method were fitted by the least squares method to obtain the edge line lj. The edge lines l1, …, lN of the sequential images were fitted in the same way, with l1 taken as the reference zero displacement. The value dj was defined as the average displacement from lj to l1, and the displacements from the other edge lines to l1 were calculated similarly.
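A minimal sketch of this edge-line fitting and displacement measurement follows. It assumes a near-vertical edge (as in Fig. 2), so x is regressed on y to keep the line fit well conditioned; the function names are illustrative.

```python
import numpy as np

def fit_edge_line(points):
    """Least squares line fit of one frame's sub-pixel edge points,
    parameterized as x = m*y + c for a near-vertical edge (an
    assumption of this sketch)."""
    pts = np.asarray(points, dtype=float)
    m, c = np.polyfit(pts[:, 1], pts[:, 0], 1)
    return m, c

def mean_displacement(line_j, line_ref, ys):
    """Average horizontal distance d_j from edge line l_j to the
    reference line l_1, evaluated at the sampled row coordinates."""
    mj, cj = line_j
    mr, cr = line_ref
    return float(np.mean((mj * ys + cj) - (mr * ys + cr)))

# Two synthetic vertical edge lines 3.5 pixels apart
ys = np.arange(20.0)
l1 = fit_edge_line([(10.0, y) for y in ys])
lj = fit_edge_line([(13.5, y) for y in ys])
d_j = mean_displacement(lj, l1, ys)
```

Each dj obtained this way is one displacement sample for the sine approximation fit described next in the text.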

Because the motion of the working surface is sinusoidal, the displacement at sampling time ti can be computed using the sine approximation method (SAM) [32,33]:

$$d_j(t_i) = A\cos(\omega_v t_i) - B\sin(\omega_v t_i) + C, \tag{14}$$

where ωv is the angular frequency. For N sequential images, N − 1 values of dj and N − 1 equations in the form of Eq. (14) were obtained. The parameters A, B, and C were calculated from these N − 1 equations using the least squares method. The excitation displacement fitted amplitude value dp,V is

$$d_{p,V} = \sqrt{A^2 + B^2}. \tag{15}$$

The corresponding excitation acceleration fitted amplitude value is ap,V = ωv² dp,V. With these results, SV can be calculated via Eq. (1).
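The SAM fit of Eq. (14) reduces to an ordinary linear least squares problem in A, B, and C, as the sketch below shows. The sampling values in the demonstration (1 Hz, 5 mm amplitude, 30 fps, 90 frames) are illustrative assumptions.

```python
import numpy as np

def sine_approximation(t, d, omega_v):
    """Three-parameter sine approximation: solve Eq. (14),
    d(t) = A*cos(w*t) - B*sin(w*t) + C, by least squares and return
    the fitted displacement amplitude sqrt(A^2 + B^2) and offset C."""
    t = np.asarray(t, dtype=float)
    M = np.column_stack([np.cos(omega_v * t),
                         -np.sin(omega_v * t),
                         np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, np.asarray(d, dtype=float),
                                    rcond=None)
    return np.hypot(A, B), C

# Synthetic check with assumed values: 1 Hz motion, 5 mm amplitude,
# 0.7 mm offset, sampled with 90 frames at 30 fps (3 periods).
omega = 2.0 * np.pi * 1.0
t = np.arange(90) / 30.0
d = 5.0 * np.cos(omega * t + 0.3) + 0.7      # arbitrary phase and offset
d_pv, offset = sine_approximation(t, d, omega)
```

The acceleration amplitude then follows as ap,V = ωv² · d_pv, which feeds the sensitivity of Eq. (1).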

3. Correcting for the bending of the guideway of the long-stroke shaker

Although the vibration excitation provided by the long-stroke shaker can be measured by the MV method, the extra excitation introduced by the bending in the guideway of the shaker due to the mechanical processing reduces the accuracy of the sensitivity calibration, especially at very low frequencies. In the proposed method, the magnitude of the bend is measured by the MV method so that the extra excitation can be compensated for in the calibration process.

3.1 Model for correcting the bending

The geometry of a bend in the guideway is shown in Fig. 6, along with a model depicting the additional excitation acceleration delivered to the sensor by this bend [13].

Fig. 6 (a) Bend in the guideway of the long-stroke shaker, (b) additional excitation acceleration delivered to the sensor due to this bend.

Because the angle α of the bend is small, the extra excitation acceleration amplitude ap,G shown in Fig. 6(b) can be represented by

$$a_{p,G} = g_{loc}\sin(\alpha) \approx g_{loc}\,\alpha, \tag{16}$$
and the actual excitation acceleration amplitude âp,E is

$$\hat{a}_{p,E} = a_{p,V}\cos(\alpha) + a_{p,G} \approx a_{p,V} + a_{p,G}. \tag{17}$$

Therefore, the actual sensitivity S0 of the sensor is

$$S_0 = \frac{V_{p,V}}{\hat{a}_{p,E}} = \frac{V_{p,V}}{a_{p,V} + a_{p,G}}. \tag{18}$$
From Eqs. (1) and (18), the relationship between S0 and SV is as follows:

$$S_0 = S_V\,\frac{a_{p,V}}{a_{p,V} + a_{p,G}}. \tag{19}$$

It is evident that the calibration accuracy of SV will decrease dramatically when ap,G cannot be neglected compared to ap,V. The relationship between the calibration accuracy and vibration frequency can be represented as

$$S_0 = S_V\,\frac{\omega_v^2}{\omega_v^2 + \omega_\alpha^2}, \tag{20}$$

where ωα² = gloc α / dp and gloc = 9.801 m/s²; ωα² is constant at very low frequencies for the same displacement amplitude. As ωv² decreases with decreasing frequency, ωα² can no longer be neglected in comparison with ωv². To improve the calibration accuracy, S0 can be obtained by correcting SV with Eq. (20) if α is known.
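The correction of Eq. (20) is a one-line frequency-dependent factor. The sketch below uses the ωα² value reported later in Section 3.2 to show why the bending matters only at very low frequencies.

```python
import numpy as np

def corrected_sensitivity(S_V, f_v, omega_alpha2):
    """Bending correction of Eq. (20):
    S0 = S_V * w_v^2 / (w_v^2 + w_alpha^2), with w_v = 2*pi*f_v."""
    omega_v2 = (2.0 * np.pi * f_v) ** 2
    return S_V * omega_v2 / (omega_v2 + omega_alpha2)

# With omega_alpha^2 = -0.011889 1/s^2 (the value measured in
# Section 3.2), the correction is large at 0.04 Hz and negligible
# at 2 Hz.
ratio_low = corrected_sensitivity(1.0, 0.04, -0.011889)   # ~1.23
ratio_high = corrected_sensitivity(1.0, 2.0, -0.011889)   # ~1.0001
```

The roughly 20% correction at 0.04 Hz is consistent with the maximum relative errors between the uncorrected MV and EG results reported in Section 4.1.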

3.2 Measurement of the bending in the guideway

Different from the calibration setup, a circular mark is made on the working surface to measure the bending in the guideway, and the camera is placed at the side of the shaker with its optical axis perpendicular to the mark, as shown in Fig. 7. The mark is used to track the movement of the guideway in the vertical direction, and the MV system measures the position of the mark at different displacements along the guideway. The provided excitation displacement ranges from −dp to dp and is sampled at Q positions dh = −dp + (h − 1) × 2dp / (Q − 1), where Q is an odd number and h = 1, 2, …, Q. Several circular mark images are collected at each dh, the edges of the mark in these images are extracted by the Zernike method, and their centers are calculated via the least squares circle fitting method. The average value (x̄h,c, ȳh,c) of these centers is used to indicate the position of the mark at dh. The positions of the mark at the other displacements are calculated similarly.

Fig. 7 Schematic showing the method of measuring the bending in the guideway.

As the guideway of the shaker is horizontal, only the vertical coordinate ȳh,c at dh changes. When measured, the guideway displacement excited by the ESZ185-400 shaker was found to range from −180 to 180 mm. This range was sampled at 25 positions spaced 15 mm apart. The relationship between ȳh,c and dh is shown in Fig. 8.

Fig. 8 World vertical coordinates of the positions of the circular mark at different displacements along the guideway.

The vertical position ȳ13,c at displacement d13 (d13 = 0 mm) was defined as the reference base. Then, the bending angle αh between dh and d13 was calculated by

$$\alpha_h = \arctan\!\left(\frac{\bar{y}_{h,c} - \bar{y}_{13,c}}{d_h - d_{13}}\right), \qquad h \neq 13. \tag{21}$$

The bending angles between the different values of dh and d13 were calculated via Eq. (21) under the assumption that the relationship between αh and dh is approximately linear [13]. Because the angles αh at positions symmetrical about d13 were not equal, the bending angles from −dp to 0 and from 0 to dp were fitted by least squares against the corresponding displacements separately. The variations in the bending angles from −180 to 0 mm and from 0 to 180 mm were 1.2345 × 10−4 rad and 3.1322 × 10−4 rad, respectively, and their average, 2.1834 × 10−4 rad, was used as the value of α. Thus, ωα² was −0.011889 s−2, and its relative error with respect to the value of −0.011947 s−2 obtained by the dynamic inclination estimation method in [13] was 0.49%. Finally, the actual sensitivity S0 was obtained by correcting SV using Eq. (20).
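This side-wise estimation can be sketched as follows. Fitting a least squares slope to the mark positions on each side of the reference and averaging the two arctangents is an approximation of the procedure above (cf. Eq. (21)); the synthetic slopes and function names are illustrative.

```python
import numpy as np

def bending_angle(d_h, y_h, ref_index):
    """Estimate the average bending angle alpha from the vertical mark
    positions y_h at guideway displacements d_h. The slope of y versus
    d is fitted by least squares separately on each side of the
    reference position, converted to an angle, and the two angles are
    averaged (an approximation of the side-wise fitting in the text)."""
    d = np.asarray(d_h, dtype=float)
    y = np.asarray(y_h, dtype=float)
    left = d < d[ref_index]
    right = d > d[ref_index]
    slope_l = np.polyfit(d[left], y[left], 1)[0]
    slope_r = np.polyfit(d[right], y[right], 1)[0]
    return 0.5 * (np.arctan(slope_l) + np.arctan(slope_r))

# Synthetic guideway: 25 positions from -180 to 180 mm with assumed
# slopes of 2e-4 rad on the left and 3e-4 rad on the right of d13 = 0.
d = np.arange(-180.0, 181.0, 15.0)
y = np.where(d < 0, 2e-4 * d, 3e-4 * d)
alpha = bending_angle(d, y, ref_index=12)    # ~2.5e-4 rad
```

The resulting α then fixes ωα² = gloc α / dp in Eq. (20).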

4. Experimental results and discussion

The MV setup for the vibration sensor calibration method is shown in Fig. 9. An air-bearing horizontal long-stroke shaker (ESZ185-400) with a maximum displacement of 360 mm was used to provide the vibration excitation, and the low-frequency tri-axial acceleration sensor to be calibrated (MSV 3000-02) was mounted on the working surface. An AVT Manta G-125B CCD camera with a maximum frame rate of 30 fps and a resolution of 1292 × 964 pixels was used to capture sequential images of the displacements of the working surface, to which a high-contrast mark was applied. The camera was calibrated using a 200 mm × 200 mm chessboard plate with an 8 × 8 grid of squares before beginning the vibration calibration. The results used to correct for the bending in the guideway of the shaker are detailed in Section 3.

Fig. 9 Low-frequency vibration sensor calibration system: (I) horizontal long-stroke shaker, (II) working surface, (III) vibration sensor, (IV) high-contrast mark, (V) CCD camera, (VI) acquisition device, and (VII) heterodyne interferometer.

4.1 Vibration calibration results

The proposed MV method, LI, and EG were used to calibrate the sensor at frequencies between 0.04 and 8 Hz. The camera collected sequential images at different frame rates depending on the vibration frequency. To ensure that the images described at least three full periods of the actual displacement of the working surface while satisfying the Nyquist sampling theorem, 90 frames were collected per frequency. As the frequency increased from 0.04 to 8 Hz, the number of frames per period decreased from 25 to 3.75 (approximately 4), and the corresponding number of sampled periods increased from 3.6 to 24.

The edges of the high-contrast mark in the sequential images obtained via the proposed method are shown in Fig. 10. Images with no blurred edges are shown in Figs. 10(a) and 10(d), while Figs. 10(b) and 10(c) show enhanced images whose edges were blurred by motion in the positive and negative directions, respectively. The green points in the figures denote the sub-pixel edge points extracted using the proposed method, and the blue lines in the centers of the green regions denote lines fitted by the least squares method.

Fig. 10 Edges of the extracted mark in the sequential images at a frequency of 1 Hz.

The calibrated X-, Y-, and Z-axial sensitivities of the sensor in the range from 0.04 to 2 Hz are shown in Fig. 11. The sensitivities calibrated using the MV method were similar to those of both the EG method and LI at frequencies between 0.3 and 2 Hz; however, the MV method and LI were negatively affected by the bending in the guideway of the long-stroke shaker, causing the corresponding sensitivities to significantly decrease in the range between 0.04 and 0.3 Hz. The maximum relative errors of the X-, Y-, and Z-axial sensitivities between the MV and EG methods were 19.86%, 19.98%, and 18.50%, respectively.

Fig. 11 Results of vibration calibration using the LI, MV method, and EG method at frequencies between 0.04 and 2 Hz: (a) X-axial sensitivity; (b) Y-axial sensitivity; (c) Z-axial sensitivity.

4.2 Results of vibration calibration using the proposed MV method with correction of the bending in the guideway of the shaker

Figure 12 shows the sensitivity calibration results obtained using the MV method after the bending was corrected (MVC) according to Eq. (20).

Fig. 12 Calibration results for the MVC method: (a) X-axial sensitivity; (b) Y-axial sensitivity; and (c) Z-axial sensitivity.

The maximum relative errors of the X-, Y-, and Z-axial sensitivities between the MVC and EG methods were 1.29%, 1.75%, and 1.23%, respectively, much smaller than the corresponding maximum relative errors of 19.86%, 19.98%, and 18.50% between the MV and EG methods. The sensitivity accuracy of the MVC method is thus much better than that of the MV method in the frequency range from 0.04 to 2 Hz and is similar to that of the EG method. The slight differences in sensitivity between the MVC and EG methods at several very low frequencies were deemed acceptable for practical vibration calibration.

Additionally, the calibrated sensitivities of the MV method, MVC method, and LI between 2 and 8 Hz are shown in Fig. 13. The calibrated sensitivity curves of the MV and MVC methods overlap because the influence of the bending is negligible in this frequency range; consequently, the calibration results of the MV method, MVC method, and LI were almost identical. The maximum relative errors of the X-, Y-, and Z-axial sensitivities between the MV method and LI were 0.48%, 0.53%, and 0.36%, respectively.

Fig. 13 Calibration results of the MV method, LI, and MVC method in the frequency range of 2–8 Hz: (a) X-axial sensitivity; (b) Y-axial sensitivity; (c) Z-axial sensitivity.

4.3 Discussion

As shown in Fig. 11, when the frequency was lower than 0.3 Hz, the extra excitation caused by the bending in the guideway of the shaker was significant relative to the small magnitude of the provided vibration and degraded the accuracy of the sensitivity calibration using the MV method. The correction for the bending was realized with the same equipment and an additional circular mark on the working surface. Slight differences in the calibrated sensitivities between the MVC and EG methods were observed at several very low frequencies (Fig. 12) and were likely due to the influence of external disturbances on the output voltage signals of the sensor used in the MVC method. The method in [13] accomplished the bending correction based on calibration data and an assumed linear model. In contrast, the correction proposed here is not limited to a linear model, because the real calibration data may not be linear.

To achieve the high-accuracy sensitivity calibration, the bending in the guideway caused by mechanical processing was corrected by the method described in Section 3. Additionally, the bending angle is affected by the weights of the sensors mounted on the working surface. In this study, as the weight of the calibrated sensor was considered to be negligible, the bending angle was assumed to be the same as that of the guideway of the shaker. However, if the weight of the sensor to be calibrated cannot be ignored, the bending angle should be measured while this sensor is mounted on the working surface.

Owing to the limited frame rate of the camera, the calibration frequency range of the MVC method evaluated in this study was only 0.04–8 Hz. However, if a camera with a higher frame rate were used, the calibration frequency range could be extended to higher frequencies.

5. Conclusions

In this paper, a new MVC method was proposed for the calibration of low-frequency vibration sensors. The method includes a camera calibration scheme and a technique for improving the edges extracted from sequential images so as to accurately measure the vibration excitation displacement. In addition, a simple, flexible, and low-cost setting for correcting the bending in the guideway of the long-stroke shaker was developed to enable highly accurate vibration calibration over a low-frequency range. Both the low-frequency vibration calibration and the bending compensation were achieved using the same equipment. When compared with the calibration results of the LI and EG methods, the proposed MVC method demonstrated high-accuracy sensitivity calibration in the frequency range from 0.04 to 8 Hz, especially at frequencies between 0.04 and 0.3 Hz. It should be noted that at present, the correction for the bending in the guideway of the shaker is static and is performed before the vibration calibration. In the future, we plan to research methods of dynamically correcting for the bending via real-time measurements of the bending angle during the vibration calibration process.

Funding

National Key R&D Program of China (2017YFF0205003); Quality and Technical Supervision Ability Promotion Project (ANL1820); National Natural Science Foundation of China (51605461).

References

1. W. He, X. Zhang, C. Wang, R. Shen, and M. Yu, “A long-stroke horizontal electromagnetic vibrator for ultralow-frequency vibration calibration,” Meas. Sci. Technol. 25(8), 085901 (2014). [CrossRef]  

2. C. D. Ferreira, G. P. Ripper, R. S. Dias, and D. B. Teixeira, “Primary calibration system for vibration transducers from 0.4 Hz to 160 Hz,” J. Phys. Conf. Ser. 575, 012003 (2015). [CrossRef]  

3. W. He, Z. Wang, Y. Mei, and R. Shen, “A novel vibration-level-adjustment strategy for ultralow-frequency vibration calibration based on frequency-shifted method,” Meas. Sci. Technol. 24(2), 025007 (2013). [CrossRef]  

4. N. Garg and M. I. Schiefer, “Low frequency accelerometer calibration using an optical encoder sensor,” Measurement 111, 226–233 (2017). [CrossRef]  

5. R. J. Li, Y. J. Lei, L. S. Zhang, Z. X. Chang, K. C. Fan, Z. Y. Cheng, and P. H. Hu, “High-precision and low-cost vibration generator for low-frequency calibration system,” Meas. Sci. Technol. 29(3), 034008 (2018). [CrossRef]  

6. J. Dosch, “Low frequency accelerometer calibration using Earth’s gravity,” in Proceedings of IMAC XXV: Conference & Exposition on Structural Dynamics (2007).

7. X. Diao, P. Hu, Z. Xue, and Y. Kang, “High-speed high-resolution heterodyne interferometer using a laser with low beat frequency,” Appl. Opt. 55(1), 110–116 (2016). [CrossRef]   [PubMed]  

8. H. Fu, P. Hu, J. Tan, and Z. Fan, “Simple method for reducing the first-order optical nonlinearity in a heterodyne laser interferometer,” Appl. Opt. 54(20), 6321–6326 (2015). [CrossRef]   [PubMed]  

9. ISO 16063-11, “Methods for the calibration of vibration and shock sensors—part 11: primary vibration calibration by laser interferometry,” Geneva (1999).

10. M. Dobosz, T. Usuda, and T. Kurosawa, “Methods for the calibration of vibration pick-ups by laser interferometry: 1. Theoretical analysis,” Meas. Sci. Technol. 9(2), 232–239 (1998). [CrossRef]  

11. H. J. von Martens, A. Täubner, W. Wabinski, A. Link, and H. J. Schlaak, “Traceability of vibration and shock measurements by laser interferometry,” Measurement 28(1), 3–20 (2000). [CrossRef]  

12. H. Nicklich and M. Mende, “Calibration of very-low-frequency accelerometers a challenging task,” Sound Vibrat. 45(5), 1521–1527 (2011). [CrossRef]  

13. T. Bruns and S. Gazioch, “Correction of shaker flatness deviations in very low frequency primary accelerometer calibration,” Metrologia 53(3), 986–990 (2016). [CrossRef]  

14. J. Jin, L. Zhao, and S. Xu, “High-precision rotation angle measurement method based on monocular vision,” J. Opt. Soc. Am. A 31(7), 1401–1407 (2014). [PubMed]  

15. G. A. Papakostas, Y. S. Boutalis, D. A. Karras, and B. G. Mertzios, “A new class of Zernike moments for computer vision applications,” Inf. Sci. 177(13), 2802–2819 (2007). [CrossRef]  

16. F. Zhou, Y. Cui, B. Peng, and Y. Wang, “A novel optimization method of camera parameters used for vision measurement,” Opt. Laser Technol. 44(6), 1840–1849 (2012). [CrossRef]  

17. K. Nishi and Y. Matsuda, “Camera vibration measurement using blinking light-emitting diode array,” Opt. Express 25(2), 1084–1105 (2017). [CrossRef]   [PubMed]  

18. L. Deng, G. Lu, Y. Shao, M. Fei, and H. Hu, “A novel camera calibration technique based on differential evolution particle swarm optimization algorithm,” Neurocomputing 174, 456–465 (2016). [CrossRef]  

19. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE. Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

20. D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Lasers Eng. 51(5), 538–541 (2013). [CrossRef]  

21. Z. Wang, “Removal of noise and radial lens distortion during calibration of computer vision systems,” Opt. Express 23(9), 11341–11356 (2015). [CrossRef]   [PubMed]  

22. Y. Hong, G. Ren, and E. Liu, “Non-iterative method for camera calibration,” Opt. Express 23(18), 23992–24003 (2015). [CrossRef]   [PubMed]  

23. C. Ricolfe-Viala and A. J. Sanchez-Salmeron, “Camera calibration under optimal conditions,” Opt. Express 19(11), 10769–10775 (2011). [CrossRef]   [PubMed]  

24. Y. Cui, F. Zhou, Y. Wang, L. Liu, and H. Gao, “Precise calibration of binocular vision system used for vision measurement,” Opt. Express 22(8), 9134–9149 (2014). [CrossRef]   [PubMed]  

25. Y. Liu, S. Liu, Y. Cao, and Z. Wang, “Automatic chessboard corner detection method,” IET Image Process. 10(1), 16–23 (2016). [CrossRef]  

26. Y. Bok, H. Ha, and I. S. Kweon, “Automated checkerboard detection and indexing using circular boundaries,” Pattern Recognit. Lett. 71, 66–72 (2016). [CrossRef]  

27. K. Wang, L. Xiao, and Z. Wei, “Motion blur kernel estimation in steerable gradient domain of decomposed image,” Multidimens. Syst. Signal Proc. 27(2), 577–596 (2016). [CrossRef]  

28. T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33(3), 500–513 (2011). [CrossRef]   [PubMed]  

29. S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski, “A database and evaluation methodology for optical flow,” Int. J. Comput. Vis. 92(1), 1–31 (2011). [CrossRef]  

30. Y. Xin, S. Liao, and M. Pawlak, “Circularly orthogonal moments for geometrically robust image watermarking,” Pattern Recognit. 40(12), 3740–3752 (2007). [CrossRef]  

31. C. Singh and E. Walia, “Fast and numerically stable methods for the computation of Zernike moments,” Pattern Recognit. 43(7), 2497–2506 (2010). [CrossRef]  

32. Q. Sun, T. Bruns, A. Täubner, L. Yang, A. Liu, and A. Zuo, “Modifications of the sine-approximation method for primary vibration calibration by heterodyne interferometry,” Metrologia 46(6), 646–654 (2009). [CrossRef]  

33. C. S. Veldman, “A novel implementation of an ISO standard method for primary vibration calibration by laser interferometry,” Metrologia 40(2), 1–8 (2003). [CrossRef]  



Figures (13)

Fig. 1. Sketch of the MV-based low-frequency vibration calibration system.
Fig. 2. Sketch of the collected sequential images and the feature edges.
Fig. 3. Projection geometry of the camera.
Fig. 4. (a) Image with no blurred edge, (b) grayscale distribution in the neighborhood of the non-blurred edge, (c) grayscale gradient of this edge, (d) image with a blurred edge, (e) grayscale distribution in the neighborhood of the blurred edge, and (f) grayscale gradient of this edge.
Fig. 5. Three-grayscale distribution edge model.
Fig. 6. (a) Bend in the guideway of the long-stroke shaker; (b) additional excitation acceleration delivered to the sensor due to this bend.
Fig. 7. Schematic showing the method of measuring the bending in the guideway.
Fig. 8. World vertical coordinates of the positions of the circular mark at different displacements along the guideway.
Fig. 9. Low-frequency vibration sensor calibration system: (I) horizontal long-stroke shaker, (II) working surface, (III) vibration sensor, (IV) high-contrast mark, (V) CCD camera, (VI) acquisition device, and (VII) heterodyne interferometer.
Fig. 10. Edges of the extracted mark in the sequential images at a frequency of 1 Hz.
Fig. 11. Results of vibration calibration using the LI, MV, and EG methods at frequencies between 0.04 and 2 Hz: (a) X-axial sensitivity; (b) Y-axial sensitivity; (c) Z-axial sensitivity.
Fig. 12. Calibration results for the MVC method: (a) X-axial sensitivity; (b) Y-axial sensitivity; (c) Z-axial sensitivity.
Fig. 13. Calibration results of the MV, LI, and MVC methods over the frequency range 2–8 Hz: (a) X-axial sensitivity; (b) Y-axial sensitivity; (c) Z-axial sensitivity.

Equations (21)


(1) $S_V = V_{p,V} / a_{p,V}$

(2) $\begin{bmatrix} x_u \\ y_u \\ 1 \end{bmatrix} = H \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}$

(3) $\begin{cases} x_u = x_d \left[ 1 + k_1 (x_d^2 + y_d^2) + k_2 (x_d^2 + y_d^2)^2 \right] \\ y_u = y_d \left[ 1 + k_1 (x_d^2 + y_d^2) + k_2 (x_d^2 + y_d^2)^2 \right] \end{cases}$

(4) $J = \min_{r,c} \left[ (x_w' - x_w)^2 + (y_w' - y_w)^2 \right]$

(5) $g_j(p) = a_j \, e^{-[x_j(p) - \mu_j]^2 / (2\sigma_j^2)}$

(6) $f_E(x,y) = \begin{cases} f_j(x,y) / \left( 2\tilde{f}_j(x,y) - \tilde{f}_j^2(x,y) \right), & \tilde{f}_j(x,y) < T_1 \\ f_{j,\max}(x,y), & \tilde{f}_j(x,y) \ge T_1 \end{cases}$

(7) $f_E(x,y) = \begin{cases} f_{j,\min}(x,y), & \tilde{f}_j(x,y) \le T_2 \\ f_j(x,y) \left( 1 - \tilde{f}_j^2(x,y) \right), & \tilde{f}_j(x,y) > T_2 \end{cases}$

(8) $Z_{nm} = \sum_x \sum_y f_j(x,y) \, V_{nm}^*(\rho,\theta), \quad x^2 + y^2 \le 1$

(9) $V_{nm}(\rho,\theta) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)! \, \rho^{n-2s}}{s! \left( \frac{n+|m|}{2} - s \right)! \left( \frac{n-|m|}{2} - s \right)!} \, e^{im\theta}$

(10) $Z_{nm}' = Z_{nm} \, e^{-im\phi}$

(11) $d_1 = \frac{5Z_{40}' + 3Z_{20}'}{8Z_{20}'}, \quad d_2 = \frac{5Z_{31}' + Z_{11}'}{6Z_{11}'}$

(12) $\phi = \tan^{-1} \left( \frac{\operatorname{Im}[Z_{31}]}{\operatorname{Re}[Z_{31}]} \right)$

(13) $\begin{bmatrix} x_{\mathrm{sub}} \\ y_{\mathrm{sub}} \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + \frac{K(d_1 + d_2)}{4} \begin{bmatrix} \cos\phi \\ \sin\phi \end{bmatrix}$

(14) $d_j(t_i) = A \cos(\omega_v t_i) - B \sin(\omega_v t_i) + C$

(15) $d_{p,V} = \sqrt{A^2 + B^2}$

(16) $a_{p,G} = g_{\mathrm{loc}} \sin(\alpha) \approx g_{\mathrm{loc}} \, \alpha$

(17) $\hat{a}_{p,E} = a_{p,V} \cos(\alpha) + a_{p,G} \approx a_{p,V} + a_{p,G}$

(18) $S_0 = \frac{V_{p,V}}{\hat{a}_{p,E}} = \frac{V_{p,V}}{a_{p,V} + a_{p,G}}$

(19) $S_0 = S_V \, \frac{a_{p,V}}{a_{p,V} + a_{p,G}}$

(20) $S_0 = S_V \, \frac{\omega_v^2}{\omega_v^2 + \omega_\alpha^2}$

(21) $\alpha_h = \arctan \left( \frac{\bar{y}_{h,c} - \bar{y}_{13,c}}{d_h - d_{13}} \right), \quad h \ne 13$
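The Zernike basis $V_{nm}(\rho,\theta)$ in the equation list above underpins the paper's subpixel edge localization. As a minimal sketch (not the paper's implementation; the helper name `zernike_radial` is ours), the radial factorial sum can be evaluated directly:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part of the Zernike polynomial V_nm:
    R_nm(rho) = sum_s (-1)^s (n-s)! rho^(n-2s)
                / [s! ((n+|m|)/2 - s)! ((n-|m|)/2 - s)!]"""
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s) * rho ** (n - 2 * s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        for s in range((n - m) // 2 + 1)
    )

# Standard low-order checks: R_11(rho) = rho, R_20(rho) = 2*rho^2 - 1
r11 = zernike_radial(1, 1, 0.5)   # 0.5
r20 = zernike_radial(2, 0, 0.5)   # -0.5
```

Multiplying this radial part by $e^{im\theta}$ and correlating with the image inside the unit disk gives the moments $Z_{nm}$ used in the subpixel edge formulas.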
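The sine-approximation step of the calibration fits the tracked mark displacement with the three-parameter model $d_j(t_i) = A\cos(\omega_v t_i) - B\sin(\omega_v t_i) + C$ and takes $d_{p,V} = \sqrt{A^2+B^2}$ as the displacement amplitude. A minimal least-squares sketch of that fit (the function name `sine_fit` and the synthetic 0.5 Hz signal are illustrative, not from the paper):

```python
import numpy as np

def sine_fit(t, d, omega):
    """Least-squares three-parameter sine approximation:
    fits d(t) = A*cos(omega*t) - B*sin(omega*t) + C and returns
    the displacement amplitude d_pV = sqrt(A^2 + B^2)."""
    M = np.column_stack([np.cos(omega * t), -np.sin(omega * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, d, rcond=None)
    return float(np.hypot(A, B))

# Synthetic check: 0.5 Hz motion, 2 mm amplitude, static offset, mild noise
rng = np.random.default_rng(0)
omega = 2 * np.pi * 0.5
t = np.linspace(0.0, 8.0, 2000)                 # four vibration periods
d = (2e-3 * np.cos(omega * t + 0.3) + 1e-4
     + 1e-7 * rng.standard_normal(t.size))
d_pV = sine_fit(t, d, omega)                    # recovered amplitude, ~2e-3 m
a_pV = omega**2 * d_pV                          # excitation acceleration amplitude
```

The recovered acceleration amplitude $a_{p,V} = \omega_v^2 d_{p,V}$ then feeds the sensitivity ratio $S_V = V_{p,V}/a_{p,V}$.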