
Stereo vision-based kinematic calibration method for the Stewart platforms


Abstract

Accuracy is the most important index for industrial applications of the Stewart platform, and it can be guaranteed by kinematic calibration, which improves the motion-orbit performance of the platform. To improve the effectiveness of the least squares algorithm and the identification accuracy of the platform’s geometric parameter errors, an applicable dimensionless error model based on the structural characteristics of the Stewart platform is investigated. Moreover, a novel stereo vision-based measurement method is proposed, which can measure the 6-degree-of-freedom (DOF) pose of the moving platform. On this basis, an identification simulation is designed to validate the efficiency of the dimensionless error model, and a kinematic calibration experiment is carried out on a prototype. The experimental results demonstrate that the position error is decreased to 0.261 mm with an improved accuracy of 89.720%, and the orientation error is decreased to 0.051° with an improved accuracy of 90.351%.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Parallel robots are widely used in high-precision machining [1,2], aerospace [3–5] and medical equipment [6] because of their greater stiffness-to-mass ratio, payload-to-weight ratio, faster dynamic response, and higher flexibility and repeatability. At the same time, these applications place higher requirements on their accuracy. The main errors affecting the accuracy of a parallel robot are caused by manufacturing, assembly and control [7], and Judd and Knasinski [8] pointed out that geometric parameter errors are the main factor reducing its accuracy. Currently, there are two main methods to improve the accuracy of a parallel robot [9]. The first is to directly enhance the manufacturing and component accuracy, but this significantly increases the processing cost and places high demands on assemblers. The second is to calibrate the geometric parameters of the parallel robot, which is a low-cost and efficient method widely used in industry.

The principle of kinematic calibration [10] is to construct an error function between the measured information and the control model output, and to identify the geometric parameters by linear or non-linear algorithms to correct the control model, thus achieving error compensation. According to the measurement approach, kinematic calibration methods are mainly classified into external calibration, constraint calibration and self-calibration [11]. The external calibration method [12] uses external equipment to measure the actual pose of the parallel robot and then estimates its real geometric parameters through the error model. The constraint calibration method [13] imposes mechanical constraints to limit part of the motion capability of the parallel robot components, from which an optimization objective and error model are established. The self-calibration method [14] uses position sensors installed on the passive joints to measure the actual motion in each pose, and solves the error model with an optimization objective that minimizes the difference between the commanded motion and the sensors’ observed values. Correspondingly, the measurement equipment mainly includes laser trackers [15–17], coordinate measuring machines [18,19], theodolites [20], ball bars [21,22], and vision sensors [23–25], of which the laser tracker is the most commonly used for the pose measurement of parallel robots. Tian et al. [26] applied a regularization method to accomplish the kinematic calibration of a 5-DOF hybrid kinematic machine tool using a laser tracker. Kong et al. [27] performed the calibration of a 3-DOF parallel manipulator using an API-T3 laser tracker. However, there is less research on the kinematic calibration of parallel robots using vision sensors. Renaud et al. [28] adopted a CCD camera to perform the calibration of the H4 and I4 parallel robots.

Moreover, due to the complexity of the parallel robot structure and the strong coupling of the error parameters, the least squares method usually cannot ensure the identification accuracy of the geometric parameters when the Jacobian matrix is ill-conditioned. Ideally, the error Jacobian matrix should be uniform in magnitude. Nevertheless, this is difficult to achieve because the elements corresponding to position and orientation have different orders of magnitude. To address this issue, Luo et al. [29] and Wu et al. [30] constructed error models by establishing the functional relationship between the kinematic parameters and the measurement targets to complete the kinematic calibration of a 5-axis parallel machining robot and a KUKA KR-270. However, these methods introduce the parameter errors of the measurement targets, which makes the error model more complex and the identification of the geometric parameters more difficult.

To this end, a kinematic calibration method for the studied Stewart platform based on stereo vision is presented in this article. A prototype of the studied Stewart platform and its coordinate systems are described. Based on its kinematic analysis, an applicable dimensionless error model is constructed according to the geometric characteristics of the prototype. Subsequently, a calibration algorithm is developed, and a set of measurement poses is uniformly selected in the workspace. Finally, a novel measurement method based on stereo vision is proposed to obtain the actual pose at each measurement pose, and a kinematic calibration experiment is carried out. The main contributions of this study are: (a) the dimensionless error model addresses the different orders of magnitude in the error Jacobian matrix, and (b) it is shown to be feasible to use stereo vision to measure the 6-DOF pose of a Stewart platform.

The remainder of this article is organized as follows: Section 2 describes the studied Stewart platform and derives the dimensionless error model. Section 3 verifies the effectiveness of the dimensionless error model in simulation. Section 4 presents the stereo vision-based measurement method for the Stewart platform. The experimental results are provided in Section 5, and Section 6 concludes the article.

2. Applicable dimensionless error model of Stewart platform

The Stewart platform, as a classical parallel robot, is mainly composed of a base platform, a mobile platform and six legs, as shown in Fig. 1(a). The lower part of leg Si (i = 1…6) is connected to the base platform by a lower spherical joint (S-joint) Bi, and the upper part of Si is connected to the mobile platform by an upper S-joint Ai. The six legs Si are prismatic joints that can be extended and retracted independently by six servo motors, enabling the moving platform to translate along the x-, y- and z-axes and to rotate around them. A Cartesian coordinate system {B} is fixed to the base platform and a Cartesian coordinate system {A} is fixed to the mobile platform. As shown in Fig. 1(b), the kinematic parameters of the Stewart platform can be described by the coordinates of Bi = [bxi, byi, bzi] in {B}, of Ai = [axi, ayi, azi] in {A}, and the initial offset Si of each leg. The nominal kinematic parameters are listed in Table 1.


Fig. 1. Stewart platform: (a) prototype, (b) kinematic scheme, (c) the closed-loop vector of Si.



Table 1. The nominal geometric parameters of Stewart platform.

The kinematic calibration of the Stewart platform is to identify the six upper S-joint parameter errors δAi, the six lower S-joint parameter errors δBi and the six initial offset errors δSi, for a total of 42 geometric parameter errors $\delta \boldsymbol{\zeta } = {[{\delta \boldsymbol{B}_i^\textrm{T}\;\;\delta \boldsymbol{A}_i^\textrm{T}\;\;\delta {S_i}} ]^\textrm{T}}$.

As shown in Fig. 1(c), the closed-loop vector equation of leg i is

$${S_i}{\boldsymbol{s}_i} = \boldsymbol{t} + \boldsymbol{R}{\boldsymbol{A}_i} - {\boldsymbol{B}_i}$$
in which t and R are the position and orientation matrix of the mobile platform, respectively, Si and si are the length of leg i and its unit vector, respectively.
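As an illustration of Eq. (1), the following sketch computes the leg lengths and unit leg vectors from a commanded pose; the function name and array layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the inverse kinematics in Eq. (1); shapes and names are assumptions.
import numpy as np

def leg_lengths(t, R, A, B):
    """t: (3,) position, R: (3,3) rotation, A/B: (6,3) joint coordinates in {A}/{B}."""
    L = t[None, :] + A @ R.T - B        # rows are S_i * s_i = t + R A_i - B_i
    S = np.linalg.norm(L, axis=1)       # leg lengths S_i
    s = L / S[:, None]                  # unit vectors s_i
    return S, s
```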

However, during the identification of the geometric parameter errors using the least squares method, t and R should be unified in dimension to improve the identification accuracy. Since the orientation error is expressed in radians during parameter error identification, only the position error needs to be made dimensionless. As shown in Fig. 1(b), according to the structural characteristics of the prototype, Eq. (1) is made dimensionless by dividing both sides by ra, where ra is the radius of the circle fitted to the six upper S-joints Ai. Thus Eq. (1) is rewritten as:

$${S_{ri}}{\boldsymbol{s}_{ri}} = {\boldsymbol{t}_r} + \boldsymbol{R}{\boldsymbol{A}_{ri}} - {\boldsymbol{B}_{ri}}$$
in which tr = t/ra, Ari = Ai/ra, Bri = Bi/ra, Sri = Si/ra, and sri is the unit vector of Sri.

Fully differentiating both sides of Eq. (2) yields the closed-loop differential equation:

$${\boldsymbol{s}_{ri}}\textrm{d}{S_{ri}} + {S_{ri}}\textrm{d}{\boldsymbol{s}_{ri}} = \textrm{d}{\boldsymbol{t}_r} + \textrm{d}\boldsymbol{R}{\boldsymbol{A}_{ri}} + \boldsymbol{R}\textrm{d}{\boldsymbol{A}_{ri}} - \textrm{d}{\boldsymbol{B}_{ri}}$$
then, noting that $\textrm{d}\boldsymbol{R}{\boldsymbol{A}_{ri}} = \boldsymbol{\omega } \times \boldsymbol{R}{\boldsymbol{A}_{ri}}$, where ω is the angular velocity of the moving platform,
$${\boldsymbol{s}_{ri}}\textrm{d}{S_{ri}} + {S_{ri}}\textrm{d}{\boldsymbol{s}_{ri}} = \textrm{d}{\boldsymbol{t}_r} + \boldsymbol{\omega } \times \boldsymbol{R}{\boldsymbol{A}_{ri}} + \boldsymbol{R}\textrm{d}{\boldsymbol{A}_{ri}} - \textrm{d}{\boldsymbol{B}_{ri}}$$
The rotation matrix R is usually parameterized by the Euler angles α-β-γ, and the relationship between ω and the Euler angle rates is
$$\boldsymbol{\omega } = {\boldsymbol{T}_\omega }\delta \boldsymbol{\theta }$$
where $\delta \boldsymbol{\theta } = {\left[ {\begin{array}{ccc} {\dot{\alpha }}&{\dot{\beta }}&{\dot{\gamma }} \end{array}} \right]^\textrm{T}}$, ${\boldsymbol{T}_\omega }\textrm{ = }\left[ {\begin{array}{ccc} {\cos \gamma \cos \beta }&{ - \sin \gamma }&0\\ {\sin \gamma \cos \beta }&{\cos \gamma }&0\\ { - \sin \beta }&0&1 \end{array}} \right]$.
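For reference, the mapping matrix of Eq. (5) can be evaluated directly from the Euler angles; this is a minimal sketch with an illustrative function name.

```python
# Sketch of T_omega in Eq. (5): maps Euler-angle rates to the angular velocity.
import numpy as np

def T_omega(beta, gamma):
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([[cg * cb, -sg, 0.0],
                     [sg * cb,  cg, 0.0],
                     [    -sb, 0.0, 1.0]])
```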

Substituting Eq. (5) into Eq. (4) gives

$${\boldsymbol{s}_{ri}}\textrm{d}{S_{ri}} + {S_{ri}}\textrm{d}{\boldsymbol{s}_{ri}} = \textrm{d}{\boldsymbol{t}_r} + ({\boldsymbol{T}_\omega }\textrm{d}\boldsymbol{\theta }) \times \boldsymbol{R}{\boldsymbol{A}_{ri}} + \boldsymbol{R}\textrm{d}{\boldsymbol{A}_{ri}} - \textrm{d}{\boldsymbol{B}_{ri}}$$
left-multiplying both sides of Eq. (6) by ${\boldsymbol{s}_{ri}^{\textrm{T}}}$ gives
$$\boldsymbol{s}_{ri}^\textrm{T}{\boldsymbol{s}_{ri}}\textrm{d}{S_{ri}} + \boldsymbol{s}_{ri}^\textrm{T}{S_{ri}}\textrm{d}{\boldsymbol{s}_{ri}} = \boldsymbol{s}_{ri}^\textrm{T}\textrm{d}{\boldsymbol{t}_r} + \boldsymbol{s}_{ri}^\textrm{T}({\boldsymbol{T}_\omega }\textrm{d}\boldsymbol{\theta }) \times \boldsymbol{R}{\boldsymbol{A}_{ri}} + \boldsymbol{s}_{ri}^\textrm{T}\boldsymbol{R}\textrm{d}{\boldsymbol{A}_{ri}} - \boldsymbol{s}_{ri}^\textrm{T}\textrm{d}{\boldsymbol{B}_{ri}}$$
where $\boldsymbol{s}_{ri}^\textrm{T}{\boldsymbol{s}_{ri}} = 1$ and $\textrm{d}{\boldsymbol{s}_{ri}} = {\Delta _s}{\boldsymbol{s}_{ri}}$, with ${\Delta _s} = \left[ {\begin{array}{ccc} 0&{ - \delta {s_{iz}}}&{\delta {s_{iy}}}\\ {\delta {s_{iz}}}&0&{ - \delta {s_{ix}}}\\ { - \delta {s_{iy}}}&{\delta {s_{ix}}}&0 \end{array}} \right]$. Since ${\Delta _s}$ is skew-symmetric, the second term on the left side of Eq. (7) vanishes, $\boldsymbol{s}_{ri}^\textrm{T}{S_{ri}}\textrm{d}{\boldsymbol{s}_{ri}} = 0$, and then
$$\boldsymbol{s}_{ri}^\textrm{T}\textrm{d}{\boldsymbol{t}_r} + {(\boldsymbol{R}{\boldsymbol{A}_{ri}} \times {\boldsymbol{s}_{ri}}\textrm{)}^\textrm{T}}{\boldsymbol{T}_\omega }\textrm{d}\boldsymbol{\theta } = \boldsymbol{s}_{ri}^\textrm{T}\textrm{d}{\boldsymbol{B}_{ri}} - \boldsymbol{s}_{ri}^\textrm{T}\boldsymbol{R}\textrm{d}{\boldsymbol{A}_{ri}} + \textrm{d}{S_{ri}}$$

Substituting dtr = δtr, ${\rm d}\boldsymbol{\theta}=\delta\boldsymbol{\theta}$, dBri = δBri, dAri = δAri and dSri = δSri into Eq. (8) and rearranging it into matrix form gives:

$$\left[ {\begin{array}{cc} {\boldsymbol{s}_{ri}^\textrm{T}}&{{{(\boldsymbol{R}{\boldsymbol{A}_{ri}}\mathrm{\ \times }{\boldsymbol{s}_{ri}}\textrm{)}}^\textrm{T}}{\boldsymbol{T}_\omega }} \end{array}} \right]\left[ {\begin{array}{c} {\delta {\boldsymbol{t}_r}}\\ {\delta \boldsymbol{\theta }} \end{array}} \right] = \left[ {\begin{array}{ccc} {\boldsymbol{s}_{ri}^\textrm{T}}&{\textrm{ - }\boldsymbol{s}_{ri}^\textrm{T}\boldsymbol{R}}&1 \end{array}} \right]\left[ {\begin{array}{c} {\delta {\boldsymbol{B}_{ri}}}\\ {\delta {\boldsymbol{A}_{ri}}}\\ {\delta {S_{ri}}} \end{array}} \right]$$

Define the dimensionless pose error vector δXr and geometric parameter error vector δζr as follows:

$$\delta {\boldsymbol{X}_r} = {\left[ {\begin{array}{cc} {\delta \boldsymbol{t}_r^\textrm{T}}&{\delta {\boldsymbol{\theta }^\textrm{T}}} \end{array}} \right]^\textrm{T}}$$
$$\delta {\boldsymbol{\zeta }_r} = {\left[ {\begin{array}{ccccccc} {\delta \boldsymbol{B}_{r1}^\textrm{T}}&{\delta \boldsymbol{A}_{r1}^\textrm{T}}&{\delta {S_{r1}}}& \cdots &{\delta \boldsymbol{B}_{r6}^\textrm{T}}&{\delta \boldsymbol{A}_{r6}^\textrm{T}}&{\delta {S_{r6}}} \end{array}} \right]^\textrm{T}}$$
then the error equations of the six legs obtained from Eq. (9) can be written compactly as:
$$\delta {\boldsymbol{X}_r}\textrm{ = }\boldsymbol{J}_{Sr}^{\textrm{ - 1}}{\boldsymbol{J}_{Pr}}\delta {\boldsymbol{\zeta }_r}$$
in which ${\boldsymbol{J}_{Sr}} = \left[ {\begin{array}{ccc} {\boldsymbol{s}_{r1}^\textrm{T}}&{{{({\boldsymbol{R}{\boldsymbol{A}_{r1}} \times {\boldsymbol{s}_{r1}}} )}^\textrm{T}}{\boldsymbol{T}_\omega }}\\ \vdots & \vdots \\ {\boldsymbol{s}_{r6}^\textrm{T}}&{{{({\boldsymbol{R}{\boldsymbol{A}_{r6}} \times {\boldsymbol{s}_{r6}}} )}^\textrm{T}}{\boldsymbol{T}_\omega }} \end{array}} \right]$, ${\boldsymbol{J}_{Pr}} = \left[ {\begin{array}{ccccccc} {\boldsymbol{s}_{r\textrm{1}}^\textrm{T}}&{\textrm{ - }\boldsymbol{s}_{r\textrm{1}}^\textrm{T}\boldsymbol{R}}&1& \cdots &0&0&0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0&0&0& \cdots &{\boldsymbol{s}_{r6}^\textrm{T}}&{\textrm{ - }\boldsymbol{s}_{r6}^\textrm{T}\boldsymbol{R}}&1 \end{array}} \right]$.
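As an illustration of how JSr and JPr can be assembled for one measurement pose, the sketch below follows the block structure of Eq. (12); the per-leg 7-column blocks (δBri, δAri, δSri) and all variable names are assumptions rather than the authors' code.

```python
# Sketch of the per-pose Jacobians of Eq. (12); each leg contributes one row of J_Sr and
# a 7-column block [delta_B_ri (3), delta_A_ri (3), delta_S_ri (1)] in J_Pr.
import numpy as np

def pose_jacobians(s_r, R, A_r, T_w):
    """s_r, A_r: (6,3) dimensionless unit leg vectors and upper-joint coordinates."""
    J_Sr = np.zeros((6, 6))
    J_Pr = np.zeros((6, 42))
    for i in range(6):
        J_Sr[i, :3] = s_r[i]                                 # s_ri^T
        J_Sr[i, 3:] = np.cross(R @ A_r[i], s_r[i]) @ T_w     # (R A_ri x s_ri)^T T_omega
        J_Pr[i, 7 * i: 7 * i + 3] = s_r[i]                   # coefficient of delta_B_ri
        J_Pr[i, 7 * i + 3: 7 * i + 6] = -(s_r[i] @ R)        # -s_ri^T R for delta_A_ri
        J_Pr[i, 7 * i + 6] = 1.0                             # coefficient of delta_S_ri
    return J_Sr, J_Pr
```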

Equation (12) is the dimensionless error model for a single measurement pose, which reflects the linear transfer relationship between δζr and δXr. Therefore, δXr is obtained from the external measurement information, and δζr can then be determined by solving this linear equation. However, each measurement pose provides only 6 equations, while there are 42 unknown geometric parameter errors, so more than 7 measurement poses are used to identify all of the geometric parameter errors. The dimensionless error model for n (n > 7) measured poses can be expressed as:

$$\left[ {\begin{array}{c} {\delta {\boldsymbol{X}_{r1}}}\\ \vdots \\ {\delta {\boldsymbol{X}_{rn}}} \end{array}} \right] = \left[ {\begin{array}{c} {\boldsymbol{J}_{Sr\textrm{1}}^{\textrm{ - 1}}{\boldsymbol{J}_{Pr\textrm{1}}}}\\ \vdots \\ {\boldsymbol{J}_{Srn}^{\textrm{ - 1}}{\boldsymbol{J}_{Prn}}} \end{array}} \right]\delta {\boldsymbol{\zeta }_r}$$
then the identified geometric parameter error is
$$\delta \boldsymbol{\zeta } = \delta {\boldsymbol{\zeta }_r}\cdot {r_a}$$
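A minimal sketch of Eqs. (13) and (14) is given below, assuming the per-pose Jacobians from the sketch above and a stacked vector of measured dimensionless pose errors; the standard least-squares solver is used here, which is not necessarily the authors' exact routine.

```python
# Sketch of the stacked identification of Eq. (13) and the rescaling of Eq. (14).
import numpy as np

def identify_parameter_errors(J_Sr_list, J_Pr_list, delta_X_r, r_a):
    """delta_X_r: stacked (6n,) dimensionless pose errors for the n measured poses."""
    H = np.vstack([np.linalg.solve(J_Sr, J_Pr)              # J_Sr^-1 J_Pr for each pose
                   for J_Sr, J_Pr in zip(J_Sr_list, J_Pr_list)])
    delta_zeta_r, *_ = np.linalg.lstsq(H, delta_X_r, rcond=None)
    return delta_zeta_r * r_a                               # Eq. (14): restore dimensions
```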

3. Simulation verifications

A simulation of the efficiency and accuracy of the dimensionless error model is presented in this section. The geometric parameter error identification is simulated using measurement poses selected uniformly from the workspace, which is determined by the method of Masory and Wang [31]. The measurement poses are selected uniformly in the position space and the orientation space separately, because position and orientation are two different quantities in the workspace. The selection principle is shown in Fig. 2, where each node represents a measurement pose. In total, 72 measurement poses X are obtained, and the corresponding Jacobian condition number, calculated from Eq. (13), is $\textrm{cond}(\boldsymbol{J}_{Sr}^{ - 1}{\boldsymbol{J}_{Pr}}) = 2.61\textrm{e}3$.


Fig. 2. The scheme of measurement poses selected: (a) position space, (b) orientation space.


Subsequently, the simulation procedure is as follows (a minimal sketch of the resulting loop is given after this list).

  • Step 1: Determine the nominal measurement pose set X.
  • Step 2: Generate the actual measurement pose set. The drive leg lengths for the nominal measurement pose set X are calculated by Eq. (1). The 42 geometric parameter errors δζ are set, the nominal geometric parameters ζ are replaced by ζ+δζ, and the actual measurement pose set is then obtained by forward kinematics from the driving leg lengths.
  • Step 3: Identify the geometric parameter errors. The optimization objective is to minimize the pose errors δX, which are the residuals between X and the actual measurement pose set. Then δζ is estimated through the dimensionless error model, which is solved by the least squares method.
  • Step 4: Compensate the geometric parameter errors. The ζ in Eq. (1) is replaced by ζ+δζk identified in Step 3, thus completing the geometric parameter error compensation.
  • Step 5: Verify the calibration results. The modified pose set is solved by forward kinematics after the geometric parameter error compensation in Step 4, and the pose residuals between this iteration’s pose set and the actual measurement pose set are calculated. A threshold ε→0 and a maximum number of iterations are set. If norm(δXk, 2)≤ε, the iteration stops; otherwise it stops when the number of iterations reaches the maximum. The currently identified geometric parameters ζ+δζk are then used as the actual geometric parameters of the Stewart platform, where k is the iteration number.
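The following sketch condenses Steps 1–5 into one loop. The routines forward_kinematics(zeta, leg_lengths) and identify(zeta, delta_X) are placeholders for the forward-kinematics solver and the least-squares identification above; the sketch illustrates the iteration logic only.

```python
# Simplified sketch of the Step 1-5 identification loop; forward_kinematics and identify
# are assumed placeholders, not the authors' implementation.
import numpy as np

def simulate_calibration(zeta_nominal, actual_poses, leg_lengths,
                         forward_kinematics, identify, eps=1e-16, max_iter=10):
    zeta = zeta_nominal.copy()
    for k in range(max_iter):
        model_poses = forward_kinematics(zeta, leg_lengths)     # Step 5: re-solve poses
        delta_X = (actual_poses - model_poses).ravel()          # pose residuals
        if np.linalg.norm(delta_X, 2) <= eps:                   # convergence threshold
            break
        zeta = zeta + identify(zeta, delta_X)                   # Steps 3-4: compensate
    return zeta
```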

According to the above simulation flow, ε = 1e-16 and a maximum of 10 iterations are set, and the kinematic calibration simulation of X is carried out based on the dimensionless error model. As shown in Fig. 3(a), the maximum deviation between the set and identified geometric parameter errors is 1.820e-11 and the minimum is 6.306e-14. Additionally, the iterative process of the pose errors, shown in Fig. 3(b), exhibits no obvious change after 3 iterations. The nondimensional errors in Fig. 3(b) are the deviations between the actual measurement pose set and X, which are homogeneous in the dimensionless error model. This confirms that the identified geometric parameter errors are equal to the set values and reflects the correctness of the proposed error model.


Fig. 3. The simulated results of X: (a) the identified geometric parameter errors, (b) the iterative process.


4. Stereo vision-based measurement method for Stewart platform

The stereo vision-based measurement setup for the Stewart platform is shown in Fig. 4(a). The stereo vision system consists of two high-speed cameras, a gear head and a tripod. The two high-speed cameras are firmly mounted at the two ends of a camera beam, which is connected to the tripod by the gear head. The cameras can be rotated around the x-, y- and z-axes by adjusting the gear head, and their height can be adjusted by the tripod. The measurement process is summarized as follows:

  • Step 1: The internal parameters of the high-speed cameras are calibrated with a scale bar and a calibration panel before the measurement. The calibration process is shown in Fig. 4(b).
  • Step 2: After the calibration of the stereo vision system, a test is carried out to determine its measurement accuracy. Since the positions of coding targets are measured in this experiment, where the coding targets are attached to the upper surfaces of the moving and base platforms, the spatial single-point measurement accuracy of the stereo vision system must be determined. According to ASME B89.4.22-2004 [32], 30 repeated measurements are taken on the same coding target at 5200 mm from the cameras in the measurement space, and the maximum deviation between the 30 measured values and their average is taken as the single-point measurement accuracy. The result is a single-point measurement accuracy of 44.136 µm, which fully meets the spatial pose measurement accuracy requirement of the studied Stewart platform.
  • Step 3: Construct a global coordinate system {bb}, a mobile coordinate system {aa} and a reference coordinate system {ee} (a sketch of this frame construction is given after this list). As shown in Fig. 4(a), {bb} is located on the upper surface of the base platform: its origin coincides with the center of a circle fitted to coding targets 53, 54, 55 and 56, target 56 lies on the positive x semi-axis, the z-axis is perpendicular to the plane fitted to the four coding targets, and the y-axis is determined by the right-hand rule. In the same way, {aa} is constructed on the upper surface of the moving platform: its origin coincides with the center of a circle fitted to coding targets 15, 16, 17 and 18, target 18 lies on the positive x semi-axis, the z-axis is perpendicular to the plane fitted to the four coding targets, and the y-axis is determined by the right-hand rule. The transformation from {bb} to {aa} is denoted bbTaa. It should be noted that coding targets 53, 54, 55 and 56 are distributed at equal intervals on a circle with a radius of 25 mm, coding targets 15, 16, 17 and 18 are arranged in the same way, and the labels of the coding targets are assigned by the internal settings of the high-speed cameras.
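A hedged sketch of the frame construction in Step 3 is given below: the circle center and plane normal are fitted to the four measured coding-target positions, and the x-axis is taken toward the designated target. The simple SVD-based fit is an illustrative choice, not the camera software's algorithm.

```python
# Sketch of building {bb}, {aa} or {ee} from four measured coding-target positions.
import numpy as np

def frame_from_targets(P, x_target):
    """P: (4,3) target positions; x_target: index of the target on the +x semi-axis."""
    c = P.mean(axis=0)                       # center of the fitted circle (equal spacing)
    _, _, Vt = np.linalg.svd(P - c)          # plane fit: smallest singular direction
    z = Vt[-1] / np.linalg.norm(Vt[-1])      # z-axis perpendicular to the fitted plane
    x = P[x_target] - c
    x -= (x @ z) * z                         # project onto the fitted plane
    x /= np.linalg.norm(x)                   # x-axis toward the chosen target
    y = np.cross(z, x)                       # y-axis by the right-hand rule
    T = np.eye(4)
    T[:3, :3] = np.column_stack([x, y, z])   # rotation part of the frame
    T[:3, 3] = c                             # origin at the circle center
    return T
```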


Fig. 4. Kinematic calibration experiment of Stewart platform: (a) experimental setup, (b) the calibration process of the stereo vision.


Coding targets 15, 16, 17 and 18 are usually obscured by the moving platform during the movement of the Stewart platform, so they do not appear in the field of view of the two high-speed cameras at the same time as coding targets 53, 54, 55 and 56, as shown in Fig. 4(a). Therefore, a reference coordinate system {ee} is constructed on an auxiliary tool as an intermediate coordinate system between {bb} and {aa}, where bbTee denotes the transformation from {bb} to {ee} and eeTaa the transformation from {ee} to {aa}. The construction principle of {ee} is that the origin coincides with coding target 37, coding target 39 lies on the positive x semi-axis, the z-axis is perpendicular to the plane fitted to coding targets 36, 37, 38 and 39, and the y-axis is determined by the right-hand rule. {bb} and {ee} are fixed during the kinematic calibration measurements, so bbTee is determined as

$${}^{bb}{\boldsymbol{T}_{ee}} = \left[ {\begin{array}{cccc} {0.982}&{0.189}&{0.003}&{402.321}\\ { - 0.189}&{0.982}&{0.006}&{352.760}\\ { - 0.001}&{ - 0.007}&1&{422.012}\\ 0&0&0&1 \end{array}} \right]$$
then the transformation of {bb} and {aa} is
$${}^{bb}{\boldsymbol{T}_{aa}} = {}^{bb}{\boldsymbol{T}_{ee}}\cdot {}^{ee}{\boldsymbol{T}_{aa}}$$
according to Euler angle theory, the homogeneous transformation can be expressed as
$$\boldsymbol{T} = \left[ {\begin{array}{cccc} {c{\gamma_m}c{\beta_m}}&{c{\gamma_m}s{\beta_m}s{\alpha_m} - s{\gamma_m}c{\alpha_m}}&{c{\gamma_m}s{\beta_m}c{\alpha_m} + s{\gamma_m}s{\alpha_m}}&{{x_m}}\\ {s{\gamma_m}c{\beta_m}}&{s{\gamma_m}s{\beta_m}s{\alpha_m} + c{\gamma_m}c{\alpha_m}}&{s{\gamma_m}s{\beta_m}c{\alpha_m} - c{\gamma_m}s{\alpha_m}}&{{y_m}}\\ { - s{\beta_m}}&{c{\beta_m}s{\alpha_m}}&{c{\beta_m}c{\alpha_m}}&{{z_m}}\\ 0&0&0&1 \end{array}} \right]$$
in which c represents cos, s represents sin, and bbTaa = T.

Computing Eqs. (16) and (17) yields the actual measured position of the moving platform,

$${\boldsymbol{t}_m} = {[{x_m},{y_m},{z_m}]^\textrm{T}}$$
and the actual measured orientation is
$$\left\{ \begin{array}{l} {\alpha_m} = \textrm{Atan}2\left( {\frac{{{\boldsymbol{T}_{32}}}}{{c{\beta_m}}},\frac{{{\boldsymbol{T}_{33}}}}{{c{\beta_m}}}} \right)\\ {\beta_m} = {\sin^{ - 1}}( - {\boldsymbol{T}_{31}})\\ {\gamma_m} = \textrm{Atan}2\left( {\frac{{{\boldsymbol{T}_{21}}}}{{c{\beta_m}}},\frac{{{\boldsymbol{T}_{11}}}}{{c{\beta_m}}}} \right) \end{array} \right.$$
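Taken together, Eqs. (16)–(19) amount to composing the two measured transforms and extracting the position and Euler angles. A minimal sketch, assuming |βm| < 90° so that cos βm ≠ 0, is:

```python
# Sketch of Eqs. (16)-(19): compose bbT_ee and eeT_aa, then read off t_m and the
# measured Euler angles alpha_m, beta_m, gamma_m.
import numpy as np

def measured_pose(T_bb_ee, T_ee_aa):
    T = T_bb_ee @ T_ee_aa                                   # Eq. (16)
    t_m = T[:3, 3]                                          # Eq. (18)
    beta = np.arcsin(-T[2, 0])                              # Eq. (19)
    cb = np.cos(beta)
    alpha = np.arctan2(T[2, 1] / cb, T[2, 2] / cb)
    gamma = np.arctan2(T[1, 0] / cb, T[0, 0] / cb)
    return t_m, np.array([alpha, beta, gamma])
```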

5. Experiment verifications and results

According to the stereo vision-based measurement method described in Section 4, the spatial positions of the coding targets can be transformed into the actual measurement pose set of the mobile platform. The kinematic calibration experiment is carried out on the Stewart platform prototype based on the dimensionless error model and the nominal measurement pose set, and the actual measurement pose set is then measured by the stereo vision system.

Subsequently, a repeatability test is carried out on the prototype, as shown in Table 2. The moving platform is driven to the commanded position [62 mm, 57 mm, -6 mm] and orientation [-14°, 12°, 1°], its actual pose is measured, and the measurement is repeated 15 times in the same way. The maximum position repeatability is dP = 64.252 µm and the maximum orientation repeatability is dΦ = 51.888″, where dP and dΦ are calculated by Eq. (20) [33].

$$\left\{ \begin{array}{l} \textrm{d}P = \sqrt {\delta {x^2} + \delta {y^2} + \delta {z^2}} \\ \textrm{d}\varPhi = \sqrt {\delta {\alpha^2} + \delta {\beta^2} + \delta {\gamma^2}} \end{array} \right.$$
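For the repeatability figures quoted above, a hedged sketch of the evaluation is given below; taking the deviation of each repeat from the mean pose is an assumption about the exact statistic used.

```python
# Sketch of Eq. (20) applied to the repeatability test: per-sample deviations from the
# mean pose, reported as the maximum dP and dPhi over the repeats.
import numpy as np

def repeatability(poses):
    """poses: (n, 6) rows of [x, y, z, alpha, beta, gamma] for repeated measurements."""
    dev = poses - poses.mean(axis=0)
    dP = np.linalg.norm(dev[:, :3], axis=1).max()     # position part of Eq. (20)
    dPhi = np.linalg.norm(dev[:, 3:], axis=1).max()   # orientation part of Eq. (20)
    return dP, dPhi
```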

The kinematic calibration experiment is carried out based on the dimensionless error model and the nominal measurement pose set. The identified geometric parameter errors are listed in Table 3, and the position and orientation errors before and after the kinematic calibration are presented in Fig. 5. As shown in Table 4, the maximum position error before calibration is 3.547 mm, with an average of 2.539 mm, while the maximum position error after calibration is 0.519 mm (an 85.368% improvement), with an average of 0.261 mm (an 89.720% improvement). The maximum orientation error before calibration is 0.353°, with an average of 0.228°, while the maximum orientation error after calibration is 0.051° (an 85.552% improvement), with an average of 0.022° (a 90.351% improvement). It can be concluded that the position and orientation accuracy of the studied Stewart platform is significantly enhanced after kinematic calibration.


Fig. 5. The position errors and orientation errors before and after calibration.



Table 2. The position and orientation repeatability test of the studied Stewart platform.


Table 3. The identified geometric parameter errors (mm).


Table 4. Pose errors of Stewart platform before and after kinematic calibration.

6. Conclusion

To improve the position and orientation accuracy of the studied Stewart platform, an applicable dimensionless error model is constructed in this article based on the structural characteristics of the platform and is solved by the least squares method. The identification accuracy of the geometric parameter errors reaches 6.306e-14 in simulation, which demonstrates the correctness of the model. Then, a novel measurement method based on stereo vision is proposed to measure the actual 6-DOF pose of the moving platform; its single-point measurement accuracy is 44.136 µm, which fully meets the measurement requirement of the prototype. Finally, the kinematic calibration experiment is implemented, and the results show that the position error is reduced from 2.539 mm to 0.261 mm, improving the position accuracy by 89.720%, and the orientation error is reduced from 0.228° to 0.051°, improving the orientation accuracy by 90.351%.

Finally, a related issue is worth noting. The dimensionless error model in this work considers only the 42 kinematic parameters of the Stewart platform. Therefore, a complete and independent error model that also accounts for the measurement noise of the stereo vision system and the installation errors of the coding targets will be investigated in future work.

Funding

National Natural Science Foundation of China (52075512, 61640014, 62203132); Study on verification device of automatic transformer calibration system (2022YJ32); Doctor Foundation Project of Guizhou University (GuidaRenji Hezhi [2020]30); Youth Science and Technology Talents Development Project of Guizhou Education Department (Qianjiaohe KY [2022]138); The Guizhou Province Graduate Research Fund (YJSCXJH [2020]049).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Jiang, C. Chi, H. Fang, T. Tang, and J. Zhang, “A minimal-error-model based two-step kinematic calibration methodology for redundantly actuated parallel manipulators: An application to a 3-DOF spindle head,” Mech. Mach. Theory 167, 104532 (2022). [CrossRef]  

2. B. Mei, F. Xie, X. J. Liu, and C. Yang, “Elasto-geometrical error modeling and compensation of a five-axis parallel machining robot,” Precis. Eng. 69, 48–61 (2021). [CrossRef]  

3. J. Chen, F. Xie, X. J. Liu, and Z. Chong, “Elasto-geometrical calibration of a hybrid mobile robot considering gravity deformation and stiffness parameter errors,” Robot Cim-Int. Manuf. 79, 102437 (2023). [CrossRef]  

4. X. W. Zhao, B. Tao, S. B. Han, and H. Ding, “Accuracy analysis in mobile robot machining of large-scale workpiece,” Robot Cim-Int. Manuf. 71, 102153 (2021). [CrossRef]  

5. Y. Hu, F. Gao, X. C. Zhao, T. H. Yang, H. R. Shen, C. K. Qi, and R. Cao, “A parameter dimension reduction-based estimation approach to enhance the kinematic accuracy of a parallel hardware-in-the-loop docking simulator,” Robotica 39(6), 959–974 (2021). [CrossRef]  

6. W. Meinhold, D. E. Martinez, J. Oshinski, A. P. Hu, and J. Ueda, “A direct drive parallel plane piezoelectric needle positioning robot for MRI guided intraspinal injection,” IEEE Trans. Biomed. Eng. 68(3), 807–814 (2021). [CrossRef]  

7. S. Aguado, J. Santolaria, D. Samper, J. Velazquez, and J. J. Aguilar, “Empirical analysis of the efficient use of geometric error identification in a machine tool by tracking measurement techniques,” Meas. Sci. Technol. 27(3), 035002 (2016). [CrossRef]  

8. R. P. Judd and A. B. Knasinski, “A technique to calibrate industrial robots with experimental verification,” IEEE Trans. Robot. Automat. 6(1), 20–30 (1990). [CrossRef]  

9. B. Muralikrishnan, S. Phillips, and D. Sawyer, “Laser trackers for large-scale dimensional metrology: A review,” Precis. Eng. 44, 13–28 (2016). [CrossRef]  

10. T. Sun, Y. P. Zhai, Y. M. Song, and J. T. Zhang, “Kinematic calibration of a 3-DoF rotational parallel manipulator using laser tracker,” Robot Cim-Int. Manuf. 41, 78–91 (2016). [CrossRef]  

11. Y. Hu, F. Gao, X. C. Zhao, B. C. Wei, D. G. Zhao, and Y. N. Zhao, “Kinematic calibration of a 6-DOF parallel manipulator based on identifiable parameters separation (IPS),” Mech. Mach. Theory 126, 61–78 (2018). [CrossRef]  

12. A. Rosyid, B. El-Khasawneh, and A. Alazzam, “External kinematic calibration of hybrid kinematics machine utilizing lower-DOF planar parallel kinematics mechanisms,” Int. J. Precis. Eng. Man. 21(6), 995–1015 (2020). [CrossRef]  

13. F. G. Li, Q. Zeng, K. F. Ehmann, J. Cao, and T. M. Li, “A calibration method for over-constrained spatial translational parallel manipulators,” Robot Cim-Int. Manuf. 57, 241–254 (2019). [CrossRef]  

14. Y. J. Chiu and M. H. Perng, “Self-calibration of a general hexapod manipulator using cylinder constraints,” Int. J. Mach. Tool Manu. 43(10), 1051–1066 (2003). [CrossRef]  

15. C. Li, Y. Q. Wu, H. Lowe, and Z. X. Li, “POE-based robot kinematic calibration using axis configuration spaced and the adjoint error model,” IEEE Trans. Robot. 32(5), 1264–1279 (2016). [CrossRef]  

16. Y. M. Song, J. T. Zhang, B. B. Lian, and T. Sun, “Kinematic calibration of a 5-DOF parallel kinematic machine,” Precis. Eng. 45, 242–261 (2016). [CrossRef]  

17. T. Huang, D. Zhao, F. W. Yin, W. J. Tian, and D. G. Chetwynd, “Kinematic calibration of a 6-DOF hybrid robot by considering multicollinearity in the identification Jacobian,” Mech. Mach. Theory 131, 371–384 (2019). [CrossRef]  

18. D. C. Cong, D. Y. Yu, and J. W. Han, “Kinematic calibration of parallel robots using CMM,” in Proceedings of the 6th Word Congress on Intelligent Control and Automation (2006), pp. 8514–8518.

19. A. Nubiola, M. Slamani, A. Joubair, and I. A. Bonev, “Comparison of two calibration methods for a small industrial robot based on an optical CMM and a laser tracker,” Robotica 32(3), 447–466 (2014). [CrossRef]  

20. D. Whitney, C. Lozinski, and J. M. Rourke, “Industrial robot forward calibration method and results,” J. Dyn. Sys., Meas. Control 108(1), 1–8 (1986). [CrossRef]  

21. W. J. Tian, F. W. Yin, H. T. Liu, J. H. Li, Q. Li, T. Huang, and D. G. Chetwynd, “Kinematic calibration of a 3-DOF spindle head using a double ball bar,” Mech. Mach. Theory 102, 167–178 (2016). [CrossRef]  

22. L. P. Wang, Y. Z. Liu, J. Wu, J. S. Wang, and B. B. Zhang, “Study of error modeling in kinematic calibration of parallel manipulators,” Int. J. Adv. Robot. Syst. 13(5), 172988141667256 (2016). [CrossRef]  

23. M. Yang, Y. Wang, Z. H. Liu, S. N. Zuo, C. G. Cai, J. Yang, and J. J. Yang, “A monocular vision-based decoupling measurement method for plane motion orbits,” Measurement 187, 110312 (2022). [CrossRef]  

24. M. Yang, Y. Wang, C. G. Cai, Z. H. Liu, H. J. Zhu, and S. Y. Zhou, “Monocular vision-based low-frequency vibration calibration method with correction of the guideway bending in a long-stroke shaker,” Opt. Express 27(11), 15968–15981 (2019). [CrossRef]  

25. M. Yang, Z. H. Liu, C. G. Cai, Y. Wang, J. Yang, and J. J. Yang, “Monocular vision-based calibration method for the axial and transverse sensitivities of Low-frequency triaxial vibration sensors with the elliptical orbit excitation,” IEEE Trans. Ind. Electron. 69(12), 13763–13772 (2022). [CrossRef]  

26. W. J. Tian, M. W. Mou, J. H. Yang, and F. W. Yin, “Kinematic calibration of a 5-DOF hybrid kinematic machine tool by considering the ill-posed identification problem using regularization method,” Robot Cim-Int. Manuf. 60, 49–62 (2019). [CrossRef]  

27. L. Y. Kong, G. L. Chen, Z. Zhang, and H. Wang, “Kinematic calibration and investigation of the influence of universal joint errors on accuracy improvement for a 3-DOF parallel manipulator,” Robot Cim-Int. Manuf. 49, 388–397 (2018). [CrossRef]  

28. P. Renaud, N. Andreff, J. M. Lavest, and M. Dhome, “Simplifying the kinematic calibration of parallel mechanisms using vision-based metrology,” IEEE Trans. Robot. 22(1), 12–22 (2006). [CrossRef]  

29. X. Luo, F. Xie, X. J. Liu, and Z. H. Xie, “Kinematic calibration of a 5-axis parallel machining robot based on dimensionless error mapping matrix,” Robot Cim-Int. Manuf. 70, 102115 (2021). [CrossRef]  

30. Y. Wu, A. Klimchik, S. Caro, B. Furet, and A. Pashkevich, “Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments,” Robot Cim-Int. Manuf. 35, 151–168 (2015). [CrossRef]  

31. O. Masory and J. Wang, “Workspace evaluation of Stewart platforms,” Adv. Robotics 9(4), 443–461 (1994). [CrossRef]  

32. “Methods for Performance Evaluation of Articulated Arm Coordinate Measuring Machines,” ASME B89.4.22-2004.

33. L. W. Yang, X. Z. Tian, Z. L. Li, F. M. Chai, and D. Y. Dong, “Numerical simulation of calibration algorithm based on inverse kinematics of the parallel mechanism,” Optik 182, 555–564 (2019). [CrossRef]  

