Optica Publishing Group

Vision measurement system for geometric parameters of tubing internal thread based on double-mirrored structured light

Open Access

Abstract

Geometric parameter measurement of tubing internal threads is critical for oil pipeline safety. In response to the shortcomings of existing methods for measuring internal thread geometric parameters, such as low efficiency, poor accuracy, and poor accessibility, this paper proposes a vision system for measuring internal thread geometric parameters based on double-mirrored structured light. Compared to previous methods, our system can completely reproduce the internal thread tooth profiles and allows multi-parameter measurement in one setup. To establish the correlation between the structural and imaging parameters of the vision system, three-dimensional (3D) optical path models (OPMs) that account for the mirror effect of the prism are proposed, which extend the scope of optical path analysis and provide a theoretical foundation for designing the structural parameters of the vision system. Moreover, modeling and three-step calibration methods for the vision system are proposed to realize high-accuracy restoration from the two-dimensional (2D) virtual image to the actual 3D tooth profiles. Finally, a vision measurement system is developed, and experiments are carried out to verify its accuracy and to measure three geometric parameters (i.e., taper, pitch, and tooth height) of typical internal threads. Based on validation against a reference system, the accuracy and efficiency of the vision measurement are 6.7 and 120 times those of the traditional system, respectively, which verifies the effectiveness and accuracy of the vision system proposed in this paper.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-accuracy inspection of the internal thread of tubing couplings is critical for safe oil and gas production. Existing methods struggle to provide the accuracy and efficiency required for inspection: they have low measurement efficiency (e.g., plug gauge [1], thread micrometer [2], taper gauge, and height gauge [3]) and poor accuracy (e.g., optical fiber [4] and grating [5]); they are not capable of reproducing even rough outlines of tooth profiles (e.g., position sensitive device (PSD) [6]); they are prone to wear (e.g., probe profiler [7]); or they are expensive (e.g., X-ray computed tomography (CT) [8]). Vision measurement offers numerous advantages, including high accuracy, high efficiency, and low cost, and it is extremely promising for inspecting internal thread geometric parameters. A 2D measurement method consisting of a planar backlight, a telecentric lens, and a monocular camera is often employed for external thread inspection. However, the internal thread must be measured using a 3D vision approach. In view of the poor reachability, point, line, or circular lasers are typically utilized to illuminate the internal thread.

Tong et al. [9] developed a high-speed, automated internal thread parameter measurement system that collects thread profile data using a built-in reflector and a point-laser displacement sensor. After processing the measurement data with a non-uniform B-spline curve, the measured values of the internal thread parameters are extracted. The spectral confocal sensor [10] is a variant of the point-laser displacement sensor with a long initial working distance and high accuracy; however, its large overall size makes measurement inside the tubing difficult. A circular laser is another form of projected light used for the vision measurement of internal thread geometric parameters. Wakayama et al. [11] proposed a method for detecting closed planar contours: a conical mirror splits a single laser beam into circular laser beams that illuminate a closed plane, and image processing determines its 2D profile. In 2019, Abulkhanov et al. [12] proposed a circular-laser-based device and method for measuring corrosion pits in the inner cavity of oil pipes. Their method yielded good results for measuring smooth circular holes, with a hole-center resolution reaching 10 µm. Lin et al. [13] proposed a method for measuring internal thread pitch and median diameter that combines a laser and an industrial camera. Their device measures only a limited set of parameters, and it is bulky and difficult to fix in the tubing, making it impossible to measure the geometric parameters of small-diameter internal threads.

Laser displacement and spectral confocal measurement methods based on point-laser illumination are bulky and difficult to probe within the tubing. Furthermore, a linear stage is required to obtain the thread profile. For the circular-laser-based measurement method [14], the camera and laser must be connected coaxially through a glass cover, causing the imaging optical path (IOP) to be refracted twice and reducing the accuracy of thread parameter measurement. Moreover, circular-laser and point-laser [9] based methods suffer from low accuracy and poor measurement integrity due to laser misalignment deformation caused by the light form and the complex thread profile. The challenges above can be efficiently solved using a vision measurement method with line-laser lighting along the longitudinal section of the internal thread [15]. Lin et al. [16] proposed a real-time system for internal thread defect measurement based on monocular structured light. With their system, the taper cannot be measured, because only a single line laser is used to highlight the tooth profile. Furthermore, for threads with long sections, the line laser cannot reach the teeth, resulting in measurement failures. A stereo vision system comprising a line laser and two cameras is expensive and has the same difficulty probing within the tubing as monocular structured light. Finally, while small endoscopic cameras can probe inside the tubing, their limited image fidelity makes it difficult for them to meet high-accuracy industrial measurement requirements [17].

To address the above limitations of both contact and non-contact measurement methods, this paper offers a novel vision system for measuring the geometric parameters of tubing internal threads based on double-mirrored structured light. By introducing a double-sided reflective prism and line structured light, the single camera is expanded into an enhanced vision system that provides both turning peeping and stereo measurement, enabling comprehensive, efficient, and high-accuracy measurement of multiple geometric parameters. The rest of the paper is organized as follows. The measurement system and principle are presented in Section 2. In Section 3, 3D OPMs for laser projection and vision imaging are proposed, which provide a theoretical foundation for the optimal design of the vision system to ensure measurement accessibility and high measurement accuracy. Section 4 presents the measurement model of the double-mirrored structured-light vision system, as well as a three-step method for calibrating the parameters in the measurement model. Section 5 discusses the validation and measurement experiments on the geometric parameters of two different types of tubing internal threads. Section 6 concludes the paper.

2. Measurement system and principle

2.1 Measurement system

The manufacturing failure mode of the internal thread is rotationally symmetric, implying that if there is a dimensional overrun in one longitudinal section, there is a consistent dimensional anomaly in all longitudinal sections. Thus, the measurement of geometric parameters on the longitudinal section is sufficient to evaluate the quality of the internal thread.

In view of this, a method for inspecting the geometric parameters of tubing internal threads based on double-mirrored structured-light vision is proposed. As illustrated in Fig. 1, the measurement system consists of a monocular camera, a reflective prism, a line laser, and the finishing tool, etc. The prism has two reflective surfaces, symmetric to the left and right, allowing it to deflect the path of rays and complete the thread peeping. Given the sparse structure and poor visibility of the teeth, the structured light emitted by the line laser passing through the axis of the internal thread is used to obtain the tooth profiles while avoiding laser deformation through the vertical reflection of the prism, ensuring that the tooth profiles can be imaged with high accuracy and integrity. In the measurement, firstly, 3D spatial optical path analysis is conducted to optimize the system structural parameters so as to ensure measurement accessibility and accuracy. Next, a measurement model for the double-mirrored structured-light vision system, mapping the correlation between spatial points and their 2D projections, is established, and the parameters in the measurement model are calibrated by the three-step method. Subsequently, the line laser is emitted and then reflected along the longitudinal section of the thread to the teeth by the prism. Afterwards, the laser profile characterizing the tooth shape is reflected by the prism back to the camera for imaging. From the extracted subpixel-level laser profiles and the calibrated measurement model parameters, the 3D point clouds of the two tooth profiles on the longitudinal section can be reconstructed. Finally, following the American Petroleum Institute (API) standard, the geometric parameters of the internal thread are calculated from the point cloud data.


Fig. 1. Schematic diagram of vision system for measuring geometric parameters of tubing internal thread.


2.2 Measurement principle

The transformation matrix between the world coordinate system (WCS) and the camera coordinate system (CCS) is denoted by $\left[ {\begin{array}{cc} \textbf{R}&\textbf{t}\\ {{\textbf{0}^\textrm{T}}}&1 \end{array}} \right]$. Then, the camera model mapping the relation between the 3D point $\textbf{P}$ and 2D image point $\textbf{p}$ can be given by:

$$\left\{ \begin{array}{l} s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \underbrace{{\left[ {\begin{array}{cccc} {{\alpha_x}}&0&{{u_0}}&0\\ 0&{{\alpha_y}}&{{v_0}}&0\\ 0&0&1&0 \end{array}} \right]}}_{\textbf{K}} \cdot \underbrace{{\left[ {\begin{array}{cc} \textbf{R}&\textbf{t}\\ {{\textbf{0}^\textrm{T}}}&1 \end{array}} \right]}}_{\textbf{M}} \cdot \left[ {\begin{array}{c} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right] = \textbf{K} \cdot \underbrace{{\left[ {\begin{array}{cccc} {{\textbf{R}_1}}&{{\textbf{R}_2}}&{{\textbf{R}_3}}&\textbf{t}\\ 0&0&0&1 \end{array}} \right]}}_{\textbf{M}} \cdot \left[ {\begin{array}{c} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right]\\ u = ({u_p} - {u_0})(1 + {k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6}) + 2{p_1}({u_p} - {u_0})({v_p} - {v_0}) + {p_2}[{r^2} + 2{({u_p} - {u_0})^2}] \\ v = ({v_p} - {v_0})(1 + {k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6}) + 2{p_2}({u_p} - {u_0})({v_p} - {v_0}) + {p_1}[{r^2} + 2{{({v_p} - {v_0})}^2}]\end{array}\right.$$
where ${(u,\textrm{ }v)^T}$ is the coordinate of the ideal image point $\textbf{p}$, and ${({x_w},\textrm{ }{y_w},\textrm{ }{z_w})^T}$ is the coordinate of $\textbf{P}$. $\textbf{K}$ and $\textbf{M}$ stand for the intrinsic and extrinsic matrices of the camera model. $[{\textbf{R}_1},\textrm{ }{\textbf{R}_2},\textrm{ }{\textbf{R}_3}]$ and $\textbf{t}$ denote the rotation and translation components of the extrinsic matrix, respectively. ${\alpha _x}$ and ${\alpha _y}$ denote the normalized focal lengths along the two axes of the image coordinate system (ICS), while ${({u_0},\textrm{ }{v_0})^T}$ and s describe the image center and the scale factor, respectively. Owing to manufacturing error, the distortions caused by the optical elements must be compensated. In Eq. (1), ${({u_p},\textrm{ }{v_p})^T}$ is the coordinate of the distorted point of $\textbf{p}$; ${k_1}$, ${k_2}$, and ${k_3}$ are the radial distortion coefficients; ${p_1}$ and ${p_2}$ are the tangential distortion coefficients; and r denotes the pixel distance from $\textbf{p}$ to ${({u_0},\textrm{ }{v_0})^T}$.
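As a concrete illustration, the projection and distortion relations of Eq. (1) can be sketched in NumPy (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

def project_ideal(P_w, K, R, t):
    """Pinhole projection of Eq. (1): s*[u, v, 1]^T = K [R | t] [x_w, y_w, z_w, 1]^T."""
    P_c = R @ np.asarray(P_w, float) + t   # world frame -> camera frame
    uvw = K @ P_c                          # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]                # ideal image point (u, v)

def undistort(up, vp, u0, v0, k, p):
    """Distorted pixel (u_p, v_p) -> corrected point, following the lower two
    rows of Eq. (1) as printed. k = (k1, k2, k3), p = (p1, p2);
    r is the pixel distance from the point to the image center (u0, v0)."""
    xd, yd = up - u0, vp - v0
    r2 = xd * xd + yd * yd
    radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    u = xd * radial + 2 * p[0] * xd * yd + p[1] * (r2 + 2 * xd * xd)
    v = yd * radial + 2 * p[1] * xd * yd + p[0] * (r2 + 2 * yd * yd)
    return u, v
```

Note that, as printed, the lower rows of Eq. (1) yield coordinates centered at $(u_0, v_0)$; with all distortion coefficients zero, the correction reduces to the centered pixel offset.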

During the measurement, the depth dimension is lost in the single-camera 2D image; therefore, this paper introduces the plane constraint of the structured light to reconstruct the 3D thread information.

3. 3D OPMs for double-mirrored structured-light vision system

The prism size, the layout parameters (i.e., the relative positions of the prism, camera, and laser), and the camera parameters (i.e., image sensor size and focal length) directly influence the measurement accessibility and accuracy. As a result, optical path analysis (OPA) is critical for the optimal design of the vision system. However, previous research is mostly limited to 2D optical path simulations [18–23], with very few 3D OPAs [24]. After the prism is placed in front of the camera, the optical paths of laser projection and vision imaging become more complex. Thus, in view of the limitations of the existing OPMs and the fact that vision system design otherwise relies only on experience, 3D OPMs are proposed and then analyzed to determine the structural parameters that ensure measurement accessibility and accuracy.

3.1 3D imaging optical path model (OPM)

Considering the symmetry of the internal thread, the vision system is arranged symmetrically from left to right along the axis of the internal thread. The image sensor is divided equally to image the thread teeth on the left (i.e., ${\boldsymbol{I}_1}$) and right (i.e., ${\boldsymbol{I}_2}$) reflective surfaces of the prism. To avoid the internal thread imaging distortion, the angle between each of the two reflective surfaces and the axis of internal thread is set to 45°. Then, taking the right image sensor and its observed internal thread in left reflective surfaces as an example, the 3D imaging OPM is established to calculate the field-of-view (FoV) parameters of the vision system on a certain object plane.

As illustrated in Fig. 2, a coordinate system is established according to the right-hand rule, where the intersection of the laser axis and the top edge of the prism is the origin $\textbf{O}$, the laser axis is the $Z$-axis, and the horizontal right axis is the $Y$-axis, respectively. To establish the 3D OPM, camera position parameters (i.e., the angle $\phi $ between the camera optical axis and the laser axis, the height h between the camera optical center ${\textbf{O}_c}$ and the plane $X = 0$, and the distance l between ${\textbf{O}_c}$ and the plane $Z = 0$), imaging parameters (i.e., the horizontal dimension ${S_x}$ and vertical dimension ${S_y}$ of the image sensor, and the focal length $f$), and laser parameters (i.e., the distance ${d_1}$ between the laser center ${\textbf{O}_e}$ and the top edge of the prism, and the emission angle $\gamma $ of the laser) are defined as the structural parameters. Then, we have ${\textbf{O}_c} = {( - h,\textrm{ }0,\textrm{ }l)^T}$, and the vertices of the right image sensor can be expressed by ${\textbf{l}_1} = {(b/2 - h,\textrm{ }{S_x}/2,\textrm{ }l + a/2)^T}$, ${\textbf{l}_2} = {( - d/2 - h,\textrm{ }{S_x}/2,\textrm{ }l - c/2)^T}$, ${\textbf{l}_3} = {( - d/2 - h,\textrm{ 0},\textrm{ }l - c/2)^T}$, and ${\textbf{l}_4} = \textrm{ }{(b/2 - h,\textrm{ }0,\textrm{ }l + a/2)^T}$, respectively, where $a = {S_y}\sin \phi + 2f\cos \phi \textrm{ }$, $b = {S_y}\cos \phi - 2f\sin \phi$, $c = {S_y}\sin \phi - 2f\cos \phi \textrm{ }$, and $d = {S_y}\cos \phi + 2f\sin \phi$. As shown in Fig. 2, the FoV of the camera without the prism is formed by the four light rays ${\textbf{l}_i}{\textbf{O}_c}\textrm{ }(i = 1,\textrm{ }2\textrm{ } \cdots \textrm{ }4)$, which can be expressed as ${\textbf{l}_1}{\textbf{O}_c} = {(b/2,\textrm{ }{S_x}/2,\textrm{ }a/2)^T}$, ${\textbf{l}_2}{\textbf{O}_c} = {( - d/2,\textrm{ }{S_x}/2,\textrm{ } - c/2)^T}$, ${\textbf{l}_3}{\textbf{O}_c} = {( - d/2,\textrm{ 0},\textrm{ } - c/2)^T}$, and ${\textbf{l}_4}{\textbf{O}_c} = \textrm{ }{(b/2,\textrm{ }0,\textrm{ }a/2)^T}$, respectively.
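The vertex and ray expressions above are straightforward to evaluate numerically; a minimal sketch (names ours; the angle is in radians):

```python
import numpy as np

def sensor_vertices(h, l, phi, Sx, Sy, f):
    """Vertices l1..l4 of the right image sensor (Section 3.1), using
    a = Sy*sin(phi) + 2f*cos(phi), b = Sy*cos(phi) - 2f*sin(phi),
    c = Sy*sin(phi) - 2f*cos(phi), d = Sy*cos(phi) + 2f*sin(phi)."""
    a = Sy * np.sin(phi) + 2 * f * np.cos(phi)
    b = Sy * np.cos(phi) - 2 * f * np.sin(phi)
    c = Sy * np.sin(phi) - 2 * f * np.cos(phi)
    d = Sy * np.cos(phi) + 2 * f * np.sin(phi)
    l1 = np.array([ b / 2 - h, Sx / 2, l + a / 2])
    l2 = np.array([-d / 2 - h, Sx / 2, l - c / 2])
    l3 = np.array([-d / 2 - h, 0.0,    l - c / 2])
    l4 = np.array([ b / 2 - h, 0.0,    l + a / 2])
    return l1, l2, l3, l4

def fov_rays(h, l, phi, Sx, Sy, f):
    """FoV rays l_i O_c: sensor vertices minus the optical center O_c = (-h, 0, l)."""
    Oc = np.array([-h, 0.0, l])
    return [v - Oc for v in sensor_vertices(h, l, phi, Sx, Sy, f)]
```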


Fig. 2. Illustration of the IOP of the vision system.


Consider a line through the point ${(x\textrm{, }y,\textrm{ }z)^T}$ with known direction vector ${(a\textrm{, }b,\textrm{ }c)^T}$, and a plane through the point ${({x_0}\textrm{, }{y_0},\textrm{ }{z_0})^T}$ with normal vector ${(m\textrm{, }n,\textrm{ }p)^T}$. Letting $F = ma + nb + pc$ with $F \ne 0$, the line-plane intersection can be written as:

$${\textbf{S}_e} = \left[ {\begin{array}{c} {{{(x(nb + pc) + a(m{x_0} + n({y_0} - y) + p({z_0} - z)))} / F}}\\ {{{(y(ma + pc) + b(n{y_0} + m({x_0} - x) + p({z_0} - z)))} / F}}\\ {{{(z(ma + nb) + c(p{z_0} + m({x_0} - x) + n({y_0} - y)))} / F}} \end{array}} \right]$$
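Numerically, Eq. (2) is simply the intersection of a parametric line with a plane; a minimal sketch (function name ours):

```python
import numpy as np

def line_plane_intersection(point, direction, plane_point, normal):
    """Intersection of the line through `point` with direction `direction`
    and the plane through `plane_point` with normal `normal` (Eq. (2)).
    F = normal . direction must be nonzero, i.e. the line must not be
    parallel to the plane."""
    point = np.asarray(point, float)
    direction = np.asarray(direction, float)
    F = np.dot(normal, direction)
    if abs(F) < 1e-12:
        raise ValueError("line is parallel to the plane (F = 0)")
    k = np.dot(normal, np.asarray(plane_point, float) - point) / F
    return point + k * direction
```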

To obtain the FoV parameters of the camera with a prism, we first calculate the intersections ${\textbf{F}_i}\textrm{ }(i = 1,\textrm{ }2\textrm{ } \cdots \textrm{ }4)$ of the four rays ${\textbf{l}_i}{\textbf{O}_c}\textrm{ }(i = 1,\textrm{ }2\textrm{ } \cdots \textrm{ }4)$ and ${\boldsymbol{I}_1}$. For ${\boldsymbol{I}_1}$, a point on the plane and the normal vector of the plane satisfy ${({x_0},\textrm{ }{y_0},\textrm{ }{z_0})^T} = {(0,\textrm{ }0,\textrm{ }0)^T}$ and ${\textbf{n}_1} = {(0,\textrm{ }\sqrt 2 /2,\textrm{ } - \sqrt 2 /2)^T}$, respectively. After the above parameters are substituted into Eq. (2), ${\textbf{F}_i}$ can be obtained as ${\textbf{F}_1} = \frac{1}{{a - {S_x}}}{(h{S_x} - ha - lb,\textrm{ } - {S_x}l,\textrm{ } - {S_x}l)^T}$, ${\textbf{F}_2} = \frac{{ - 1}}{{c + {S_x}}}{(h{S_x} + hc + ld,\textrm{ } - {S_x}l,\textrm{ } - {S_x}l)^T}$, ${\textbf{F}_3} = {( - (hc + ld)/c,\textrm{ 0},\textrm{ }0)^T}$, and ${\textbf{F}_4} = {( - (ha + lb)/a,\textrm{ 0},\textrm{ }0)^T}$, respectively.

Given ${\textbf{n}_1}$ and the distance $d$ from $\textbf{O}$ to ${\boldsymbol{I}_1}$ (zero here, since $\textbf{O}$ lies on ${\boldsymbol{I}_1}$), the mirror matrix of ${\boldsymbol{I}_1}$ can be obtained as ${\textbf{M}_{f1}} = \left[ {\begin{array}{cc} {\textbf{I} - 2{\textbf{n}_1} \cdot {\textbf{n}_1}^\textrm{T}}&{2d{\textbf{n}_1}}\\ {{\textbf{0}^\textrm{T}}}&1 \end{array}} \right] = \left[ {\begin{array}{llll} 1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1 \end{array}} \right]$. Then the mirrored point ${\textbf{O}_{cr}}$ of ${\textbf{O}_c}$ via ${\boldsymbol{I}_1}$ can be calculated as ${\textbf{O}_{cr}} = {( - h,\textrm{ }l,\textrm{ }0)^T}$. As shown in Fig. 2, the mirrored FoV ${\Pi _1}$ is the space enclosed by the lines ${\textbf{O}_{cr}}{\textbf{F}_i} (i = 1, 2 {\ldots} 4)$ on the left side of ${\boldsymbol{I}_1}$, where ${\textbf{O}_{cr}}{\textbf{F}_1} = {(lb/({S_x} - a),\textrm{ }la/({S_x} - a),\textrm{ }{S_x}l/({S_x} - a))^T}$, ${\textbf{O}_{cr}}{\textbf{F}_2} = {( - ld/(c + {S_x}),\textrm{ } - lc/(c + {S_x})},$ ${S_x}l/(c + {S_x}))^{T}$, ${\textbf{O}_{cr}}{\textbf{F}_3} = {( - ld/c,\textrm{ } - l,\textrm{ }0)^T}$, and ${\textbf{O}_{cr}}{\textbf{F}_4} = {( - lb/a,\textrm{ } - l,\textrm{ }0)^T}$, respectively. Besides, setting the radius of the tubing internal thread as r, the object plane of the system is the plane $Y ={-} r$. To solve the visible range in the plane $Y ={-} r$ after the camera is mirrored by ${\boldsymbol{I}_1}$, let the intersections of ${\textbf{O}_{cr}}{\textbf{F}_i}$ and $Y ={-} r$ be ${\textbf{J}_i}\textrm{ }(i = 1,\textrm{ }2\textrm{ } \cdots \textrm{ }4)$. By substituting into Eq. (2), ${\textbf{J}_i}$ can be derived as ${\textbf{J}_1} = {({J_{1x}},\textrm{ }{J_{1y}},\textrm{ }{J_{1z}})^T} = {(( - bl - br - ha)/a,\textrm{ } - r,\textrm{ } - {S_x}(l + r)/a)^T}$, ${\textbf{J}_2} = {({J_{2x}},\textrm{ }{J_{2y}},\textrm{ }{J_{2z}})^T} = {(( - ld - rd - hc)/c,\textrm{ } - r,\textrm{ }{S_x}(l + r)/c)^T}$, ${\textbf{J}_3} = {({J_{3x}},\textrm{ }{J_{3y}},\textrm{ }{J_{3z}})^T} = {(( - ld - rd - hc)/c,\textrm{ } - r,\textrm{ }0)^T}$, and ${\textbf{J}_4} = {({J_{4x}},\textrm{ }{J_{4y}},\textrm{ }{J_{4z}})^T} = {(( - bl - br - ha)/a,\textrm{ } - r,\textrm{ }0)^T}$, respectively. Then, as shown in Fig. 2, these four points ${\textbf{J}_i}$ constitute an imaging FoV ${\mathbf{\Pi }_J}$ with a right-angled trapezoid shape, of which the height ${\textbf{J}_3}{\textbf{J}_4}$ is ${H_a} = |{{J_{3x}} - {J_{4x}}} |= |{(l + r)(d/c + b/a)} |$, the length of ${\textbf{J}_2}{\textbf{J}_3}$ is ${L_1} = |{{J_{3z}} - {J_{2z}}} |= |{{S_x}(l + r)/c} |$, and the length of ${\textbf{J}_1}{\textbf{J}_4}$ is ${L_2} = |{{J_{1z}} - {J_{4z}}} |= |{{S_x}(l + r)/a} |$.
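The reflection step and the trapezoid dimensions above can be sketched numerically; this is a sketch under our reading that ${\boldsymbol{I}_1}$ is the plane $Y = Z$ through the origin, with unit normal $(0, \sqrt{2}/2, -\sqrt{2}/2)^T$ (names ours):

```python
import numpy as np

def mirror_matrix(n, d=0.0):
    """Homogeneous reflection across the plane with unit normal n at
    distance d from the origin: [[I - 2*n*n^T, 2*d*n], [0^T, 1]]."""
    n = np.asarray(n, float)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)
    M[:3, 3] = 2.0 * d * n
    return M

def fov_trapezoid(h, l, phi, Sx, Sy, f, r):
    """Height Ha and parallel side lengths L1, L2 of the mirrored FoV
    Pi_J on the object plane Y = -r (Section 3.1)."""
    a = Sy * np.sin(phi) + 2 * f * np.cos(phi)
    b = Sy * np.cos(phi) - 2 * f * np.sin(phi)
    c = Sy * np.sin(phi) - 2 * f * np.cos(phi)
    d = Sy * np.cos(phi) + 2 * f * np.sin(phi)
    Ha = abs((l + r) * (d / c + b / a))
    L1 = abs(Sx * (l + r) / c)
    L2 = abs(Sx * (l + r) / a)
    return Ha, L1, L2
```

Mirroring the optical center $(-h, 0, l)$ with this matrix swaps the $Y$ and $Z$ components, reproducing ${\textbf{O}_{cr}} = (-h, l, 0)^T$.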

3.2 Combined OPA of laser projection and vision imaging

In this study, the structured light is used to reconstruct the 3D tooth profiles of the tubing internal thread. To ensure the measurement accessibility, the complete laser strip should be imaged by a camera via the prism, and the following requirements have to be fulfilled: ① The laser strip projected at the object plane must be contained by the mirrored FoV ${\mathbf{\Pi }_J}$. ② The prism size should be optimized to ensure that the measurement system can probe into the tubing coupling for measurement. ③ On the basis of satisfying ① and ②, the IOP to the laser strip is not occluded by the end face of tubing coupling. Therefore, a combined analysis is performed on the optical paths of laser projection and camera imaging to provide a basis for determining the system’s structural parameters.

Firstly, we analyze the light path of the laser projection. Given ${d_1}$ and $\gamma $, we obtain ${\textbf{O}_e}\textrm{ = }{(0,\textrm{ }0,\textrm{ }{d_1})^T}$, and the projection angle of the incident structured light (ISL) on ${\boldsymbol{I}_1}$ is $\gamma /2$. As illustrated in Fig. 3, we denote the intersections of the incident structured-light plane (ISLP) and ${\boldsymbol{I}_1}$ as $[{\textbf{F}_5},\textrm{ }{\textbf{F}_6}]$. Here, ${\textbf{F}_5} = {(0,\textrm{ }0,\textrm{ }0)^T}$ and ${\textbf{F}_6} = {(0,\textrm{ } - {d_1}\tan (\gamma /2)/[1 - \tan (\gamma /2)]}$, ${d_1}\tan (\gamma /2)/[\tan (\gamma /2) - 1])^{T}$. Given the mirror matrix of ${\boldsymbol{I}_1}$, the mirrored point of ${\textbf{O}_e}$ with respect to ${\boldsymbol{I}_1}$ can be obtained as ${\textbf{O}_{er}} = {(0,\textrm{ }{d_1},\textrm{ 0})^T}$. Then, we obtain the mirrored structured-light plane (i.e., the reflected structured-light plane (RSLP)) enclosed by the lines ${\textbf{O}_{er}}{\textbf{F}_i}\textrm{ }(i = 5,\textrm{ }6)$ on the left side of ${\boldsymbol{I}_1}$. Letting the endpoints of the intersection line between the RSLP and the plane $Y ={-} r$ be ${\textbf{J}_5}$ and ${\textbf{J}_6}$, according to Eq. (2) we obtain ${\textbf{J}_5} = {({J_{5x}},\textrm{ }{J_{5y}},\textrm{ }{J_{5z}})^T} = {(0,\textrm{ } - r,\textrm{ }0)^T}$ and ${\textbf{J}_6} = {({J_{6x}},\textrm{ }{J_{6y}},\textrm{ }{J_{6z}})^T} = {(0,\textrm{ } - r,\textrm{ } - (r + {d_1})\tan (\gamma /2))^T}$, respectively.


Fig. 3. Illustration of the light path of the laser projection


Then, to ensure the measurement accessibility of the vision system, the FoV ${\mathbf{\Pi }_J}$ must contain the laser strip ${\textbf{J}_5}{\textbf{J}_6}$ projected on the plane $Y ={-} r$. Thus, the RSLP and ${\mathbf{\Pi }_J}$ are analyzed together to obtain the constraints that the structural parameters should satisfy. According to the light path of the laser projection, ${\textbf{J}_5}{\textbf{J}_6}$ lies on the plane $X = 0$, with the endpoint ${\textbf{J}_5}$ on ${\textbf{J}_3}{\textbf{J}_4}$, and is parallel to ${\textbf{J}_1}{\textbf{J}_4}$ and ${\textbf{J}_2}{\textbf{J}_3}$. Since the length difference between ${\textbf{J}_1}{\textbf{J}_4}$ and ${\textbf{J}_2}{\textbf{J}_3}$ is small, the length of the shorter line ${\textbf{J}_1}{\textbf{J}_4}$ is chosen to constrain the length of ${\textbf{J}_5}{\textbf{J}_6}$ (i.e., $|{\textbf{J}_1}{\textbf{J}_4}|\ge |{\textbf{J}_5}{\textbf{J}_6}|$). Additionally, we denote the $Z$-component difference between ${\textbf{J}_6}$ and ${\textbf{J}_1}$ as ${d_l} = {J_{6z}} - {J_{1z}}$. Besides, to constrain the height position of ${\textbf{J}_5}{\textbf{J}_6}$ within ${\mathbf{\Pi }_J}$, the constraint relationship between ${\textbf{J}_1}{\textbf{J}_4}$, ${\textbf{J}_2}{\textbf{J}_3}$, and ${\textbf{J}_5}{\textbf{J}_6}$ in the $X$-axis direction is established, which can be expressed as ${U_r} = {J_{4x}} - {J_{6x}}$ and ${D_w} = {J_{6x}} - {J_{3x}}$. As mentioned above, to ensure that ${\mathbf{\Pi }_J}$ contains ${\textbf{J}_5}{\textbf{J}_6}$, the structural parameters (i.e., h, l, $\phi $, ${d_1}$, $\gamma $, ${S_x}$, ${S_y}$, and $f$) should satisfy ${d_l} > 0$, ${U_r} < 0$, ${D_w} < 0$, namely $\left\{ \begin{array}{l} {J_{4x}} \le {J_{5x}} \le {J_{3x}}\\ {J_{2z}} \le {J_{6z}} \le {J_{5z}} \le {J_{3z}} \end{array} \right.$.
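These containment margins can be checked directly from the closed-form points derived above; a sketch using the sign conventions of the text (function name ours; angles in radians):

```python
import numpy as np

def laser_in_fov(h, l, phi, d1, gamma, Sx, Sy, f, r):
    """Margins d_l, U_r, D_w of Section 3.2, requirement (1): the mirrored
    FoV Pi_J contains the laser strip J5J6 when d_l > 0, U_r < 0, D_w < 0
    (sign conventions as in the text)."""
    a = Sy * np.sin(phi) + 2 * f * np.cos(phi)
    b = Sy * np.cos(phi) - 2 * f * np.sin(phi)
    c = Sy * np.sin(phi) - 2 * f * np.cos(phi)
    d = Sy * np.cos(phi) + 2 * f * np.sin(phi)
    t = np.tan(gamma / 2)
    J1 = np.array([(-b * l - b * r - h * a) / a, -r, -Sx * (l + r) / a])
    J3 = np.array([(-l * d - r * d - h * c) / c, -r, 0.0])
    J4 = np.array([(-b * l - b * r - h * a) / a, -r, 0.0])
    J6 = np.array([0.0, -r, -(r + d1) * t])
    d_l = J6[2] - J1[2]   # Z margin: the strip end J6 stays inside the FoV
    U_r = J4[0] - J6[0]   # X margins relative to the trapezoid edges
    D_w = J6[0] - J3[0]
    return d_l, U_r, D_w
```

With the nominal parameters of Section 3.3 (h = 50 mm, l = 140 mm, ϕ = 20°, d1 = 90 mm, γ = 30°, Sx = 4.8 mm, Sy = 3.6 mm, f = 12 mm, r = 60 mm), all three constraints are satisfied.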

Secondly, to ensure that the system can probe into the tubing coupling for measurement, the effective region occupied by the laser stripe on ${\boldsymbol{I}_1}$ should be calculated to minimize the prism size. It is known from the OPA that the effective region is enclosed by ${\textbf{F}_5}$, ${\textbf{F}_6}$, ${\textbf{F}_{c1}}$, and ${\textbf{F}_{c2}}$. ${\textbf{F}_{c1}}$ and ${\textbf{F}_{c2}}$ are defined as the intersections of ${\textbf{O}_c}{\textbf{J}_{5r}}$ and ${\textbf{O}_c}{\textbf{J}_{6r}}$ with ${\boldsymbol{I}_1}$, which can be calculated as ${\textbf{F}_{c1}} = {({F_{c1x}},\textrm{ }{F_{c1y}},\textrm{ }{F_{c1z}})^T} = {( - hr/(l + r),\textrm{ 0},\textrm{ }0)^T}$ and ${\textbf{F}_{c2}} = {({F_{c2x}},\textrm{ }{F_{c2y}},\textrm{ }{F_{c2z}})^T} = {(\frac{{h[{\tan (\gamma /2)({d_1} + r) - r} ]}}{{l + r - \tan (\gamma /2)({d_1} + r)}},\textrm{ }\frac{{ - \tan (\gamma /2)({d_1} + r)l}}{{l + r - \tan (\gamma /2)({d_1} + r)}},\textrm{ }\frac{{ - \tan (\gamma /2)({d_1} + r)l}}{{l + r - \tan (\gamma /2)({d_1} + r)}})^T}$, respectively. As shown in Fig. 3, after obtaining such a region, the prism is designed to ensure accessibility and ease of processing. The size of the prism (i.e., ${H_1}$, ${H_2}$, ${L_3}$, and ${L_4}$) can be obtained as ${H_1} = |{hr/(l + r)} |$, ${H_2} = \left|{\frac{{h[{r - \tan (\gamma /2)({d_1} + r)} ]}}{{l + r - \tan (\gamma /2)({d_1} + r)}}} \right|$, ${L_3} = \left|{\sqrt 2 (\frac{{\tan (\gamma /2)({d_1} + r)l}}{{l + r - \tan (\gamma /2)({d_1} + r)}})} \right|$, and ${L_4} = \left|{\sqrt 2 (\frac{{{d_1}\tan (\gamma /2)}}{{1 - \tan (\gamma /2)}})} \right|$, respectively. To ensure that the prism can enter the tubing coupling, ${F_{c2y}}^2 + {F_{c2x}}^2 \le {r^2}$.
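A numerical sketch of these prism dimensions (we read the d in the ${L_4}$ expression as ${d_1}$ and include the factor h in ${H_2}$ for dimensional consistency; names ours, angles in radians):

```python
import numpy as np

def prism_size(h, l, d1, gamma, r):
    """Minimal prism dimensions H1, H2, L3, L4 derived from the effective
    laser-stripe region on the reflective surface (Section 3.2)."""
    t = np.tan(gamma / 2)
    H1 = abs(h * r / (l + r))
    H2 = abs(h * (r - t * (d1 + r)) / (l + r - t * (d1 + r)))
    L3 = abs(np.sqrt(2) * t * (d1 + r) * l / (l + r - t * (d1 + r)))
    L4 = abs(np.sqrt(2) * d1 * t / (1 - t))
    return H1, H2, L3, L4
```

For example, with h = 50 mm, l = 140 mm, and r = 60 mm, ${H_1} = 50 \cdot 60 / 200 = 15$ mm.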

Thirdly, set the feed depth of the prism with respect to the end face of the tubing coupling as ${d_z}$. Besides, we define the mirrored point of ${\textbf{J}_6}$ via ${\boldsymbol{I}_1}$ as ${\textbf{J}_{6r}}$. To avoid the IOP to the laser strip being occluded by the tubing coupling, it is necessary to analyze the relative position of the intersection of ${\textbf{O}_c}{\textbf{J}_{6r}}$ with the plane $Z = {d_z}$ and a circle of radius r. Given the mirror matrix of ${\boldsymbol{I}_1}$, ${\textbf{J}_{6r}} = {(0,\textrm{ } - \tan (\gamma /2)({d_1} + r),\textrm{ } - r)^T}$. By substituting the above parameters into Eq. (2), the intersection of ${\textbf{O}_c}{\textbf{J}_{6r}}$ with the plane $Z = {d_z}$ can be obtained as ${\textbf{F}_{I2}}\textrm{ = }{({F_{I2x}},\textrm{ }{F_{I2y}},\textrm{ }{F_{I2z}})^T} = {( - h({d_z} + \textrm{ }r)/(l + \textrm{ }r),\textrm{ }\tan (\gamma /2)({d_1} + r)({d_z} - l)/(l + r),\textrm{ }{d_z})^T}$. Thus, to keep the IOP unobstructed, the structural parameters should satisfy ${F_m} = \sqrt {{F_{I2x}}^2 + {F_{I2y}}^2} \le r$.
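This occlusion condition is easy to evaluate; a sketch (names ours, angles in radians) that returns the flag together with ${F_m}$ so the margin $r - {F_m}$ can be inspected:

```python
import numpy as np

def imaging_unobstructed(h, l, d1, gamma, r, dz):
    """Occlusion check of Section 3.2: the intersection F_I2 of the ray
    Oc-J6r with the end-face plane Z = dz must fall inside the coupling
    bore of radius r, i.e. F_m = |(F_I2x, F_I2y)| <= r."""
    t = np.tan(gamma / 2)
    F_x = -h * (dz + r) / (l + r)
    F_y = t * (d1 + r) * (dz - l) / (l + r)
    F_m = np.hypot(F_x, F_y)
    return F_m <= r, F_m
```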

3.3 Optical path simulation

In this section, the effects of the structural parameters (i.e., h, l, $\phi $, ${d_1}$, $\gamma $, ${S_x}$, ${S_y}$, and $f$) and the measurement requirements (i.e., r and ${d_z}$) on the imaging parameters are analyzed. The constant values of the structural parameters are set to h = 50 mm, l = 140 mm, ϕ = 20°, d1 = 90 mm, γ = 30°, Sx = 4.8 mm, Sy = 3.6 mm, f = 12 mm, r = 60 mm, and dz = 20 mm, respectively. The variation ranges of the structural parameters are h = [35 mm, 45 mm], l = [120 mm, 180 mm], ϕ = [15°, 25°], d1 = [70 mm, 110 mm], γ = [10°, 60°], Sx = [2 mm, 16 mm], Sy = [2 mm, 16 mm], f = [8 mm, 15 mm], r = [35 mm, 60 mm], and dz = [10 mm, 30 mm], respectively. The simulation results of the 3D optical path of the double-mirrored structured-light vision system with several parameter configurations are given in Fig. 4, where the red area, the yellow area, the gray area, the light purple area, and the dark purple area are the non-mirrored FoV, ${\mathbf{\Pi }_1}$, ${\mathbf{\Pi }_J}$, the prism, and the effective region of the prism, respectively. Besides, the green plane is the structured-light plane. As can be seen from Fig. 4, the imaging parameters change continually with the structural parameters.
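The kind of sensitivity sweep behind Fig. 5 can be sketched for one imaging parameter, e.g. the FoV height ${H_a}$ versus the focal length f, using the nominal values above and the closed form from Section 3.1 (a sketch; names ours):

```python
import numpy as np

# Nominal structural parameters of Section 3.3 (angles in radians)
base = dict(h=50.0, l=140.0, phi=np.radians(20), Sx=4.8, Sy=3.6, f=12.0, r=60.0)

def fov_height(h, l, phi, Sx, Sy, f, r):
    """Trapezoid height Ha = |(l + r)(d/c + b/a)| of the mirrored FoV."""
    a = Sy * np.sin(phi) + 2 * f * np.cos(phi)
    b = Sy * np.cos(phi) - 2 * f * np.sin(phi)
    c = Sy * np.sin(phi) - 2 * f * np.cos(phi)
    d = Sy * np.cos(phi) + 2 * f * np.sin(phi)
    return abs((l + r) * (d / c + b / a))

# Sweep f over its stated range [8 mm, 15 mm]; the local slope along the
# sweep plays the role of the curve slopes read off in Fig. 5
fs = np.linspace(8.0, 15.0, 50)
Ha = np.array([fov_height(**{**base, "f": f}) for f in fs])
slope = np.gradient(Ha, fs)
print(f"Ha(f = 12 mm) = {fov_height(**base):.1f} mm, mean dHa/df = {slope.mean():.2f}")
```

The same pattern applies to any other structural parameter and imaging parameter pair in Table 1.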


Fig. 4. Simulation results of the 3D optical path of the double-mirrored structured-light vision. (a) h = 50 mm, l = 140 mm, ϕ = 15°, d1 = 80 mm, γ = 30°, Sx = 4.8 mm, Sy = 3.6 mm, f = 12 mm, r = 75 mm, and dz = 20 mm. (b) h = 60 mm, l = 130 mm, ϕ = 20°, d1 = 80 mm, γ = 30°, Sx = 4.8 mm, Sy = 3.6 mm, f = 12 mm, r = 60 mm, and dz = 20 mm. (c) h = 50 mm, l = 140 mm, ϕ = 20°, d1 = 100 mm, γ = 20°, Sx = 4.8 mm, Sy = 3.6 mm, f = 12 mm, r = 60 mm, and dz = 20 mm.


The curves illustrated in Figs. 5(a)–5(j) are plotted after evaluating the effect of each structural parameter on the imaging parameters of the vision system. The slope of each curve reflects the sensitivity of the imaging parameters to that structural parameter. Table 1 summarizes the analysis findings.


Fig. 5. Changes in imaging parameters with respect to each structural parameter. (a) h. (b) l. (c) ϕ. (d) d1. (e) γ. (f) f. (g) Sx. (h) Sy. (i) r. (j) dz.


Table 1. Sensitivity of the imaging parameters to the structural parameters.

In Table 1, the red-highlighted imaging parameters are not affected by the structural parameters. In conclusion, the impacts of the structural parameters on the size of the mirrored FoV are ranked as f, ${S_y}$, r, $\phi $, ${S_x}$, l, ${d_1}$, and $\gamma $. Additionally, the influence of the structural parameters on the prism size is ranked as h, l, $\phi $, ${d_1}$, $\gamma $, f, ${S_x}$, ${S_y}$, and r. Besides, without considering the tubing coupling, the parameters that affect the ability of the vision system to observe the complete laser strip are ranked as h, l, $\phi $, ${d_1}$, $\gamma $, f, ${S_x}$, ${S_y}$, and r. On this basis, when the tubing coupling is considered, the influence of the relevant structural parameters on the line-of-sight accessibility limited by the end face of the tubing coupling is ranked as h, l, r, $\gamma $, ${d_1}$, and ${d_z}$. Through the above analysis, the system's structural parameters can be optimized according to the actual measurement requirements based on the 3D OPMs and the sensitivity analysis results.

4. Modeling and calibration of the proposed vision system

In this section, a 3D measurement model for the vision system based on double-mirrored structured light is established. On this basis, a three-step method is proposed to calibrate the parameters of the measurement model.

4.1 3D measurement model

As shown in Fig. 6, the double-mirrored structured-light vision system is built using a camera, a laser, and a reflective prism. The three parameters (i.e., taper, pitch, and tooth height) of the internal thread can be measured from the image of the laser profile in the prism. However, the laser profile is in 2D pixels, whereas the three parameters are measured in physical units (mm or °). Therefore, to realize the inversion from the 2D image to the geometric parameters, an accurate measurement model should be established using the camera model, the structured-light plane, and the mirror reflection model.

Fig. 6. Schematic diagram of the measurement model.

Denote the RSLP as $\boldsymbol{I}$; each point on the plane satisfies:

$$A{x_w} + B{y_w} + C{z_w} + D = 0$$
where $(A,\textrm{ }B,\textrm{ }C)$ describes the normal vector of the RSLP, ${({x_w},\textrm{ }{y_w},\textrm{ }{z_w})^T}$ is the coordinate of a point on the laser plane, and D denotes the distance from the origin of the WCS to $\boldsymbol{I}$. The vector expressions of $\boldsymbol{I}$ and the light ray ${\boldsymbol{L}_C}$ can be defined as:
$$\left\{ {\begin{array}{l} {\boldsymbol{I} = \{{\textbf{i}|{\textbf{n}_I}^T \cdot (\textbf{i} - {\textbf{i}_q}) = 0} \}}\\ {{\boldsymbol{L}_C} = \{{\textbf{i}|\textbf{i} = {\textbf{i}_L} + k\textbf{v},k \in R} \}} \end{array}} \right.$$
where ${\textbf{n}_I}$ represents the normal vector of $\boldsymbol{I}$, ${\textbf{i}_q}$ stands for a point on $\boldsymbol{I}$, $\textbf{v}$ is the direction vector of ${\boldsymbol{L}_C}$, ${\textbf{i}_L}$ denotes a point on ${\boldsymbol{L}_C}$, and $k$ is a real number. Then, the intersection $\textbf{i} = {\textbf{i}_L} + k\textbf{v}$ of ${\boldsymbol{L}_C}$ and $\boldsymbol{I}$ satisfies:
$${\textbf{n}_I}^T \cdot ({\textbf{i}_L} + k\textbf{v} - {\textbf{i}_q}) = 0$$

For $\boldsymbol{I}$, let ${\textbf{i}_q} = {(0,\textrm{ }0,\textrm{ } - \frac{D}{C})^T}$ and ${\textbf{n}_I} = {(A,\textrm{ }B,\textrm{ }C)^T}$. Let the inverse projection of the image point $\textbf{p}$ onto the surface of the calibration plate, where the WCS is established, be ${\textbf{i}_L} = {({i_{Lx}},\textrm{ }{i_{Ly}},\textrm{ 0})^T}$. Since the ray ${\boldsymbol{L}_C}$ passes through ${\textbf{i}_L}$ and ${\textbf{O}_c} = {({O_{cx}},\textrm{ }{O_{cy}},\textrm{ }{O_{cz}})^T}$, the intersection of $\boldsymbol{I}$ and ${\boldsymbol{L}_C}$ uniquely determines the 3D point $\textbf{P}$ corresponding to $\textbf{p}$. Substituting the above parameters into Eq. (5), $\textbf{P} = {({P_x},\textrm{ }{P_y},\textrm{ }{P_\textrm{z}})^T}$ can be calculated as:

$$\left[ {\begin{array}{c} {{P_x}}\\ {{P_y}}\\ {{P_z}} \end{array}} \right] = \underbrace{{(1 + \frac{{({\textbf{i}_q} - {\textbf{O}_c}) \cdot {\textbf{n}_I}^T}}{{{\textbf{n}_I}^T \cdot \textbf{v}}})}}_{{{P_I}}} \cdot \left[ {\begin{array}{c} {{i_{Lx}}}\\ {{i_{Ly}}}\\ 0 \end{array}} \right] - \frac{{({\textbf{i}_q} - {\textbf{O}_c}) \cdot {\textbf{n}_I}^T}}{{{\textbf{n}_I}^T \cdot \textbf{v}}}\left[ {\begin{array}{c} {{O_{cx}}}\\ {{O_{cy}}}\\ {{O_{cz}}} \end{array}} \right]$$
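The point computation in Eq. (5) reduces to a standard ray-plane intersection. The following minimal sketch implements it in plain Python; the plane and ray values are illustrative, not calibrated ones.

```python
# Ray-plane intersection behind Eq. (5): the back-projected ray through
# i_L with direction v = i_L - O_c is intersected with the laser plane I
# (normal n_I, passing through i_q). All numeric values are illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_plane_intersection(i_L, O_c, n_I, i_q):
    """Intersect i = i_L + k*v with the plane n_I . (i - i_q) = 0."""
    v = [l - o for l, o in zip(i_L, O_c)]        # ray direction
    denom = dot(n_I, v)
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    k = dot(n_I, [q - l for q, l in zip(i_q, i_L)]) / denom
    return [l + k * c for l, c in zip(i_L, v)]

# Plane z = 5 (n_I = (0, 0, 1), i_q = (0, 0, 5)); ray from the origin
# through the point (1, 1, 1).
P = ray_plane_intersection([1.0, 1.0, 1.0], [0.0, 0.0, 0.0],
                           [0.0, 0.0, 1.0], [0.0, 0.0, 5.0])
# P = [5.0, 5.0, 5.0], which indeed lies on the plane z = 5
```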

According to Eq. (1), ${\textbf{O}_c}$ can be calculated as:

$$\textbf{0} = \textbf{R} \cdot {\textbf{O}_c} + \textbf{t}$$
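Since $\textbf{R}$ is a rotation matrix, Eq. (6) gives the camera centre directly as ${\textbf{O}_c} = -{\textbf{R}^{-1}}\textbf{t} = -{\textbf{R}^T}\textbf{t}$. A minimal sketch with illustrative (not calibrated) extrinsics:

```python
# Eq. (6): 0 = R*O_c + t, so O_c = -R^{-1} t = -R^T t for a rotation.
# R and t below are illustrative values, not the calibrated extrinsics.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def camera_centre(R, t):
    """Solve 0 = R @ O_c + t for O_c using R^{-1} = R^T."""
    return [-x for x in matvec(transpose(R), t)]

R = [[0.0, -1.0, 0.0],   # rotation by 90 degrees about the z axis
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [1.0, 2.0, 3.0]
O_c = camera_centre(R, t)          # [-2.0, 1.0, -3.0]
# Check that the solution satisfies Eq. (6): R*O_c + t = 0
residual = [a + b for a, b in zip(matvec(R, O_c), t)]
```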

After the homography transformation of the camera model (Eq. (1)), the following equation can be obtained:

$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \textbf{K} \cdot \left[ {\begin{array}{cccc} {{\textbf{R}_1}}&{{\textbf{R}_2}}&0&\textbf{t}\\ 0&0&0&1 \end{array}} \right] \cdot \left[ {\begin{array}{c} {{i_{Lx}}}\\ {{i_{Ly}}}\\ 0\\ 1 \end{array}} \right] = \textbf{K} \cdot \textbf{H} \cdot \left[ {\begin{array}{c} {{i_{Lx}}}\\ {{i_{Ly}}}\\ 0\\ 1 \end{array}} \right]$$
where $\textbf{H}$ describes the homography matrix of $\textbf{M}$.
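For points on the calibration-plate plane (${z_w} = 0$), Eq. (7) thus reduces to a 3 × 3 homography $\textbf{K} \cdot [{\textbf{R}_1}\textrm{ }{\textbf{R}_2}\textrm{ }\textbf{t}]$. The sketch below applies it with illustrative intrinsic and extrinsic values (not the calibrated ones):

```python
# Homography projection of Eq. (7): a plane point (x, y) with z_w = 0
# maps to pixels via s*[u, v, 1]^T = K*[R1 R2 t]*[x, y, 1]^T.
# K, R and t below are illustrative stand-ins for the calibrated values.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def project_planar(K, R, t, xy):
    """Map a point on the z_w = 0 plane to pixel coordinates (u, v)."""
    H = [[R[i][0], R[i][1], t[i]] for i in range(3)]   # [R1 R2 t]
    s_uvw = matvec(matmul(K, H), [xy[0], xy[1], 1.0])
    return [s_uvw[0] / s_uvw[2], s_uvw[1] / s_uvw[2]]  # divide out s

K = [[1200.0, 0.0, 640.0],    # fx, skew, u0
     [0.0, 1200.0, 480.0],    # fy, v0
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity pose
t = [0.0, 0.0, 150.0]         # plate 150 mm in front of the camera
u, v = project_planar(K, R, t, [10.0, -5.0])
# u = 640 + 1200*10/150 = 720, v = 480 - 1200*5/150 = 440
```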

As shown in Fig. 6, the internal thread observed through the prism by the camera is a mirrored image. Thus, the tooth profiles on the left and right sides of the longitudinal section are imaged collinearly on the sensor, making it impossible to measure the taper of the internal thread. To address this problem, a mirror matrix is applied to convert the virtual point $\textbf{P} = {({P_x},\textrm{ }{P_y},\textrm{ }{P_z})^T}$ in the prism to the real point $\textbf{P}^{\prime} = {({P^{\prime}_x},\textrm{ }{P^{\prime}_y},\textrm{ }{P^{\prime}_z})^T}$ on the internal thread. The relation between the two points can be expressed as:

$$\left[ {\begin{array}{c} {{P_x}}\\ {{P_y}}\\ {{P_z}}\\ 1 \end{array}} \right]\textrm{ = }\left[ {\begin{array}{cc} {\textbf{I} - 2\textbf{n} \cdot {\textbf{n}^\textrm{T}}}&{2d\textbf{n}}\\ {{\textbf{0}^\textrm{T}}}&1 \end{array}} \right] \cdot \left[ {\begin{array}{c} {{{P^{\prime}}_x}}\\ {{{P^{\prime}}_y}}\\ {{{P^{\prime}}_z}}\\ 1 \end{array}} \right]$$
where $\textbf{n}$ is the unit normal vector of the reflective surface of the prism and d represents the signed distance from the origin of the WCS to the reflective surface; d is negative when $\textbf{n}$ faces the origin, and positive otherwise. Based on the above analysis, the measurement model for the vision system can be deduced as:
$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \frac{{\textbf{K} \cdot \textbf{H}}}{{{P_I}}} \cdot \left\{ {\left[ {\begin{array}{cc} {\textbf{I} - 2\textbf{n} \cdot {\textbf{n}^\textrm{T}}}&{2d\textbf{n}}\\ {{\textbf{0}^\textrm{T}}}&1 \end{array}} \right] \cdot \left[ {\begin{array}{c} {{{P^{\prime}}_x}}\\ {{{P^{\prime}}_y}}\\ {{{P^{\prime}}_z}}\\ 1 \end{array}} \right] + \frac{{({\textbf{i}_q} - {\textbf{O}_c}) \cdot {\textbf{n}_I}^T}}{{{\textbf{n}_I}^T \cdot \textbf{v}}} \cdot \left[ {\begin{array}{c} {{O_{cx}}}\\ {{O_{cy}}}\\ {{O_{cz}}} \end{array}} \right]} \right\}$$
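The reflection in Eq. (8) is a Householder transform written in homogeneous coordinates. A minimal sketch, assuming an illustrative mirror plane x = 2 (i.e., $\textbf{n}$ = (1, 0, 0), d = 2):

```python
# Mirror transform of Eq. (8): the 4x4 matrix [[I - 2*n*n^T, 2*d*n],
# [0^T, 1]] maps a point to its image in the plane with unit normal n
# at signed distance d from the origin. Values here are illustrative.

def mirror_matrix(n, d):
    """Homogeneous reflection about the plane n . x = d (n a unit vector)."""
    M = [[(1.0 if i == j else 0.0) - 2.0 * n[i] * n[j] for j in range(3)]
         + [2.0 * d * n[i]] for i in range(3)]
    M.append([0.0, 0.0, 0.0, 1.0])
    return M

def apply(M, p):
    ph = p + [1.0]                                  # homogeneous point
    return [sum(m * x for m, x in zip(row, ph)) for row in M][:3]

# Reflective plane x = 2: n = (1, 0, 0), d = 2.
M = mirror_matrix([1.0, 0.0, 0.0], 2.0)
P = apply(M, [0.5, 1.0, 3.0])      # image of the point: [3.5, 1.0, 3.0]
# Reflecting twice recovers the original point (the matrix is involutive).
P_back = apply(M, P)
```

The involution check is a convenient sanity test after calibrating $\textbf{n}$ and d: applying the calibrated mirror matrix twice must return every point to itself.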

4.2 Three-step method for calibrating measurement model parameters

The measurement model for the double-mirrored structured-light vision system involves many parameters and is therefore complex to calibrate directly. As illustrated in Fig. 7, a three-step method is proposed to sequentially calibrate the camera model, the laser plane, and the mirror reflection model.

  • a) Camera model calibration

    To determine the parameters of the camera model, a high-accuracy checkerboard pattern is taken as the target, and 15 images of the target with different poses are collected by the vision system without the prism. Then, according to the prior distance constraint between the checkerboard corners, the intrinsic and extrinsic matrices of the camera model, as well as the camera distortion parameters, are solved using Zhang's calibration method [25].

  • b) Structured-light calibration

    To determine the pose relationship between the laser plane and the camera frame, on the basis of a), the parameters of the structured-light model are calibrated using the line laser and the high-accuracy checkerboard pattern. As illustrated in Fig. 7, ① the coordinate system $O - UVW$ is established after the checkerboard pattern is fixed in front of the camera, and the pose of the CCS relative to $O - UVW$ is solved using Zhang's calibration method. ② The line laser then projects a light plane onto the checkerboard pattern, and the laser strip image of $\textbf{AB}$ is captured and automatically detected using software such as the OpenCV library [26] or the GML C++ Camera Calibration Toolbox [27]. Based on the pre-calibrated camera model parameters, the 3D coordinates of all points on the laser strip can be calculated. ③ The checkerboard pattern is moved to a different position along the laser axis, and the previous two steps are repeated to determine the 3D coordinates of the points on the intersection line $\textbf{A}^{\boldsymbol{\prime}}\textbf{B}^{\boldsymbol{\prime}}$ at that position in $O - UVW$. The laser plane is then fitted to the 3D points on $\textbf{AB}$ and $\textbf{A}^{\boldsymbol{\prime}}\textbf{B}^{\boldsymbol{\prime}}$ by least squares, after which the pose relationship between the laser plane and the CCS is obtained, completing the calibration of the structured-light model.

  • c) Mirror reflection model calibration

    To calculate the pitch, tooth height, and taper of the internal thread, the parameters of the mirror reflection model need to be calibrated to convert the virtual image of the internal thread into the real 3D tooth profiles. As shown in Fig. 7, after the structured-light model is calibrated, the prism is arranged at the front of the camera, and a dot calibration target is added next to the prism so that the camera can simultaneously capture the real image of the calibration target and its virtual image in the prism. Afterwards, all dot centers of the two targets are extracted from the collected images. After the extrinsic matrices of the two targets are calculated using Zhang's calibration method, the extracted dot centers can be converted to the CCS and eventually to $O - UVW$. Then, the 3D coordinates of the matching point-pairs between the real and virtual images are utilized to calculate the normal vector of ${\boldsymbol{I}_1}$ (or ${\boldsymbol{I}_2}$) as well as a point in the plane. Finally, the distance d from ${\textbf{O}_c}$ to ${\boldsymbol{I}_1}$ (or ${\boldsymbol{I}_2}$) can be calculated to complete the calibration of the mirror reflection model.

    Through the above step-by-step calibration, all parameters in the measurement model of the double-mirrored structured-light vision system can be solved.
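The least-squares plane fit in step b) can be sketched as an ordinary normal-equations problem, assuming the strip points are expressed as z = ax + by + c in the target frame. The sample points below are hypothetical and lie exactly on the plane z = 2x − y + 3, so the fit recovers it exactly:

```python
# Least-squares plane fit used in the structured-light calibration:
# fit z = a*x + b*y + c to 3D laser-strip points via the 3x3 normal
# equations; the plane normal is then (a, b, -1) up to scale.

def fit_plane(points):
    """Return (a, b, c) minimizing sum of (a*x + b*y + c - z)^2."""
    # Accumulate A^T A and A^T z for design rows (x, y, 1).
    S = [[0.0] * 3 for _ in range(3)]
    r = [0.0, 0.0, 0.0]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            r[i] += row[i] * z
    # Solve S * [a, b, c]^T = r by Gauss-Jordan elimination.
    for i in range(3):
        p = S[i][i]
        S[i] = [v / p for v in S[i]]
        r[i] /= p
        for k in range(3):
            if k != i:
                f = S[k][i]
                S[k] = [v - f * w for v, w in zip(S[k], S[i])]
                r[k] -= f * r[i]
    return r  # (a, b, c)

# Hypothetical points on the plane z = 2x - y + 3 (stand-ins for the
# laser-strip points on AB and A'B').
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6), (2, 3, 4)]
a, b, c = fit_plane(pts)
# (a, b, c) ≈ (2, -1, 3)
```

In practice the fit would pool the points from both strip positions; with noisy data an SVD-based total least squares is the more robust choice, but the normal-equations form above shows the idea compactly.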

Fig. 7. Principle of the three-step method for calibrating parameters in the measurement model.

5. Accuracy verification and experiments

5.1 Experimental system

As demonstrated in Fig. 8, a vision system based on double-mirrored structured light is built for detecting the geometric parameters of tubing internal threads. The system mainly consists of a camera, a line laser, and a reflective prism. The camera and line laser placed at the front of the prism constitute a structured-light vision unit. Each part of the vision system is integrated by fixture tooling, in which the clamp can hold different types of tubing couplings and ensures that the axis of the tubing coupling always coincides with the laser plane. According to the API standard, the geometric parameters of tubing couplings with outer diameters of 3-1/2 in. and 2-7/8 in. (1 in. = 25.4 mm) are tested. The internal threads to be measured are shown in Fig. 9. In this paper, a monochrome camera with a resolution of 1.6 Mpixels, equipped with a 12 mm prime lens, is used to capture the images. To effectively highlight the tooth profiles of the internal thread, a 532 nm line laser with a power of 50 mW is applied, and imaging parameters with an object distance of 150 mm and a FoV of 90 mm × 60 mm are selected to ensure that only the laser strips in the prism can be observed. To make the system capable of measuring both types of internal threads, the internal space of the small tubing coupling is used to limit the prism size and ensure the penetrability of the proposed vision system. Additionally, the diameter of the large tubing coupling, the tooth number of the small coupling, and the feed depth are utilized to constrain the object distance, the FoV, and the prism size.

Fig. 8. Experimental setup for internal thread inspection.

Fig. 9. Two specifications of tubing couplings. (a) Tubing internal thread with outer diameter of 2-7/8 in. (b) Tubing internal thread with outer diameter of 3-1/2 in.

In this paper, the tooth number covered in one view is kept at no fewer than 11 to ensure that the three parameters of the internal thread (especially the pitch) can be measured in a single shot. Based on the above constraints, the Levenberg-Marquardt (LM) algorithm [28] is used to optimize the structural parameters of the vision system, and the results are shown in Table 2. After the 3D point cloud of the tooth shapes is obtained by image processing, the three geometric parameters of pitch, taper, and tooth height can be determined according to the API standard.

Table 2. Optimized structural parameters of the vision system.

5.2 System calibration and accuracy validation

Before the measurement, a high-accuracy checkerboard pattern with a 12 × 9 corner array is first used to solve the parameters of the camera model, and the reprojection distance error is then used to verify the camera calibration accuracy. As shown in Fig. 10, each point in the figure is the deviation between the spatial inverse projection of a sub-pixel corner and the corresponding prior corner on the pattern. The maximum calibration error is 5 µm, the average is 2.5 µm, and the root mean square is 1.2 µm. Subsequently, the prism is installed, and the remaining parameters of the measurement model are calculated according to the proposed three-step calibration method. As shown in Fig. 11(a), the accuracy of the calibrated system is verified by measuring a high-accuracy ceramic step-block. The block has three steps, each 20 mm long and 50 mm wide. To rebuild the step height easily, the step surface of the block is placed parallel to the reflective surface, and the height (10 mm) between the middle and lower steps is selected as the standard for accuracy verification. After the laser stripes are extracted from the image (Fig. 11(b)), the 3D laser point cloud (Fig. 11(c)) can be reconstructed based on the system parameters. Figure 11(d) shows the difference between the reconstructed step height and the reference one. The results show that the maximum, average, and root-mean-square values of the distance error are 10 µm, 7 µm, and 2.4 µm, respectively, indicating that the vision system is accurate enough to determine the geometric parameters of the internal threads.

Fig. 10. Accuracy verification results of camera calibration.

Fig. 11. Step-block for accuracy verification of the vision system. (a) Ceramic step-block. (b) Image of laser stripes on the step-block. (c) The 3D laser point cloud.

5.3 Experiments for measuring geometric parameters of internal threads

Following the accuracy validation, measurement experiments on the tubing internal threads are carried out. In this paper, the most widely used API 5CT [29] tubing couplings with outer diameters of 3-1/2 in. and 2-7/8 in. are inspected. Figure 12 shows the real and virtual images of the laser strips. The image of the laser profiles, which represent the thread teeth in the longitudinal section, is shown in Fig. 13(a). The local threshold extraction algorithm [30] and the Steger algorithm [31] are used to achieve sub-pixel localization of the left and right laser profiles. Figure 13(b) shows the sub-pixel extraction results of the left laser profile. Finally, combined with the pre-calibrated 3D measurement model, the virtual (red curves in Fig. 13(c)) and real (green curves in Fig. 13(c)) tooth shapes can be reconstructed.

Fig. 12. Real image and virtual images of the laser strips.

Fig. 13. Flow chart of line laser processing. (a) Image of the internal thread with laser strips. (b) Sub-pixel extraction result of left laser profile. (c) Reconstructed real 3D laser profiles (green) and virtual 3D laser profiles (red) in the prism.

Fig. 14. Reference instrument measurement. (a) Coordinate Measurement Machine (CMM) (taper). (b) Profilometer (height and pitch).

Fig. 15. Conventional contact measurement. (a) Taper gauge. (b) Height gauge. (c) Pitch gauge.

According to the API standard: ① the pitch is defined as the distance spanned by 10 threads along the axis of the internal thread; ② the tooth height is defined as the radial distance between the tooth crest (or tooth root) of the internal thread and the corresponding root envelope (or crest envelope); ③ the taper is defined as the rate of change of the diameter of the tubing internal thread over the thread length. The standard geometric parameters and their tolerances are shown in Table 3. After the actual 3D tooth profiles on the longitudinal section of the internal thread are obtained, the tooth crests and tooth roots are extracted and linearly fitted to obtain the middle meridian, the root envelope, and the crest envelope, respectively. The three parameters of the internal threads can then be determined.
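Once the crests and envelopes are extracted, the parameter computation itself is simple. The sketch below illustrates it with hypothetical crest coordinates (an 8-round-style 3.175 mm thread spacing and a 1:16 diameter taper are assumed for the example, not taken from the measured data):

```python
# Parameter extraction after envelope fitting: pitch from the axial
# span of 10 threads (API definition above) and taper from the slope
# of diameter vs. axial position. Crest coordinates are hypothetical.

def pitch_over_10(crest_z):
    """API-style pitch: axial distance spanned by 10 threads."""
    return crest_z[10] - crest_z[0]

def taper(z, diameters):
    """Least-squares slope of diameter vs. axial position (mm/mm)."""
    n = len(z)
    zm = sum(z) / n
    dm = sum(diameters) / n
    num = sum((zi - zm) * (di - dm) for zi, di in zip(z, diameters))
    den = sum((zi - zm) ** 2 for zi in z)
    return num / den

crest_z = [i * 3.175 for i in range(12)]        # 12 crests, 3.175 mm apart
p = pitch_over_10(crest_z)                      # 31.75 mm over 10 threads
diam = [70.0 + 0.0625 * z for z in crest_z]     # diameter grows 1:16
t = taper(crest_z, diam)                        # 0.0625 mm/mm
```

The tooth height follows the same pattern: it is the point-to-line distance from each crest (or root) to the fitted opposite envelope.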

Table 3. Dimensional tolerance of internal thread parameters.

To validate the measurement accuracy and efficiency of the proposed vision system, the taper gauge, pitch gauge, and height gauge are used to perform traditional contact measurements of the three parameters under the same conditions, as shown in Fig. 15. Meanwhile, the time required to complete one tubing-coupling measurement is recorded. In addition, an NC8107 CMM with an accuracy of 2.5 µm from Leader Company and an SP2030 profilometer with an accuracy of 0.2 µm from Meider Company are used to measure the tubing internal threads, as shown in Fig. 14. The measurement results of the three approaches, shown in Table 4, indicate that all parameters of the threads are within tolerance and the products are qualified. The measurement results of the two reference systems (i.e., the CMM and the profilometer) are then used to verify the measurement accuracy of both the proposed and the traditional approaches. The results show that the average measurement errors of the proposed vision system for the taper, pitch, and tooth height are 0.4286 mm/m, 0.0048 mm, and 0.0056 mm, respectively. Compared with the corresponding errors of 3.2938 mm/m, 0.0341 mm, and 0.0304 mm for the traditional approach, the measurement accuracy is improved by an average of 6.7 times. Moreover, the average measurement time of the proposed system for the three parameters is 0.5 s, which is 120 times faster than that of the traditional method (60 s). These results demonstrate the high accuracy and efficiency of the proposed vision measurement system.

Table 4. Measurement results of thread parameters.

6. Conclusions

To address the shortcomings of low efficiency, poor accuracy, and poor accessibility of existing approaches for measuring the taper, pitch, and tooth height of tubing internal threads, a vision system based on double-mirrored structured light is proposed, which achieves integrated measurement of multiple parameters in one setup. For this system, 3D OPMs considering the image sensor size, the focal length, and the laser parameters are proposed. The imaging parameters and the effective prism size are then optimized to ensure the structural compactness and measurement accessibility of the vision system. Thereafter, to achieve high-accuracy measurement of the three parameters, modeling and calibration methods for the proposed vision system are presented. Finally, a vision system is built based on the optimized structural parameters, and the three parameters of the API-recommended tubing couplings with outer diameters of 3-1/2 in. and 2-7/8 in. are tested. The results show that the average measurement errors of the proposed vision system for the taper, pitch, and tooth height are 0.4286 mm/m, 0.0048 mm, and 0.0056 mm, respectively, and the time consumed is 0.5 s per piece. The validation results based on the reference systems confirm the detection efficiency and accuracy of the proposed vision system. At present, the system is large and not sufficiently portable. Future research will therefore improve the portability of the vision system and design a hand-held high-accuracy system for measuring the geometric parameters of different types of tubing internal threads through error analysis and compensation.

Funding

Fundamental Research Funds for the Central Universities (22CX01003A-6, 22CX06019A); National Natural Science Foundation of China (52005513).

Acknowledgments

The authors would like to acknowledge funding support from the National Natural Science Foundation of China and the Fundamental Research Funds for the Central Universities.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. “Threading, Gauging, and Inspection of Casing, Tubing, and Line Pipe Threads,” API SPEC 5B-2017.

2. A. Przyklenk, S. Schädel, and M. Stein, “Verification of a calibration method for 3D screw thread metrology,” Meas. Sci. Technol. 32(9), 094005 (2021).

3. “Determination of pitch diameter of parallel thread gauges by mechanical probing,” EURAMET/cg-10/v.01.

4. B. Krawczyk, K. Smak, P. Szablewski, and B. Gapiński, “Review of measurement methods to evaluate the geometry of different types of external threads,” in International Scientific-Technical Conference MANUFACTURING (2022), pp. 89–101.

5. Y. Liu, J. Wang, W. Li, and X. Wen, “Study for a lab-made Moiré pattern-based thread counter,” Eur. J. Phys. 42(3), 034001 (2021).

6. Q. Tong, Z. Ding, J. Chen, L. Ai, and F. Yuan, “The research of screw thread parameter measurement based on position sensitive detector and laser,” in Journal of Physics: Conference Series (2006), Vol. 48, pp. 561–565.

7. P. Serban and F. Peti, “Coordinate Measuring Machine thread position measurement analysis,” in IOP Conference Series: Materials Science and Engineering (2020), Vol. 898, p. 012018.

8. F. Zanini and S. Carmignato, “X-ray computed tomography for measurement of additively manufactured metal threaded parts,” in Proc. ASPE/euspen Summer Topical Meeting (2018), pp. 217–221.

9. Q. Tong, C. Jiao, H. Huang, G. Li, Z. Ding, and F. Yuan, “An automatic measuring method and system using laser triangulation scanning for the parameters of a screw thread,” Meas. Sci. Technol. 25(3), 035202 (2014).

10. X. Zhang, F. Gao, Y. Li, L. Hai, and L. Shui, “Research on non-contact measurement method of ball screw nut based on spectral confocal principle,” in 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI) (2017), pp. 493–498.

11. T. Wakayama and T. Yoshizawa, “Optical center alignment technique based on inner profile measurement method,” Proc. SPIE 9110, 110–115 (2014).

12. S. R. Abulkhanov and N. A. Ivliev, “Optical inspection device for the inner surface of pipe ends,” in Journal of Physics: Conference Series (2019), Vol. 1368, p. 022075.

13. C. Lin, C. Hwang, H. Fang, C. Chen, and J. R. Sze, “Real-time pitch diameter measurement of internal thread for nut using laser triangulation,” in 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) (2017), pp. 1–4.

14. A. Gunatilake, L. Piyathilaka, A. Tran, V. K. Vishwanathan, K. Thiyagarajan, and S. Kodagoda, “Stereo vision combined with laser profiling for mapping of pipeline internal defects,” IEEE Sens. J. 21(10), 11926–11934 (2021).

15. Z. Dong, X. Sun, C. Chen, and M. Sun, “A fast and on-machine measuring system using the laser displacement sensor for the contour parameters of the drill pipe thread,” Sensors 18(4), 1192 (2018).

16. C. Lin, S. Lin, C. Hwang, H. Tu, C. Chen, and C. Weng, “Real-time image-based defect inspection system of internal thread for nut,” IEEE Trans. Instrum. Meas. 68(8), 2830–2848 (2019).

17. Q. Sheng, J. Zheng, W. Shi, R. Zhao, J. Liu, and H. Li, “Measurement and modeling of reflection characteristics of hole inner surface based on endoscopic image,” Measurement 190(28), 110742 (2022).

18. D. Cheng, H. Chen, C. Yao, Q. Hou, W. Hou, L. Wei, T. Yang, and Y. Wang, “Design, stray light analysis, and fabrication of a compact head-mounted display using freeform prisms,” Opt. Express 30(20), 36931–36948 (2022).

19. A. V. Gorevoy, A. S. Machikhin, D. D. Khokhlov, and V. I. Batshev, “Optimization of prism-based stereoscopic imaging systems at the optical design stage with respect to required 3D measurement accuracy,” Opt. Express 28(17), 24418–24430 (2020).

20. X. Li, W. Li, X. Yin, X. Ma, and J. Zhao, “Camera mirror binocular vision based method for evaluating the performance of industrial robots,” IEEE Trans. Instrum. Meas. 70, 1–14 (2021).

21. W. McCord, C. Smith, and Z. Zhang, “Single-camera stereoscopic 3D multiplexed structured image capture for quantitative fuel-to-air ratio mapping,” Opt. Laser. Eng. 152, 106945 (2022).

22. L. Yu, N. Bekdullayev, and G. Lubineau, “Smartphone-based single-camera stereo-DIC system: thermal error analysis and design recommendations,” IEEE Sens. J. 21(7), 9567–9576 (2021).

23. H. Lin and C. L. Tsai, “Depth measurement based on stereo vision with integrated camera rotation,” IEEE Trans. Instrum. Meas. 70, 1–10 (2021).

24. X. Li, W. Li, X. Ma, X. Yin, X. Chen, and J. Zhao, “Spatial light path analysis and calibration of four-mirror-based monocular stereo vision,” Opt. Express 29(20), 31249–31269 (2021).

25. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

26. G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library (2008), Chap. 11.

27. V. Vezhnevets, A. Velizhev, N. Chetverikov, and A. Yakubenko, “GML C++ Camera Calibration Toolbox,” (2005), http://research.graphicon.ru/calibration/gml-c++-camera-calibration-toolbox.html.

28. A. Ranganathan, “The Levenberg-Marquardt algorithm,” Tutorial on LM algorithm 11(1), 101–110 (2004).

29. “Specification for Casing and Tubing,” API SPEC 5CT-2012.

30. S. Wang, A. Zhang, and H. Wang, “A Feature Extraction Algorithm for Enhancing Graphical Local Adaptive Threshold,” in International Conference on Intelligent Computing (Springer, 2022), pp. 277–291.

31. L. Qi, Y. Zhang, X. Zhang, S. Wang, and F. Xie, “Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm,” Opt. Express 21(11), 13442–13449 (2013).

