
Laser triangulation measurement system with Scheimpflug calibration based on the Monte Carlo optimization strategy


Abstract

We propose a linear laser triangulation measurement system using Scheimpflug calibration based on the Monte Carlo optimization strategy. A Scheimpflug inclination camera calibration model is introduced in the measurement system to improve the image definition in small-range measurements with a large depth-of-field. To address the nonlinear optimization problem between the instrument resolution and measurement range, the Monte Carlo method is adopted to determine the optimal optical parameters (scattering angle, Scheimpflug angle, and focal length) in a practical measurement system. Furthermore, we experimentally constructed the measurement system to demonstrate the measurement precision by measuring a standard step block (measurement range 15 mm). The maximum measurement error, maximum standard deviation, and linearity are ±7 μm, 0.225 μm, and 0.046%, respectively. Finally, the proposed measurement system based on the Monte Carlo optimization strategy is promising for high-precision measurements in industrial applications and provides guidance for optimizing the design parameters of ranging sensors.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical measurement technologies, including grating encoders [1–4], chromatic confocal sensors [5,6], optical frequency combs [7], and optical triangulation [8], are vital to various fields of advanced industrial production. In applications involving small ranges and high precision, laser triangulation has been widely used owing to its high stability, high speed, high measurement accuracy, and low cost [9]. Laser triangulation, which combines a laser beam and a camera, is frequently used to establish a link between two-dimensional (2D) images and three-dimensional (3D) point clouds in a ranging system [10,11]. The laser source is typically either a point laser or a line laser; in this work, we selected the line laser considering its higher measurement speed. A linear laser triangulation ranging sensor actively projects a line laser onto the measured object’s surface. Meanwhile, the scattered or reflected beam from the measured object’s surface is focused via the imaging lens group and then imaged onto a photosensitive element. Subsequently, combined with the triangulation relationship of the laser source, measured object, and imaging-receiving sensor, we can obtain the range information of the object surface based on the movement of the laser fringes [12,13]. Owing to its high-speed information acquisition, high-precision measurement, and robustness, the linear laser triangulation technique is widely applied in diverse fields, such as industrial manufacturing, aerial surveys, civil engineering, and biological medical treatment, to measure quantities such as plate thickness, mechanical vibration, vehicle volume, and surface roughness. More specific industrial applications include the detection of key dimensions of mobile phone back shells, detection of steel plate flatness, tracking of weld lines, and 3D contour reconstruction of black magnetic tiles.

With regard to industrial applications, the measurement accuracy and range are contradictory factors in the design of linear laser 3D ranging sensors. The tradeoff between the sensor resolution and measurement range is a significant challenge for researchers and engineers. Additionally, the sensor parameters include the working distance, resolution (object movement corresponding to one pixel of the image plane), scattering angle, and Scheimpflug angle. The nonlinear characteristics of these parameters directly affect the design of the sensitivity, measurement resolution, and measurement range. In previous studies, many experiments were repeatedly conducted to determine the optimal parameters, which required significant effort and was also time-consuming [14,15]. By contrast, developments in computing power have enabled effective calculation platforms for acquiring the optimal parameters under complex functional conditions [16,17]. Many optimization methods, such as genetic algorithms and damped least-squares, have already been developed for designing optical systems [18–21]. A genetic algorithm can simultaneously consider numerous object points in the domain owing to its random and adaptive qualities [18]. The damped least-squares approach describes the imaging quality of the system by constructing an evaluation function between the aberrations and structural parameters [19]. However, neither method is suitable when the convergence conditions are coupled in a complex manner and the convergence boundary conditions are not evident.

In practical applications, a nonlinear programming strategy is adopted to derive the optimal solution for several system parameters. Furthermore, the Monte Carlo method can be used to solve nonlinear optimization problems involving sensitivity and measurement range [22–24]. Using numerous random samples, multiple sets of approximate solutions that satisfy constraints such as the resolution, measurement range, and Scheimpflug angle can be obtained, and a set of optical parameters (e.g., the scattering angle, working distance, and focal length) that are the most consistent with the actual conditions can be selected. In Monte Carlo optimization, the maximum sensitivity is considered as the objective function, and a series of optical parameters (e.g., the scattering angle α, lens focal length f, and working distance h) are traversed by random numbers. The constraint conditions are determined based on the actual selection and design requirements. The algorithm is cycled 50 million times, and the parameter selection tends to converge under repeated experiments. Applying the actual constraints can ensure that the sensor design is consistent with the actual use, and the principle of optimal sensitivity design ensures the superior performance of the ranging sensor. Finally, using these optimal optical parameters, a mathematical model is established to describe the performance of the ranging sensor.

In addition to the design of the line laser ranging sensor, the calibration of the camera is another key factor for achieving accurate ranging measurements. In short-range high-precision 3D measurement applications, the Scheimpflug camera can expand the depth-of-field range to guarantee clear imaging across the entire measurement range [25–27]. In ordinary industrial cameras, the lens plane is typically parallel to the imaging plane, and the front and rear boundaries of the depth-of-field are naturally parallel to the lens plane; this causes some regions to be clearly displayed during imaging, whereas other regions appear blurred. If a Scheimpflug angle exists, the depth-of-field may increase and afford clear images of both the front and rear boundaries [28]. Zhang’s calibration method [29] for planar patterns is characterized by simple operation, low equipment cost, and high calibration accuracy, and it is widely accepted and applied in industry. However, the mapping of the pinhole camera model is not suitable for the Scheimpflug lens system, owing to the Scheimpflug angle between the lens plane and image plane, which does not exist in ordinary cameras [30,31]. Therefore, the calibration model must be modified to ensure precision measurements. Yin [32] proposed a simplified lens distortion model for a Scheimpflug imaging system based on a small tilted angle. Legarda [33] added two Scheimpflug angles to determine the homography between the planes perpendicular to the tilted image plane. Based on the aforementioned research, this study improved Zhang’s calibration method and extended the plane calibration model to the Scheimpflug calibration model by incorporating two Scheimpflug degrees of freedom. Finally, a 3D model was reconstructed, and the depth information was obtained by combining the pinhole imaging principle, a distortion model, and light-plane calibration. It is noteworthy that the Scheimpflug law alone is insufficient to realize clear imaging with a tilted camera; the Hinge rule must also be satisfied [34,35]. This study focuses on an expansion of the Scheimpflug law.

In this study, we introduced the Monte Carlo method into the nonlinear optimization parameter design of a line laser sensor. The contradiction between the instrument resolution and measurement range was effectively resolved, and a set of optimal sensor parameters based on actual constraints was obtained. Meanwhile, a simple Scheimpflug camera calibration model that renders the entire measurement plane image clear was introduced to increase the imaging depth-of-field and ranging accuracy. In the Scheimpflug camera calibration, the conversion relationship based on the two degrees of freedom in the tilted camera calibration model was clarified, and the tilt angles were optimized twice using the Levenberg-Marquardt optimization algorithm. Combined with the line laser emission and imaging lens modules as well as the algorithms of light-plane calibration and the gray centroid method, we successfully constructed a linear laser ranging system based on the Scheimpflug camera. A systematic measurement accuracy test was also performed on a sixteen-step standard stepped block. This experiment demonstrated the effectiveness of the sensor design method and the high precision of the measurement system.

The remainder of this paper is organized as follows. Section 2 describes the overall design and principle of the line laser ranging sensor system, which mainly includes the mathematical model, optimization design of the sensor parameters based on the Monte Carlo method, calibration model of the Scheimpflug camera, and calibration of the light plane. Section 3 introduces the experimental setup and results. Section 4 presents a summary of this study and highlights the application prospects of the proposed design method.

2. Overall design

Figure 1(a) illustrates the overall structure of the ranging system for the Z-direction height of an object that satisfies the conditions of laser triangulation and the Scheimpflug law. The hardware of this system includes a line laser source module (left) and an imaging lens module (right). The measurement process mainly entails line laser projection onto the object surface and 2D image acquisition of the line laser via the imaging lens module. The optical structures of the laser source, the line laser module, and the convergent cylindrical lens are shown in Fig. 1(b). In the line laser source module, the line laser is obtained by adopting a laser diode as the light source. First, the 2 mm point laser beam is irradiated on the Powell prism, which has a divergence angle of 60°, to form a diverging flat-topped line laser in the X-direction. Second, the laser passes through the collimating cylindrical mirror in the X-direction to form a parallel line laser and subsequently passes through a Y-direction cylindrical mirror to form a convergent line laser stripe. The 2D image formed on the imaging plane is depicted in Fig. 1(c).

Fig. 1. (a) Schematic of laser triangulation measurement system with Scheimpflug calibration for measuring the height of object; (b) optical structure of line laser module; and (c) imaging diagram of complementary metal-oxide semiconductor (CMOS) camera. (d) Schematic of thin convex lens model based on Scheimpflug law; three components are involved: the laser source, the convex lens, and the photosensitive device.

2.1 Mathematical model of line laser ranging sensor

The mathematical model of a line laser ranging sensor serves as the theoretical basis for ranging. In this section, in addition to the conventional thin convex lens model, we present a ranging mathematical model along both the laser length and the laser depth direction simultaneously, based on the Scheimpflug law. The Scheimpflug law describes the spatial geometric relationship among three optical planes: the focal plane of the camera, the lens plane, and the image plane. The lens plane is not parallel to the image plane; i.e., a Scheimpflug angle β exists between them. The three optical planes must intersect in a Scheimpflug line to satisfy the Scheimpflug constant-focus condition. Thus, the object-plane imaging becomes clear across the entire measurement range, and the depth-of-field of the imaging can be expanded.

Figure 1(d) shows a schematic illustration of the Scheimpflug law, including the laser source, convex lens, and photosensitive device. In Fig. 1(d), α is the angle between the laser axis AB and the lens optical axis AA’, β is the angle between the photosensitive device plane A'B’ and the lens optical axis AA’, and θ is the angle between BB’ and the lens optical axis AA’. The laser axis intersects the object at points A and B, which are imaged at points A’ and B’ on the photosensitive device, respectively. AB represents the depth-of-field, and A'B’ denotes the corresponding imaging range; a and b represent the object and image distances of point A, respectively; and C and C’ are the feet of the perpendiculars from B and B’ to AA’, respectively. The object displacement y can be derived using a mathematical model. Moreover, we present the derivation of the main system indexes in the sensor design to determine the optimal sensor parameters. The detailed derivation process is as follows:

As △OCB∼△OC'B’, the mathematical relationship can be expressed as

$$\frac{{OA + AC}}{{BC}} = \frac{{OA^{\prime} - A^{\prime}C^{\prime}}}{{B^{\prime}C^{\prime}}}$$
By substituting OA = a, OA’ = b, AB = y, A'B’ = x, AC = ycosα, BC = ysinα, B'C’ = xsinβ and A'C’ = xcosβ into Eq. (1), the relationship between the object displacement y and image displacement x can be expressed as follows:
$$x = \frac{{by\sin \alpha }}{{a\sin \beta + y\sin (\alpha + \beta )}}.$$
When y is small, the relationship between x and y can be simplified as
$$x \approx \frac{{by\sin \alpha }}{{a\sin \beta }}, $$
According to the Gaussian imaging formula and constant-focus imaging conditions, the relationship between α and β can be described as
$$a \cdot \tan \alpha = b \cdot \tan \beta$$
Equation (4) is a mathematical expression of the Scheimpflug law. Here, α and β are related to the imaging magnification, and the Scheimpflug tilt angle β must exist to satisfy the Scheimpflug law.

The Scheimpflug angle β can thus be calculated from Eq. (4):

$$\beta = \arctan (\frac{a}{b}\tan \alpha )$$
In nonlinear measuring instruments, the sensitivity η represents the sensitivity of the image movement to changes in the object movement; it is typically expressed as the rate of change between the two quantities. Based on Eq. (2), the measurement sensitivity η of the measurement system can be expressed as
$$\eta = \frac{{dx}}{{dy}} = \frac{{ab\sin \alpha \sin \beta }}{{{{[{a\sin \beta + y\sin (\alpha + \beta )} ]}^2}}}.$$
The system resolution μ is determined by the size δ of the CMOS pixels. When one pixel pitch is resolved by the system, the corresponding object displacement is the minimum resolution of the system. Substituting δ into Eq. (2), μ can be expressed as
$$\mu = \frac{{a\delta \sin \beta }}{{b\sin \alpha + \delta \sin (\alpha + \beta )}}$$
Based on Eq. (2), the measurement range S is determined by the CMOS length. Assuming that $L_{CMOS}$ represents the length of the CMOS imaging screen, based on Eqs. (2) and (3), the CMOS imaging plane can be divided into two regions, as shown in Fig. 1(d). The direction near the laser is defined as positive, whereas the direction away from the laser is defined as negative. S can then be expressed using Eq. (8) as follows:
$$S = \frac{a\frac{L_{CMOS}}{2}\sin \beta}{b\sin \alpha + \frac{L_{CMOS}}{2}\sin (\alpha + \beta)} + \frac{a\frac{L_{CMOS}}{2}\sin \beta}{b\sin \alpha - \frac{L_{CMOS}}{2}\sin (\alpha + \beta)}$$
The formulas above constitute the derivation of the primary optical parameters. The sensitivity η, system resolution μ, measurement range S, and Scheimpflug angle β are the key quantities for determining the optimal sensor parameters during sensor design.
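For reference, the following is a minimal sketch (in Python/NumPy) of the relations in Eqs. (4)-(8); the function names and the numeric values at the end are our own illustrative choices, not the paper's implementation or final design values.

```python
# A minimal sketch of the sensor-design relations in Eqs. (4)-(8).
# All numeric values below are illustrative placeholders, not the final design.
import numpy as np

def scheimpflug_angle(a, b, alpha):
    """Eq. (5): image-plane tilt angle beta for constant-focus imaging."""
    return np.arctan(a / b * np.tan(alpha))

def sensitivity(a, b, alpha, beta, y=0.0):
    """Eq. (6): eta = dx/dy at object displacement y."""
    return a * b * np.sin(alpha) * np.sin(beta) / (a * np.sin(beta) + y * np.sin(alpha + beta)) ** 2

def resolution(a, b, alpha, beta, delta):
    """Eq. (7): object displacement corresponding to one pixel pitch delta."""
    return a * delta * np.sin(beta) / (b * np.sin(alpha) + delta * np.sin(alpha + beta))

def measurement_range(a, b, alpha, beta, l_cmos):
    """Eq. (8): total range covered by the two halves of a CMOS of length l_cmos."""
    half = l_cmos / 2.0
    s_pos = a * half * np.sin(beta) / (b * np.sin(alpha) + half * np.sin(alpha + beta))
    s_neg = a * half * np.sin(beta) / (b * np.sin(alpha) - half * np.sin(alpha + beta))
    return s_pos + s_neg

# Example evaluation (units: mm and rad); a and b satisfy the Gaussian formula 1/a + 1/b = 1/f.
a, f, alpha = 90.0, 35.0, np.deg2rad(40.0)
b = 1.0 / (1.0 / f - 1.0 / a)
beta = scheimpflug_angle(a, b, alpha)
print(np.rad2deg(beta), sensitivity(a, b, alpha, beta), measurement_range(a, b, alpha, beta, 8.8))
```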

The typical schematic of a thin convex lens model based on the Scheimpflug law (as shown in Fig. 1(d)) can only reflect the ranging mathematical model along the laser emission direction, which is denoted as the height direction (CB). Furthermore, to show the sensor’s mathematical model along both the laser’s length (ME) and the height direction (CB), the measurement expressions can be derived from the 3D geometric relationship shown in Fig. 2, where ψ1, ψ2, and ψ3 represent the focused object, lens, and image planes, respectively; these intersect at the Scheimpflug line, which is indicated in brown. The trapezoid MNDE represents the maximum measurement range in the object plane, whereas the rectangle M'N'D'E’ represents the corresponding measurement range in the image plane. Meanwhile, α represents the rotation angle between lines L1 and AA’, and γ represents the rotation angle between lines L2 and L3. A represents an arbitrary point along CB in the height direction; B and C represent the midpoints of ME and ND, respectively, in the length direction; and A’, B’, and C’ represent the corresponding image points passing through the center point O of the lens. The displacement of the object in the height and length directions can be obtained using the following formulas:

Fig. 2. Imaging model of line laser triangulation based on Scheimpflug law. The ranging mathematical model along the laser length and laser depth direction is shown simultaneously.

Based on the geometric relationship of the triangle similarity in the length direction, BE and CD can be expressed as follows:

$$CD = \frac{OC}{OC'} \cdot C'D'$$
$$BE = \frac{OB}{OB'} \cdot B'E'$$
The values of OB, OB’, OC, and OC’ can be obtained using the cosine theorem in △OAB, △OA'B’, △OAC, and △OA'C’, respectively. Using OC’ as an example,
$$OC' = \sqrt{A'C'^2 + OA'^2 - 2A'C' \cdot OA' \cdot \cos \beta}.$$
Along the height direction, when A'C’ is known, the object movement AC can be obtained by referring to Eq. (8); the result is shown in Eq. (12):
$$AC = \frac{aA'C'\sin \beta}{b\sin \alpha - A'C'\sin (\alpha + \beta)} + \frac{aA'C'\sin \beta}{b\sin \alpha + A'C'\sin (\alpha + \beta)}$$
The maximum CMOS imaging area corresponds to the largest measurement range of the object. As point A approaches the Scheimpflug line, OB/OB’ becomes increasingly small. Combined with Eq. (10), the maximum measurement range MNDE is trapezoidal because the CMOS is rectangular on the image plane. In general, this mathematical model can be used to calculate the measurement range along both the height and the length directions.
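As a concrete illustration, the following is a minimal sketch (under the same conventions and caveats as the previous code sketch) of the cosine-rule step of Eq. (11) and the height-direction displacement of Eq. (12):

```python
import numpy as np

def oc_prime(a_c_img, oa_img, beta):
    """Eq. (11): cosine rule in triangle OA'C' giving the image-side distance OC'."""
    return np.sqrt(a_c_img**2 + oa_img**2 - 2.0 * a_c_img * oa_img * np.cos(beta))

def object_movement_ac(a, b, alpha, beta, a_c_img):
    """Eq. (12): total object movement AC for a measured image displacement A'C'."""
    num = a * a_c_img * np.sin(beta)
    return (num / (b * np.sin(alpha) - a_c_img * np.sin(alpha + beta))
            + num / (b * np.sin(alpha) + a_c_img * np.sin(alpha + beta)))
```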

2.2 Optimization design of sensor parameters based on the Monte Carlo method

To achieve maximum sensitivity, several relevant optical parameters must be determined. Instead of continuous hypotheses and repeated experiments, the Monte Carlo method can be used to solve the optimization problem through random sampling, thereby providing theoretical guidance for selecting the optical parameters required for laser triangulation. Figure 3 shows the algorithm flowchart of the Monte Carlo method. In the optimization program, an objective equation is first established for the maximum sensitivity K. Subsequently, random numbers are generated within the specified ranges, and these random variables are adopted as the independent optical parameters, including the scattering angle α, lens focal length f, and working distance H. Next, we evaluate the functions of the Scheimpflug angle β, measurement range S, and resolution μ, whose independent variables are the random numbers generated above. Furthermore, we determine whether these functions satisfy the actual constraints. If not, the algorithm returns to generating random numbers, and this process continues until the constraints are satisfied. The flowchart then assesses whether Ki > Kmax. If this condition is satisfied, the current sensitivity Ki is assigned to the maximum Kmax, and the next step is performed. Otherwise, the program proceeds to judge whether the cycle of random numbers is complete. If it is, then β, S, K, f, H, and μ are output, and the process terminates. Otherwise, the algorithm returns to generating random numbers and continues the flow.

Fig. 3. Algorithm flowchart for designing resolution and measurement range based on Monte Carlo method.

The constraint conditions were determined based on actual selection and design requirements. Subsequently, based on our practical experience, we set the value ranges of these independent random variables as the convergence conditions to efficiently realize the Monte Carlo algorithm. Considering the imaging magnification and brightness of the line laser, the working distance H should not be excessively large and was thus set to 75 mm ≤ H ≤ 100 mm. A larger value of α results in lower energy of the received scattering spot, whereas if α is excessively small, the CMOS placement overlaps with the laser position, and the spatial structure is difficult to arrange. We therefore set 30° ≤ α ≤ 50° based on design experience. Several lenses with fixed focal lengths (16, 25, 35, and 50 mm) were selected, and these focal lengths were substituted into the program. Finally, S was determined based on the size of the measured object, and the predesigned measurement range was 16 mm < S < 25 mm.
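A minimal sketch of this search is given below, assuming the relations of Eqs. (5)-(8) via the helper functions sketched in Section 2.1, the object distance taken as the working distance H, and an illustrative sensor length; the paper cycles 50 million times, whereas a smaller count is used here:

```python
import numpy as np

rng = np.random.default_rng(0)
L_CMOS = 8.8            # illustrative sensor length in mm (3672 px x 2.4 um)
K_max, best = -np.inf, None

for _ in range(200_000):  # the paper uses 5e7 cycles; fewer shown here
    H = rng.uniform(75.0, 100.0)                 # working distance constraint (mm)
    alpha = np.deg2rad(rng.uniform(30.0, 50.0))  # scattering angle constraint
    f = rng.choice([16.0, 25.0, 35.0, 50.0])     # candidate fixed focal lengths (mm)
    a = H                                        # assumption: object distance = H
    if a <= f:
        continue                                 # Gaussian imaging requires a > f
    b = 1.0 / (1.0 / f - 1.0 / a)
    beta = scheimpflug_angle(a, b, alpha)        # Eq. (5)
    if b * np.sin(alpha) <= (L_CMOS / 2) * np.sin(alpha + beta):
        continue                                 # keep Eq. (8) denominators positive
    S = measurement_range(a, b, alpha, beta, L_CMOS)   # Eq. (8)
    if not (16.0 < S < 25.0):
        continue                                 # predesigned range constraint
    K = sensitivity(a, b, alpha, beta)           # objective: Eq. (6) at y = 0
    if K > K_max:
        K_max = K
        best = dict(H=H, alpha_deg=np.rad2deg(alpha), f=f,
                    beta_deg=np.rad2deg(beta), S=S)

print(K_max, best)
```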

The Monte Carlo method exhibits a certain degree of randomness, and the results generated each time differ. After 50 million cycles and multiple repeated experiments, the parameter selection tended to converge. From this process, we selected the optimal parameters: scattering angle α = 45°, working distance H = 99.50 mm, and focal length f = 29.60 mm. The corresponding optical parameters, calculated from these selected values, are listed in Table 1.

Table 1. Optical design parameters of laser triangulation.

By controlling the variables, we ensured that the working distance H and focal length f remained unchanged. Using the Monte Carlo method, we can summarize empirical laws of optical parameter design, which provide a reference for the subsequent theoretical design of triangulation systems.

2.3 Calibration model of Scheimpflug camera

In this section, we extend the conventional pinhole camera model to the calibration model of the Scheimpflug camera and complete the conversion among four coordinate systems. The calibration process of the Scheimpflug camera is as follows. The Scheimpflug camera model adds two angle parameters, τx and τy, to describe the conversion from the ideal imaging surface Π to the tilted camera imaging surface Πs; τx and τy represent the angles by which the ideal imaging surface rotates around the X- and Y-axes of the camera coordinate system, respectively, and together they constitute the rotation matrix Rτ between Π and Πs. An appropriate initial value must be selected to ensure the convergence and accuracy of the calibration algorithm. The Scheimpflug tilt angle β was designed using the Monte Carlo method, which, combined with the sensor parameters, gives abs(τx) = 0 and abs(τy) = 90° − β = 14.92°. Because τy rotates counterclockwise around the Y-axis, its sign is negative; hence, τy = −14.92°. We adopted two iterations of the Levenberg-Marquardt algorithm [36] to obtain more accurate values of the internal parameters and distortion. First, we substituted the initial values of the camera internal parameters and fixed the initial value of the Scheimpflug angle to obtain new internal and distortion parameters for the camera. Subsequently, these new internal and distortion parameters were substituted into the optimization. The final step entailed solving the Scheimpflug angle constraint to optimize the Scheimpflug angle. The results of the iterative optimization are presented in Table 2. In the second iteration, we obtained τx = −0.0011 rad and τy = −0.2154 rad; after converting radians into degrees, τx = −0.06° and τy = −12.34°. Finally, we introduced the optimized τx and τy values into the calibration model to obtain the internal and external parameters of the tilted camera.

$${R_\tau } = \left[ {\begin{array}{ccc} {\cos {\tau_y}}&0&{\sin {\tau_y}}\\ 0&1&0\\ { - \sin {\tau_y}}&0&{\cos {\tau_y}} \end{array}} \right]\left[ {\begin{array}{ccc} 1&0&0\\ 0&{\cos {\tau_x}}&{ - \sin {\tau_x}}\\ 0&{\sin {\tau_x}}&{\cos {\tau_x}} \end{array}} \right]$$
Figure 4 shows the calibration model of the Scheimpflug tilted camera, which explains the role of the tilt angles τx and τy in camera calibration. In Fig. 4, r represents the vector passing through the point PW and the optical center O. Assuming that (m, n) is the basis of the plane Πs, vector m is located in the plane XcOZc, whereas vector n is perpendicular to vector m and forms an included angle τy with the coordinate axis Yc. The origin of the coordinate system of the ideal imaging plane is (0, 0, f); qs and q represent the intersections of r with the planes Πs and Π, respectively, and t represents the direction of vector r.

Fig. 4. Calibration model of Scheimpflug camera.

Table 2. Results of iterative optimization.

The transformation between the homogeneous coordinates of the world coordinate system, $\tilde{P}_w = [X_W\; Y_W\; Z_W\; 1]^T$, and those of the pixel coordinate system of the inclined image plane, $\tilde{P}_s = [u_s\; v_s\; 1]^T$, is expressed as follows:

$$s{\tilde{P}_s} = ASR_\tau ^T[\begin{array}{cc} R&T \end{array}]{\tilde{P}_w}, $$
where A represents the internal parameter matrix; S represents the transformation matrix between the ideal imaging surface Π and the tilted camera imaging surface Πs; s represents the scale factor; and R and T are the rotation and translation matrices between the world and camera coordinate systems, respectively. After performing the matrix conversions and vector calculations based on the four coordinate systems (world, camera, image, and pixel) and the calibration model, we obtained the conversion relationship between the pixel coordinate system of the inclined image plane and the camera coordinate system in the Scheimpflug system, as follows:
$$s\left[ {\begin{array}{c} {{u_s}}\\ {{v_s}}\\ 1 \end{array}} \right] = A\left[ {\begin{array}{ccc} {\cos {\tau_x}\cos {\tau_y}}&0&{\sin {\tau_y}}\\ 0&{\cos {\tau_x}\cos {\tau_y}}&{ - \sin {\tau_x}\cos {\tau_y}}\\ 0&0&1 \end{array}} \right]R_\tau ^T\left[ {\begin{array}{c} {{X_C}}\\ {{Y_C}}\\ {{Z_C}} \end{array}} \right]$$
where (us, vs) are the coordinates in the pixel coordinate system of the tilted imaging plane; (XC, YC, ZC) are the coordinates in the camera coordinate system; and S, the transformation matrix in the calibration model of the Scheimpflug camera, is the only unknown quantity in Eq. (14). Based on the derivation, S is expressed as
$$S = \left[ {\begin{array}{ccc} {\cos {\tau_x}\cos {\tau_y}}&0&{\sin {\tau_y}}\\ 0&{\cos {\tau_x}\cos {\tau_y}}&{ - \sin {\tau_x}\cos {\tau_y}}\\ 0&0&1 \end{array}} \right]$$
In summary, we derived a calibration model for the Scheimpflug camera and obtained the relationship between the Scheimpflug tilted angles and the rotation coefficient matrix. The most accurate rotation coefficient matrix S can be calculated using τx and τy.
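For illustration, the following minimal sketch builds Rτ and S from the optimized tilt angles reported in Table 2 and applies Eq. (15) to a camera-frame point; the intrinsic matrix A and the test point are hypothetical placeholders, not calibrated values:

```python
import numpy as np

tau_x, tau_y = -0.0011, -0.2154  # rad, second-iteration values from Table 2

# Eq. (13): R_tau = R_y(tau_y) @ R_x(tau_x)
Ry = np.array([[np.cos(tau_y), 0.0, np.sin(tau_y)],
               [0.0, 1.0, 0.0],
               [-np.sin(tau_y), 0.0, np.cos(tau_y)]])
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(tau_x), -np.sin(tau_x)],
               [0.0, np.sin(tau_x), np.cos(tau_x)]])
R_tau = Ry @ Rx

# Eq. (16): transformation matrix S of the tilted imaging plane
S = np.array([[np.cos(tau_x) * np.cos(tau_y), 0.0, np.sin(tau_y)],
              [0.0, np.cos(tau_x) * np.cos(tau_y), -np.sin(tau_x) * np.cos(tau_y)],
              [0.0, 0.0, 1.0]])

# Hypothetical intrinsics (focal lengths in px; principal point at the sensor center)
A = np.array([[4000.0, 0.0, 2748.0],
              [0.0, 4000.0, 1836.0],
              [0.0, 0.0, 1.0]])

P_c = np.array([5.0, 2.0, 100.0])   # a hypothetical camera-frame point (mm)
p = A @ S @ R_tau.T @ P_c           # Eq. (15), up to the scale factor s
u_s, v_s = p[0] / p[2], p[1] / p[2]
print(u_s, v_s)
```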

2.4 Calibration of the light plane

Light-plane calibration is used to solve the external parameter matrix of the camera and the light-plane equation. Combined with the internal parameter matrix of the camera and the line laser stripe on the stereo target, the coordinates of the line laser points in the camera coordinate system can be solved, and the light-plane coefficients A, B, C, and D can be determined by fitting a plane to these points. Hence, the pixel coordinates of the laser stripe center can be transformed into world coordinates. The conversion equations are as follows:

$$\left\{ {\begin{array}{c} {{Z_C}\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \left[ {\begin{array}{ccc} \alpha &\gamma &{{u_0}}\\ 0&\beta &{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{c} {{X_C}}\\ {{Y_C}}\\ {{Z_C}} \end{array}} \right]}\\ {A{X_C} + B{Y_C} + C{Z_C} + D = 0} \end{array}} \right.,$$
where XC and ZC can be obtained using the linear laser displacement sensor, and YC is equal to the value of the motor encoder when the one-dimensional motion platform moves. Here, α, β, γ, u0 and v0 are the internal parameters of the camera.

In the experiment, we projected a linear structured light onto the step and used the center of the step as the fitting feature point, as shown in Fig. 5. The center points of these steps were fitted using the edge points of each step. During the measurement, the step block was shifted horizontally so that it intersected the laser plane at two positions, and the two resulting intersection lines were used to model the light plane.

Fig. 5. Schematic of light plane fitted by two intersecting lines: (a) Feature points when the laser stripe was projected onto the step at Position 1; (b) Feature points when the laser stripe was projected onto the step at Position 2; (c) Light plane formed by two intersecting laser feature point lines at Positions 1 and 2.

Figure 5(a) and 5(b) show the fitting feature points at Positions 1 and 2, respectively, and Fig. 5(c) illustrates the fitting light plane formed by the laser feature points in Fig. 5(a) and 5(b).

3. Experiment and results

3.1 Experimental setup

Figure 6 presents the top view of the experimental setup for the line laser triangulation ranging sensor system. The miniaturized line laser light source, macro industrial lens, and CMOS industrial camera were integrated onto the base substrate to construct a small-sized integrated sensor structure. The dotted red lines represent the top views of the optical plane, imaging plane, and lens plane, respectively, and these three planes intersect at the Scheimpflug line, indicated by the dotted blue line. Corresponding to Fig. 1(a), i.e., the schematic of the ranging system for the Z-direction height of the object, the ranging system in this experiment mainly included a laser source, a line laser module, a convergent cylindrical lens, the measured object, an imaging lens, and a CMOS camera. A standard aluminum-based step block with a size of 16 mm × 20 mm × 21 mm was adopted as the measurement target. The height and width of each step are ∼1 and ∼1.25 mm, respectively, as calibrated by a coordinate measuring machine (CMM, model: UPMC850). A total of 16 steps was thus available in the step block for the depth-of-field measurement of our optical system. Note that the surface of each step was polished to guarantee the measurement accuracy. The stepped block’s roughness level is Ra 3.2, and the errors of the three rotational degrees of freedom are (Δθx, Δθy, Δθz) = (0.0001°, 0.0165°, 0.0041°); we defined the rotation errors by averaging the errors of the 4th, 6th, 8th, 10th, and 12th step planes measured by the CMM. These data show that the impact of surface roughness and roll angle error on the measurement is negligible in the application scenario of this study. In the internal calibration experiment, the calibration plate contained 12 × 9 small grids, and the side length of a single grid was 1.5 mm, with an accuracy of 2 µm. The camera model is MER-2000-19U3M/C, with a resolution of 5496 (H) × 3672 (V) and a pixel size of 2.4 µm × 2.4 µm. We obtained 15 images of the calibration plate at different positions and attitudes, as shown in Fig. 7.

Fig. 6. (a) Top view of the material experimental setup for the line laser triangulation ranging sensor system based on Scheimpflug camera; (b) schematic of 16-step block; and (c) physical map of 16-step block.

Fig. 7. Illustration of 15 checkerboard positions for the internal parameter calibration of the Scheimpflug camera ranging system.

We used the ZEMAX software to design the laser diode beam-shaping system. The specific parameters of the ZEMAX model are as follows: the focal lengths of the converging cylindrical mirrors in the X and Y directions are 50 and 75 mm, respectively; the central wavelength of the laser source is 650 nm; and the spot diameter of the incident laser is 2 mm. The corresponding light-path diagram from ZEMAX is depicted in Fig. 1(b). Finally, we obtained a line laser with a length of 20 mm, a power of 10 mW, and a beam waist of 50 µm at a working distance of 70 mm. The irradiance distributions of the designed linear laser source in the X- and Y-directions are shown in Fig. 8.

Fig. 8. The irradiated laser intensity distribution of generated line laser in the (a) X-axis direction and (b) Y-axis direction.

In addition to the design of the linear laser light source module, the design of the imaging lens module is vital for the ranging system. We adopted the Cooke triplet as the initial structure of the imaging lens group, as it can yield a favorable aperture value and a large field angle. As the diaphragm was located in the middle of the lens group, good symmetry was maintained in the system. The parameters of the lens group include a field of view of ±6°, an entrance pupil diameter of 9 mm, and a focal length of 39 mm. In terms of imaging resolution, the field curvature within the whole field of view is less than 30 µm, and the maximum distortion is 2.47%.

3.2 Measurement results

Figure 9 shows the measurement flow chart of the linear laser triangulation measurement system. The line laser is first projected onto the measured object; the camera then captures the laser fringe image, and the algorithm flow finally yields the height information. The measurement algorithm mainly includes the extraction of the center of the light fringe, calibration of the internal parameters, and calibration of the external parameters. The methods adopted and data acquired in each step of the measurement flow are also listed to illustrate the flow in detail. For instance, the center of the light fringe is extracted using the gray centroid method and a moving least-squares filter (a sketch of the centroid step is given below). The laser 2D image information of the measured object, as obtained from five groups of experiments, is shown in Fig. 10. The depth of the measured stepped block is indicated in Fig. 11. Furthermore, the measurement results for each of the aforementioned stages are described in detail below.
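The following is a minimal sketch of the gray-centroid step, assuming the stripe runs roughly along the image columns and an illustrative background threshold (the moving least-squares filtering stage is omitted):

```python
import numpy as np

def stripe_centers(img, thresh=30.0):
    """Gray centroid method: sub-pixel row coordinate of the stripe center per column."""
    img = img.astype(np.float64)
    rows = np.arange(img.shape[0])[:, None]     # row indices as a column vector
    w = np.where(img > thresh, img, 0.0)        # suppress the background
    mass = w.sum(axis=0)                        # total intensity per column
    centers = np.full(img.shape[1], np.nan)     # NaN where no stripe is found
    valid = mass > 0
    centers[valid] = (rows * w).sum(axis=0)[valid] / mass[valid]
    return centers
```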

Fig. 9. Measurement algorithm flow chart of the linear laser triangulation measurement system (The first, second, and third rows depict the measurement process, the adopted methods, and the acquired data, respectively).

Fig. 10. Images of light fringes on steps: (a) light fringe extracted at Position 1; (b) central point of the step at Position 1; (c) light fringe extracted at Position 2; and (d) central point of the step in Position 2.

Fig. 11. Measured laser-fringe images of (a)-(e) corresponding to the groups from one to five (the enlarged image (a) with the inset clearly presents the enlarged depth of field).

The results of the internal parameter calibration are discussed here. In the experiment, the calibration method proposed by Zhang [29] was used to calibrate the internal parameters of the system, and the distortion correction of the images was considered. We optimized the two rotation angles, τx and τy, using the Levenberg-Marquardt algorithm through two iterations. Subsequently, we simultaneously obtained more accurate values of the internal parameters and distortion, as shown in Table 2, where fx, fy, u0, and v0 are the internal parameters of the camera; τx and τy represent the Scheimpflug angles around the x- and y-axes, respectively; and k1, k2, k3, p1, and p2 are the distortion parameters of the camera. The reprojection error is an important index for evaluating the calibration effect. In a comparative experiment based on a typical camera calibration model, the average reprojection error was 0.21 pixels (the experimental process is omitted because only the imaging model differs from the experiment in this paper). In this study, after the two iterative optimizations, the average reprojection error decreased from 0.032967 to 0.032891 pixels, both far below the 0.21 pixels of the typical model; this verifies the validity of the calibration model.

Furthermore, the results of the external parameters are presented here. Figure 5 presents a schematic of the laser line in two positions. Figure 10 illustrates the 2D images in the experiment when the line laser was projected onto the stepped block in two different positions to complete the external calibration and form a light plane.

As shown in Fig. 10(a) and 10(c), the light stripe and the background can be clearly distinguished based on the images of the light fringes on the stepped block. We used the gray centroid method [37] and the moving least-squares filter to extract the center of the light fringe. As shown in Fig. 10(b) and 10(d), the average of the center of the light fringes was employed as the pixel coordinates for the step midpoint. By performing light-plane calibration, the external parameter matrix of the light plane can be calculated as follows:

$$\left[ {\begin{array}{cc} R&T \end{array}} \right] = \left[ {\begin{array}{cccc} {0.6669}&{ - 0.0048}&{0.7451}&{ - 2.2862}\\ { - 0.0016}&{1.0000}&{0.0079}&{0.8256}\\ { - 0.7451}&{ - 0.0064}&{0.6669}&{169.5450} \end{array}} \right]$$
This section describes the light plane. We multiplied the world coordinates of the step center point, shown in Fig. 10, with the external parameter matrix to obtain the camera coordinates of the step edge points; subsequently, the least-squares method was used to fit the light-plane equation, as follows:
$$1.1172x + 0.0118y + z - 167.0007 = 0$$
Combined with Eq. (19), once the camera’s internal parameter matrix and the light-plane coefficients A, B, C, and D are obtained, the pixel coordinates of the laser stripe center can be converted into 3D coordinates, as follows:
$$\left[ {\begin{array}{ccc} \alpha &\gamma &{{u_0} - u}\\ 0&\beta &{{v_0} - v}\\ A&B&C \end{array}} \right]\left[ {\begin{array}{c} {{X_C}}\\ {{Y_C}}\\ {{Z_C}} \end{array}} \right] = \left[ {\begin{array}{c} 0\\ 0\\ { - D} \end{array}} \right]$$
Using Eq. (20), the inhomogeneous linear equations can be solved via singular value decomposition (SVD) [38], and the 3D coordinates of the laser stripe center, [XC, YC, ZC], can be obtained.
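A minimal sketch of this step, using the light-plane coefficients of Eq. (19); the intrinsic parameter values and the test pixel are hypothetical placeholders (np.linalg.lstsq solves the system via an SVD-based routine):

```python
import numpy as np

# Camera intrinsics (hypothetical placeholders) and light-plane coefficients from Eq. (19)
ALPHA, BETA, GAMMA, U0, V0 = 4000.0, 4000.0, 0.0, 2748.0, 1836.0
A_, B_, C_, D_ = 1.1172, 0.0118, 1.0, -167.0007

def pixel_to_3d(u, v):
    """Eq. (20): intersect the back-projected pixel ray with the light plane."""
    M = np.array([[ALPHA, GAMMA, U0 - u],
                  [0.0, BETA, V0 - v],
                  [A_, B_, C_]])
    rhs = np.array([0.0, 0.0, -D_])
    return np.linalg.lstsq(M, rhs, rcond=None)[0]   # [XC, YC, ZC]

print(pixel_to_3d(2600.0, 1900.0))   # hypothetical stripe-center pixel
```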

We performed five sets of experiments to verify the performance of the ranging sensor system. Using the stepped block (20 mm × 16 mm × 21 mm), the measurement error over the 2–16 mm height range was obtained. The true height of each step was measured using a coordinate measuring machine (model: UPMC850). Figure 11 shows the 2D measured laser fringe images of the five groups. During the experiment, we measured steps 2–16.

Based on the five-group measurement experiment, we obtained the depth information of the measured stepped object. Figure 12 shows the five-group measurement error; the experimental results show that, within the measurement range, the positive and negative measurement errors were less than 6 and 7 µm, respectively. Additionally, the sensor system was highly robust, with a maximum standard deviation of 0.225 µm. The linearity of the entire system was 0.046% (FS = 15 mm). The equation for linearity is as follows:

$$\delta = \Delta {Y_{\max }}/FS \times 100\%,$$
where ΔYmax represents the maximum deviation between the sensor calibration curve and the fitting curve, and FS denotes the sensor’s full-scale output.
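As a rough consistency check (taking ΔYmax on the order of the reported ±7 µm maximum error), Eq. (21) gives δ = 0.007 mm / 15 mm × 100% ≈ 0.047%, in line with the reported linearity of 0.046%.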

Fig. 12. Average error and standard deviation for the stepped block, obtained through the five-group measurements.

To quantitatively compare the experimental results with those of other optimization methods, we note a related study that introduced a mutation-operator-based particle swarm optimization technique for parameter optimization in a comparable laser triangulation ranging system [39]. With approximately identical working ranges (22.6 mm here versus 20 mm in Ref. [39]) and nonlinear errors (0.046% versus 0.045% in Ref. [39]), our measurement results exhibit a higher sensitivity (5.13 versus 1.029 in Ref. [39]) and a longer working distance (99.5 mm versus 50 mm in Ref. [39]). In conclusion, the adoption of the Monte Carlo method for parameter optimization in the linear laser triangulation measurement system demonstrates high adaptability and superiority. Furthermore, we compared the measurement performance of our laser triangulation range sensor system with that of a commercial product. For instance, the KEYENCE LJ-V7060, a mature ultra-high-speed profile-measurement device, offers the following essential performance parameters for the z-axis: a measurement range of 16 mm, a linearity of ±0.1%, and a maximum repeatability of 0.4 µm; its maximum measurement error (±16 µm) can be calculated using Eq. (21). Compared with the KEYENCE LJ-V7060, our measurement system exhibits better linearity (±0.046%) and a lower maximum measurement error (±7 µm) over a nearly identical measurement range. The maximum standard deviation of our measurement system is as small as 0.225 µm, verifying its relatively high stability.

4. Conclusion

In this study, we fabricated a laser triangulation range sensor system based on the Scheimpflug camera for small-range measurements with a large depth-of-field. A ranging mathematical model was established along both the length and height directions of the laser. In the experimental setup, we designed a line laser collimation system based on a laser diode in the laser light source module and achieved a line laser with good uniformity, high collimation, and easy regulation. The Cooke triplet design was adopted for the imaging lens module. Specifically, to realize the intelligent optimization design of the sensor parameters, we summarized the empirical rules of the theoretical design of line laser triangulation range sensors using the Monte Carlo method. This method overcomes the difficulty of the nonlinear optimization between the instrument resolution and measurement range by selecting a set of optimal sensor parameters that satisfy the actual requirements from multiple sets of approximate solutions under constraints. Using the gray centroid method, we extracted the fringe center and calculated the moving distance of the image point. To expand the imaging range and depth-of-field, we introduced a Scheimpflug calibration model containing two additional degrees of freedom. Combined with light-plane calibration, the camera’s internal parameters and the optical-plane external parameters were solved.

Finally, to verify the accuracy of this system, we conducted five sets of measurement experiments on a step block (16 mm × 20 mm × 21 mm). The experimental results showed that the maximum error of the ranging device was ±7 µm and its linearity was 0.046% (FS = 15 mm). Additionally, the maximum standard deviation was 0.225 µm. These results are sufficient to satisfy industrial ranging requirements of low cost and high accuracy. Our experiments showed that the design of a line laser ranging sensor can be made intelligent and automated and that the sensor performance can be optimized under constraints. In the future, this strategy will be extended to other types of ranging sensors, such as structured light and optical frequency comb ranging sensors, to realize their performance optimization design.

Funding

Shenzhen Stable Supporting Program (WDZC20200820200655001); Basic and Applied Basic Research Foundation of Guangdong Province (2019A1515110373); Start-up Funding of Shenzhen International Graduate School, Tsinghua University (QD2020001N); Interdisciplinary Funding of Shenzhen International Graduate School, Tsinghua University (JC2021003); Shenzhen Science and Technology Program (RCBS20200714114957381); National Natural Science Foundation of China (52005291, 61905129).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Kimura, W. Gao, W. Kim, K. Hosono, Y. Shimizu, L. Shi, and L. Zeng, “A sub-nanometric three-axis surface encoder with short-period planar gratings for stage motion measurement,” Precis. Eng. 36(4), 576–585 (2012). [CrossRef]  

2. Y. Kangning, Z. Junhao, Y. Weihan, Z. Qian, X. Gaopeng, W. Guanhao, W. Xiaohao, and L. Xinghui, “Two-channel six degrees of freedom grating-encoder for precision-positioning of sub-components in synthetic-aperture optics,” Opt. Express 29(14), 21113–21128 (2021). [CrossRef]  

3. L. Xinghui, Z. Qian, Z. Xiangwen, L. Haiou, Y. Lin, M. Donghan, S. Jianhui, N. Kai, and W. Xiaohao, “Holographic fabrication of an arrayed one-axis scale grating for a two-probe optical linear encoder,” Opt. Express 25(14), 16028–16039 (2017). [CrossRef]  

4. Y. Han, K. Ni, X. Li, G. Wu, K. Yu, Q. Zhou, and X. Wang, “An FPGA Platform for Next-Generation Grating Encoders,” Sensors 20(8), 2266 (2020). [CrossRef]  

5. J. Bai, X. Li, X. Wang, Q. Zhou, and K. Ni, “Chromatic Confocal Displacement Sensor with Optimized Dispersion Probe and Modified Centroid Peak Extraction Algorithm,” Sensors 19(16), 3592 (2019). [CrossRef]  

6. J. Bai, Y. Wang, X. Wang, Q. Zhou, K. Ni, and X. Li, “Three-Probe Error Separation with Chromatic Confocal Sensors for Roundness Measurement,” Nanomanuf. Metrol. 4(4), 247–255 (2021). [CrossRef]  

7. H. Yu, K. Ni, Q. Zhou, X. Li, X. Wang, and G. Wu, “Digital error correction of dual-comb interferometer without external optical referencing information,” Opt. Express 27(20), 29425–29438 (2019). [CrossRef]  

8. M. Massot-Campos and G. Oliver-Codina, “Optical Sensors and Methods for Underwater 3D Reconstruction,” Sensors 15(12), 31525–31557 (2015). [CrossRef]  

9. S. Cui and X. Zhu, “A generalized reference-plane-based calibration method in optical triangular profilometry,” Opt. Express 17(23), 20735–20746 (2009). [CrossRef]  

10. J. Franca, M. Gazziro, A. Ide, and J. Saito, “A 3D scanning system based on laser triangulation and variable field of view,” in Proc. IEEE International Conference on Image Processing (Italy, Sept. 2005), pp. 425–428.

11. J. Li, Q. Zhou, X. Li, R. Chen, and K. Ni, “An improved low-noise processing methodology combined with PCL for industry inspection based on laser line scanner,” Sensors 19(15), 3398 (2019). [CrossRef]  

12. F. E. Goodwin, M. S. Hersman, and A. R. Slotwinski, “Laser proximity sensor,” U.S. patent 4733609A (29 March 1988).

13. M. T. Breen, “Laser distance measuring method and apparatus,” U.S. patent 4856893A (15 August 1989).

14. X. M. Wang, H. F. Wang, and N. F. Yao, “Parameter optimization of laser displacement sensor based on particle swarm optimization algorithm,” Laser Technol. 42(2), 181–186 (2018).

15. J. Z. Y. X. Zhu and Y. L. Wang, “Model establishment and parameter optimization of high precision laser ranging system,” Machinery 54(7), 68–71 (2016).

16. M. Reyes-Sierra and C. A. C. Coello, “Multi-objective particle swarm optimizers: a survey of the state-of-the-art,” Int. J. Comput. Intell. Res. 2(3), 287–308 (2006). [CrossRef]  

17. Q. He and L. Wang, “An effective co-evolutionary particle swarm optimization for constrained engineering design problems,” Eng. Appl. Artif. Intell. 20(1), 89–99 (2007). [CrossRef]  

18. C. M. Tsai, Y. C. Fang, and C. T. Lin, “Application of genetic algorithm on optimization of laser beam shaping,” Opt. Express 23(12), 15877–15887 (2015). [CrossRef]  

19. M. van Turnhout and F. Bociort, “Chaotic behavior in an algorithm to escape from poor local minima in lens design,” Opt. Express 17(8), 6436–6450 (2009). [CrossRef]  

20. D. C. Sinclair, “Optical design software,” in Handbook of Optics, Fundamentals, Techniques, and Design, Vol. 1, 2nd ed., M. Bass, E. W. Van Stryland, D. R. Williams, and W. L. Wolfe, eds. (McGraw-Hill, New York, 1995), 34.1–34.26.

21. H. Gross, H. Zügge, M. Peschka, and F. Blechinger, “Principles of optimization,” in Handbook of Optical Systems, Vol. 3 (Wiley-VCH, Weinheim, 2007), 291–370.

22. R. J. Drost, T. J. Moore, and B. M. Sadler, “UV communications channel modeling incorporating multiple scattering interactions,” J. Opt. Soc. Am. A 28(4), 686–695 (2011). [CrossRef]  

23. D. K. Borah, V. R. Mareddy, and D. G. Voelz, “Single and double scattering event analysis for ultraviolet communication channels,” Opt. Express 29(4), 5327–5342 (2021). [CrossRef]  

24. J. R. Fonseca, M. I. Friswell, and A. W. Lees, “Efficient robust design via Monte Carlo sample reweighting,” Int. J. Numer. Meth. Eng. 69(11), 2279–2301 (2007). [CrossRef]  

25. A. Donges and R. Noll, Laser Measurement Technology: Fundamentals and Applications (Springer, 2015).

26. E. Malmqvist, M. Brydegaard, M. Alden, and J. Bood, “Scheimpflug Lidar for combustion diagnostics,” Opt. Express 26(12), 14842–14858 (2018). [CrossRef]  

27. T. Scheimpflug, “Improved method and apparatus for the systematic alteration or distortion of plane pictures and images by means of lenses and mirrors for photography and for other purposes,” Great Britain Patent No. 1196 (16 January 1904).

28. L. Mei and M. Brydegaard, “Atmospheric aerosol monitoring by an elastic Scheimpflug lidar system,” Opt. Express 23(24), A1613–1628 (2015). [CrossRef]  

29. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

30. P. Fasogbon, L. Duvieubourg, and L. Macaire, “Scheimpflug Camera Calibration Using Lens Distortion Model,” in Proceedings of International Conference on Computer Vision and Image Processing, (2017), pp. 159–169.

31. P. Fasogbon, L. Duvieubourg, P.-A. Lacaze, and L. Macaire, “Intrinsic camera calibration equipped with scheimpflug optical device,” in Proceedings of the International Conference on Quality Control by Artificial Vision, (2015), p. 953416.

32. X. Yin, W. Tao, C. Zheng, H. Yang, Q. He, and H. Zhao, “Analysis and simplification of lens distortion model for the Scheimpflug imaging system calibration,” Opt. Commun. 430, 380–384 (2019). [CrossRef]

33. A. Legarda, A. Izaguirre, N. Arana, and A. Iturrospe, “Comparison and error analysis of the standard pin-hole and Scheimpflug camera calibration models,” in IEEE 11th International Workshop of Electronics, Control, Measurement, Signals and Their Application to Mechatronics, Toulouse, France (2013), pp. 1–6.

34. H. M. Merklinger, Focusing the View Camera (Seaboard Printing Ltd., Bedford, Nova Scotia, 1996).

35. A. Miks, J. Novak, and P. Novak, “Analysis of imaging for laser triangulation sensors under Scheimpflug rule,” Opt. Express 21(15), 18225–18235 (2013). [CrossRef]

36. A. Ranganathan, “The Levenberg-Marquardt algorithm,” Tutorial on LM Algorithm 11, 101–110 (2004).

37. L. Qi, Y. Zhang, X. Zhang, S. Wang, and F. Xie, “Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger’s algorithm,” Opt. Express 21(11), 13442–13449 (2013). [CrossRef]  

38. M. I. A. Lourakis and R. Deriche, “Camera self-calibration using the singular value decomposition of the fundamental matrix: from point correspondences to 3d measurements,” Research Report 3748, INRIA Sophia-Antipolis, (1999).

39. Z. Nan, W. Tao, and H. Zhao, “Automatic optical structure optimization method of the laser triangulation ranging system under the Scheimpflug rule,” Opt. Express 30(11), 18667–18683 (2022). [CrossRef]  
