
Projected feature assisted coarse to fine point cloud registration method for large-size 3D measurement

Open Access

Abstract

Fringe projection profilometry has gained significant interest due to its high precision, enhanced resolution, and simplified design. Typically, the spatial and perspective measurement capability is restricted by the lenses of the camera and projector in accordance with the principles of geometric optics. Therefore, large-size object measurement requires data acquisition from multiple perspectives, followed by point cloud splicing. Current point cloud registration methods usually rely on 2D feature textures, 3D structural elements, or supplementary tools, which increase costs or limit the scope of application. To address large-size 3D measurement more efficiently, we propose a low-cost and feasible method that combines active projection textures, color channel multiplexing, image feature matching and a coarse-to-fine point registration strategy. A composite structured-light pattern, consisting of red speckle patterns covering a larger area and blue sinusoidal fringe patterns covering a smaller one, is projected onto the surface, which allows us to accomplish 3D reconstruction and point cloud registration simultaneously. Experimental results demonstrate that the proposed method is effective for the 3D measurement of large-size and weak-textured objects.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the rapid development of science and technology, 3D imaging technology has gained increasing attention. Compared to 2D image information, 3D topography can provide richer and more detailed information, allowing for more comprehensive and realistic descriptions of 3D scene attributes. Structured light projection 3D measurement technology is an active non-contact imaging technology that accurately captures and stores the 3D spatial information of objects. Due to its high measurement accuracy, strong anti-interference capabilities, lack of dependence on scene texture characteristics, and ease of system construction, this technology has found wide applications in areas such as cultural relic protection [1], architecture [2], digital cities [3], civil engineering [4], and mine monitoring [5].

Structured light-based three-dimensional measurement technology involves projecting known encoded patterns onto the scene to be measured using optical projectors. Then the images of the projection area are captured by the camera, and the coding in the image is decoded and matched from different perspectives. Finally, the 3D shape information of the target object's surface is obtained based on the triangulation principle. Structured light projection 3D measurement technology that uses phase coding is also known as fringe projection profilometry. It is widely used in fields such as defect detection, reverse engineering, and computer vision due to its non-contact nature, high measurement accuracy, and good adaptability. A typical fringe projection measurement system comprises a computer, a camera, and a projector. The overlap area of the projector and camera's field of view is the effective measurement area. Generally, as the measurement area expands, the accuracy of the camera and projector decreases. In order to obtain complete three-dimensional information on large-size objects while ensuring measurement accuracy, it is necessary to scan and measure the object from multiple positions and then convert the point cloud data of each perspective into a unified coordinate system using a point cloud registration algorithm. During the measurement process, the translation and rotation between measurement positions result in a rigid-body transformation between the point cloud coordinate systems constructed at each position. After obtaining the relative positional relationship between point clouds, the point clouds of each sub-region are finally spliced together to obtain complete point cloud information.

2. Related work

In recent years, many research achievements have emerged regarding the measurement of large-size objects. Qian et al. [6], Simon et al. [7], and many other scholars have achieved complete measurement of objects through point cloud registration technology.

Generally speaking, depending on whether hardware auxiliary equipment is required, point cloud registration methods can be divided into those that rely on external auxiliary information and those that rely on the point cloud data itself.

The registration method relying on external auxiliary information requires adding hardware auxiliary equipment to accomplish the point cloud data registration. Some common methods include the labeling method [8], the high-precision rotating platform method [9], the point cloud registration method based on a tracker [7] and the multi-view splicing method combined with a manipulator [10]. The labeling method acquires the spatial position conversion relationship by attaching marker points to the measured object and subsequently splicing point cloud data from different perspectives through marker point coordinates obtained from multiple angles. This method is suitable for objects with insignificant features, is simple in calculation and is widely applied. However, it is both time-consuming and labor-intensive, and it is unable to provide precise 3D information of an object due to inherent errors in the markers used. Moreover, if the markers are pasted or projected as a fixed shape, any lens distortion or deformation of the object being tested can result in inaccurate positioning of the markers. This, in turn, can lead to poor quality point cloud stitching and reconstruction. The high-precision rotating platform method is characterized by a simple algorithm, strong operability, high accuracy, and efficiency. However, the splicing process requires human intervention, placing high demands on the operator and depending heavily on the rotating platform's accuracy. The tracker-based registration method uses a tracker to track the 3D laser scanner's position before and after movement and subsequently calculates the position change, thereby establishing the mathematical model of the coordinate conversion relationship and ultimately achieving point cloud data registration from different perspectives. However, this method's equipment cost is high, and it relies heavily on the tracker's accuracy. If the tracker's accuracy is insufficient or the operator operates incorrectly, the subsequent point cloud splicing error will be high. The multi-view splicing method combined with a manipulator obtains the absolute transformation relationship of the measurement coordinate system at each location under known mechanical conditions. This method can solve the problem of incomplete 3D point cloud data acquisition due to occlusion in traditional single-view 3D measurements; it offers the high accuracy of small-field-of-view imaging and can obtain complete point cloud information of each field of view by moving the manipulator. However, the point cloud data obtained by the camera from the same object at different positions is expressed in the current camera coordinate system. If the position difference between the point cloud data sets is too large, the efficiency of subsequent registration may decline.

In contrast to the previously mentioned methods, the point cloud registration method relying on the point cloud data does not require hardware systems or the application of additional markers on the surface of the measured object. Instead, it relies on the inherent features of the 3D point cloud or the features of 2D images to achieve point cloud registration. 3D point cloud features such as normal vectors [11] and Gaussian curvature [12] reflect the local information of the point cloud, which can be obtained through processing and calculating the 3D point cloud data without additional markers on the measured object surface. However, if the point cloud registration process relies solely on the features of the 3D point cloud itself, there is a high requirement for the overlap ratio of the point clouds to be spliced, and the computational efficiency is poor, which makes this method impractical. In practical applications, the transformation relationship between different position coordinate systems is generally calculated based on the two-dimensional texture information in the scene being measured and then mapped to the three-dimensional point cloud to achieve the point cloud coarse registration process. Lv Ying et al. proposed a method to convert two-dimensional pixel coordinates into three-dimensional point cloud spatial coordinates based on color and depth images and then use local feature descriptors to complete the initial registration process. This method improves the matching performance of feature points at scale and reduces the difficulty of rotational feature matching [13]. Jing Han et al. proposed a 3D reconstruction method based on image feature point matching to improve the 3D reconstruction accuracy and speed. By improving the SIFT algorithm, the initial matching of feature points is achieved using the neighborhood voting method [14].

Once a good initial position is obtained, it is necessary to fine-tune the stitching point clouds to obtain a more accurate position. The current fine registration algorithms mainly include the ICP algorithm and extensions based on this algorithm. Chen Jia et al. proposed a Hong–Tan based ICP (Iterative Closest Point) automatic registration algorithm (HTICP) for partially overlapping point clouds that improves the speed and accuracy of registration on partially overlapping point clouds [15]. Wen-Chung Chang et al. proposed a candidate-based axially switching (CBAS) computed closer point (CCP) approach [16]. The candidate-based axially switching CCP approach is employed before the ICP algorithm to enlarge the admissible range of the translation and the rotation required for 3-D matching tasks. In order to speed up the computations, a simplified KD-tree for the nearest neighbor search is employed to decrease the computation time significantly. To improve convergence for nearly-flat meshes with small features, Szymon et al. introduce a new variant based on uniform sampling of the space of normals and conclude by proposing a combination of ICP variants optimized for high speed, resulting in a method that takes only a few tens of milliseconds to align two meshes [17].

However, in cases where the scene being measured lacks texture information, it may be difficult to obtain a good initial position of the point cloud to be spliced solely through two-dimensional feature point matching. In such cases, the methods mentioned above may be ineffective. In response to this issue, Li Qiming et al. proposed a new type of local feature descriptor that combines neighborhood point cloud coordinates and normal vector information to improve the ICP algorithm. This method can effectively capture and describe the rich detail features of weak-texture surfaces, construct robust and significant feature descriptors, improve the matching accuracy of the measurement results and reduce the overall reconstruction error of large and complex components. However, since this method needs to process every point in the point cloud, it can be very time-consuming [18]. Xuan Gong et al. proposed a fast point cloud registration strategy based on the “eye-in-hand” model, which can obtain multi-view RGB-D images around weak-textured objects with an RGB-D camera for rapid point cloud registration and reconstruction. This method achieves fast multi-view 3D reconstruction with point cloud accuracy within 2 mm, but the mechanical equipment restricts the measurement range [19].

In order to solve both the large-size and weak-textured object measurement problems simultaneously, this paper proposes a coarse-to-fine measurement method that adds texture information by projecting speckles and combines feature point matching with high-stability feature point coordinate transformation. The method projects a composite structured light, comprising speckle patterns in a larger area and fringe patterns in a smaller area, onto the surface of the object, and the speckle and fringe patterns occupy different color channels (for example, if the fringe patterns are projected with blue light, then the speckles should be projected with red light). After capturing the images with an RGB camera and separating the color channels, the fringe patterns for local high-precision 3D reconstruction and the speckle patterns for calculating the point cloud transformation are obtained simultaneously. The speckle feature matching technique is then used to obtain the position transformation relationship between the point clouds of each sub-region and accomplish the coarse registration process. Subsequently, the point cloud's local curvature and direction vector are used as feature descriptors to achieve fine registration of two adjacent point clouds. In order to validate the performance of the proposed point cloud registration algorithm, we measured five different objects, which are large-size and weak-textured.

3. Method

The proposed method flow is illustrated in Fig. 1. The red light projector should be positioned appropriately, depending on the size of the measured object, to ensure that the speckle patterns, which provide texture features for the subsequent coarse registration of the point clouds, cover the object's surface. The blue light projector and the RGB camera constitute a sub-region 3D measurement system that is used to accomplish high-precision three-dimensional reconstruction of the object's local region. We then simultaneously acquire the 3D point cloud and 2D image information from the same perspective. The specific processing steps are as follows:

Fig. 1. Overview of the proposed method.

Step I: According to the differences in color wavelength, the color channels of the image captured by the RGB camera are separated to obtain the blue fringe patterns and the red speckle patterns.

Step II: The fringe patterns extracted from the blue channel are utilized to accomplish the high-precision three-dimensional reconstruction of the local region of the object through the four-step phase-shift and tri-frequency heterodyne algorithm [20–27].

Step III: The speckle patterns extracted from the red channel are utilized to detect and match feature points between two images through the SIFT algorithm [28], thereby obtaining the relative position relationship between the point clouds under the corresponding perspectives and achieving coarse registration. Subsequently, the position of the point cloud is finely adjusted through its curvature and direction vector. Finally, the entire object is measured comprehensively by repeating the above procedures.

3.1 Three-dimensional reconstruction

We conducted specific experiments to substantiate the viability of the approach mentioned above, which include the 3D reconstruction with the four-step phase-shift and tri-frequency heterodyne algorithm and the point cloud registration assisted by the large-area speckle projection.

3.1.1. Four-step phase shift

The four-step phase-shift method employs three sets of fringe patterns with varying frequencies; each set contains four fringe patterns with identical frequency, and the phase difference is $\frac{\pi }{2}$. The illumination function of the fringe patterns is presented in formula (1):

$$I({x,y} )= a(x,y) + b(x,y)\cdot \cos [{2\pi f + \varphi ({x,y} )} ],$$
where $a({x,y} )$ represents the background illumination, $b({x,y} )$ represents the reflectivity of the surface of the measured object, $f$ represents the frequency of the fringe on the reference plane, and $\varphi ({x,y} )$ represents the phase value at $({x,y} )$. Four-step phase-shifting requires capturing four fringe patterns with equal phase differences between 0 and 2π; the four phase shifts we chose are $\left( {0,\frac{\pi }{2},\pi ,\frac{{3\pi }}{2}} \right)$, giving four fringe patterns [29–31]. The illumination distribution of the four fringe patterns is shown in formula (2):
$$\left\{ \begin{array}{l} {I_1}(x,y) = a({x,y} )+ b({x,y} )\cos [{\varphi ({x,y} )} ]\\ {I_2}(x,y) = a({x,y} )+ b({x,y} )\cos \left[ {\varphi ({x,y} )+ \frac{\pi }{2}} \right]\\ {I_3}({x,y} )= a({x,y} )+ b({x,y} )\cos [{\varphi ({x,y} )+ \pi } ]\\ {I_4}({x,y} )= a({x,y} )+ b({x,y} )\cos \left[ {\varphi ({x,y} )+ \frac{{3\pi }}{2}} \right] \end{array} \right..$$

The relationship between the phase value corresponding to each point and the illumination can be calculated by formula (2) as follows:

$$\varphi ({x,y} )= \arctan \left\{ {\frac{{{I_4}({x,y} )- {I_2}({x,y} )}}{{{I_1}({x,y} )- {I_3}({x,y} )}}} \right\}.$$

The phase calculated according to formula (3) is a truncated phase. Since the phase information is calculated through the arctangent function, the calculated phase principal value $\varphi ({x,y} )$ is in $({ - \pi ,\pi } )$; nevertheless, the optical image contains multiple fringe periods, rendering this value non-unique throughout the measurement space. In order to obtain a continuous phase distribution, phase unwrapping is imperative. In this paper, we employ the tri-frequency heterodyne algorithm to perform the phase unwrapping.
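As a minimal illustration of formula (3), the wrapped phase can be computed pixel-wise from the four captured fringe images. The sketch below assumes the four phase-shifted images are already loaded as floating-point NumPy arrays; the function name and array names are ours, not from the paper.

import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # I1..I4 are 2D float arrays of identical shape: the intensities of the
    # 0, pi/2, pi and 3*pi/2 phase-shifted fringe patterns.
    # arctan2 resolves the quadrant ambiguity of the plain arctangent, so the
    # result is the wrapped phase of formula (3), lying in (-pi, pi].
    return np.arctan2(I4 - I2, I1 - I3)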

3.1.2 Phase unwrapped – tri-frequency heterodyne algorithm

The tri-frequency heterodyne algorithm is a temporal phase unwrapping method improved from the three-frequency unwrapping approach. Let ${p_1},{p_2},{p_3}$ be the widths of the fringes at three different frequencies, with ${p_{12}} < {p_3}$. For any measuring point x in the same projection measuring system, the fringe orders on the corresponding wrapped phase maps are ${n_1},{n_2},{n_3}$. So we can get:

$${p_1}{n_1} = {p_2}{n_2} = {p_3}{n_3} = {p_{12}}({{m_{12}} + \Delta {m_{12}}} ),$$
$$\Delta {n_i} = \frac{{{\varphi _i}}}{{2\pi }},\Delta {n_i} \in [{0,1} ),i = 1,2,3,$$
$${n_i} = {N_i} + \Delta {n_i},{N_i} \in Z,$$
where ${\varphi _i}$ represents the initial wrapped phase of the $i$th fringe pattern, ${N_i}$ represents the integer part of the fringe order, and $\Delta {n_i}$ represents the fractional part of the fringe order; ${p_{12}}$ is the width of the fringe generated according to the heterodyne principle, so ${p_{12}} = \frac{{{p_1}{p_2}}}{{{p_2} - {p_1}}}$; ${m_{12}}$ and $\Delta {m_{12}}$ are the integer and fractional parts of its fringe order, respectively. In order to carry out phase unwrapping without ambiguity over the whole field, that is, so that the number of beat fringes in the whole field of view is no more than one, it can be deduced that [32–35]:
$$\left\{ {\begin{array}{cc} \begin{array}{l} \frac{{{p_3}({\Delta {m_{12}} - \Delta {n_3}} )}}{{{p_3} - {p_{12}}}} - \Delta {m_{12}}\\ \frac{{{p_3}({\Delta {m_{12}} + 1 - \Delta {n_3}} )}}{{{p_3} - {p_{12}}}} - \Delta {m_{12}} \end{array}&\begin{array}{l} \Delta {n_3} - \Delta {m_{12}} \le 0,\\ \\ \Delta {n_3} - \Delta {m_{12}} > 0. \end{array} \end{array}} \right.$$

According to $\phi = 2\pi i$, the absolute unwrapped phase is obtained as:

$$\begin{aligned} &\left\{ \begin{array}{cc} {\phi_1} = \frac{{{p_2}({2\pi {m_{12}} + {\varphi_1} - {\varphi_2}} )}}{{{p_2} - {p_1}}}\\ {\phi_2} = \frac{{{p_1}({2\pi {m_{12}} + {\varphi_1} - {\varphi_2}} )}}{{{p_2} - {p_1}}} & {\varphi_2} - {\varphi_1} \le 0, \end{array}\right.\\ &\left\{ \begin{array}{cc} {\phi_1} = \frac{{{p_2}[{2\pi ({{m_{12}} + 1} )+ {\varphi_1} - {\varphi_2}} ]}}{{{p_2} - {p_1}}}\\ {\phi_2} = \frac{{{p_1}[{2\pi ({{m_{12}} + 1} )+ {\varphi_1} - {\varphi_2}} ]}}{{{p_2} - {p_1}}} &{{\varphi_2} - {\varphi_1} > 0} \end{array}\right., \end{aligned}$$
${\phi _1}$ and ${\phi _2}$ represent the unwrapped absolute phase values corresponding to the fringes whose widths are ${p_1}$ and ${p_2}$, respectively.
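For readers who prefer code, the sketch below shows one heterodyne step written with NumPy, under the simplifying assumption that the coarser (beat) phase is already unambiguous over the field of view; the tri-frequency method chains such steps (fringe sets 1 and 2 into the beat 12, which is then combined with set 3). The function and variable names are illustrative, not taken from the paper.

import numpy as np

def beat_phase(phi1, phi2):
    # Beat (heterodyne) phase of two wrapped phase maps, wrapped to [0, 2*pi).
    return np.mod(phi1 - phi2, 2.0 * np.pi)

def heterodyne_unwrap(phi_fine, phi_coarse, p_fine, p_coarse):
    # phi_fine:   wrapped phase of the fine fringes (width p_fine)
    # phi_coarse: phase of the coarser fringes (width p_coarse), assumed
    #             unambiguous over the whole field of view
    # The coarse phase, scaled to the fine-fringe frequency, indicates how many
    # 2*pi jumps the wrapped fine phase is missing.
    k = np.round((p_coarse / p_fine * phi_coarse - phi_fine) / (2.0 * np.pi))
    return phi_fine + 2.0 * np.pi * k

In the tri-frequency case, the beat of the two finest phase maps is itself made unambiguous with the help of the coarsest set, and the result is propagated back down to the finest fringes with heterodyne_unwrap.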

3.2 Point cloud registration

When measuring the large-size and weak-textured objects or scenes, the point clouds measured at a fixed location are typically inadequate to represent the entire measurement area. Consequently, it is necessary to measure the object from different perspectives. After obtaining the point cloud information of a local area through the four-step phase shift and tri-frequency heterodyne algorithm, the 3D measuring system is moved to measure other local areas of the entire large-size and weak-textured object. Firstly, the speckle patterns are utilized to detect and match feature points so that the point clouds to be spliced can obtain a good initial posture. Subsequently, fine registration is conducted based on the characteristics of the point cloud itself, which refines its posture through the curvature and direction vector thresholds.

3.2.1 Coarse registration

In our method, the coarse registration of point clouds relies on utilizing image feature matching to determine the position transformation relationship between point clouds; we use the SIFT feature detection algorithm. However, due to the lack of texture information in the measured object, the extraction of SIFT feature points may not be stable. Nonetheless, the speckle image obtained through color channel separation carries significant coding structure features, which can effectively assist in SIFT feature point detection and matching.

(1) SIFT feature matching

Image feature matching with the SIFT algorithm includes four steps: constructing the scale space, computing the difference of Gaussians, generating feature descriptors, and matching feature vectors [36,37].

The construction of the scale space introduces different scale parameters so that details of the image can be obtained at different scales and image features can be accurately searched. The scale space can be represented by convolving the image with a Gaussian function:

$$L({x,y,\sigma } )= G({x,y,\sigma } )\ast I({x,y} ),$$
$$G({x,y,\sigma } )= \frac{1}{{2\pi {\sigma ^2}}}\exp \left( { - \frac{{{x^2} + {y^2}}}{{2{\sigma^2}}}} \right),$$
where $L({x,y,\sigma } )$ represents the scale space of the image, $G({x,y,\sigma } )$ represents the Gaussian function, $\sigma $ represents the scale space factor, and $I({x,y} )$ represents the original image. In order to improve the stability of extracting extreme points in the scale space, the difference of Gaussians (DoG) is used to detect extreme points in local regions. The Gaussian difference function is:
$$D({x,y,\sigma } )= [{G({x,y,k\sigma } )- G({x,y,\sigma } )} ]I({x,y} )= L({x,y,k\sigma } )- L({x,y,\sigma } ),$$
where k is the amplification factor.

In order to ensure the descriptors remain unchanged when the image is rotated, the direction parameter needs to be determined. The formulas for the gradient magnitude and direction of a feature point detected in the Gaussian difference scale space are:

$$m({x,y} )= {({{{({L({x + 1,y} )- L({x - 1,y} )} )}^2} + {{({L({x,y + 1} )- L({x,y - 1} )} )}^2}} )^{\frac{1}{2}}},$$
$$\theta ({x,y} )= \arctan \left( {\frac{{L({x,y + 1} )- L({x,y - 1} )}}{{L({x + 1,y} )- L({x - 1,y} )}}} \right),$$

$L$ denotes the scale-space image at the scale of the feature point. $m({x,y} )$ represents the gradient magnitude at point $({x,y} )$, while $\theta ({x,y} )$ represents the gradient direction at point $({x,y} )$.

After obtaining the position, scale, and orientation parameters of a feature point, the feature point possesses translation, scale, and rotation invariance. To describe a feature point, a $16 \times 16$ neighborhood centered on the feature point is selected; the neighborhood is divided into four sub-regions, and each pixel in the neighborhood has a direction. The directions of the neighborhood are divided into eight groups, and the direction of each pixel is recorded. Then, the number of pixels belonging to each group is calculated, as shown in Fig. 2. The arrows in the figure represent the directions of each group, and the length of the arrows represents the number of pixels contained in each group.

Fig. 2. Generate Feature Descriptor.

Subsequently, the NNDR (nearest neighbor distance ratio) strategy is employed to filter the feature points: based on the generated feature descriptors, the two closest feature points in the other image are identified, using the Euclidean distance as the measure of similarity between feature points in the two images. A match is considered successful if the ratio of the Euclidean distance to the nearest neighbor and that to the next-nearest neighbor is less than a specific threshold, as shown in Fig. 3.

Fig. 3. Filter matching point pairs.

The formula of the whole matching process is as follows:

$${d_{ab}} = \sqrt {\sum\limits_{i = 1}^n {{{({{d_a}(i )- {d_b}(i )} )}^2}} } ,$$
$$\frac{{{d_N}}}{{{d_S}}} < ratio\_thresh,$$
${d_{ab}}$ represents the Euclidean distance between two feature points; ${d_N}$ represents the distance to the nearest neighbor, and ${d_S}$ represents the distance to the next-nearest neighbor. If the ratio $\frac{{{d_N}}}{{{d_S}}}$ (Lowe's ratio) is less than the given threshold $({ratio\_thresh} )$, the pair is determined to be a correct matching point pair.
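A compact way to realize the SIFT detection and NNDR filtering described above is via OpenCV; the sketch below is a minimal version, in which the ratio threshold of 0.7 is an illustrative choice rather than a value taken from the paper.

import cv2

def match_speckle_images(img_a, img_b, ratio_thresh=0.7):
    # Detect SIFT keypoints and descriptors in two grayscale speckle images.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # For each descriptor in image A, find its two closest descriptors in B
    # (Euclidean distance), then apply the nearest/next-nearest ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio_thresh * p[1].distance]

    pts_a = [kp_a[m.queryIdx].pt for m in good]   # 2D pixel coordinates in A
    pts_b = [kp_b[m.trainIdx].pt for m in good]   # corresponding coordinates in B
    return pts_a, pts_b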

(2) Calculate transformation matrix

After the construction and matching of SIFT feature points between 2D images, the matched 2D feature points are mapped to the point cloud data to be spliced to obtain the rotation and translation matrices for the coarse registration. In three-dimensional space, coordinate transformation relationships are determined using at least three points. Therefore, we need to ensure the overlapping area contains at least three common marker points [2].

It is assumed that the local 3D point clouds, point cloud 1 and point cloud 2, are measured by the structured light scanner from two different angles of view, and the two point clouds have a common area. The marker points in point cloud 1 are $P = \{{{p_i}|{{p_i} \in P,i = 1,2, \cdot{\cdot} \cdot ,n} } \}$, and the marker points in point cloud 2 are $Q = \{{{q_i}|{{q_i} \in Q,i = 1,2, \cdot{\cdot} \cdot ,n} } \}$, where $({{p_i},{q_i}} )$ is a matching marker point pair, $n \ge 3$, and ${p_i}$, ${q_i}$ are the 3D coordinates of the marker points.

The methods to determine the transformation matrix of two point clouds mainly include the least square [38], singular value decomposition (SVD) [39], and quaternion methods [40,41]. In this paper, we select the singular value decomposition method to obtain the transformation matrix.

In order to obtain the conversion relationship R and T between point cloud 1 and point cloud 2, the objective function should be minimized:

$$E = \sum\limits_{i = 1}^n {{{|{{q_i} - ({R{p_i} + T} )} |}^2}} .$$

According to formula (16), the rotation matrix R and the shift vector T can be obtained using the singular value decomposition method.
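A short sketch of this SVD solution, in the unweighted form of formula (16), is given below; it assumes the matched marker points are stored as NumPy arrays of shape (n, 3) with corresponding rows, and the function name is ours.

import numpy as np

def rigid_transform_svd(P, Q):
    # Estimate R, T minimizing sum ||q_i - (R p_i + T)||^2 (formula (16)).
    # P, Q: (n, 3) arrays of matched 3D marker points, n >= 3.
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - p_mean, Q - q_mean                  # centered coordinates

    H = X.T @ Y                                    # 3x3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = q_mean - R @ p_mean
    return R, T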

3.2.2 Fine registration

When the coarse registration process is accomplished, the point cloud to be spliced has obtained a good relative position relationship. Based on the curvature information of the two point clouds, we could calculate the matching point pairs of the two point clouds. Then the accurate matching point pairs are obtained through directional vector threshold filtering. Finally, fine adjustments are performed to obtain a more accurate posture.

The two point clouds after coarse registration are recorded as ${P^ \ast }$ and ${Q^ \ast }$. The point cloud ${P^ \ast }$ needs to be centered; the centroid $\bar{x}$ of a query point x and its neighborhood points is:

$$\bar{x} = \frac{1}{n}\sum\limits_{i = 1}^n {{x_i}} .$$

After re-centering the point set ${P^ \ast }$, the new point set is denoted as $p^{\prime}$, that is:

$$p^{\prime} = \{{{{x^{\prime}}_1},{{x^{\prime}}_2},\ldots ,{{x^{\prime}}_n}} \}= \{{{x_1} - \bar{x},{x_2} - \bar{x},\ldots ,{x_n} - \bar{x}} \}.$$

For any query point x in the point cloud ${P^ \ast }$ and each search radius ${r_d}$, the covariance matrix is constructed as:

$${C_d} = \frac{1}{{|{{K_d}} |}}\sum\nolimits_{{x_i} \in {K_d}} {({{x_i} - \bar{x}} ){{({{x_i} - \bar{x}} )}^T}}, $$
where ${K_d} = \{{{x_i}|{\|{x_i} - \bar{x}\|} \le {r_d}} \}$, ${x_i}$ represents a neighborhood point within the search radius of the query point x, and $\bar{x}$ represents the centroid of x and its neighborhood points.

The SVD algorithm is used to decompose the above covariance matrix ${C_d}$ to obtain three eigenvalues, ordered as ${\lambda _{{d_1}}} > {\lambda _{{d_2}}} > {\lambda _{{d_3}}}$ [42]; the surface curvature at the query point x is then recorded as:

$${S_d} = \frac{{{\lambda _{{d_3}}}}}{{{\lambda _{{d_1}}} + {\lambda _{{d_2}}} + {\lambda _{{d_3}}}}}.$$

To enhance the robustness of the calculation results, we usually select different neighborhood search radii for the query points and take the difference of curvature information calculated under different search radii as the matching information we need:

$$\Delta {S_d} = {S_{d + 1}} - {S_d}.$$

For the other point cloud ${Q^ \ast }$, the curvature information is calculated using the same method and recorded as $\Delta {S^{\prime}_d}$. A threshold $\tau $ is set; if $\Delta {S_d} - \Delta {S^{\prime}_d} < \tau $, then this pair of points in the two point clouds is considered to be a matching point pair.
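A minimal NumPy/SciPy sketch of the curvature feature defined by formulas (17)–(21) follows; the use of SciPy's cKDTree for the radius search, the function names and the particular radii are our own choices, not specified in the paper.

import numpy as np
from scipy.spatial import cKDTree

def surface_curvature(points, tree, query, radius):
    # Surface curvature S_d of formula (20) at one query point.
    idx = tree.query_ball_point(query, radius)
    neigh = points[idx]
    if len(neigh) < 3:
        return 0.0                              # too few neighbors to fit a surface
    centered = neigh - neigh.mean(axis=0)       # re-centering, formulas (17)-(18)
    cov = centered.T @ centered / len(neigh)    # covariance matrix, formula (19)
    lam = np.sort(np.linalg.eigvalsh(cov))      # lam[0] <= lam[1] <= lam[2]
    return lam[0] / lam.sum()                   # lambda_d3 / (lambda_d1 + lambda_d2 + lambda_d3)

def curvature_difference(points, query, r_small, r_large):
    # Multi-scale curvature difference Delta S_d of formula (21).
    tree = cKDTree(points)
    return surface_curvature(points, tree, query, r_large) - \
           surface_curvature(points, tree, query, r_small)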

(1) Point cloud normal vector estimation

For a point and its neighborhood points in the point cloud (neighborhood points within radius r, or the k nearest neighbors), a plane is fitted based on the least-squares principle [43,44]; the fitting of the local plane ${P_l}$ can be expressed as:

$${P_l} = \arg \min \sum\limits_{i = 1}^n {{{({{{({{x_i} - m} )}^T}n} )}^2},}$$
where the centroid m is the center of a neighborhood;

The optimization function is $f(n )= {n^T}Sn$ subject to ${n^T}n = 1$, where $S = Y{Y^T}$; thus ${P_l} = \min [{f(n )} ]= \min ({{n^T}Sn} )$.

The normal vector n is obtained by decomposing S. By subtracting the neighborhood centroid from each point in the point set, we obtain the matrix Y, which is decomposed by SVD:

$$Y = U\Sigma {V^T},$$
where U is an $m \times m$ orthogonal matrix, $\Sigma $ is the $m \times n$ diagonal matrix whose diagonal entries are the singular values, and ${V^T}$ is an $n \times n$ orthogonal matrix; the last column of U is the normal vector n, which is also the eigenvector of S with the smallest eigenvalue.

(2) Set constraints to eliminate wrong matching point pairs

The obtained normal vector n is normalized. If the cosine of the angle between the normal vectors of a matching point pair is less than a threshold value $\varepsilon $, that is, $\cos \theta < \varepsilon $, the corresponding point pair is considered to be an erroneous pair and is removed from the corresponding point set.

As shown in Fig. 4, ${q_1},{q_2},{q_3}, \cdot{\cdot} \cdot $ represent some of the points in the point cloud, ${n_1}$ represents the normal vector at point ${q_1}$, and the green and blue lines represent correct correspondences; that is, $({{q_1},{{q^{\prime}}_1}} )$ and $({{q_3},{{q^{\prime}}_3}} )$ are correct corresponding point pairs. ${n_2}$ represents the normal vector at point ${q_2}$, ${n^{\prime}_2}$ represents the normal vector at point ${q^{\prime}_2}$, and $\theta $ is the included angle between ${n_2}$ and ${n^{\prime}_2}$. If $\cos \theta < \varepsilon $, $({{q_2},{{q^{\prime}}_2}} )$ is considered a wrong matching point pair and is eliminated.

Fig. 4. Eliminate wrong matching point pairs.
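The normal estimation and the normal-angle filter of Fig. 4 can be sketched as follows; taking the absolute value of the cosine assumes that the normals of the two clouds are not consistently oriented, and the threshold value is illustrative.

import numpy as np

def estimate_normal(neighbors):
    # Normal of the least-squares plane through a neighborhood (formula (22)).
    # neighbors: (k, 3) array containing the query point and its neighbors.
    Y = (neighbors - neighbors.mean(axis=0)).T       # 3 x k matrix of centered points
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, -1]                                  # left singular vector of the smallest singular value

def keep_consistent_pairs(normals_p, normals_q, eps=0.9):
    # Keep only matching point pairs whose unit normals agree: cos(theta) >= eps.
    cos_theta = np.abs(np.sum(normals_p * normals_q, axis=1))
    return cos_theta >= eps                          # boolean mask over the point pairs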

(3) Calculate transformation matrix

The fine registration is achieved by computing the rigid-body transformation with the SVD algorithm from the exact matching point pairs between the point clouds. The exact matching point pairs of the two point clouds are recorded as $P^{\prime\prime} = \{{{{p^{\prime\prime}}_1},{{p^{\prime\prime}}_2}, \cdot{\cdot} \cdot ,{{p^{\prime\prime}}_n}} \}$ and $Q^{\prime\prime} = \{{{{q^{\prime\prime}}_1},{{q^{\prime\prime}}_2}, \cdot{\cdot} \cdot ,{{q^{\prime\prime}}_n}} \}$, respectively, and the mathematical model is built by calculating the rotation matrix R and the translation vector t between the two point sets:

$$F({R,t} )= \arg \min \sum\limits_{i = 1}^n {{w_i}{{\left\| {({R{{p^{\prime\prime}}_i} + t} )- {{q^{\prime\prime}}_i}} \right\|}^2}} ,$$
where ${w_i}$ represents the weight between each pair of points.

The derivative of t in Eq. (24) is as follows:

$$0 = \frac{{\partial F}}{{\partial t}} = \sum\limits_{i = 1}^n {2{w_i}({({R{{p^{\prime\prime}}_i} + t} )- {{q^{\prime\prime}}_i}} )} = 2t\left( {\sum\limits_{i = 1}^n {{w_i}} } \right) + 2R\left( {\sum\limits_{i = 1}^n {{w_i}{{p^{\prime\prime}}_i}} } \right) - 2\sum\limits_{i = 1}^n {{w_i}{{q^{\prime\prime}}_i}} .$$

The center point $\hat{p}$ of point set $P^{\prime\prime}$ and the center point $\hat{q}$ of point set $Q^{\prime\prime}$ are respectively:

$$\begin{aligned} \hat{p} &= {\frac{{\sum\nolimits_{i = 1}^n {{w_i}{p_i}^{\prime \prime }} }}{{\sum\nolimits_{i = 1}^n {{w_i}} }},}\\ \hat{q} &= {\frac{{\sum\nolimits_{i = 1}^n {{w_i}{q_i}^{\prime \prime }} }}{{\sum\nolimits_{i = 1}^n {{w_i}} }}.} \end{aligned}$$

The sets $X = \{{{x_i}} \}$ and $Y = \{{{y_i}} \}$ are used to represent the centered points ${{p^{\prime\prime}}_i} - \hat{p}$ and ${{q^{\prime\prime}}_i} - \hat{q}$, where ${x_i},{y_i}$ are the points in the new data sets, respectively:

$$\begin{aligned} {x_i} &= {{{p^{\prime\prime}}_i} - \widehat p,}\\ {y_i} &= {{{q^{\prime\prime}}_i} - \widehat q.} \end{aligned}$$

Equation (24) can be converted to:

$$F\left( R \right) = \arg \min \sum\limits_{i = 1}^n {{w_i}{{\left\| {R{x_i} - {y_i}} \right\|}^2}} .$$

Expanding formula (28) and discarding the terms that do not depend on R, minimizing it is equivalent to maximizing:

$$\sum\limits_{i = 1}^n {{w_i}y_i^TR{x_i} = tr({W{Y^T}RX} )= tr({({RX} )({W{Y^T}} )} )= tr({RXW{Y^T}} )} .$$

Taking the SVD of the matrix $XW{Y^T} = U\Sigma {V^T}$, the rotation matrix is obtained as $R = V{U^T}$, and the translation vector as $t = \hat{q} - R\hat{p}$.
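The weighted counterpart of the earlier SVD sketch, following formulas (24)–(29), can be written as below; normalizing the weights is an implementation convenience, not a step stated in the paper, and the function name is ours.

import numpy as np

def weighted_rigid_transform(P, Q, w):
    # P, Q: (n, 3) exact matching point pairs P'' and Q''; w: (n,) per-pair weights.
    w = w / w.sum()
    p_hat = w @ P                       # weighted centroids, formula (26)
    q_hat = w @ Q
    X = P - p_hat                       # centered point sets, formula (27)
    Y = Q - q_hat

    H = X.T @ (w[:, None] * Y)          # 3x3 matrix playing the role of X W Y^T in formula (29)
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T                  # R = V U^T
    t = q_hat - R @ p_hat               # t = q_hat - R p_hat
    return R, t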

4. System and experiment

4.1 System construction

The specific experiments were designed to verify the feasibility of the proposed algorithm, as shown in Fig. 5(a). In the experimental system, the red light projector is utilized to provide texture information for the large-size and weak-textured objects, as shown in Fig. 5(b). The distance between the projector and the object can be adjusted based on the size of the measured range. The measuring distance of our system is 1 m, and the field of view is 550 × 400 mm². The fringe projection measurement system is moved to different positions to accomplish the high-precision 3D reconstruction of the respective areas, as shown in Fig. 5(c):

Fig. 5. Schematic diagram of point cloud mosaic experiment based on speckle assistance.

In this paper, the experimental system is constructed from a color projector (DLP LightCrafter 4500), a blue light projector (PDC03), an RGB camera (MER2-230-168U3M), and a matching lens (HN-0826-20M-C1/1X), as shown in Fig. 6.

Fig. 6. Physical diagram of experimental system.

4.2 Point cloud registration based on scene texture

We measured the plaster heads in the laboratory, as shown in Fig. 7. First, we capture the scene image and the projected fringe patterns at the same location. Then, we use the fringe patterns to accomplish the 3D reconstruction. Finally, we rely on the texture information in the scene image for SIFT feature detection and matching to obtain the position transformation relationship between the point clouds from the two perspectives and accomplish the point cloud registration.

Fig. 7. Point cloud splicing based on scene texture.

The above experiment shows that, for measured objects with rich texture, point cloud registration can be effectively completed by detecting feature points in the two-dimensional images.

4.3 Point cloud registration without speckle assistance

We measured weak-textured objects without speckle assistance, taking the standard white plate as an example. In scenarios where the texture information is inadequate, it is difficult to extract SIFT feature points from the two-dimensional image for feature matching. Due to the deficiency of texture information in the captured white standard board image, only a limited number of feature points can be detected. Therefore, the feature point matching efficiency is low, which prevents the point clouds from obtaining an accurate initial position, as shown in Fig. 8:

Fig. 8. Point cloud registration experiment of large size and weak texture objects without speckle assistance.

The above experimental results show that the point cloud registration method using feature point matching in two-dimensional images is unsuitable for large-size and weak-textured objects.

4.4 Method proposed in this paper

Using the method proposed in this paper, we measured a ball, a standard white plate, a bent steel plate, a wooden wall plate and a perforated iron sheet, respectively. We project speckles onto the measured object's surface to enhance its texture features and facilitate feature point matching in the two-dimensional images and the subsequent point cloud registration.

4.4.1 RGB channel separation

After setting up the measurement system, we separate the color channels of the composite structured light according to the wavelength of light to obtain the red (660 nm) speckle patterns and the blue (440 nm) fringe patterns, which are used for feature matching and three-dimensional reconstruction, respectively, as shown in Fig. 9.

Fig. 9. RGB channel separation.
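With OpenCV, the channel separation amounts to a split of the captured RGB image; the file names below are placeholders.

import cv2

# OpenCV loads color images in B, G, R order: the blue channel carries the
# fringe patterns and the red channel carries the speckle pattern.
img = cv2.imread("composite_capture.png")
blue_fringe, _, red_speckle = cv2.split(img)         # the green channel is discarded

cv2.imwrite("fringe_blue.png", blue_fringe)          # input to the phase-shift reconstruction
cv2.imwrite("speckle_red.png", red_speckle)          # input to SIFT feature matching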

4.4.2 Point cloud registration

After obtaining the point clouds of each sub-region and accomplishing the detection and matching of feature points between images, the matching relationship between feature points in the image is mapped into the point cloud to calculate the relative position relationship between the point clouds.

However, errors may occur when converting 2D feature point coordinates to 3D coordinates, which can lead to inaccurate 3D reconstructions. To mitigate this potential issue, we propose a novel approach that entails determining the precise position of a feature point by computing the coordinates of both the feature point and its neighboring points. Specifically, we consider a feature point to be accurately calculated if the difference between its 3D coordinates and those of its surrounding points falls within a predetermined threshold range, as depicted in Fig. 10:

Fig. 10. SIFT feature point detection and coordinate transformation.
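The neighborhood-consistency check of Fig. 10 can be sketched as follows, assuming the reconstruction provides a per-pixel map of 3D coordinates; the window size and threshold are illustrative parameters, not values from the paper.

import numpy as np

def stable_feature_point(point_map, u, v, win=1, thresh=1.0):
    # point_map: (H, W, 3) array of reconstructed 3D coordinates per pixel.
    # (u, v):    pixel coordinates of a matched 2D feature point (assumed not on the border).
    # win:       half-size of the inspected pixel neighborhood.
    # thresh:    maximum allowed deviation, in the units of the point cloud (e.g. mm).
    center = point_map[v, u]
    neigh = point_map[v - win:v + win + 1, u - win:u + win + 1].reshape(-1, 3)
    # Accept the feature point only if its 3D coordinate deviates from every
    # neighboring 3D coordinate by less than the threshold.
    if np.all(np.linalg.norm(neigh - center, axis=1) < thresh):
        return center
    return None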

After obtaining the precise location of the feature points using the method mentioned above, we use these matching point pairs to calculate the relative position relationship between point clouds, and we still take the standard whiteboard as an example to calculate the coordinate transformation relationship of feature points from different perspectives, as shown in Fig. 11.

Fig. 11. Calculate relative position relationship.

When the relative positional relationship between the point clouds has been determined, we proceed to calculate the rotation matrix R and shift vector t between them. Subsequently, we designate the point cloud in the lower left region as the reference point for the coarse registration process. Thereafter, we leverage the curvature and direction vectors of the point cloud to obtain more precise matching point pairs, which allows us to accurately calculate the positional transformation relationship and achieve the fine registration process. The resulting point cloud registration from different sub-regions is shown in Fig. 12 and summarized in Table 1, with each region represented by a unique color.

Fig. 12. Point cloud splicing results.

Table 1. Point cloud splicing data

4.5 Experimental results and analysis

In order to evaluate the efficacy of the proposed method, we measured five kinds of weak-textured, large-size and repeated-textured objects.

We measured the white ball first. The white ball shown in Fig. 13 has few texture features and is small. In order to verify the effectiveness of the proposed method for weak-textured objects, we measured it from only two perspectives. The RMSE value after registration is 0.7151 mm.

Fig. 13. White Ball.

The bent steel plate shown in Fig. 14 is a typical large-size and weak-textured object, for which it is difficult to achieve complete 3D measurement with traditional methods. In order to verify the effectiveness of the proposed method for large-size and weak-textured objects, we measured the bent steel plate from six perspectives; the RMSE value after each registration is about 0.6277 mm.

Fig. 14. Bent steel plate.

The wooden wallboard shown in Fig. 15 is also a large-size and weak-textured object. It needs to be measured from 15 perspectives, and the RMSE value after each registration is about 0.6366 mm.

Fig. 15. Wooden wallboard.

The iron plate with holes shown in Fig. 16 is a large-size and repeated-textured object, and there are many round holes on it that may cause mismatching of feature points. In order to verify the effectiveness of the method in this paper for large-size and repeated-textured objects, we measure it from four perspectives, and the RMSE value after each registration is about 0.4875 mm.

Fig. 16. Iron sheet.

5. Conclusions

When measuring large-size and weak-textured objects, it is difficult to achieve point cloud registration based on the point clouds' own features and the 2D image features of the corresponding perspectives. To address this issue, we propose a point cloud registration method that combines active projection textures, color channel multiplexing and coarse-to-fine point registration strategies. Our method involves projecting a composite structured light that comprises speckle patterns in a larger area and fringe patterns in a smaller area on the surface of an object and measuring it from various perspectives. By separating the fringe patterns and speckle patterns according to the wavelength of light, we can use the fringe patterns to accomplish the three-dimensional reconstruction of each sub-region and use the speckle pattern features to calculate the relative position relationships between the point clouds. First, we perform coarse registration of the point clouds through feature matching, followed by fine registration accomplished by leveraging the curvature and direction vectors of the point cloud.

Our method eliminates the need for pasting markers on the object's surface, ensuring the completeness and accuracy of the obtained point cloud information. Since 2D image feature point matching does not depend on the shape of the speckles, the accuracy of the proposed method is higher than that of methods that paste or project fixed-shape marker points. Moreover, it does not rely on other high-precision mechanical instruments, thus significantly reducing the constraints on the measurement range and cost. Compared with traditional methods, the proposed method has the advantages of lower cost, simpler operation, and higher practicality and stability. Furthermore, the efficacy of the proposed method in measuring weak-textured, large-size and repeated-textured objects has been demonstrated through experiments on a diverse range of objects, including a white sphere, a standard white plate, a curved steel plate, a wooden wallboard, and an iron sheet with holes.

Funding

National Key Research and Development Program of China (2021YFC2202404).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Du, M. Zhou, P. Ren, W. Shui, P. Zhou, Z. Wu, A. K. Asundi, and Y. Fu, “A 3D modeling and measurement system for cultural heritage preservation,” Proc. SPIE 9524, 952420 (2015). [CrossRef]  

2. Liu, “Research on the Application of 3D Scanning Measurement in the Information Integration System of Building Construction in the Internet Era,” in 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS), (2022), pp. 556–559.

3. L. Qing, F. Weixi, and C. Huanbin, “Research on Visualization Modeling Technology of Massive Laser Point Cloud 3D Data,” in 2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS), (2020), pp. 94–97.

4. B. Xu and C. Liu, “A 3D reconstruction method for buildings based on monocular vision,” Comput-aided Civil Eng. 37(3), 354–369 (2022). [CrossRef]  

5. S. K. Singh, B. P. Banerjee, and S. Raval, “Three-dimensional unique-identifier-based automated georeferencing and coregistration of point clouds in underground mines,” Remote Sensing 13(16), 3145 (2021). [CrossRef]  

6. J. Qian, S. Feng, T. Tao, Y. Hu, K. Liu, and S. Wu, “High-resolution real-time 360 3d model reconstruction of a handheld object with fringe projection profilometry,” Opt. Lett. 44(23), 5751–5754 (2019). [CrossRef]  

7. C. Simon, R. Schütze, F. Boochs, and F. S. Marzani, “Registration of arbitrary multi-view 3D acquisitions,” Comput Ind 64(9), 1082–1089 (2013). [CrossRef]  

8. L. Song, D. Lin, X. Peng, and Z. Li, “Two-stage point cloud registration for 3D measurement of large workpieces,” in 2021 16th International Conference on Computer Science & Education (ICCSE), (2021), pp. 500–505.

9. Y. Ye and Z. Song, “An accurate 3D point cloud registration approach for the turntable-based 3D scanning system,” in 2015 IEEE International Conference on Information and Automation (2015), pp. 982–986.

10. B. Jerbić, F. Šuligoj, M. Švaco, and B. Šekoranja, “Robot assisted 3D point cloud object registration,” Procedia Eng. 100, 847–852 (2015). [CrossRef]  

11. I. Elkhrachy, “Feature Extraction of Laser Scan Data Based on Geometric Properties,” J. Indian Soc. Remote Sens. 45(1), 1–10 (2017). [CrossRef]  

12. R. Liang, “Research on Point Cloud Registration Algorithm Based on Gaussian Curvature,” in 2020 35th Youth Academic Annual Conference of Chinese Association of Automation (YAC), (2020), pp. 465–471.

13. L. Ying, C. Z. Ming, Y. Tian-tian, and N. Kang, “Research on seamless mosaic of point cloud data with improved feature matching,” J. Comput. Simulation 37, 200–205 (2020).

14. J. Han, Y. Cao, L. Xu, W. Liang, Q. Bo, J. Wang, C. Wang, Q. Kou, Z. Liu, and D. Cheng, “3D reconstruction method based on medical image feature point matching,” Comput. Math Method Med. 2022, 1–11 (2022). [CrossRef]  

15. J. Chen, X. Wu, M. Y. Wang, and X. Li, “3D shape modeling using a self-developed hand-held 3D laser scanner and an efficient HT-ICP point cloud registration algorithm,” Opt. Laser Technol. 45, 414–423 (2013). [CrossRef]  

16. C. W. Chung and C. H. Wu, “Candidate-based matching of 3-D point clouds with axially switching pose estimation,” Vis. Comput. 36(3), 593–607 (2020). [CrossRef]  

17. R. Szymon and M. Levoy, “Efficient variants of the ICP algorithm,” in 2001 Proceedings third international conference on 3-D digital imaging and modeling (2001), pp. 145–152.

18. Q. Li, J. Ren, X. Pei, M. Ren, L. Zhu, and X. Zhang, “High-accuracy point cloud matching algorithm for weak-texture surface based on multi-modal data cooperation[J],” Acta Opt. Sin. 42(8), 0810001 (2022).

19. X. Gong, X. Chen, X. Jian, and W. He, “Multi view 3D reconstruction method for weak texture objects based on “eye-in-hand” model,” in 2021 China Automation Congress (CAC) (2021), pp. 5673–5678.

20. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51(19), 4477–4490 (2012). [CrossRef]  

21. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19(6), 5149–5155 (2011). [CrossRef]  

22. W. Lohry, V. Chen, and S. Zhang, “Absolute three-dimensional shape measurement using coded fringe patterns without phase unwrapping or projector calibration,” Opt. Express 22(2), 1287–1301 (2014). [CrossRef]  

23. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015). [CrossRef]  

24. H. Zhao, X. Diao, H. Jiang, and X. Li, “High-speed triangular pattern phase-shifting 3D measurement based on the motion blur method,” Opt. Express 25(8), 9171–9185 (2017). [CrossRef]  

25. P. Zhou, Y. Wang, Y. Xu, Z. Cai, and C. Zuo, “Phase-unwrapping-free 3D reconstruction in structured light field system based on varied auxiliary point,” Opt. Express 30(17), 29957–29968 (2022). [CrossRef]  

26. Z. Zhang, D. P. Towers, and C. E. Towers, “Snapshot color fringe projection for absolute three-dimensional metrology of video sequences,” Appl. Opt. 49(31), 5947–5953 (2010). [CrossRef]  

27. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection,” Opt. Express 14(14), 6444–6455 (2006). [CrossRef]  

28. D. I. H. Putri, Riyanto Martin, and C. Machbub, “Object detection and tracking using SIFT-KNN classifier and Yaw-Pitch servo motor control on humanoid robot,” in 2018 International Conference on Signals and Systems (ICSigSys), (2018), pp. 47–52.

29. S. Feng, C. Zuo, L. Zhang, T. Tao, Y. Hu, W. Yin, J. Qian, and Q. Chen, “Calibration of fringe projection profilometry: A comparative review,” Opt. Laser Eng. 143, 106622 (2021). [CrossRef]  

30. S. Zhang, D. V. D. Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18(9), 9684–9689 (2010). [CrossRef]  

31. J. S. Hyun and S. Zhang, “Enhanced two-frequency phase-shifting method,” Appl. Opt. 55(16), 4395–4401 (2016). [CrossRef]  

32. Z. Zhang, C. E. Towers, and D. P. Towers, “Uneven fringe projection for efficient calibration in high-resolution 3D shape metrology,” Appl. Opt. 46(24), 6113–6119 (2007). [CrossRef]  

33. Y. An, J. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445 (2016). [CrossRef]  

34. M. Zhang, Q. Chen, T. Tao, S. Feng, Y. Hu, H. Li, and C. Zuo, “Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection,” Opt. Express 25(17), 20381 (2017). [CrossRef]  

35. Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry,” Opt. Express 27(13), 18421 (2019). [CrossRef]  

36. S. Wang, Z. Guo, and Y. Liu, “An image matching method based on sift feature extraction and FLANN search algorithm improvement,” J. Phys.: Conf. Ser. 2037(1), 012122 (2021). [CrossRef]  

37. L. Marlinda, S. Rustad, R. S. Basuki, F. Budiman, and M. Fatchan, “Matching Images On The Face Of A Buddha Statue Using The Scale Invariant Feature Transform (SIFT) Method,” in 2020 7th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), (2020), pp. 169–172.

38. J. Yu, Y. Lin, B. Wang, Q. Ye, and J. Cai, “An advanced outlier detected total least-squares algorithm for 3-D point clouds registration,” IEEE Trans. Geosci. Remote Sensing 57(7), 4789–4798 (2019). [CrossRef]  

39. C. Lin, Y. Tai, J. Lee, and Y. Chen, “A novel point cloud registration using 2D image features,” EURASIP J. Adv. Signal Process. 2017(1), 5 (2017). [CrossRef]  

40. F. Xu, Y. Zhou, and C. Luol, “Point Cloud Registration with Low Overlap Based on Dimension Reduction and Feature Matching,” in 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), (2022), pp. 483–488.

41. J. Liu and Z. Ren, “The research and application of the multi-view registration,” in 2010 3rd International Congress on Image and Signal Processing (2010), pp. 1258–1262.

42. J. Li, F. Qian, and X. Chen, “Point cloud registration algorithm based on overlapping region extraction,” J. Phys.: Conf. Ser. 1634(1), 012012 (2020). [CrossRef]  

43. S. Quan, Y. Xin, Y. Cheng, M. Hui, L. Xiao, and D. Xu, “Road extraction from 3D point clouds based on the difference of normal vector,” in Thirteenth International Conference on Graphics and Image Processing (ICGIP 2021), (SPIE, 2022), p. 120831E.

44. J. Ma, J. Wu, K. Yang, C. Zuo, S. Feng, H. Wang, and Q. Kemao, “Measurement point cloud registration method for complex mechanical parts based on improved ICP,” in International Conference on Optical and Photonic Engineering (icOPEN 2022), (SPIE, 2023), p. 1255005.
