
Hyperspectral lidar point cloud segmentation based on geometric and spectral information


Abstract

Light detection and ranging (lidar) can record a 3D environment as point clouds, which are unstructured and difficult to process efficiently. Point cloud segmentation is an effective technology to solve this problem and plays a significant role in various applications, such as forestry management and 3D building reconstruction. Spectral information from images can improve segmentation results, but suffers from varying illumination conditions and the registration problem. New hyperspectral lidar sensor systems can solve these problems, with the capacity to obtain spectral and geometric information simultaneously. Previous segmentation studies on hyperspectral lidar were mainly based on spectral information; the geometric segmentation methods widely used with single-wavelength lidar have not yet been employed for hyperspectral lidar. This study aims to fill this gap by proposing a hyperspectral lidar segmentation method with three stages. First, Connected-Component Labeling (CCL) using the geometric information is employed for base segmentation. Second, the output components of the first stage are split by spectral difference using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Third, the components of the second stage are merged based on spectral similarity using Spectral Angle Match (SAM). Two indoor experimental scenes were set up for validation. We compared the performance of our method with that of the 3D and intensity feature based method. The quantitative analysis indicated that our proposed method improved the point-weighted score by 19.35% and 18.65% in the two experimental scenes, respectively. These results showed that the geometric segmentation method for single-wavelength lidar can be combined with spectral information and contribute to more effective hyperspectral lidar point cloud segmentation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent decades, light detection and ranging (lidar) has been a cost-effective and reliable tool for recording a 3D environment. Lidar has been extensively used in land cover mapping [1], building reconstruction [2], ecological parameter retrieval [3,4] and digital terrain model generation [5]. Single-wavelength lidar generally utilizes an infrared laser to detect objects and present them as dense 3D point clouds. Every point possesses coordinates and an intensity value at one wavelength. Lidar point cloud data sets, however, are unstructured and often massive, thereby posing an obstacle to efficient lidar applications [6,7]. Point cloud segmentation techniques were developed to solve this problem [8] and improve lidar's performance in forestry management [7,9], ecological theory exploration [10], 3D reconstruction [11,12], and autonomous driving [13,14].

Numerous studies on single-wavelength lidar point cloud segmentation have therefore been conducted. The four main geometry-based methods are edge- [15], model- [16], clustering feature- [17], and region growing-based methods [18]. In edge-based methods, points with rapidly changing geometric features are connected to form closed segmentation regions [19,20]. Edge-based methods are sensitive to noise and perform poorly on complex scenes [15]. For model-based methods, the Hough transform (HT) [21,22] and random sample consensus (RANSAC) [23,24] are two widely used algorithms. Model-based methods are fast and robust to outliers, but fail for complex shapes and are difficult to implement in a fully automated manner [25]. In clustering feature-based methods, geometric features are extracted and then clustered [26]. These methods can process both geometrically regular and irregular targets. However, their results depend on the quality of point feature computation and selection [17].

In region growing-based methods, neighboring points that share similar characteristics are collected into one component [18,27]. A region (component) is grown around a seed point based on a predefined similarity criterion of the geometric features until an end condition is satisfied. These methods perform more effectively than edge-based methods on continuous surfaces, but the selection of seed points and the predefined similarity criterion significantly influences segmentation results [28]. In contrast, Connected-Component Labeling (CCL), an improved region growing-based method, requires no seed point and can work on geometrically regular and irregular targets. These advantages contribute to its effectiveness and robustness [29,30], and have made it popular for point cloud segmentation for years [31–33].

However, these geometry-based point cloud segmentation methods tend to over-segment simple surfaces and under-segment complex shapes that have fine features or details [14,34]. Therefore, images with spectral information were introduced to assist the segmentation [35,36]. Fusion of images and point clouds, however, suffers from varying illumination conditions [37] and the registration problem [38].

These limitations are being addressed by a new type of sensor: the hyperspectral lidar, which was developed to obtain geometric and spectral information simultaneously [39–41]. The use of a supercontinuum laser and a spectrograph gives the hyperspectral lidar its capability of obtaining spectral information, and hyperspectral lidar systems are becoming more practical and effective. Malkamäki et al. [42] implemented a compact design of portable hyperspectral lidar with an improved pulse digitizing scheme. Ren et al. [43] established a multispectral lidar with only one single-photon detector using temporal separations between different wavelengths and time-correlated single-photon counting technology. With these instrument improvements, the potential of hyperspectral lidar has been demonstrated in many applications, such as the retrieval of vegetation biochemical parameters [44–46], land cover and land use mapping [47], and target recognition and classification [48,49].

Besides these applications, hyperspectral lidar point cloud segmentation and classification is also a research hotspot. Based on the airborne multispectral lidar Optech Titan, Budei et al. [50] computed 3D and intensity classification features and developed new intensity features, such as a lidar normalized difference vegetation index, to segment single trees and identify tree species. Yu et al. [51] developed an airborne multispectral lidar based solution for forest mapping that is capable of providing species-specific information; airborne laser scanning features used as predictors of tree species were extracted from segmented tree objects and used in random forest classification. However, as the Optech Titan acquires lidar points in three channels at different angles, points from different channels do not coincide at the same GPS time [52]. This defect requires finding corresponding points and complicates the data processing.

Terrestrial hyperspectral lidar using a supercontinuum laser can avoid this defect. Based on a 33-channel hyperspectral lidar ranging from 500 nm to 820 nm, Altmann et al. [53] presented a hierarchical Bayesian classification method to estimate the depth of points and conducted reflectivity-based scene segmentation. Kaasalainen et al. [48] classified wet and dry cardboard and wooden panel points with spectral indices, such as a water concentration index and a normalized water index with substituted wavelengths of 848 and 951 nm. These attempts were mainly based on spectral information, while the geometric segmentation methods widely used with single-wavelength lidar have not been employed for hyperspectral lidar.

To combine the effective geometric method with spectral information of hyperspectral lidar, this study proposes a three-stage segmentation method. In the first stage, a point cloud is segmented by connected-component labeling (CCL). Then, the output components of the first stage are split by spectral difference using density-based spatial clustering of applications with noise (DBSCAN) in the second stage. Lastly, all the components are merged by spectral similarity using spectral angle match (SAM) and geometric distance.

Because the geometric methods widely used with single-wavelength lidar process the raw point cloud rather than components of points, the geometric method was conducted as the base segmentation before the spectral methods. The geometry-based CCL, an improved region growing-based method used to segment point clouds for years [31–33], was selected as the base segmentation given its effectiveness and robustness [30]. DBSCAN was then selected for spectral splitting because it does not require the number of classes as an input and detects arbitrarily shaped clusters. In addition, DBSCAN takes noise into consideration and is robust to outliers [54]. Finally, SAM was employed for spectral similarity measurement, because it is less sensitive to spectral brightness than the Euclidean distance method [55].

Two indoor experimental scenes were set up for validation. Natural and man-made materials with different shapes and different reflective properties were employed in this experiment. The results were assessed in terms of recall, precision, F score, the un-weighted score, and the point-weighted score [56]. We compared our proposed method with the 3D and intensity feature based method to demonstrate the advantages of combining the geometric method with hyperspectral lidar. This advantage has promising potential for more efficient application of hyperspectral lidar to forestry management, 3D reconstruction, autonomous driving and target classification.

2. Experiment instruments and materials

2.1 Experiment instruments

The instrument involved in this study was a terrestrial hyperspectral lidar introduced by Du et al. [57]. In this system, a supercontinuum white laser is transmitted from an optical fiber laser (SC-PRO, YSL Photonics) to a beam sampler. The beam sampler divides the laser beam into two; one beam is focused on an avalanche photodiode (APD) for a trigger, and the other beam is transmitted to a scan mirror, as illustrated in Fig. 1.

Fig. 1 Illustration of the hyperspectral lidar system.

The scan mirror is sufficiently large for the field of view of the receiving telescope; it can rotate vertically and horizontally with high precision. For better spectral and spatial quality, the target is measured with multiple pulses while the scan mirror is pseudo-static. Ten or more pulses are averaged, which contributes to more precise spectral and spatial information. The pulse processing involves waveform analysis, and the amplitude is used as the intensity.

The laser beam is then transmitted to the target, and the scattering echo signal from the target is received by an achromatic telescope. A fiber on the side of an eyepiece guides the echo signal to a grating spectrometer. The echo signal is converted into an electrical signal by a 32-channel photomultiplier tube array that covers 431 nm to 751 nm; the 32 channels are regularly distributed, with a spectral width of 10 nm for each channel. The Cartesian coordinates of the point cloud are calculated from the distance and the angles of the scan mirror rotator.

2.2 Experiment materials

Point cloud segmentation is the process of classifying point clouds into multiple homogeneous components. Thus, points in the same component will have the same properties [58]. These isolated components must be meaningful and not overlap [59]. Based on these segmentation goals, nine and seven materials were set up in our two experimental scenes to evaluate the hyperspectral lidar point cloud segmentation performance.

This experiment was conducted at Wuhan University. The experimental scenes are illustrated in Fig. 2, where the materials were placed at a horizontal distance of approximately 4 m from the telescope receiver. The materials in scene one were a piece of black paper, a box covered with white paper, a Peperomia tetraphylla plant, a plastic flowerpot, a brown cardboard box, an aluminum alloy box, a wooden box, a Sansevieria Trifasciata plant and a ceramic flowerpot. In scene two, the materials were a piece of black paper, an Aloe Vera plant, a white ceramic flowerpot, a red Rubik's cube, a brown cardboard box, and a carrot-like ceramic object. Each plant and its flowerpot were counted as two separate targets because they share neither spectral nor shape properties. Moreover, the orange part and the green leaf-like part of the carrot-like ceramic object on the right side of the second scene were taken as separate materials for the same reason.

Fig. 2 Two indoor experimental scenes with nine and seven materials.

These two experimental scenes included various kinds of materials: vegetation and man-made materials with different reflective properties and different shapes. The vegetation and man-made materials were included to demonstrate the segmentation ability of hyperspectral lidar for both. Materials with high reflectance (e.g., the box covered with white paper), materials with low reflectance (e.g., the black paper), and specular reflection materials (e.g., the aluminum alloy box) were placed to validate the segmentation performance on objects with different kinds of reflective properties. In addition, targets with different shapes (cuboid, cylinder, and vegetation structure) were included.

3. Methods

3.1 The three-stage point cloud segmentation workflow

The workflow of point cloud segmentation for hyperspectral lidar is composed of three stages. Figure 3 depicts a flowchart of the proposed method.

Fig. 3 Flowchart of our proposed three-stage point cloud segmentation method for hyperspectral lidar.

1. The point cloud is segmented by CCL into components, using the geometric coordinates.
2. The output components are split based on the spectral difference in each component.
3. The components are merged in accordance with the criterion for spectral similarity and the geometric distance between the components.

In our proposed approach, the DBSCAN-based spectral splitting is conducted before the mergence, because DBSCAN can split components into small ones with homogeneous spectra, which can then be effectively merged by spectral similarity and distance. If the mergence were conducted before the splitting, a component containing a small number of points that belong to a neighboring component could not be merged with that neighboring component, given the spectral difference between the two components; this small number of points would then not be assigned to the correct component in the final segmentation result. The splitting-then-mergence sequence effectively deals with this situation.

3.2 Connected-component labeling

CCL, which is alternatively called connected-component analysis, is a region growing-based method for lidar point cloud segmentation. CCL was initially used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed [60]. This method has been used to segment point clouds for years [31–33].

The software CloudCompare v2.6 was used for the CCL implementation. The point cloud was first organized into a 3D grid by an octree. Considering the point cloud size (0.93 × 0.37 × 0.31 m for scene one; 0.49 × 0.24 × 0.18 m for scene two) and density (approximately 20,810 points/m² and 23,810 points/m²), the octree level was set to seven and six in scenes one and two, respectively; the corresponding grid steps were 0.74 cm and 0.78 cm. The 3D grid was then traversed to find 26-neighborhood connected grid cells, and connected grid cells containing points were labeled as one component. After CCL, connected points are assigned to one component. These results simulate single-wavelength lidar because only geometric information, without spectral information, was used during the CCL. The details of the CCL method are presented in [60].
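For illustration, the following is a minimal sketch of such grid-based labeling, assuming an (N, 3) NumPy array `xyz` of point coordinates; it uses `scipy.ndimage.label` with a 26-neighborhood structuring element in place of CloudCompare's octree traversal, and the function name and default grid step are hypothetical.

```python
import numpy as np
from scipy import ndimage

def ccl_segment(xyz, grid_step=0.0074):
    """Grid-based connected-component labeling with 26-connectivity.

    xyz: (N, 3) array of point coordinates in meters.
    grid_step: voxel size in meters (0.0074 m matches the ~0.74 cm cell
               of octree level seven in scene one).
    Returns per-point component labels (1..n_components) and the count.
    """
    # Quantize the points into voxel indices.
    idx = np.floor((xyz - xyz.min(axis=0)) / grid_step).astype(int)
    dims = idx.max(axis=0) + 1

    # Mark occupied voxels.
    occupied = np.zeros(dims, dtype=bool)
    occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    # Label connected voxels; a 3x3x3 all-ones structure gives 26-connectivity.
    structure = np.ones((3, 3, 3), dtype=bool)
    voxel_labels, n_components = ndimage.label(occupied, structure=structure)

    # Propagate the voxel labels back to the points.
    point_labels = voxel_labels[idx[:, 0], idx[:, 1], idx[:, 2]]
    return point_labels, n_components
```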

3.3 Splitting based on spectral difference

After CCL, a point cloud is labeled into components. Several materials might be mistakenly over-segmented into many components or under-segmented into a single component, depending on the distances between these materials. Splitting is performed based on spectral difference to eliminate under-segmentation errors.

Points in one component were clustered using Density-Based Spatial Clustering of Applications with Noise (DBSCAN) based on spectral reflectance. The number of materials in an under-segmented component is unknown; DBSCAN was selected because it does not require the number of materials as an input and detects arbitrarily shaped clusters. In addition, DBSCAN takes noise into consideration and is robust to outliers [54]. After performing DBSCAN, if the number of output clusters (M) is one, then the points in this component have similar spectral reflectance, and the component will not be split. If M > 1, then M materials might exist in this component based on the spectral difference, but some of the clusters might be noise clusters, which are detected by the degree of compactness.

If M > 1, then the degree of compactness is calculated for every DBSCAN output cluster to separate the expected clusters from the noise clusters. We propose the degree of compactness as a measure of how compactly the points in one cluster are distributed, defined as:

$$DC = \frac{k}{dis\_max^{2} \times density},$$
where DC is the degree of compactness, k is the number of points in the DBSCAN output cluster, dis_max is the farthest distance between two points in the cluster, and density is the density of the lidar point cloud. The introduction of density is intended to remove the influence of changes in point cloud density across different lidar data sets. DBSCAN output clusters with a low degree of compactness are disregarded as noise clusters, because they are likely to be discrete points with similar spectra.
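As an illustration, the sketch below splits one CCL component by spectral difference with scikit-learn's DBSCAN and then filters the resulting clusters by the degree of compactness; the array names, helper function, and the use of the label -1 for discarded clusters are our own conventions, while eps = 0.07, min_samples = 10, and the 0.15 threshold follow the values reported in Sections 4.2.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import DBSCAN

def split_component(spectra, xyz, density, eps=0.07, min_pts=10, dc_threshold=0.15):
    """Split one CCL component by spectral difference, then filter clusters
    by the degree of compactness DC = k / (dis_max**2 * density) of Eq. (1).

    spectra: (k, 32) reflectance array of the points in this component.
    xyz:     (k, 3) coordinates of the same points.
    density: point cloud density (points per square meter).
    Returns a label array; -1 marks DBSCAN noise or low-compactness clusters.
    """
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(spectra)

    for lab in np.unique(labels):
        if lab == -1:            # DBSCAN noise is already discarded
            continue
        member = labels == lab
        k = member.sum()
        dis_max = pdist(xyz[member]).max() if k > 1 else 0.0
        if dis_max == 0.0:
            continue
        dc = k / (dis_max ** 2 * density)
        if dc < dc_threshold:    # scattered points with similar spectra: noise cluster
            labels[member] = -1
    return labels
```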

3.4 Mergence based on spectral similarity and geometric distance

After splitting based on spectral difference, more components might be produced, with the points in every component sharing similar spectra. Stage 3 was then conducted to merge components based on the spectral similarity and geometric distance between them: components with similar spectra that are close together in space were merged.

The similarity between the mean spectral arrays of two components is calculated using Spectral Angle Match (SAM). SAM is a common approach for spectral similarity measurement and is less sensitive to spectral brightness [55]. In SAM, when the two mean spectral arrays $X=(x_1, x_2, \ldots, x_{32})$ and $Y=(y_1, y_2, \ldots, y_{32})$ are compared, the angle between the two arrays is calculated as

$$\theta = \cos^{-1}\frac{\sum_{i=1}^{32} x_i y_i}{\left(\sum_{i=1}^{32} x_i^{2}\right)^{1/2}\left(\sum_{i=1}^{32} y_i^{2}\right)^{1/2}}, \quad \theta \in \left[0, \frac{\pi}{2}\right]$$

A small θ value indicates high spectral similarity. If the spectral similarity between two components is higher than the threshold (i.e., θ is smaller than the angle threshold), then the minimum spatial Euclidean distance between the two components is calculated. If this minimum distance is less than the distance threshold, the two components are close together and probably belong to the same material; they are therefore merged to produce the final point cloud segmentation components.
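A minimal sketch of this merge test is given below, assuming each component is stored as a dictionary holding a (k, 32) 'spectra' array and a (k, 3) 'xyz' array; the dictionary layout and function names are hypothetical, while the 0.1 rad and 0.08 m thresholds follow the values reported in Section 4.3.

```python
import numpy as np
from scipy.spatial.distance import cdist

def spectral_angle(mean_x, mean_y):
    """Spectral Angle Match between two mean 32-channel spectra (radians)."""
    cos_theta = np.dot(mean_x, mean_y) / (np.linalg.norm(mean_x) * np.linalg.norm(mean_y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def should_merge(comp_a, comp_b, angle_thresh=0.1, dist_thresh=0.08):
    """Merge criterion for two components.

    comp_a, comp_b: dicts with 'spectra' (k, 32) and 'xyz' (k, 3) arrays.
    angle_thresh: SAM threshold in radians (0.1 rad in this study).
    dist_thresh: minimum point-to-point distance threshold in meters (0.08 m).
    """
    theta = spectral_angle(comp_a['spectra'].mean(axis=0),
                           comp_b['spectra'].mean(axis=0))
    if theta > angle_thresh:
        return False
    # Minimum spatial Euclidean distance between the two components.
    min_dist = cdist(comp_a['xyz'], comp_b['xyz']).min()
    return min_dist < dist_thresh
```

In practice, such a pairwise test would be applied repeatedly until no remaining pair of components satisfies both criteria.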

3.5 Segmentation accuracy evaluation

The accuracy of the segmentation is evaluated by recall, precision, F score [6], point-weighted score, and un-weighted score [56]. Recall, precision, and F score are commonly used for evaluating the effectiveness of individual component segmentation, whereas the point-weighted and un-weighted scores are for a whole scene evaluation. Ground truth for accuracy assessment was obtained by manual segmentation. Recall indicates the ratio of correctly retrieved points to the count of ground truth points. Precision represents the ratio of correctly retrieved points to the count of segmentation component points. The F score balances precision and recall. Recall, precision, and F score are computed using the following formulas:

$$recall = \frac{TP}{TP + FN},$$
$$precision = \frac{TP}{TP + FP},$$
$$F\,score = \frac{2 \times recall \times precision}{recall + precision}$$
where true positive (TP), true negative (TN), false positive (FP), and false negative (FN) are obtained from the confusion matrix. TP indicates points that are in both the ground truth component and the corresponding component produced by the segmentation. TN represents points that are in neither the ground truth component nor the corresponding segmented component. FN describes points that are in the ground truth component but not in the corresponding segmented component. FP denotes points that are not in the ground truth component but are in the corresponding segmented component. Figure 4 illustrates these terms in the binary case. A high TP value and low FN and FP values correspond to high accuracy.

Fig. 4 Illustration of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) of the segmentation in a binary case. The blue and red ellipses represent the ground truth component and the component recognized by the segmentation method.

In addition, point-weighted and un-weighted scores [56] were adopted for the overall accuracy measurement of the segmentation. The point-weighted score considers the number of points in every ground truth object, whereas the un-weighted score ignores this number. They are computed as follows:

$$score_{weighted} = \frac{1}{\sum_i |R_i|} \sum_i |R_i| \max_j\!\left[\frac{TP_{ij}}{TP_{ij} + FP_{ij} + FN_{ij}}\right],$$
$$score_{unweighted} = \sum_i \frac{1}{N_r} \max_j\!\left[\frac{TP_{ij}}{TP_{ij} + FP_{ij} + FN_{ij}}\right]$$
where $R_i$ is the ith ground truth material; $|R_i|$ is the number of points in $R_i$; $TP_{ij}$, $FP_{ij}$, and $FN_{ij}$ denote the corresponding terms for the jth segmented component and the ith ground truth component; and $N_r$ is the total number of ground truth components. These measures of segmentation accuracy were used in the evaluation and comparison of the experimental results discussed in Section 4.
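For reference, the following sketch shows how the two overall scores can be computed from per-point label arrays according to the equations above; the array names and function signature are our own.

```python
import numpy as np

def segmentation_scores(gt_labels, pred_labels):
    """Point-weighted and un-weighted scores for a whole scene.

    gt_labels, pred_labels: (N,) integer label arrays per point
    (ground truth materials and predicted components).
    """
    gt_ids = np.unique(gt_labels)
    pred_ids = np.unique(pred_labels)

    best_overlaps, weights = [], []
    for gi in gt_ids:
        gt_mask = gt_labels == gi
        best = 0.0
        for pj in pred_ids:
            pred_mask = pred_labels == pj
            tp = np.sum(gt_mask & pred_mask)
            fp = np.sum(~gt_mask & pred_mask)
            fn = np.sum(gt_mask & ~pred_mask)
            best = max(best, tp / (tp + fp + fn))
        best_overlaps.append(best)          # best overlap for this ground truth material
        weights.append(gt_mask.sum())       # |R_i|

    best_overlaps = np.array(best_overlaps, dtype=float)
    weights = np.array(weights, dtype=float)
    score_weighted = np.sum(weights * best_overlaps) / weights.sum()
    score_unweighted = best_overlaps.mean()
    return score_weighted, score_unweighted
```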

4. Results

The point clouds captured by the hyperspectral lidar are depicted as false-color point clouds in Fig. 5. The backscattered reflectances of the points were calculated by normalization against a Spectralon target used as a diffuse reflection reference [61]. Within this paper, the term "reflectance" is used instead of "backscattered reflectance", because all reflectances were measured close to the same backscattering geometry [62].

Fig. 5 Hyperspectral lidar point cloud presented in false color; the false color was assigned by the first three components of PCA from the 32-channel spectra.

The red, green, and blue (RGB) channels of the false color were assigned from the first three components, respectively, of a principal component analysis (PCA) of the 32-channel spectra [63]. In accordance with the principle of PCA, the first three components contain most of the information of the 32-channel spectra.
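A minimal sketch of this false-color mapping is shown below, assuming scikit-learn's PCA and an (N, 32) reflectance array; the per-channel min-max scaling to [0, 1] for display is an assumption, as the paper does not state how the component values were rescaled.

```python
import numpy as np
from sklearn.decomposition import PCA

def false_color_from_spectra(spectra):
    """Map 32-channel reflectance spectra to RGB false color via PCA.

    spectra: (N, 32) array; returns an (N, 3) array of values in [0, 1],
    where R, G, B are the first three principal components.
    """
    components = PCA(n_components=3).fit_transform(spectra)
    # Scale each principal component independently to [0, 1] for display.
    lo = components.min(axis=0)
    hi = components.max(axis=0)
    rgb = (components - lo) / (hi - lo)
    return rgb
```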

4.1 CCL evaluation results

After CCL in Stage one, experimental scene one was segmented into 65 components and experimental scene two into 26 components using geometric information. Figure 6 depicts the CCL results; different colors represent different components. The results show that CCL separated the main materials in the experimental scenes, but with errors.

Fig. 6 Output components produced by CCL using geometric information, enclosed in orange boxes. The colors of the components are assigned randomly, so different components may share the same color.

The limitations of CCL are as follows. Many scattered small components were observed, especially on the Peperomia tetraphylla and on the top of the box covered with white paper in scene one, and on the Aloe Vera plant in scene two, given the sparse distribution of points in those areas. In addition, some gaps between components indicate that the point clouds of several materials were over-segmented into two or more components, such as the black paper and the Sansevieria Trifasciata plant leaves in scene one. These gaps were caused by occlusion from objects in front of them; this problem might be mitigated by multiple-view scanning. Moreover, materials that are close to each other were segmented into one component, such as the Sansevieria Trifasciata plant and the ceramic flowerpot in scene one, and the Aloe Vera plant and the ceramic flowerpot in scene two.

The performance of CCL was quantitatively evaluated by methods described in Section 3.5. Table 1 lists the recall, precision, and F score of the nine materials in scene one.

Table 1. Recall, precision, and F score of nine materials based on CCL in scene one

Table 1 shows that CCL resulted in high recall and precision values for the box covered with white paper, brown cardboard box, plastic flowerpot, wooden box, and aluminum alloy box in experimental scene one. For the Peperomia tetraphylla, the recall was only 0.415, given its complicated spatial structure and the sparse distribution of points. For the Sansevieria Trifasciata plant and the ceramic flowerpot, the component produced by CCL combined these two materials and resulted in relatively low precision. Furthermore, CCL divided the black paper in scene one into four parts because of the gaps caused by the objects in front of it. Table 2 lists the evaluation results for experimental scene two.

Table 2. Recall, precision, and F score of seven materials based on CCL in scene two

The evaluation results in Table 2 for experimental scene two show results similar to those of experimental scene one. In scene two, the recalls and precisions of the black paper, the Rubik's cube and the brown cardboard box were all higher than or close to 0.9. However, the green and orange ceramic, the Aloe Vera plant and its flowerpot were under-segmented with low precisions, while the top part of the Aloe Vera plant was over-segmented with the lowest recall (0.6667).

Overall, the point-weighted and un-weighted scores based on CCL were 0.7920 and 0.7427 for scene one, and 0.8774 and 0.6759 for scene two. CCL performed satisfactorily on most materials in the two scenes, although under- and over-segmentation occurred for several materials.

4.2 Spectral difference-based splitting evaluation results

After Stage two (splitting based on spectral difference), the 65 components were split into 67 in scene one, and the 26 components were split into 28 in scene two. For optimal performance, the input parameters of DBSCAN, epsilon and the minimum number of points required to form a cluster, were experimentally set to 0.07 and 10, respectively.

Figure 7 presents the performance of DBSCAN on four components; the points in each component were divided into several clusters according to the spectral difference of the points. Different colors represent different clusters, and the small white spots represent the noise recognized by DBSCAN. The left column shows the spectral distribution of the points in the space of the first three PCA components, whereas the right column shows the spatial distribution of the points. The DC values listed in the middle indicate the degrees of compactness of the split clusters.

Fig. 7 Points in the four components (i.e., (a) Sansevieria Trifasciata plant and ceramic flowerpot, (b) Aloe Vera plant and its white ceramic flowerpot, (c) the orange and green parts of the ceramic, and (d) plastic flowerpot) are clustered by DBSCAN.

The performance of DBSCAN varied among the components. In Figs. 7(a)-7(c), the different materials were split into different clusters. In Fig. 7(d), the plastic flowerpot was divided into three clusters; the green cluster No. 2 is the main body, and the blue and red points represent the edges, whose spectral information from the hyperspectral lidar differed from that of the main body because the incidence angle of the laser beam changed. However, the red and blue clusters should not have been split, because they all belonged to the plastic flowerpot; they were therefore disregarded for their low degrees of compactness.

After DBSCAN, the degree of compactness was calculated for every new cluster in accordance with Eq. (1). The degree of compactness measures how compactly the points in one cluster are distributed; clusters with a high degree of compactness are the expected components. The threshold was set to 0.15 experimentally. Clusters with a degree of compactness less than the threshold were disregarded, and the others were treated as new components. Figure 7 demonstrates that the degree of compactness is effective for disregarding the noise clusters.

4.3 Spectral similarity and geometric distance-based mergence evaluation results

After Stage three, the 67 components were merged into 32 components in scene one, and the 28 components into 12 in scene two, based on spectral similarity and geometric distance. Figure 8 depicts the results. For optimal performance, the thresholds of the minimum spatial Euclidean distance and the SAM angle were experimentally set to 0.08 m and 0.1 rad, respectively.

Fig. 8 Final segmentation result based on our proposed three-stage segmentation method.

In Stage three, components of the same material were merged. The point-weighted and un-weighted scores after Stage three (0.9173 and 0.8716 in scene one; 0.9506 and 0.8673 in scene two) were higher than the values obtained by CCL alone (0.7920 and 0.7427 in scene one; 0.8774 and 0.6759 in scene two). Tables 3 and 4 summarize the recall, precision, and F score of the two experimental scenes after Stage three. The accuracies for the Peperomia tetraphylla plant, Sansevieria Trifasciata plant, Aloe Vera plant, ceramic flowerpot, and black paper were improved, whereas the accuracies for the other materials remained unchanged or slightly declined.

Table 3. Recall, precision, and F score of nine materials of the final result in scene one

Table 4. Recall, precision, and F score of seven materials of the final result in scene two

The effectiveness of our proposed method is clearly displayed in these results. Twelve components of the Peperomia tetraphylla were merged into a single component given their spectral similarity and close geometric distance; thus, the recall of the Peperomia tetraphylla plant increased from 0.415 to 0.75. Similarly, three components of the black paper in scene one were merged after Stage 3, and its recall increased from 0.695 to 0.952.

Several errors remained after the mergence. The points on the top of the box covered with white paper in scene one were merged into one component, but were not merged into the component of the entire box, because the echo intensity is related to the incidence angle in accordance with the lidar equation [64,65]. The incidence angles of the points on the top were close to 90°, whereas the incidence angles of the points on the main part of the box were much lower than 90°; therefore, the top and main parts were difficult to merge. This error also occurred on the top of the wooden box in scene one.

For the Peperomia tetraphylla plant in scene one and the Aloe Vera plant in scene two, some points in the plants were not merged into the main body component, even though their recalls increased from 0.415 to 0.75 and from 0.67 to 0.916, respectively. The reason could be the considerable variation of spectral reflectance in vegetation, caused by the complex spatial structure of the vegetation and the variation of the incidence angle. The spatial structure of a plant canopy is complex and might cause multiple echoes of a laser beam; these multiple echoes significantly influence the spectral echo signal by mixing spectral signals from different targets [66]. In addition, the incidence angle of the laser beam varied from leaf to leaf, which might also have caused the spectral variation in the vegetation.

5. Discussion

5.1 The segmentation improvement from the spectral information

Comparing the results of Stages one and three in Section 4, the improvement in segmentation performance from the spectral information of hyperspectral lidar is evident. DBSCAN and SAM measure the spectral difference and similarity, which were used to split and merge the intermediate results and thereby eliminate segmentation errors. The quantitative analysis shows that the overall performance was improved.

The point-weighted score increased from 0.7920 to 0.9173 in scene one, and from 0.8774 to 0.9506 in scene two. The un-weighted score increased from 0.7427 to 0.8716 in scene one, and from 0.6759 to 0.8673 in scene two. The recall, precision, and F score for the materials were improved, retained, or slightly reduced. The spectral difference helped split combined components such as the Aloe Vera plant and the ceramic flowerpot. In addition, spectral similarity permitted the merging of the scattered components of the Peperomia tetraphylla and Aloe Vera plants.

5.2 Comparison with 3D and intensity feature based method

To validate the effectiveness of our proposed method, the 3D and intensity feature based method was employed for comparison. 3D and intensity feature based methods have achieved satisfactory results with airborne multispectral lidar for forest applications [50] and with terrestrial hyperspectral lidar for target detection [48,67]. The 3D features used in this study were linearity, planarity, sphericity and curvature [68]; the intensity features were the raw intensity and the normalized intensity. These features were calculated for every point and then clustered by K-Means. K-Means was selected as an unsupervised method for a fair comparison, because our approach contains no supervised step. Figure 9 shows the 3D and intensity feature based results for experimental scenes one and two.
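The sketch below illustrates such a baseline, assuming scikit-learn's NearestNeighbors and KMeans; the neighborhood size, the exact eigenvalue-based feature formulas, and the intensity normalization are assumptions and may differ in detail from the feature definitions of [68].

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

def geometric_features(xyz, k=20):
    """Eigenvalue-based 3D features per point: linearity, planarity,
    sphericity, and change of curvature, computed over k nearest neighbors."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(xyz)
    _, idx = nbrs.kneighbors(xyz)
    feats = np.zeros((xyz.shape[0], 4))
    for i, neigh in enumerate(idx):
        cov = np.cov(xyz[neigh].T)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]        # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(evals, 1e-12)                 # guard against zeros
        feats[i] = [(l1 - l2) / l1,          # linearity
                    (l2 - l3) / l1,          # planarity
                    l3 / l1,                 # sphericity
                    l3 / (l1 + l2 + l3)]     # change of curvature
    return feats

def feature_based_segmentation(xyz, intensity, n_clusters=9):
    """Cluster per-point [3D features, raw and normalized intensity] with K-Means."""
    geo = geometric_features(xyz)
    inten = intensity.reshape(-1, 1)
    inten_norm = (inten - inten.min()) / (inten.max() - inten.min())
    features = np.hstack([geo, inten, inten_norm])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
```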

Fig. 9 3D and intensity feature based segmentation result.

The point-weighted and un-weighted scores of the 3D and intensity feature based segmentation were 0.7238 and 0.5213 in scene one, and 0.7641 and 0.4774 in scene two. These scores were much lower than those of our proposed method. Our proposed method outperformed the 3D and intensity feature based method on the Peperomia tetraphylla plant and plastic flowerpot in scene one and on the carrot-like ceramic object in scene two; the reason may be their relatively complex spatial structures and varying intensities. In addition, comparing Fig. 8 and Fig. 9, there is more salt-and-pepper noise in the 3D and intensity feature based results. Therefore, our proposed method is more effective in terms of the quantitative analysis.

5.3 Outlook

5.3.1 Limitation of the proposed method

The proposed segmentation method is based on the hypothesis that the points of a material are geometrically lumped together and spectrally similar. Thus, the performance is influenced by variations in the geometric and spectral situation. If two materials are too close together and spectrally similar, they cannot be separated. If two parts of one material are too distant from each other due to occlusion by an object in front (e.g., the black paper was obstructed by the Sansevieria Trifasciata plant in scene one), then the mergence may not be achieved.

In addition, scattered points are not segmented well, not only by the proposed method but also by other segmentation methods. Most scattered points appear in vegetation with a complex structure, where the spectral and geometric information of the points may be influenced by multiple echoes [66]. Therefore, the more the quality of the geometric and spectral information declines, the greater the difficulty in segmentation.

The proposed method uses both spectral and geometric information for hyperspectral lidar point cloud segmentation, but the joint use of this information still requires improvement. In Stage one, CCL is geometry-based. In Stage two, DBSCAN uses the spectral information, and the degree of compactness employs the geometric information. In Stage three, SAM uses the spectral information, whereas the mergence employs the Euclidean distance between components, which is geometric information. The spectral and geometric information is thus used separately. Therefore, a method with an enhanced combination of the spectral and geometric information should be developed in future studies.

5.3.2 Radiometric calibration

The spectral information helped improve segmentation performance, but the quality of the spectral information still needs to be improved for a precise expression of the spectral attributes of the objects. Radiometric calibration can solve this problem. For example, the top of the white paper box could be merged into the main part of the box body if the influence of the incidence angle of the laser beam were mitigated by radiometric calibration. Furthermore, the cylindrical shape of the plastic flowerpot changes the beam incidence angle, resulting in spectral variation between its left and right sides. Therefore, if radiometric calibration can mitigate the influence of the incidence angle, it may contribute to more effective spectra-based splitting and mergence.

However, no standard radiometric calibration workflow for hyperspectral lidar is available to date. Shuo et al. [69] utilized the ratio of the difference in wavelength reflectance to remove the influence of the incidence angle based on the radar equation. Furthermore, Kaasalainen et al. [70] found that spectral indices cannot completely remove the incidence angle effects, given the uncertainty from systematic errors caused by the wavelength dependency of laser incidence angle effects. Therefore, a standard radiometric calibration method for hyperspectral lidar remains to be developed.

6. Conclusion

To combine the effective geometric method with the spectral information of hyperspectral lidar, this study proposed a three-stage segmentation method: 1) the point cloud was first segmented by the CCL method using geometric information only; 2) the output components of Stage one were split by DBSCAN using spectral difference; 3) the components of Stage two were merged by spectral similarity using SAM and the Euclidean geometric distance.

Our results indicate that the widely used geometric method can be combined with the spectral information of hyperspectral lidar for better segmentation performance. The contribution from the spectral information was also demonstrated by the increase of the point-weighted and un-weighted scores. Compared with the 3D and intensity feature based method (point-weighted scores of 0.7238 and 0.7641, un-weighted scores of 0.5213 and 0.4774), our proposed method achieved higher point-weighted scores (0.9173 and 0.9506) and un-weighted scores (0.8716 and 0.8673) in the two scenes, and also produced less salt-and-pepper noise in the segmentation results. A standard radiometric calibration method for hyperspectral lidar, however, needs to be further developed to obtain precise spectral reflectance that will contribute to more effective segmentation.

Funding

National Key R&D Program of China (2018YFB0504500), National Natural Science Foundation of China (41601360, 41571370, 41801268), and Wuhan Morning Light Plan of Youth Science and Technology (2017050304010308).

References

1. Y. Qin, S. Li, T.-T. Vu, Z. Niu, and Y. Ban, “Synergistic application of geometric and radiometric features of LiDAR data for urban land cover mapping,” Opt. Express 23(11), 13761–13775 (2015). [CrossRef]   [PubMed]  

2. A. Sampath and J. Shan, “Segmentation and reconstruction of polyhedral building roofs from aerial lidar point clouds,” IEEE Trans. Geosci. Remote Sens. 48(3), 1554–1567 (2010). [CrossRef]  

3. Z. X. Pan, F. Y. Mao, W. Wang, T. Logan, and J. Hong, “Examining intrinsic aerosol-cloud interactions in south asia through multiple satellite observations,” J. Geophys. Res. Atmos. 123(19), 11210–11224 (2018). [CrossRef]  

4. J. Yang, L. Du, W. Gong, S. Shi, J. Sun, and B. Chen, “Analyzing the performance of the first-derivative fluorescence spectrum for estimating leaf nitrogen concentration,” Opt. Express 27(4), 3978–3990 (2019). [CrossRef]   [PubMed]  

5. Z. Hui, B. Wu, Y. Hu, and Y. Y. Ziggah, “Improved progressive morphological filter for digital terrain model generation from airborne lidar data,” Appl. Opt. 56(34), 9359–9367 (2017). [CrossRef]   [PubMed]  

6. A. V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS J. Photogramm. Remote Sens. 104, 88–100 (2015). [CrossRef]  

7. V. F. Strîmbu and B. M. Strîmbu, “A graph-based segmentation algorithm for tree crown extraction using airborne lidar data,” ISPRS J. Photogramm. Remote Sens. 104, 30–43 (2015). [CrossRef]  

8. G. Vosselman and H. G. Maas, Airborne and terrestrial laser scanning (DBLP, 2010).

9. C. Dechesne, C. Mallet, A. L. Bris, and V. Gouet-Brunet, “Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery,” ISPRS J. Photogramm. Remote Sens. 126, 129–145 (2017). [CrossRef]  

10. S. Tao, F. Wu, Q. Guo, Y. Wang, W. Li, B. Xue, X. Hu, P. Li, D. Tian, C. Li, H. Yao, Y. Li, G. Xu, and J. Fang, “Segmenting tree crowns from terrestrial and mobile lidar data by exploring ecological theories,” ISPRS J. Photogramm. Remote Sens. 110, 66–76 (2015). [CrossRef]  

11. L. Zhang, Z. Li, A. Li, and F. Liu, “Large-scale urban point cloud labeling and reconstruction,” ISPRS J. Photogramm. Remote Sens. 138, 86–100 (2018). [CrossRef]  

12. M. Jarząbek-Rychard and A. Borkowski, “3d building reconstruction from als data using unambiguous decomposition into elementary structures,” ISPRS J. Photogramm. Remote Sens. 118, 1–12 (2016). [CrossRef]  

13. M. Soilán, B. Riveiro, J. Martínez-Sánchez, and P. Arias, “Segmentation and classification of road markings using mls data,” ISPRS J. Photogramm. Remote Sens. 123, 94–103 (2017). [CrossRef]  

14. T. Hackel, J. D. Wegner, and K. Schindler, “Fast semantic segmentation of 3d point clouds with strongly varying density,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. III-3, 177–184 (2016). [CrossRef]  

15. E. Castillo, J. Liang, and H. Zhao, “Point cloud segmentation and denoising via constrained nonlinear least squares normal estimates,” in Innovations for shape analysis (Springer, 2013), pp. 283–299.

16. C. Dong, L. Zhang, P. Takis, and X. Huang, “A methodology for automated segmentation and reconstruction of urban 3-d buildings from als point clouds,” IEEE J. Sel. Top. Appl. Earth Observ. 7, 4199–4217 (2017).

17. X. Lu, J. Yao, J. Tu, K. Li, L. Li, and Y. Liu, “Pairwise linkage for point cloud segmentation,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. III-3, 201–208 (2016). [CrossRef]  

18. A. Jagannathan and E. L. Miller, “Three-dimensional surface mesh segmentation using curvedness-based region growing approach,” IEEE Trans. Pattern Anal. Mach. Intell. 29(12), 2195–2204 (2007). [CrossRef]   [PubMed]  

19. T. Rabbani, F. Van Den Heuvel, and G. Vosselmann, “Segmentation of point clouds using smoothness constraint,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 36, 248–253 (2006).

20. A. D. Sappa and M. Devy, “Fast range image segmentation by an edge detection strategy,” in Proceedings Third International Conference on 3-D Digital Imaging and Modeling, (IEEE, 2001), 292–299. [CrossRef]  

21. D. H. Ballard, “Generalizing the hough transform to detect arbitrary shapes,” Pattern Recognit. 13(2), 111–122 (1981). [CrossRef]  

22. G. Vosselman, B. G. Gorte, G. Sithole, and T. Rabbani, “Recognising structure in laser scanner point clouds,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 46, 33–38 (2004).

23. D. Chen, L. Zhang, P. T. Mathiopoulos, and X. Huang, “A methodology for automated segmentation and reconstruction of urban 3-d buildings from als point clouds,” IEEE J. Sel. Top. Appl. Earth Observ. 7(10), 4199–4217 (2014). [CrossRef]  

24. M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM 24(6), 381–395 (1981). [CrossRef]  

25. R. Hesami, A. Babhadiashar, and R. Hosseinnezhad, “Range segmentation of large building exteriors: A hierarchical robust approach,” Comput. Vis. Image Underst. 114(4), 475–490 (2010). [CrossRef]  

26. J. M. Biosca and J. L. Lerma, “Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods,” ISPRS J. Photogramm. Remote Sens. 63(1), 84–98 (2008). [CrossRef]  

27. P. J. Besl and R. C. Jain, “Segmentation through variable-order surface fitting,” IEEE Trans. Pattern Anal. Mach. Intell. 10(2), 167–192 (1988). [CrossRef]  

28. E. Grilli, F. Menna, and F. Remondino, “A review of point clouds segmentation and classification algorithms,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLII-2/W3, 339–344 (2017). [CrossRef]  

29. M. Awrangjeb, C. S. Fraser, and G. Lu, “Building change detection from lidar point cloud data based on connected component analysis,” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2(W5), 393–400 (2015). [CrossRef]  

30. M. B. Dillencourt, H. Samet, and M. Tamminen, “A general approach to connected-component labeling for arbitrary image representations,” J. Assoc. Comput. Mach. 39(2), 253–280 (1992). [CrossRef]  

31. P. Duraisamy and B. Buckles, “Graph-connected components for filtering urban lidar data,” J. Appl. Remote Sens. 9(1), 096075 (2015). [CrossRef]  

32. J. Zhang, X. Lin, and X. Ning, “Svm-based classification of segmented airborne lidar point clouds in urban areas,” Remote Sens. 5(8), 3749–3775 (2013). [CrossRef]  

33. A. Börcs, B. Nagy, and C. Benedek, “Instant object detection in lidar point clouds,” IEEE Geosci. Remote Sens. Lett. 14(7), 992–996 (2017). [CrossRef]  

34. Q. Zhan, Y. Liang, and Y. Xiao, “Color-based segmentation of point clouds,” Laser scanning 38, 155–161 (2009).

35. K. K. Sareen, G. K. Knopf, and R. Canas, “Hierarchical data clustering approach for segmenting colored three-dimensional point clouds of building interiors,” Opt. Eng. 50(7), 077003 (2011). [CrossRef]  

36. H. Luo, C. Wang, C. Wen, Z. Cai, Z. Chen, H. Wang, Y. Yu, and J. Li, “Patch-based semantic labeling of road scene using colorized mobile lidar point clouds,” IEEE Trans. Intell. Transp. Syst. 17(5), 1286–1297 (2016). [CrossRef]  

37. R. Malik, J. Khurshid, and S. N. Ahmad, “Road sign detection and recognition using colour segmentation, shape analysis and template matching,” in 2007 International Conference on Machine Learning and Cybernetics, (IEEE, 2007), 3556–3560. [CrossRef]  

38. W. Zhang, J. Zhao, M. Chen, Y. Chen, K. Yan, L. Li, J. Qi, X. Wang, J. Luo, and Q. Chu, “Registration of optical imagery and LiDAR data using an inherent geometrical constraint,” Opt. Express 23(6), 7694–7702 (2015). [CrossRef]   [PubMed]  

39. S. Song, W. Gong, B. Zhu, and X. Huang, “Wavelength selection and spectral discrimination for paddy rice, with laboratory measurements of hyperspectral leaf reflectance,” ISPRS J. Photogramm. Remote Sens. 66(5), 672–682 (2011). [CrossRef]  

40. I. H. Woodhouse, C. Nichol, P. Sinclair, J. Jack, F. Morsdorf, T. J. Malthus, and G. Patenaude, “A multispectral canopy lidar demonstrator project,” IEEE Geosci. Remote Sens. Lett. 8(5), 839–843 (2011). [CrossRef]  

41. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119–7127 (2012). [CrossRef]   [PubMed]  

42. T. Malkamäki, S. Kaasalainen, and J. Ilinca, “Portable hyperspectral lidar utilizing 5 GHz multichannel full waveform digitization,” Opt. Express 27(8), A468–A480 (2019). [CrossRef]   [PubMed]  

43. X. Ren, Y. Altmann, R. Tobin, A. Mccarthy, S. Mclaughlin, and G. S. Buller, “Wavelength-time coding for multispectral 3d imaging using single-photon lidar,” Opt. Express 26(23), 30146–30161 (2018). [CrossRef]   [PubMed]  

44. L. Du, S. Shi, J. Yang, W. Wang, J. Sun, B. Cheng, Z. Zhang, and W. Gong, “Potential of spectral ratio indices derived from hyperspectral LiDAR and laser-induced chlorophyll fluorescence spectra on estimating rice leaf nitrogen contents,” Opt. Express 25(6), 6539–6549 (2017). [CrossRef]   [PubMed]  

45. A. M. Wallace, A. Mccarthy, C. J. Nichol, X. Ren, S. Morak, D. Martinez-Ramirez, I. H. Woodhouse, and G. S. Buller, “Design and evaluation of multispectral lidar for the recovery of arboreal parameters,” IEEE Trans. Geosci. Remote Sens. 52(8), 4942–4954 (2014). [CrossRef]  

46. J. Sun, S. Shi, J. Yang, L. Du, W. Gong, B. Chen, and S. Song, “Analyzing the performance of prospect model inversion based on different spectral information for leaf biochemical properties retrieval,” ISPRS J. Photogramm. Remote Sens. 135, 74–83 (2018). [CrossRef]  

47. S. Morsy, A. Shaker, and A. El-Rabbany, “Multispectral lidar data for land cover classification of urban areas,” Sensors (Basel) 17(5), 958 (2017). [CrossRef]   [PubMed]  

48. S. Kaasalainen, L. Ruotsalainen, M. Kirkko-Jaakkola, O. Nevalainen, and T. Hakala, “Towards multispectral, multi-sensor indoor positioning and target identification,” Electron. Lett. 53(15), 1008–1011 (2017). [CrossRef]  

49. B. Chen, S. Shi, W. Gong, Q. Zhang, J. Yang, L. Du, J. Sun, Z. Zhang, and S. Song, “Multispectral lidar point cloud classification: A two-step approach,” Remote Sens. 9(4), 373 (2017). [CrossRef]  

50. B. C. Budei, B. St-Onge, C. Hopkinson, and F.-A. Audet, “Identifying the genus or species of individual trees using a three-wavelength airborne lidar system,” Remote Sens. Environ. 204, 632–647 (2018). [CrossRef]  

51. X. Yu, J. Hyyppä, P. Litkey, H. Kaartinen, M. Vastaranta, and M. Holopainen, “Single-sensor solution to tree species classification using multispectral airborne laser scanning,” Remote Sens. 9(2), 108 (2017). [CrossRef]  

52. J. Fernandez-Diaz, W. Carter, C. Glennie, R. Shrestha, Z. Pan, N. Ekhtari, A. Singhania, D. Hauser, and M. Sartori, “Capability assessment and performance metrics for the titan multispectral mapping lidar,” Remote Sens. 8(11), 936 (2016). [CrossRef]  

53. Y. Altmann, A. Maccarone, A. McCarthy, G. Buller, and S. McLaughlin, “Joint range estimation and spectral classification for 3d scene reconstruction using multispectral lidar waveforms,” in 2016 IEEE Statistical Signal Processing Workshop (SSP), (IEEE, 2016), 1–5.

54. M. Ester, “A density-based algorithm for discovering clusters in large spatial databases with noise,” Proc. 1996 Int. Conf. Knowledg Discovery and Data Mining (KDD ’96), 226–231 (1996).

55. B. An, S.-h. Chen, and W.-d. Yan, “Application of sam algorithm in multispectral image classification,” Chinese Journal of Stereology and Image Analysis 1, 55–60 (2005).

56. D. Hoiem, A. N. Stein, A. A. Efros, and M. Hebert, “Recovering occlusion boundaries from an image,” Int. J. Comput. Vis. 91(3), 328–346 (2011). [CrossRef]  

57. L. Du, W. Gong, S. Shi, J. Yang, J. Sun, B. Zhu, and S. Song, “Estimation of rice leaf nitrogen contents based on hyperspectral lidar,” Int. J. Appl. Earth Obs. Geoinf. 44, 136–143 (2016). [CrossRef]  

58. A. Nguyen and B. Le, “3d point cloud segmentation: A survey,” in 2013 6th IEEE conference on robotics, automation and mechatronics (RAM), (IEEE, 2013), 225–230. [CrossRef]  

59. A. M. Ramiya, R. R. Nidamanuri, and R. Krishnan, “Object-oriented semantic labelling of spectral–spatial lidar point cloud for urban land cover classification and buildings detection,” Geocarto Int. 31(2), 121–139 (2016). [CrossRef]  

60. H. Samet and M. Tamminen, “Efficient component labeling of images of arbitrary dimension represented by linear bintrees,” IEEE Trans. Pattern Anal. Mach. Intell. 10(4), 579–586 (1988). [CrossRef]  

61. G. T. Georgiev and J. J. Butler, “Long-term calibration monitoring of spectralon diffusers brdf in the air-ultraviolet,” Appl. Opt. 46(32), 7892–7899 (2007). [CrossRef]   [PubMed]  

62. E. Puttonen, J. Suomalainen, T. Hakala, E. Räikkönen, H. Kaartinen, S. Kaasalainen, and P. Litkey, “Tree species classification from fused active hyperspectral reflectance and lidar measurements,” For. Ecol. Manage. 260(10), 1843–1852 (2010). [CrossRef]  

63. A. M. Martinez and A. C. Kak, “Pca versus lda,” IEEE Trans. Pattern Anal. Mach. Intell. 23(2), 228–233 (2002). [CrossRef]  

64. J. Yang, Y. Cheng, L. Du, W. Gong, S. Shi, J. Sun, and B. Chen, “Analyzing the effect of the incidence angle on chlorophyll fluorescence intensity based on laser-induced fluorescence lidar,” Opt. Express 27(9), 12541–12550 (2019). [CrossRef]   [PubMed]  

65. S. Kaasalainen, A. Jaakkola, M. Kaasalainen, A. Krooks, and A. Kukko, “Analysis of incidence angle and distance effects on terrestrial laser scanner intensity: Search for correction methods,” Remote Sens. 3(10), 2207–2221 (2011). [CrossRef]  

66. K. Koenig and B. Höfle, “Full-waveform airborne laser scanning in vegetation studies—a review of point cloud and waveform features for tree species classification,” Forests 7(12), 198 (2016). [CrossRef]  

67. E. Puttonen, T. Hakala, O. Nevalainen, S. Kaasalainen, A. Krooks, M. Karjalainen, and K. Anttila, “Artificial target detection with a hyperspectral lidar over 26-h measurement,” Opt. Eng. 54(1), 013105 (2015). [CrossRef]  

68. Q. Li and X. Cheng, “Comparison of different feature sets for tls point cloud classification,” Sensors (Basel) 18(12), 4206 (2018). [CrossRef]   [PubMed]  

69. S. Shuo, S. Shalei, G. Wei, D. Lin, Z. Bo, and H. Xin, “Improving backscatter intensity calibration for multispectral lidar,” IEEE Geosci. Remote Sens. Lett. 12(7), 1421–1425 (2015). [CrossRef]  

70. S. Kaasalainen, M. Åkerblom, O. Nevalainen, T. Hakala, and M. Kaasalainen, “Uncertainty in multispectral lidar signals caused by incidence angle effects,” Interface Focus 8(2), 20170033 (2018). [CrossRef]   [PubMed]  
