
Multispectral LiDAR point cloud highlight removal based on color information

Open Access

Abstract

With the rapid development of light detection and ranging (LiDAR) technology, multispectral LiDAR (MSL) can realize three-dimensional (3D) imaging of ground objects by acquiring rich spectral information. Although color restoration has been achieved on the basis of the full-waveform data of MSL, further improvement of the visual effect of color point clouds still faces many challenges. In this paper, a highlight removal method for MSL color point clouds is proposed to explore the potential of 3D visualization. First, the MSL reflection model is introduced according to the radar equation and the Phong model, and the restored color of the MSL point clouds is shown to comprise diffuse and specular components. Second, a data conversion method is proposed to improve the processing efficiency for massive point clouds through spatial dimension reduction and data compression. Then, the visual saliency map after color denoising is used to locate the highlight region, whose unknown information is recovered from the global or local color information. Finally, three representative targets are selected and evaluated by qualitative and quantitative validation, which verifies that the method can effectively recover high-quality, highlight-free MSL point clouds.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) vision has been a research hotspot in computer vision in recent years. As the main technical means of 3D vision, light detection and ranging (LiDAR) can acquire spatial information to constitute point clouds, which offer the unique advantages of high precision, all-day operation, and large-scale detection [1]. LiDAR point clouds have been widely used in many applications, such as autonomous driving, digital construction, and target detection [2–5].

Most LiDAR point clouds are currently monochromatic, lacking abundant spectral or color information. Many approaches based on the fusion of active and passive imaging data have been proposed to compensate for this insufficiency [6,7]. However, the fused data suffer from shortcomings, such as dependency on solar illumination and shadowing effects [8,9]. Moreover, achieving complete matching between discrete echo points and continuous plane pixels remains difficult [10]. With the advancement of supercontinuum laser sources and photoelectric detection techniques, LiDAR technology is developing from single-wavelength toward multispectral or even hyperspectral systems [11–15]. This new kind of multispectral LiDAR (MSL) can acquire 3D point clouds with multispectral information at a single laser footprint by increasing the number of receiving bands. Numerous studies have applied MSL data to target recognition and physical property discrimination, demonstrating considerable superiority over traditional monochromatic LiDAR data [16–18].

In addition to enabling the detection of 3D physical properties, multispectral point clouds can be inverted into color point clouds, which offer great potential for 3D visualization [19]. Since passive imagery compensation is not required, the color point clouds generated by MSL can overcome the shortcomings of traditional monochrome point clouds and meet the increasing application requirements of 3D scene reconstruction.

However, specular reflection during laser transmission inevitably results in point cloud highlights in the color space. Highlights are a common phenomenon in active and passive imaging. An ideal Lambertian surface is generally assumed to produce only diffuse reflection, but most surfaces also produce specular reflection. Considerably strong specular highlights induce saturated echo signals and seriously reduce the quality and visualization of color point clouds. Highlight removal has been extensively studied in passive imaging, and methods such as physical model analysis, mathematical estimation, and light source compensation have proven effective [20–23]. For monochromatic point clouds, however, highlights are unfortunately often ignored or masked in point cloud processing owing to the lack of color information. Some LiDAR reflection models with both diffuse and specular reflection have been proposed to analyze the effect of specular reflection on point clouds and to correct laser echo intensity [24–27]. Such model-based approaches rely on the selection or estimation of the surface roughness and specular reflection coefficient, which makes them difficult to apply to point cloud highlight removal in real scenes. Other studies have attempted to remove the virtual points generated by specular reflection in 3D point clouds by estimating multiple glass planes [28].

The above studies analyze the effect of specular reflection on point cloud data. However, there is still no effective solution for highlight removal in 3D color space. The 3D color point cloud visualization provided by MSL offers a new avenue for addressing this inevitable problem of point cloud highlights.

In this paper, we propose a new MSL point cloud highlight removal method based on color information, and design experiments to prove the feasibility and accuracy of the method. The current work aims to provide a new idea for enhancing the visualization of color point clouds and to further promote the development of 3D imaging using the MSL system.

The contributions of this paper are as follows:

  • 1. The problem of highlight removal for MSL color point clouds is raised and solved for the first time;
  • 2. The reflection mechanism of MSL is investigated and a first highlight removal method for MSL point clouds is proposed. Notably, this method is based on local or global priors of color information and is applicable to most scanned targets.
  • 3. A dimension reduction algorithm is proposed to improve the computational efficiency for massive color point clouds. The complete target color information is retained without considering complex geometric information.

The remainder of this paper is organized as follows. Section 2 introduces the MSL system and reflection model. Section 3 proposes the highlight removal method for MSL point clouds, including the conversion between point clouds and images, highlight detection, and highlight inpainting. Section 4 presents the experimental results. Section 5 concludes the paper with future research issues.

2. MSL system and reflection model

2.1 MSL system

The instrument used in this paper is an MSL system introduced in previous studies [19,29]. As presented in Fig. 1(a), a supercontinuum laser source covering almost the entire visible band (400–700 nm) emits a discrete illumination pulse for scanning detection. Considering the spectral energy distribution of this broadband laser source, the detector response at different receiving bands, and the CIE 1931 color space chromaticity, the most appropriate RGB bands in the visible spectral portion are selected for the receiving channels of color information, namely 434.5–474.5, 517–537, and 612–644 nm [16]. The laser reference signal and the return waveforms of each RGB channel are detected and recorded by a 12-bit digitizer. Subsequently, the color point clouds are obtained through field programmable gate array online processing and calculation.

Fig. 1. Data acquisition and processing of the MSL system.

Figure 1(b) illustrates multi-target detection by recording multi-echo time domains (t1, t2, and t3) at the RGB channels. A multispectral waveform decomposition method [29] is applied to the recorded time-domain and intensity information to invert the color information of each echo, while the delay of each time window provides the spatial information. The original data are recorded by the proposed system in full-waveform form and finally converted into point clouds with RGB color. This new kind of color point cloud dataset integrates 3D point clouds with color information, which overcomes the shortcoming of traditional monochrome point clouds in color visualization.

2.2 MSL reflection model

In the field of LiDAR, the radar equation [30] widely used for the radiometric calibration of diffuse (Lambertian) targets is:

$${P_r} = {P_t}\frac{{{D_r}^2{\rho _d}\cos \theta {\eta _{sys}}{\eta _{atm}}}}{{4{R^2}}}$$
where ${P_t}$ is the transmitted laser power, ${P_r}$ is the received laser power, ${D_r}$ is the diameter of the receiver aperture, R is the range from the laser to the target, $\theta$ is the incidence angle, ${\eta _{sys}}$ and ${\eta _{atm}}$ are the system and atmospheric transmission factors, and ${\rho _d}$ is the diffuse reflectance, which determines the ratio of ${P_r}$ to ${P_t}$ for each measurement direction.

An ideal Lambertian object is assumed to produce only diffuse reflection. However, most objects in the real world are non-Lambertian, and the laser reflected from their surfaces comprises diffuse and specular components. When an incident laser strikes a surface that is smooth at the microscopic level, part of the laser is reflected in the form of specular reflection, causing the so-called highlight phenomenon. The empirical Phong surface model [31] describes the way a surface reflects light as a combination of diffuse and specular reflection; such a model has wide application in computer graphics and 3D model rendering. Based on the radar equation and the Phong model, the received laser intensity of MSL can be described as follows.

$$\left\{ {\begin{array}{l} {I(\lambda ) = {I_d}(\lambda )(1 - {k_s}) + {I_s}(\lambda ){k_s}}\\ {{I_d}(\lambda ) = {I_{in}}(\lambda ){\rho_d}(\lambda )\cos \theta }\\ {{I_s}(\lambda ) = {I_{in}}(\lambda ){{\cos }^{n(\lambda )}}(2\theta )}\\ {{I_{in}}(\lambda ) = ({P_t}(\lambda ){D_r}^2{\eta_{sys}}{\eta_{atm}})/4{R^2}} \end{array}} \right.$$
where $\lambda \in \textrm{\{ }r,g,b\textrm{\} }$ denotes the wavelengths at the R, G, and B spectral channels, I is the MSL received intensity, ${I_{in}}$ is the MSL transmitted intensity, ${k_s}$ is the specular reflection proportion coefficient, which depends on geometry, ${I_d}$ is the diffuse reflection component, ${I_s}$ is the specular reflection component, and n is the surface roughness exponent, which depends on geometry and wavelength.
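To make the model concrete, the following minimal Python sketch evaluates Eq. (2) for a single spectral channel. All names and numerical values here are hypothetical illustrations, not part of the MSL system software.

```python
import numpy as np

def received_intensity(P_t, D_r, eta_sys, eta_atm, R, rho_d, theta, k_s, n):
    """Evaluate Eq. (2) for one spectral channel: the received intensity
    is a blend of a Lambertian diffuse term and a Phong specular lobe."""
    I_in = P_t * D_r**2 * eta_sys * eta_atm / (4.0 * R**2)
    I_d = I_in * rho_d * np.cos(theta)        # diffuse component
    I_s = I_in * np.cos(2.0 * theta)**n       # specular component (theta < 45 deg)
    return I_d * (1.0 - k_s) + I_s * k_s

# Hypothetical values: a glossy target at 10 deg incidence and 5 m range
I = received_intensity(P_t=1.0, D_r=0.05, eta_sys=0.8, eta_atm=0.95,
                       R=5.0, rho_d=0.6, theta=np.radians(10.0), k_s=0.3, n=20)
```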

Equation (2) indicates that the received intensity I, which is influenced by the receiving wavelength, incidence angle, and object surface roughness, is determined by the combination of diffuse and specular components. The received intensity I can be calibrated with a standard whiteboard, regarded as an ideal Lambertian reflector, to achieve color restoration. The echo intensity of the whiteboard I0 is taken as a normalized reference:

$${I_0}(\lambda ) = {I_{in}}(\lambda ){\rho _0}(\lambda )\cos \theta$$
where ${\rho _0}$ is the diffuse reflectance of the whiteboard.

In order to ensure color uniformity, it is assumed that the ambient light conditions are consistent with the CIE standard illuminant D65; in other words, the color restoration of the MSL point clouds is performed under illuminant D65. Using the ratio of I to I0, the target color obtained by MSL is:

$$\left\{ {\begin{array}{l} {{I_{color}}(\lambda ) = \frac{{I(\lambda )}}{{{I_0}(\lambda )}} = D(\lambda )(1 - {k_s}) + S(\lambda ){k_s}}\\ {D(\lambda ) = \frac{{{\rho_d}(\lambda )}}{{{\rho_0}(\lambda )}}}\\ {S(\lambda ) = \frac{{{{\cos }^{n(\lambda )}}(2\theta )}}{{{\rho_0}(\lambda )\cos \theta }}} \end{array}} \right.$$
where ${I_{color}}$ is the restored color of the target, and D and S are the colors of the diffuse and specular reflection, respectively. Notably, D is related to the target reflectance at each receiving band, and S is related to the wavelength, incidence angle, and target roughness.

D and S illustrate the influence of target characteristics and laser characteristics on color restoration, respectively. The additional term S introduces the complex phenomenon of highlights. Compared with monochromatic LiDAR, the broadband laser source receives more potential interference during multi-channel detection, which can result in more highlights. Meanwhile, the highlights are wavelength dependent owing to the spectroscopic design and the photodetector response variation across the receiving spectral channels.

Figure 2 shows massive point clouds of a real scene; from left to right are the monochromatic point clouds at the RGB channels and the MSL color point clouds, respectively. In Fig. 2(a), the monochromatic point clouds at the RGB channels capture spatial information and some fuzzy textures. In contrast, the MSL color point clouds are obtained directly by an overall scan without overlaying passive images, which markedly improves the point cloud visualization. As shown in Fig. 2(b), the highlights of a writing board are marked with a blue rectangle. The highlights affect the various spectral channels differently because of their wavelength dependence. In addition, MSL can realize simultaneous multichannel detection, which further improves the capability to detect and remove the highlights.

Fig. 2. Large-scale scene point clouds of a conference room with highlights. From left to right are the monochromatic point clouds at RGB channels and the MSL color point clouds, respectively. (a) Entire room scene. (b) A writing board with highlights.

3. Methods

As shown in Fig. 2, highlights can seriously degrade point cloud visualization. According to Eq. (4), calculating the specular reflection component S requires estimates of ${k_s}$ and n, which are related to the surface roughness. However, these two factors are usually empirical values specific to particular targets, which makes direct estimation of the specular reflection component difficult without prior knowledge of these values.

Highlight removal has long been a research hotspot of image processing in computer vision, and many proposed methods have achieved promising results. Unlike monochromatic LiDAR, MSL directly obtains color point clouds, which makes it possible to draw on existing image processing methods. To remove highlights from point clouds with unknown materials and unknown regions, color denoising is first applied to produce realistic colors of the MSL point clouds in non-highlight regions. The color of the MSL point clouds in highlight regions is then recovered from the global or local color information. This approach is expected to achieve color uniformity of the MSL point clouds and fully eliminate the highlights.

The flowchart of the visual enhancement for MSL point clouds, comprising four main steps, is displayed in Fig. 3. First, color restoration is conducted to obtain the initial color point clouds from the raw signals at the RGB channels. Next, the conversion between point clouds and image is performed with the color information retained. Then, the highlight region is detected from the visual saliency map after color denoising. Finally, the highlights are inpainted by solving an optimization problem over an objective function and a similarity measure, and the final color point clouds are obtained through color assignment.

Fig. 3. Flowchart of the point cloud highlight removal, which comprises preprocessing, conversion, highlight detection, and highlight inpainting.

3.1 Conversion

Massive point cloud data are acquired to ensure the completeness of 3D imaging. However, this large amount of data reduces the processing efficiency for MSL color point clouds. To solve this problem, the 3D color point clouds are converted into a 2D image within a certain field of view. The conversion, through spatial dimension reduction and data compression, improves the data processing efficiency; meanwhile, the visual enhancement is performed accurately without sacrificing color information. Figure 4 shows the projection of point clouds onto a plane, with red and blue representing the point clouds and the plane, respectively.

Fig. 4. Projection of point clouds onto a plane, comprising three steps.

Before projection, the point cloud data are preprocessed using a traditional geometric denoising method. In this step, outlier points near or far from the surface of the main target point clouds are removed, which effectively improves the precision of the projection. Projecting the 3D point clouds of the target onto a 2D plane from different angles yields different images, so the optimal projection plane must first be identified to retain as much information as possible in the image. Least squares (LS) [32] and random sample consensus (RANSAC) [33] are common plane fitting methods. LS fits all the data, which leads to unsatisfactory results in the case of large data offsets. In contrast, RANSAC handles large offsets flexibly by fitting the main body of the data and is therefore suitable for point clouds. First, the threshold th1 is set to determine whether a point is invalid: a point is invalid when its distance to the fitting plane exceeds th1.

$$th1 = {\mu _{range}} + 3{\sigma _{range}}$$
where ${\mu _{range}}$ and ${\sigma _{range}}$ are the mean value and the standard deviation of the nearest-neighbor range for each point.

Then, a number of candidate planes are randomly fitted. By counting the invalid points of each fitted plane, the plane with the fewest invalid points is selected as the best fitting plane (see the sketch below). Based on the idea of weighted voting, this method overcomes the deviation caused by a few discrete points in a specific perspective and ensures an optimal solution for most points.
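A minimal NumPy sketch of this plane selection, assuming `points` is an (N, 3) array and `th1` has been computed from Eq. (5); the function name and iteration count are illustrative.

```python
import numpy as np

def ransac_best_plane(points, th1, n_iter=500, seed=0):
    """Randomly fit candidate planes from point triples and keep the one
    with the fewest invalid points (distance > th1), as described above."""
    rng = np.random.default_rng(seed)
    best = None
    fewest_invalid = np.inf
    for _ in range(n_iter):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        dist = np.abs((points - a) @ normal)  # point-to-plane distances
        invalid = np.count_nonzero(dist > th1)
        if invalid < fewest_invalid:
            fewest_invalid, best = invalid, (normal, a)
    return best                               # (unit normal, a point on the plane)
```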

The point clouds are then projected onto the plane according to Eq. (6).

$${p_1} = {p_0} - d \cdot \overrightarrow n$$
where p0 is the 3D coordinate before the projection, p1 is the 3D coordinate after the projection, d is the distance from the point to the plane, and $\overrightarrow n$ is the normal vector of the plane.
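Continuing the sketch above, Eq. (6) is a one-line projection; the signed point-to-plane distance is used here so that points on either side of the plane are mapped onto it.

```python
def project_to_plane(points, normal, origin):
    """Eq. (6): p1 = p0 - d * n, with d the signed point-to-plane distance."""
    d = (points - origin) @ normal
    return points - d[:, None] * normal
```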

The target point clouds lie in the fitted plane after projection. To further convert the projected plane into an image, the plane is rotated to a reference plane, such as XOY, XOZ, or YOZ. The corresponding rotation matrices for rotation around the X-, Y-, or Z-axis are as follows:

$$\left\{ {\begin{array}{cccc} {{R_X}(\alpha ) = \left[ {\begin{array}{cccc} 1&0&0&0\\ 0&{\cos \alpha }&{ - \sin \alpha }&0\\ 0&{\sin \alpha }&{\cos \alpha }&0\\ 0&0&0&1 \end{array}} \right]}\\ {{R_Y}(\alpha ) = \left[ {\begin{array}{cccc} {\cos \alpha }&0&{ - \sin \alpha }&0\\ 0&1&0&0\\ {\sin \alpha }&0&{\cos \alpha }&0\\ 0&0&0&1 \end{array}} \right]}\\ {{R_Z}(\alpha ) = \left[ {\begin{array}{cccc} {\cos \alpha }&{ - \sin \alpha }&0&0\\ {\sin \alpha }&{\cos \alpha }&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array}} \right]} \end{array}} \right.$$

The final rotation matrix ${R_T}$ depends on the selection of the reference plane and the rotation direction. For example, if the projected plane takes XOY as the reference plane and rotates around the X-axis, the obtained ${R_T}$ is:

$${R_T} = {R_Z}( - {\alpha _1}) \cdot {R_X}({\alpha _2}) \cdot {R_Z}({\alpha _1})$$

After rotation, the point clouds in the rotated plane are obtained by:

$$\left[ {\begin{array}{c} {{p_2}}\\ 1 \end{array}} \right] = {R_T} \cdot \left[ {\begin{array}{c} {{p_1}}\\ 1 \end{array}} \right]$$
where ${p_1} = {\left[ {\begin{array}{ccc} x&y&z \end{array}} \right]^{\prime}}$ and ${p_2} = {\left[ {\begin{array}{ccc} {{x^{\prime}}}&{{y^{\prime}}}&{z^{\prime}} \end{array}} \right]^{\prime}}$ is the 3D coordinate after rotation. Since the selected reference plane is XOY, $z^{\prime}$ is zero.
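As an illustration of this step, the sketch below builds the rotation that maps the fitted plane normal onto the Z axis using Rodrigues' formula; this is assumed to be equivalent to the composed ${R_T}$ of Eq. (8) for the XOY reference plane, not necessarily the authors' exact construction.

```python
import numpy as np

def rotation_to_xoy(normal):
    """Rotation that maps the unit plane normal onto the Z axis, so the
    projected points land in the XOY plane with z' = 0 (cf. Eq. (9))."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    s, c = np.linalg.norm(v), normal @ z      # sin and cos of the rotation angle
    if s < 1e-12:                             # normal already (anti)parallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = v / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)   # Rodrigues' formula

# p2 = (rotation_to_xoy(normal) @ p1.T).T  ->  z' is (numerically) zero
```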

To convert the point clouds into an image, p2 is divided into grid cells of an appropriate size, guided by the average point density. The average RGB values of the point clouds in each cell are taken as the corresponding pixel values to form the image. The cell size determines the pixel resolution of the image: a smaller cell yields a higher pixel resolution.

Moreover, the initial 3D coordinates are stored in each grid cell to form a depth image. The depth image can therefore be converted back to color point clouds through color assignment, that is, by taking the pixel values of the corresponding cell as the colors of the point clouds; a sketch of this gridding step follows.
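The sketch below assumes the points are already rotated into the XOY plane and that `colors` holds per-point RGB values; the per-cell index lists support the reverse color assignment described above (all names are illustrative).

```python
import numpy as np

def rasterize(points, colors, grid_size):
    """Bin points into XY grid cells; each pixel stores the mean RGB of its
    points, and the per-cell point indices are kept so that inpainted pixel
    colors can later be assigned back to the 3D points."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / grid_size).astype(int)
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w, 3))
    count = np.zeros((h, w))
    cells = [[[] for _ in range(w)] for _ in range(h)]
    for idx, (i, j) in enumerate(ij):
        img[i, j] += colors[idx]
        count[i, j] += 1
        cells[i][j].append(idx)               # remember which points fall here
    nonzero = count > 0
    img[nonzero] /= count[nonzero][:, None]   # mean RGB per occupied cell
    return img, cells
```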

3.2 Highlight detection

The next step of our method operates on the converted depth images. Unlike a camera, which captures a 2D image at a time, the MSL obtains 3D point clouds through point-by-point scanning. Thus, the massive point clouds contain uncertainties induced by system and measurement errors. The following two calibration methods are used to increase the confidence of the MSL point clouds.

  • 1. The laser reference signal and the echo signals at the RGB channels are simultaneously recorded in full-waveform form; the pulse energy fluctuation can then be calibrated by computing the intensity ratio of each echo to the corresponding laser reference signal.
  • 2. A standard whiteboard is applied to calibrate the reflectance of different targets.

Despite the above calibrations, residual noise still degrades the point cloud quality. Figure 5 analyzes the distribution and statistics of the intensity values (0–255) for the point clouds of the writing board in Fig. 2(b). In Fig. 5(a), the intensity distribution along the marked blue line reveals that noise and highlights in color space cause the intensity values to fluctuate or saturate. In Fig. 5(b), the intensity statistics within the marked blue rectangle show that the intensity probability is scattered by color noise and highlights. According to their influence on the intensity statistics, color noise is mainly divided into impulse noise (region N1) and Gaussian noise (region N2), while highlights are mainly divided into weak highlights (region H1) and saturated highlights (region H2). Color noise and highlights also behave differently at different spectral channels owing to their wavelength dependence: for example, the probabilities in regions N1 and H2 and the full width at half maximum (FWHM) and peak heights in region N2 differ between the R, G, and B channels.

Fig. 5. Distribution and statistics of intensity values (0–255) for the point clouds of the writing board in Fig. 2(b), where the R, G, and B channels are presented left to right. (a) Distribution of intensity values along the marked blue line. (b) Statistics of intensity values in the marked blue rectangle.

Therefore, highlight detection must first eliminate the interference of color noise. Rather than using a fixed size, the filtering window ${\Omega _1}$ is chosen to accommodate converted images of different resolutions. Since the color noise components have distinct probability distributions, color denoising is performed by a combination of a global bilateral filter [34] and a local median filter [35]. The specific color denoising strategy is as follows:

  • 1. For all pixels, bilateral filtering is performed first;
  • 2. If a pixel satisfies $\Delta I(i) > th2$ according to Eq. (10), median filtering is performed (see the sketch after this list).
    $$\left\{ {\begin{array}{l} {\Delta I(i) = \max ({\Delta {I_r}(i),\Delta {I_g}(i),\Delta {I_b}(i)} )}\\ {\Delta {I_\lambda }(i) = \left|{\frac{{\sum\nolimits_{j \in {\Omega _1}} {{I_\lambda }(j)} }}{m} - {I_\lambda }(i)} \right|}\\ {th2 = 5{\sigma_{color,\lambda }}} \end{array}} \right.$$
    where $\Delta I(i)$ is the chromatism of pixel i, m is the size of ${\Omega _1}$, and ${\sigma _{color,\lambda }}$ is the standard deviation of the color at each channel. Notably, m is related to the point cloud density, the total number of target point clouds, and the resolution requirements.
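The following sketch illustrates this two-stage strategy with OpenCV's bilateral and median filters, assuming an 8-bit RGB image; the filter sigmas and the way ${\sigma _{color,\lambda }}$ is supplied are assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

def denoise(img, win, sigma_color):
    """Step 1: global bilateral filtering of all pixels. Step 2: median
    filtering only where the chromatism exceeds th2 = 5 * sigma (Eq. (10))."""
    win = win if win % 2 == 1 else win + 1            # median filter needs an odd window
    out = cv2.bilateralFilter(img, d=win, sigmaColor=75, sigmaSpace=75)
    mean = cv2.blur(out, (win, win)).astype(np.float32)
    delta = np.abs(mean - out.astype(np.float32)).max(axis=2)   # chromatism per pixel
    med = cv2.medianBlur(out, win)
    mask = delta > 5.0 * sigma_color                  # th2 criterion
    out[mask] = med[mask]
    return out
```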

After color denoising, highlight detection is conducted according to the highlight level. For pixels with saturated highlights, the specular reflection component occupies most of the color. According to Eq. (11), a threshold th3 = 200 is set to identify such pixels.

$${I_{\min }}(i) = \min ({{I_r}(i),{I_g}(i),{I_b}(i)} )\left\{ {\begin{array}{l} { > th3,\textrm{ }\textrm{saturated highlight}}\\ { < th3,\textrm{ }\textrm{other}} \end{array}} \right.$$

Saturated highlights are relatively easy to detect, but detecting weak highlights, which lie between saturated highlights and non-highlights, is the key point. Considering the inconsistency of the highlights, an improved visual saliency detection algorithm based on the frequency-tuned algorithm [36] is proposed to detect weak highlights effectively at the RGB channels.

First, Gaussian smoothing with an adaptive filtering window ${\Omega _1}$ is used to preserve the overall information of the image. The saliency of pixel i at the RGB channels is then calculated as follows:

$${J_\lambda }(i) = ||{\overline {I_\lambda^G} - I_\lambda^G(i)} ||$$
where ${J_\lambda }(i)$ is the normalized saliency of pixel i, $I_\lambda ^G(i)$ is the pixel value after Gaussian smoothing, and $\overline {I_\lambda ^G}$ is the mean value of $I_\lambda ^G$.

Pixels are then classified as weak highlights if the saliency at any channel exceeds th4:

$${J_\lambda }(i)\left\{ {\begin{array}{l} { > th4,\textrm{ weak highlight}}\\ { < th4,\textrm{ other}} \end{array}} \right.$$
$$th4 = \overline {{J_\lambda }} + 3{\sigma _{saliency,\lambda }}$$
where $\overline {{J_\lambda }}$ is the mean saliency of all pixels, and ${\sigma _{saliency,\lambda }}$ is the standard deviation of the saliency at each channel.
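A sketch of the two-level detection of Eqs. (11)–(14); per-channel absolute differences stand in for the norm of Eq. (12), and the window handling and exclusion of already-saturated pixels are assumptions.

```python
import cv2
import numpy as np

def detect_highlights(img, win, th3=200):
    """Saturated highlights: min(R, G, B) > th3 (Eq. (11)). Weak highlights:
    frequency-tuned saliency per channel above th4 = mean + 3*std (Eqs. (12)-(14))."""
    img = img.astype(np.float32)
    saturated = img.min(axis=2) > th3
    win = win if win % 2 == 1 else win + 1            # Gaussian kernel must be odd
    smoothed = cv2.GaussianBlur(img, (win, win), 0)
    sal = np.abs(smoothed.mean(axis=(0, 1)) - smoothed)   # J per channel (Eq. (12))
    sal /= sal.max(axis=(0, 1)) + 1e-12                   # normalize each channel
    th4 = sal.mean(axis=(0, 1)) + 3.0 * sal.std(axis=(0, 1))
    weak = (sal > th4).any(axis=2) & ~saturated           # Eq. (13), excluding saturated
    return saturated, weak
```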

3.3 Highlight inpainting

After highlight detection, the color restoration of the highlight pixels is addressed. However, most image highlight removal methods are not applicable to MSL point clouds because of the contradiction of highlight consistency. Besides, for MSL point clouds the highlights are wavelength dependent, which makes it difficult to estimate the highlight value without priors on the target reflectance. The color restoration of highlight pixels for MSL is therefore an ill-posed inverse problem with no well-defined unique solution, and additional prior knowledge must be introduced: MSL highlight inpainting follows the assumption that the known and unknown pixels have similar statistical characteristics and texture structures [37].

This assumption can be transformed into local or global priors that yield images with reasonable textures and satisfactory visual effects after completion. The highlight inpainting is conducted using the space-time completion algorithm [38] with a variable window ${\Omega _1}$ depending on the image resolution. The space-time completion algorithm provides a framework for completing missing information based on local structures, posing the completion task as a global optimization problem with a well-defined objective function and similarity measure. Since the completion of MSL highlights is static, we simplify the objective function and similarity measure as:

$$\left\{ {\begin{array}{l} {Coherence(H,F) = \mathop \prod \limits_{p \in H} \mathop {\max }\limits_{q \in F} sim({W_p},{V_q})}\\ {sim({W_p},{V_q}) = \exp ( - \frac{{\sum {{{||{{W_p}(x,y) - {V_q}(x,y)} ||}^2}} }}{{2{\sigma^2}}})} \end{array}} \right.$$
where H is the set of highlight pixels, F is the set of non-highlight pixels, p and q run over all pixels in H and F, ${W_p}$ and ${V_q}$ are patches of a given sampling window size compared only by their RGB values, and $\sigma$ is a variable smoothness parameter.
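The simplified measure of Eq. (15) translates directly into a few lines; patches are assumed to be equal-sized RGB arrays, and the coherence is the product, over all highlight patches, of their best similarity to any non-highlight patch.

```python
import numpy as np

def patch_similarity(W, V, sigma):
    """Eq. (15): Gaussian similarity between an RGB patch W around a highlight
    pixel and a candidate patch V from the non-highlight region."""
    return np.exp(-np.sum((W - V) ** 2) / (2.0 * sigma ** 2))

def coherence(highlight_patches, candidate_patches, sigma):
    """Objective of Eq. (15): product over highlight patches of their best
    similarity to any non-highlight patch (higher is more coherent)."""
    return np.prod([max(patch_similarity(W, V, sigma) for V in candidate_patches)
                    for W in highlight_patches])
```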

Each iteration of the space-time completion algorithm requires a global search over the image. To accelerate convergence, the PatchMatch algorithm [39] is introduced to optimize the search for the nearest neighbor of a given patch. The core of PatchMatch is to greatly reduce the search scope by exploiting image continuity.

After highlight inpainting, color assignment converts the depth image back to color point clouds. The final color point clouds are no longer affected by the specular reflection term S, and the continuity of texture and color is restored as far as possible.

4. Results and discussion

The color restoration of the MSL point clouds was conducted under the CIE standard illuminant D65. The performance of the point cloud highlight removal method was evaluated on three targets of different materials scanned by the MSL system.

4.1 Dataset

As analyzed in Section 2, targets of different materials have varying reflectance, which affects whether the detection signal contains a specular reflection component. One representative scene from Fig. 2, namely the writing board, is selected to evaluate the accuracy and feasibility of the proposed method. In addition, for the evaluation and reference of color calibration, a standard color checker and a colored deer model are selected.

The actual images of the datasets are shown in Fig. 6. The color checker in Fig. 6(a), which follows the CIE color standards, comprises 24 marked color squares, each 4 × 4 cm, including natural-object, chromatic, primary, and gray-scale colors. The writing board in Fig. 6(b) is aluminum framed and has a smooth, flat surface with a uniform color. The deer model in Fig. 6(c) is made of smooth fiberglass and has a complex color composition and 3D structure. Among these targets, the color checker is used to evaluate the color denoising result, while the writing board and the deer model, which are prone to highlights, are selected to evaluate the highlight removal method.

Fig. 6. Actual images of the datasets. (a) Color checker. (b) Writing board. (c) Deer model.

4.2 Qualitative validation

The visual perception of human eyes can distinguish dynamic changes in object color, which serves as a qualitative evaluation of color quality. Figure 7 shows the color denoising results for the point clouds of the color checker; from left to right are the monochromatic point clouds at the RGB channels and the color point clouds, respectively. In Fig. 7(a), the initial point clouds are influenced by color noise that exhibits wavelength dependence: the R channel shows evident impulse noise, while the G and B channels suffer from serious Gaussian noise. In Fig. 7(b), the Gaussian and impulse noise are filtered out by the color denoising method. As can be seen, our approach recovers the realistic color of the MSL point clouds in each color square, which lays the foundation for the subsequent highlight removal.

Fig. 7. Results of color denoising for the point clouds of the color checker. From left to right are the monochromatic point clouds at RGB channels and the MSL color point clouds. (a) Initial point clouds. (b) Noise-free point clouds.

The highlight removal results for the point clouds of the writing board are shown in Fig. 8; from top to bottom are the monochromatic point clouds at the RGB channels and the combined color point clouds, respectively. The LiDAR system has a much smaller instantaneous field of view, corresponding to a laser divergence angle of 0.5 mrad, so it tends to produce relatively small rather than large-scale highlight areas. The point cloud highlights shown in Fig. 8 are the largest areas among the MSL detection results. In Fig. 8(a), the presence of color noise and highlights considerably affects the visualization of the MSL point clouds. To avoid the interference of noise, we first perform color denoising to obtain the noise-free data in Fig. 8(b). Owing to highlight inconsistency, the saliency map of a single channel can hardly detect the highlights correctly. Figure 8(c) illustrates that a desired highlight region is obtained only by combining the characteristics of multiple channels for highlight detection. Then, the modified space-time completion algorithm is used to inpaint the detected highlights. After inpainting, the highlights marked in pink almost disappear from the saliency map of Fig. 8(d). Figure 8(e) further reveals that this approach can recover realistic-looking colors of the MSL point clouds despite saturated highlights.

Fig. 8. Results of highlight removal for the point clouds of the writing board. From top to bottom are the monochromatic point clouds at RGB channels and the MSL color point clouds. (a), (b), and (e) Initial, noise-free, and highlight-free point clouds, respectively. (c) and (d) Visual saliency maps with detected highlights marked in pink before and after highlight removal.

In addition to the low-textured writing board, the high-textured deer model with a complex 3D structure is used to further test the feasibility of the highlight removal method in Fig. 9. From the 1st to 3rd rows of Fig. 9(c), the highlight areas detected by a single channel differ significantly. In contrast, multi-channel highlight detection robustly improves the accuracy of highlight recognition, as shown in the 4th row of Fig. 9(c). Moreover, owing to the full consideration of texture continuity, the highlight removal algorithm can unveil the masked color and texture to improve the 3D imaging quality. For the highlight region, the unknown color information is finally replaced with similar information after several iterations, based on the known color information of the neighboring non-highlight regions. This method does not introduce color distortion in the repaired region, but achieves maximum consistency of the visualization effect. As shown in the 4th row of Fig. 9(e), the highlights of the point cloud of the colored deer model appear at the junction of the red and yellow regions, and the restored color maintains the continuity of its color and texture.

Fig. 9. Results of highlight removal for the point clouds of the deer model. From top to bottom are the monochromatic point clouds at RGB channels and the MSL color point clouds. (a), (b), and (e) Initial, noise-free, and highlight-free point clouds, respectively. (c) and (d) Visual saliency maps with detected highlights marked in pink before and after highlight removal.

4.3 Quantitative validation

To intuitively compare the effects of the color denoising and highlight removal algorithms, the intensity values of the point clouds are analyzed in Fig. 10. Compared with the original intensity values in Fig. 5(a), color denoising makes the intensity values more concentrated in Fig. 10(a), and the highlight removal method further corrects the deviation of the intensity values in Fig. 10(b). Figures 10(c) and (d) show the influence of noise filtering and highlight removal on the intensity statistics. Color denoising concentrates the intensity probability around the mean value, with higher peaks. The highlight removal algorithm makes the probability statistics of the weak highlights in region H1 and the saturated highlights in region H2 converge to the mean. Taken together, these analyses show that the algorithm successfully eliminates the influence of color noise and highlights on the intensity values of the point clouds.

Fig. 10. Distribution and statistics of intensity values (0–255) for the color point clouds of the writing board (the same position as in Fig. 2(b)) after color denoising and highlight removal, where left to right are the R, G, and B channels. (a) and (b) Distribution of intensity values after noise filtering and highlight removal, respectively. (c) and (d) Statistics of intensity values after noise filtering and highlight removal, respectively.

Quantitative evaluation of highlight removal for MSL point clouds is challenging, since ground-truth color point clouds without reflection distortion are difficult to obtain. Therefore, except for the color checker, which has standard colors, the target image is taken as the color reference. For quantitative validation, the peak signal-to-noise ratio (PSNR) and relative standard deviation (RSD) are used to evaluate the authenticity and stability of the color; a larger PSNR or a smaller RSD indicates a better result.
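Minimal implementations of the two metrics, assuming each monochromatic region is compared against a single reference color on the 0–255 intensity scale used above.

```python
import numpy as np

def psnr(region, reference, peak=255.0):
    """Peak signal-to-noise ratio (dB) of a region against its reference color."""
    mse = np.mean((region.astype(np.float64) - reference) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rsd(channel_values):
    """Relative standard deviation (%) of one channel over a uniform region."""
    return 100.0 * np.std(channel_values) / np.mean(channel_values)
```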

Figure 11 displays the PSNR and RSD results of the color denoising for the point clouds of the color checker. In Fig. 11(a), squares 8, 13, 14, 17, and 18 show no significant PSNR improvement because their original PSNR is limited by their low intensity values. Nevertheless, the PSNR improvement in most of the other squares after filtering exceeds 1 dB. This result evidences the effectiveness of the color denoising in making most of the representative colors in the MSL point clouds of the color checker realistic.

Fig. 11. Local results of PSNR and RSD for the color point clouds of the color checker. (a) PSNR result: the triangulated pink lines and squared green lines represent the point clouds of the color checker before and after filtering, respectively. (b), (c), and (d) RSD results at the R, G, and B channels: the right-triangulated red lines and circled blue lines represent the point clouds before and after filtering, respectively.

The RSD at the R, G, and B channels over the 24 color squares, shown in Figs. 11(b), (c), and (d), respectively, is calculated to evaluate the color uniformity accurately. Although the MSL system has already achieved color restoration using lognormal-function and pulse-accumulation methods [19], the proposed color denoising method further reduces the RSD of each channel. Meanwhile, the RSD results show that the measured data have considerably higher accuracy at the R channel than at the other two channels, which is related to the stability of the original data. After noise filtering, the RSD of the R channel ranges from 0.1% to 6.0%, while that of the G and B channels exceeds 6% in some squares.

The restored color of the MSL point clouds is under the CIE standard illuminant D65, while the images in Fig. 6 were taken under daytime lighting conditions, which explains the color difference relative to the MSL point clouds. Table 1 shows the quantitative evaluation of the highlight removal method for the MSL point clouds of the three targets: the color checker, the writing board, and the deer model. Notably, the evaluation of each target is the comprehensive result over its monochromatic areas, namely the 24 color squares of the color checker, the writing board surface excluding the metal frame, and the five color areas of the deer model. Table 1 shows that the PSNR and RSD of the initial point cloud data of the deer model cannot reach the level of the other targets because of the complexity of its 3D structure and texture. Nevertheless, the PSNR and RSD results of all three targets were improved to varying degrees. It is also noted that, as a substep of highlight removal, color denoising itself contributes to the visual enhancement of the point clouds. Except for the deer model, the PSNR of the targets increased to more than 20 dB and the RSD decreased to less than 10%. The maximum PSNR reached 27.9 dB and the minimum RSD reached 2.2/3.2/5.9, an acceptable result for the MSL system's attempt to visualize massive color point clouds.


Table 1. Quantitative evaluation of MSL point cloud highlight removal method.

The experiments on three representative targets show that the proposed method is not limited by the material, color, texture, or 3D structure of the target and achieves satisfactory highlight removal for MSL point clouds. For color point clouds of large-scale complex scenes, further verification of the applicability of the method is left to future research.

5. Conclusion

The MSL system can obtain color point clouds directly, which is becoming a new trend in 3D imaging. Compared with traditional monochromatic point clouds, color point clouds have great potential for visualization. To deal with the highlights that arise during this new form of data acquisition, we proposed an MSL point cloud highlight removal method. Based on the radar equation and the Phong illumination model, we analyzed the reflection characteristics of MSL and found that the color of the point clouds comprises diffuse and specular components. Projecting the point clouds onto the optimal fitting plane and obtaining the corresponding depth image simplifies the data processing. After color denoising, the specular highlights are detected by visual saliency, and highlight inpainting is then performed according to the global or local color information. Finally, the processed image is converted back to color point clouds through color assignment. Three targets with different textures and colors were scanned by MSL to verify the validity of the algorithm. The qualitative and quantitative analyses reveal that the algorithm is effective and robust in highlight removal and provides a new idea for the visual enhancement of MSL point clouds. In future research, we will further improve the reflection model of MSL point cloud data and optimize the highlight removal approach for complex scenes.

Funding

National Natural Science Foundation of China (42171347); National Key Research and Development Program of China (2018YFB0504500).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Wagner, A. Ullrich, V. Ducic, T. Melzer, and N. Studnicka, “Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner,” ISPRS J. Photogramm. Remote Sens. 60(2), 100–112 (2006).

2. B. Yang, Y. Liu, Z. Dong, F. Liang, B. Li, and X. Peng, “3D local feature BKD to extract road information from mobile laser scanning point clouds,” ISPRS J. Photogramm. Remote Sens. 130, 329–343 (2017).

3. R. Klokov and V. Lempitsky, “Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models,” in 2017 IEEE International Conference on Computer Vision (ICCV) (2017), pp. 863–872.

4. J. Huang and S. You, “Point cloud labeling using 3D Convolutional Neural Network,” in 2016 23rd International Conference on Pattern Recognition (ICPR) (2016), pp. 2670–2675.

5. Y. Guo, F. Sohel, M. Bennamoun, J. Wan, and M. Lu, “A novel local surface feature for 3D object recognition under clutter and occlusion,” Inf. Sci. 293, 196–213 (2015).

6. T. Sankey, J. Donager, J. McVay, and J. B. Sankey, “UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA,” Remote Sens. Environ. 195, 30–43 (2017).

7. M. Alonzo, B. Bookhagen, and D. A. Roberts, “Urban tree species mapping using hyperspectral and lidar data fusion,” Remote Sens. Environ. 148, 70–83 (2014).

8. J. Zhang and X. Lin, “Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing,” Int. J. Image Data Fusion (2016).

9. E. Puttonen, J. Suomalainen, T. Hakala, E. Räikkönen, H. Kaartinen, S. Kaasalainen, and P. Litkey, “Tree species classification from fused active hyperspectral reflectance and LIDAR measurements,” For. Ecol. Manag. 260(10), 1843–1852 (2010).

10. G. Kereszturi, L. N. Schaefer, W. K. Schleiffarth, J. Procter, R. R. Pullanagari, S. Mead, and B. Kennedy, “Integrating airborne hyperspectral imagery and LiDAR for volcano mapping and monitoring through image classification,” Int. J. Appl. Earth Obs. Geoinf. 73, 323–339 (2018).

11. B. Wang, S. Song, S. Shi, Z. Chen, Y.-S. Li, D. Wu, D. Liu, and W. Gong, “Multichannel Interconnection Decomposition for Hyperspectral LiDAR Waveforms Detected From Over 500 m,” IEEE Trans. Geosci. Remote Sens. 1, 1–14 (2021).

12. L. Matikainen, K. Karila, J. Hyyppä, P. Litkey, E. Puttonen, and E. Ahokas, “Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating,” ISPRS J. Photogramm. Remote Sens. 128, 298–313 (2017).

13. Z. Niu, Z. Xu, G. Sun, W. Huang, L. Wang, M. Feng, W. Li, W. He, and S. Gao, “Design of a New Multispectral Waveform LiDAR Instrument to Monitor Vegetation,” IEEE Geosci. Remote Sens. Lett. 12(7), 1506–1510 (2015).

14. P. Hartzell, C. Glennie, K. Biber, and S. Khan, “Application of multispectral LiDAR to automated virtual outcrop geology,” ISPRS J. Photogramm. Remote Sens. 88, 147–155 (2014).

15. T. Hakala, J. Suomalainen, S. Kaasalainen, and Y. Chen, “Full waveform hyperspectral LiDAR for terrestrial laser scanning,” Opt. Express 20(7), 7119–7127 (2012).

16. B. Chen, S. Shi, J. Sun, W. Gong, J. Yang, L. Du, G. Kuanghui, B. Wang, and B. Chen, “Hyperspectral lidar point cloud segmentation based on geometric and spectral information,” Opt. Express 27(17), 24043 (2019).

17. Z. Wang, Y. Chen, C. Li, M. Tian, M. Zhou, W. He, H. Wu, H. Zhang, L. Tang, Y. Wang, H. Zhou, E. Puttonen, and J. Hyyppä, “A Hyperspectral LiDAR with Eight Channels Covering from VIS to SWIR,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium (2018), pp. 4293–4296.

18. J. C. Fernandez-Diaz, W. E. Carter, C. Glennie, R. L. Shrestha, Z. Pan, N. Ekhtari, A. Singhania, D. Hauser, and M. Sartori, “Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar,” Remote Sens. 8, 1 (2016).

19. B. Wang, S. Song, W. Gong, X. Cao, D. He, Z. Chen, X. Lin, F. Li, and J. Sun, “Color Restoration for Full-Waveform Multispectral LiDAR Data,” Remote Sens. 12, 1 (2020).

20. R. Saha, P. Pratim Banik, S. Sen Gupta, and K.-D. Kim, “Combining highlight removal and low-light image enhancement technique for HDR-like image generation,” IET Image Process. 14(9), 1851–1861 (2020).

21. M. W. Tao, J. Su, T. Wang, J. Malik, and R. Ramamoorthi, “Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 38(6), 1155–1169 (2016).

22. Q. Yang, J. Tang, and N. Ahuja, “Efficient and Robust Specular Highlight Removal,” IEEE Trans. Pattern Anal. Mach. Intell. 37(6), 1304–1311 (2015).

23. H. Kim, H. Jin, S. Hadap, and I. Kweon, “Specular Reflection Separation Using Dark Channel Prior,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1460–1467.

24. X. Qian, J. Yang, S. Shi, W. Gong, L. Du, B. Chen, and B. Chen, “Analyzing the effect of incident angle on echo intensity acquired by hyperspectral lidar based on the Lambert-Beckman model,” Opt. Express 29(7), 11055–11069 (2021).

25. J. Wagen, U. T. Virk, and K. Haneda, “Measurements based specular reflection formulation for point cloud modelling,” in 2016 10th European Conference on Antennas and Propagation (EuCAP) (2016), pp. 1–5.

26. A. Tatoglu and K. Pochiraju, “Point cloud segmentation with LIDAR reflection intensity behavior,” in 2012 IEEE International Conference on Robotics and Automation (2012), pp. 786–790.

27. Q. Ding, W. Chen, B. King, Y. Liu, and G. Liu, “Combination of overlap-driven adjustment and Phong model for LiDAR intensity correction,” ISPRS J. Photogramm. Remote Sens. 75, 40–47 (2013).

28. J. S. Yun and J. Y. Sim, “Virtual Point Removal for Large-Scale 3D Point Clouds with Multiple Glass Planes,” IEEE Trans. Pattern Anal. Mach. Intell. 43(2), 729–744 (2021).

29. S. Song, B. Wang, W. Gong, Z. Chen, X. Lin, J. Sun, and S. Shi, “A new waveform decomposition method for multispectral LiDAR,” ISPRS J. Photogramm. Remote Sens. 149, 40–49 (2019).

30. W. Wagner, “Radiometric calibration of small-footprint full-waveform airborne laser scanner measurements: Basic physical concepts,” ISPRS J. Photogramm. Remote Sens. 65(6), 505–513 (2010).

31. B. T. Phong, “Illumination for computer generated pictures,” Commun. ACM 18(6), 311–317 (1975).

32. J. Steinier, Y. Termonia, and J. J. Deltour, “Smoothing and differentiation of data by simplified least square procedure,” Anal. Chem. 44(11), 1906–1909 (1972).

33. M. A. Fischler and R. C. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” in Readings in Computer Vision, M. A. Fischler and O. Firschein, eds. (Morgan Kaufmann, 1987), pp. 726–740.

34. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271) (1998), pp. 839–846.

35. S. J. Ko and Y. H. Lee, “Center weighted median filters and their applications to image enhancement,” IEEE Trans. Circuits Syst. 38(9), 984–993 (1991).

36. R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, “Frequency-tuned salient region detection,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops) (2009), pp. 1597–1604.

37. C. Guillemot and O. Le Meur, “Image Inpainting: Overview and Recent Advances,” IEEE Signal Process. Mag. 31(1), 127–144 (2014).

38. Y. Wexler, E. Shechtman, and M. Irani, “Space-Time Completion of Video,” IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 463–476 (2007).

39. C. Barnes, E. Shechtman, A. Finkelstein, and D. Goldman, “PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing,” ACM Trans. Graph. 28, 1 (2009).
