Development of a multi-sensor system for defects detection in additive manufacturing

Open Access

Abstract

Defect detection technology is essential for monitoring and hence maintaining the product quality of additive manufacturing (AM) processes; however, traditional detection methods based on a single sensor have great limitations such as low accuracy and scarce information. In this study, a multi-sensor defect detection system (MSDDS) was proposed and developed for defect detection with the fusion of visible, infrared, and polarization detection information. The imaging quality of the MSDDS has been optimized and evaluated against the assessment criteria. Meanwhile, the feasibility of processing and assembling each sensor module has been demonstrated with tolerance sensitivity and Monte Carlo analyses. Moreover, multi-sensor image fusion, super-resolution reconstruction, and defect feature extraction are applied. Simulation and experimental studies indicate that the developed MSDDS can obtain high-contrast and clear key information, and high-quality detected images of AM defects such as cracking, scratches, and porosity can be effectively extracted. The research provides a helpful and potential solution for defect detection and processing parameter optimization in AM processes such as Selective Laser Melting.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Additive manufacturing (AM) technology, also known as “3D printing”, “Solid freeform fabrication” and “Rapid prototyping”, is a competitive, low-cost, highly flexible manufacturing technology [1]. AM technology facilitates the processing of special metal materials, non-metallic materials, and medical biomaterials, such as alloys [1], ceramics [2], silica gel [3], hydrogel [4], and composite materials [5]. AM technology has the potential to shorten processing time, produce complex custom workpieces, repair mechanical parts, and process various free-form components [6]; thus it has been widely used in aerospace, military, medical equipment, energy, and automobile manufacturing [7-9]. However, AM still has great limitations in production, mainly in quality and repeatability, which may be seriously affected by defects (such as cracks and spheroidization) formed in the AM process [10,11]. Therefore, AM defect detection technology has been studied extensively to improve the quality of parts.

Selective laser melting (SLM) is a mature AM technology; its processing mainly includes material supply, preparation, processing, and post-processing [12]. SLM is a complex thermodynamic process affected by various process parameters, including pre-processing parameters, post-processing parameters, and controllable parameters [13]. In the SLM process, more than 50 parameters are related to part quality, such as laser power, powder size, and scan spacing. These parameters have a great influence on the thermophysical mechanism, resulting in instability of the microstructure evolution, thermal stress, and melt pool, which in turn causes defects and deteriorates the mechanical and physical properties of the parts [14,15]. Presently, various detection methods use optical, acoustic, and thermal signals to establish the relationship between detection signals and defects [16]. Classified by sensor type, AM defect detection includes high-speed cameras, thermal imaging cameras, photodiodes and pyrometers, X-ray microscopy, and acoustics [17]. Nadipalli et al. [18] proposed installing on-axis and off-axis sensors in the SLM system and tested single-track samples at different power levels, duty cycles, and scan speeds. Pavlov et al. [19] used a two-color pyrometer to study the temperature radiation in the laser action area during the SLM process and explored the effects of processing strategy, scan spacing, and powder layer thickness on thermal changes. Ye et al. [20] used a near-infrared (NIR) camera to study the changes in plume and spatter characteristics with changes in laser power and scan speed. Caltanissetta et al. [21] proposed the use of a measurement system to characterize the accuracy of in situ contour recognition in SLM layered images. Land et al. [22] investigated a novel non-contact metrology system that combines traditional machine vision with a phase-shifted fringe projection system. Zheng et al. [23] proposed a high-speed vision system to extract plume, melt pool, and spatter features during processing. Yakout et al. [24] proposed an in-situ monitoring system consisting of a high-speed infrared thermal imager and an infrared pyrometer to detect powder delamination and spattering in the SLM process. However, although various detection methods based on a single sensor using optical, acoustic, and thermal signals have been widely used in SLM and can qualitatively establish a relationship between the monitoring signals and defects, they still have great limitations such as low accuracy and scarce information. The information provided by any single sensor is one-sided and incomplete, and may be inaccurate. The accuracy and the types of information provided by different single sensors also differ. Fusing the information provided by multiple sensors enables more reliable defect detection and judgment than a single sensor and improves the accuracy of the defect detection system. Meanwhile, because the information characteristics provided by different sensors differ, a multi-sensor information fusion system can obtain a large amount of feature information that a single sensor cannot, which greatly enhances the anti-interference ability of the defect detection system. Thus, a series of detection systems composed of multiple sensors, namely multi-sensor detection systems [25], has emerged.

Based on the multi-sensor detection of light, sound, heat, and other signals, more comprehensive, reliable, and accurate information can be captured for defect detection and extraction in the SLM process. Craeghs et al. [26], Tatsuaki et al. [27], and Sebastian et al. [28] investigated continuous detection of high-speed melt pools in SLM processes to achieve real-time feedback control of process parameters. The in-situ detection system is mainly composed of a CCD (Charge Coupled Device)/CMOS (Complementary Metal Oxide Semiconductor) camera, a photodiode, and a data acquisition and processing system. Aniruddha et al. [29] studied SLM process data over a wide range of laser scan speeds and laser powers using a high-speed camera and a pyrometer. Yakout et al. [30] proposed an in-situ detection system consisting of a high-speed infrared thermal camera and an infrared pyrometer to detect powder delamination and spattering in the SLM processes. Gusarov et al. [31] developed a detection system consisting of a high-speed CCD camera, a near-infrared camera, and a pyrometer to diagnose the SLM process under different laser power densities and obtained the relationship between geometric parameters of each machining trajectory and the laser power density distributions. Gould et al. [32] proposed a detection method combining high-speed infrared imaging with high-speed X-ray imaging to detect vapor plume dynamics, cooling rate, spatter, and the three-dimensional morphology of the melt pool. However, the extremely complex processing environment, with high temperature, strong light, and powder spattering, poses great challenges to defect detection; the existing methods still have limitations and deficiencies, and it remains difficult to achieve effective detection and extraction of pores, balling, cracking, and other defects.

In the field of visual inspection, visible-light imaging can provide detailed information, which is conducive to improving the detection ability and ensuring detection accuracy, but its imaging quality is seriously affected by the lighting environment, and it is difficult to detect defects covered by powder or annihilated by strong reflected light. Infrared imaging has good penetration ability and thermal contrast and is less affected by complex conditions such as powder spattering, but it is difficult to capture defect details and its detection accuracy is low. Meanwhile, polarization imaging is not affected by the surface temperature, can avoid the interference of background clutter, effectively highlights defects, and solves the problem that it is difficult to achieve high-precision restoration of defect information in strongly reflective areas. Therefore, in this study, a multi-sensor defect detection system (MSDDS) was designed and established by fusing visible, infrared, and polarization detection information. The MSDDS can capture defect information clearly and accurately through a reasonable optical design, and effectively achieve defect feature extraction and analysis through image processing. This paper is organized as follows. The design concept and specifications of the MSDDS are described in Section 2. In Section 3, the optical design and evaluation are presented. Section 4 illustrates the tolerance analysis of the MSDDS. Section 5 describes the experimental studies and discussion, and the conclusions are presented in Section 6.

2. Design concept and specifications for the MSDDS

The optical design of the MSDDS mainly includes the visible light channel (VL), infrared channel (IL), and polarization channel (PL), as shown in Fig. 1. The VL includes a visible light imaging objective lens, filter (GCC-301031, DAHENG OPTROELECTRONICS), and a CMOS camera with a resolution of 7728×5368 and a single-pixel size of 1.1 µm. The effective frame rate of the VD is 60 fps. The IL includes an infrared imaging objective lens, filter (NENIR03B, THORLABS), and an InGaAs sensor with a resolution of 320×256 and a single-pixel size of 30 µm. The effective frame rate of the ID is 25 fps. The PL includes a polarization imaging objective lens, linear polarizer (LPVISE100-A, THORLABS), filters (GCC-301031, DAHENG OPTROELECTRONICS), and a CMOS camera with a resolution of 2448×2048 and a single-pixel size of 3.45 µm. The effective frame rate of the PD is 24 fps. The maximum allowable frame rate between the HUB and the PC is 24 fps. Additionally, the system also includes two beam splitters (BSW30, THORLABS), a hub, and an image processing computer.

Fig. 1. Schematic diagram of the MSDDS for laser AM: PL: Polarization channel imaging system; IL: Infrared channel imaging system; VL: Visible channel imaging system; DM1: Beamsplitter 1; DM2: Beamsplitter 2; FT: Filters; ID: Infrared channel image sensor; PD: Polarization channel image sensor; VD: Visible channel image sensor; PR: polarizer; PC: computer.

The VL and PL work in the wavelength band of 0.4-0.7 µm, and the IL works in the near-infrared band of 0.9-1.7 µm. Table 1 shows the design parameters of the MSDDS. When the MSDDS captures the surface image of the SLM workpiece, the feature information of defects is reflected and transmitted by the beam splitter DM1. Although the near-infrared signal of 0.9-1.7 µm captured by the IL system can be weak compared with the VL and PL systems, and the light captured by the PL system suffers a certain attenuation, the PL system provides good suppression of strong reflected light and can effectively highlight the edge contour information of defects with high resolution. To ensure the fusion effect of the defect detection information of the VL, PL, and IL systems, 50% of the light transmitted by DM1 is captured by the image sensor of the PL; the other 50% is reflected toward the beam splitter DM2, which transmits 25% of the original light to the image sensor of the VL and reflects the remaining 25% to the image sensor of the IL. The detection information captured by the image sensors is transmitted through the hub to the image processing and storage module of the computer for fusion and defect extraction, which realizes multi-sensor fusion detection of surface defects in laser AM processes under complex working conditions.

Table 1. Design Parameters of the MSDDS

In the design process, the selection of the focal length is closely related to the resolution of the object, the field of view, etc. The focal length is determined by the detection area, the working distance, and the size of the image sensor. A common commercially available focal length of 50 mm is selected for the VL. The size of the CMOS image sensor is about 8.50 mm × 5.90 mm. According to the concept of the aperture angle of the optical system, the field angle can be expressed as:

$$2\omega = 2\arctan \left( {\frac{y}{f}} \right)$$
where $\omega$ is the half angle of view, y is the half image height of the system, and f is the focal length of the system. Considering the machining and installation tolerance of the system, the calculated image height is 0.15 mm larger than the actual image sensor size in the design.

According to Eq. (1), the field of view in the horizontal direction of the VL is:

$$2{\omega _1} = 2\arctan \left( {\frac{{(8.50 + 0.15)/2}}{{50}}} \right) \approx 9.89^\circ$$

The field of view in the vertical direction of the VL is:

$$2{\omega _2} = 2\arctan \left( {\frac{{(5.90 + 0.15)/2}}{{50}}} \right) \approx 6.92^\circ$$

The field of view in the diagonal direction of the VL is:

$$2{\omega _3} = 2\arctan \left( {\frac{{\sqrt {(8.50 + {{0.15})^2} + (5.90 + {{0.15})^2}} /2}}{{50}}} \right) \approx 12.05^\circ$$

The size of the system's object-side field of view can be calculated from the focal length, working distance, and image sensor size:

$$2Y = 2\frac{{y \cdot WD}}{f}$$
where Y is the half-width of the field of view, WD is the working distance, y is the half-image height, and f is the focal length.

According to Eq. (2), the width of the field of view in the horizontal direction of the VL is:

$$2{Y_1} = 2\frac{{(8.50 + 0.15)/2 \cdot 300}}{{50}} = 51.90\; \textrm{mm}$$

The width of the field of view in the vertical direction of the VL is:

$$2{Y_2} = 2\frac{{(5.90 + 0.15)/2 \cdot 300}}{{50}} = 36.30\; \textrm{mm}$$

The width of the field of view in the diagonal direction of the VL is:

$$2{Y_3} = 2\frac{{\sqrt {(8.50 + {{0.15})^2} + (5.90 + {{0.15})^2}} /2 \cdot 300}}{{50}} \approx 63.33\; \textrm{mm}$$

Furthermore, the diffraction-limited resolution of the system can be calculated according to Eq. (3):

$$\alpha = f \cdot \frac{{1.22\lambda }}{D} = 1.22\lambda F$$
where $\lambda $ is the center wavelength, f is the focal length, D is the entrance pupil diameter, F is the aperture number, and $\alpha $ is the minimum Airy disk size.

The limited resolution of the system is related to the pixel size of the image sensor. To ensure the reliability of imaging quality and defect detection, 2×2-pixel units are selected as the calculation pixels, and the minimum Airy disk size $\alpha $ of the VL should be smaller than 2.2 µm, that is:

$$\alpha = 1.22\lambda F = 1.22 \cdot 0.587 \cdot F \le 2.2$$

Thus $F \le 3.07$. According to the system's illuminance expression:

$$E = \frac{\pi }{4}\tau L{\left( {\frac{D}{f}} \right)^2}$$
where $\tau $ is the transmittance, L is the brightness of the target surface, D is the diameter of the entrance pupil, and f is the focal length. The larger the relative aperture of the system, the greater the luminous flux and the illuminance of the system; however, the lateral size of the optical elements would become larger, which makes the optical design and manufacturing more complicated and the system more costly. Therefore, the F-number of the VL is chosen as 2.5, which meets the requirements of illuminance and resolution. The diameter of the entrance pupil is 20 mm. The parameter calculations for the PL and IL are similar.
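
As a quick cross-check of the first-order parameters above, the following short Python sketch reproduces the field-angle, object-field, and F-number calculations of Eqs. (1)-(3) using the VL values quoted in the text (50 mm focal length, 300 mm working distance, 8.50 mm × 5.90 mm sensor, 0.15 mm tolerance margin); it is an illustrative helper, not part of the published design workflow.

```python
import math

f = 50.0            # focal length in mm
wd = 300.0          # working distance in mm
sensor_w, sensor_h = 8.50, 5.90   # CMOS sensor size in mm
margin = 0.15       # extra image height allowed for machining/installation tolerance

def field_angle(image_size_mm: float) -> float:
    """Full field angle 2*omega in degrees for a given image-side size (Eq. (1))."""
    y = (image_size_mm + margin) / 2.0
    return 2.0 * math.degrees(math.atan(y / f))

def object_field(image_size_mm: float) -> float:
    """Object-side field width 2Y in mm at the working distance (Eq. (2))."""
    y = (image_size_mm + margin) / 2.0
    return 2.0 * y * wd / f

diag = math.hypot(sensor_w + margin, sensor_h + margin)

print(f"2w_1 = {field_angle(sensor_w):.2f} deg")                    # ~9.89 deg
print(f"2w_2 = {field_angle(sensor_h):.2f} deg")                    # ~6.92 deg
print(f"2w_3 = {2*math.degrees(math.atan((diag/2)/f)):.2f} deg")    # ~12.05 deg
print(f"2Y_1 = {object_field(sensor_w):.2f} mm")                    # ~51.90 mm
print(f"2Y_2 = {object_field(sensor_h):.2f} mm")                    # ~36.30 mm

# Diffraction limit (Eq. (3)): the Airy disk size alpha = 1.22*lambda*F must
# stay below the 2x2-binned pixel size of 2.2 um, which bounds the F-number.
wavelength_um = 0.587
pixel_limit_um = 2.2
print(f"F-number upper bound: {pixel_limit_um / (1.22 * wavelength_um):.2f}")  # ~3.07
```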

3. Optical design and evaluation

3.1 Design and evaluation of the VL channel

Considering the high detection accuracy, small focal ratio, compact structure, and large relative aperture required of the VL system, a symmetrical or quasi-symmetrical optical structure is selected in the design process. The double Gaussian structure is a typical symmetrical configuration whose aperture stop is in the middle, with lens groups on both sides that are symmetrical or approximately symmetrical about the stop. It has almost no vertical-axis aberration and can efficiently correct the distortion, coma, and chromatic aberration of magnification. The optimization of the optical system is carried out in ZEMAX. The field curvature, chromatic aberration, spherical aberration, and astigmatism of the system are corrected mainly by optimizing the lens thicknesses, surface parameters, and materials. The telecentric optical structure is widely used in metrology, machine vision, and lithography [32]. It is not affected by positional changes in visual inspection, provides constant image magnification, eliminates field-of-view errors, and enables the sensor to capture images with uniform relative illumination, which is of great significance in defect detection. Therefore, the VL system is further optimized with an image-side telecentric structure to ensure the defect detection capability in the laser AM processes.

To optimize the high-order distortion and spherical aberration, improve the quality of detected defect images, reduce the number of lenses, and reduce the complexity of the VL system, an even-order aspheric surface is investigated in the optimization process. The even-order aspheric surface is composed of the base quadratic surface and the even-order power series polynomial, and its expression is:

$$Z(h )= \frac{{{h^2}c}}{{1 + \sqrt {1 - ({1 + K} ){h^2}{c^2}} }} + \mathop \sum \nolimits_{m = 2}^M {A_{2m}}{h^{2m}}$$
where $c = 1/R$ is the curvature at the vertex of the aspheric surface, m = 2, 3, 4 … M, and M is the highest order of the auxiliary polynomial, h is the radial height on the aspheric surface, ${h^2} = {x^2} + {y^2}$, ${A_{2m}}$ is the coefficient of the even power series polynomial, and K is the conic coefficient of the base quadratic surface.
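
For reference, a minimal Python sketch of the even-asphere sag expression above is given below; the radius, conic constant, and A_2m coefficients in the example call are placeholders to be replaced with the S8 values of Table 2.

```python
import numpy as np

def even_asphere_sag(h, R, K, A):
    """Sag Z(h) of an even-order asphere: base conic plus even power series.

    h : radial height (same unit as R)
    R : vertex radius of curvature (c = 1/R)
    K : conic coefficient of the base quadratic surface
    A : dict mapping even order 2m -> coefficient A_2m, e.g. {4: a4, 6: a6, ...}
    """
    c = 1.0 / R
    h = np.asarray(h, dtype=float)
    z = h**2 * c / (1.0 + np.sqrt(1.0 - (1.0 + K) * h**2 * c**2))
    for order, coeff in A.items():
        z += coeff * h**order          # even power-series terms A_2m * h^(2m)
    return z

# Illustrative call with placeholder values (not the paper's S8 data)
z_edge = even_asphere_sag(h=8.0, R=100.0, K=-0.8198, A={4: 1e-7, 6: -1e-10})
```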

Figure 2 shows the cross-sectional schematic. The VL system consists of a pair of doublet lenses and three single lenses, with a maximum size of 36 mm, a total optical length of 77.3 mm, a back focal length of 17.5 mm, a half image height of 5.280 mm, and a ratio of total optical length to focal length of 1.546. The eighth optical surface S8 is a 10th-order even-order aspheric surface. The maximum sag at the edge of S8 is 3.94 mm, the conic coefficient K is -0.8198, the curvature value at the vertex is -15.8498, and the coefficients of each additional even power series term are shown in Table 2. Because the quadratic term coefficient of the 10th-order even-order aspheric surface is coupled with the corresponding radius of curvature, which would make the optical system unstable, it is not set as a variable in the optimization process.

Fig. 2. Cross-sectional schematic of the VL system.

Table 2. Surface Parameters of Even-order Aspheric S8 in the VL System

After the above optimization steps, the design results of the VL system are obtained. Image quality evaluations of the modulation transfer function (MTF), spot diagram, energy diagram, field curvature, and distortion need to be further completed. As shown in Fig. 3, the image quality evaluation results at a working distance of 300 mm include the diffraction MTF values, field curvature and distortion diagrams, spot diagram, energy envelope diagram, relative illuminance (RI) map, and wavefront map. Evaluating the imaging quality with the optical transfer function treats the object as a spectrum of spatial frequencies, i.e., the light field distribution of the object is expanded as a Fourier series and the diffraction MTF is calculated for all positions in the field of view; this evaluation fully reflects the imaging quality of the optical system [33]. The MTF is defined as MTF = (Imax - Imin)/(Imax + Imin), where Imax and Imin represent the highest and lowest brightness, respectively. At the cutoff frequency of 227.27 lp/mm, the MTF values of the VL system under the on-axis, 0.3, 0.5, 0.707, and 1.0 fields of view are 0.44116, 0.44093, 0.43361, 0.41958, and 0.40640, respectively, as shown in Fig. 3(a). The MTF values of all fields of view are greater than 0.3, so the design results of the VL system meet the requirements. Field curvature is an off-axis aberration in which the convergence point of an off-axis beam on the imaging plane is shifted relative to the Gaussian image point after refraction by the optical system. Since the sensor is planar and the field curvature is a function of the field of view, a large field curvature would degrade the imaging quality of the system. Distortion is a measure of the deformation of an object after being imaged by an optical system; it does not affect the sharpness of the image, but in defect detection it affects the recognition and extraction of geometric features. In the image quality evaluation, SMIA TV distortion is applied for the distortion analysis: SMIA TV distortion (%) = ((h′ - h)/h) × 100%, where h′ is the half-height at the corner of the image and h is the half-height at the center of the image. Figure 3(b) shows the field curvature and distortion diagrams of the VL system, of which the left side is the astigmatic field curvature curve; the distance between the T and S curves of the same wavelength represents astigmatism, and the abscissa represents the field of view. The maximum field curvature of the VL system is only -0.02252, and the spacing between the field curvature curves at each working wavelength is small, indicating that the astigmatism is small and does not affect the clarity of the image received on the sensor. The maximum distortion is -0.39%, which fully meets the design requirements.
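
To make the two scalar metrics quoted above explicit, a small Python sketch is given below; it simply transcribes the in-text definitions of the MTF contrast and SMIA TV distortion and is not a substitute for the ZEMAX analysis, and the numbers in the example call are illustrative only.

```python
def modulation(i_max: float, i_min: float) -> float:
    """MTF-style (Michelson) contrast: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def smia_tv_distortion(h_corner: float, h_center: float) -> float:
    """SMIA TV distortion in percent: ((h' - h) / h) * 100."""
    return (h_corner - h_center) / h_center * 100.0

# e.g. a corner image half-height 0.39% shorter than the center half-height
print(smia_tv_distortion(h_corner=0.9961, h_center=1.0))  # about -0.39 %
```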

Fig. 3. The image quality evaluation results of the VL system: (a) Diffraction MTF values, the cutoff frequency is 227.27 lp /mm; (b) Field curvature and distortion diagram; (c) Spot diagram; (d) Energy envelope diagram; (e) Relative illuminance (RI) map; (f) Wavefront map.

The spot diagram uses the density of points to measure the imaging quality of the optical system, and the region formed by more than 60% of the points is often referred to as the effective spot. Figure 3(c) is the spot diagram of the VL system, where different colors represent different working wavelengths. The pixel size of the system is 2.2 µm, the relative change of the spot size under different fields of view is small, and the size of the Airy disk is 1.931 µm. The root mean square (RMS) values of the spot under the on-axis, 0.3, 0.5, 0.707, and full fields of view are 1.258 µm, 1.270 µm, 1.317 µm, 1.401 µm, and 1.579 µm, respectively, which are smaller than the pixel size and the Airy disk size. The rays in the edge field of view are relatively scattered, but most of the energy is concentrated within the Airy disk. The energy envelope map reflects how the energy of the system is distributed; the more concentrated the energy density received by the system, the better the signal feedback of the imaging system. Figure 3(d) shows the energy envelope distribution curves of the VL system in each field of view. More than 80% of the energy in each field of view is concentrated within the pixel size of 2.2 µm, which can meet the needs of defect detection.

Relative illuminance refers to the ratio of the illuminance between different coordinate points and the central coordinate point on the image plane of the sensor, which is affected by distortion, vignetting, and pupil aberration [34]. If the relative illuminance of the optical system is small, the illuminance of the acquired image would be uneven and underexposure or overexposure would easily occur, which seriously affects the image processing in visual inspection and the feature extraction of detection targets. Figure 3(e) is the relative illuminance curve of the VL system, and its value is greater than 99.89%, which meets the design requirements. Figure 3(f) shows the wavefront map of the VL system. Theoretically, the peak-to-valley (PV) value of the wavefront should be less than λ/4 for the imaging system to have high optical quality. The PV value of the designed VL system is 0.0210 λ, and its root mean square value is 0.0058 λ. Therefore, the above analysis indicates that the imaging quality of the VL can effectively meet the design requirements.

To improve the reliability of the MSDDS, multiple configurations are used to optimize the imaging performance of the VL system at different working distances. Figure 4 presents the MTF results at 227.27 lp/mm for working distances of 290 mm, 330 mm, 370 mm, and 410 mm. The MTF of the VL system in each field of view is relatively uniform. When the working distance is 290 mm, the MTF values in the on-axis, 0.3, 0.5, 0.707, and 1.0 fields are 0.44527, 0.43019, 0.40093, 0.36053, and 0.30257, respectively. The MTF values under the on-axis, 0.3, 0.5, 0.707, and 1.0 fields at a working distance of 330 mm are 0.45397, 0.45397, 0.45102, 0.44303, and 0.41592. The MTF values for the on-axis, 0.3, 0.5, 0.707, and 1.0 fields at a working distance of 370 mm are 0.45562, 0.45503, 0.45138, 0.44107, and 0.41026, respectively. The MTF values in the on-axis, 0.3, 0.5, 0.707, and 1.0 fields at a working distance of 410 mm are 0.45299, 0.44815, 0.43716, 0.41535, and 0.36204, respectively. The simulated results indicate that the MTF value of the entire VL system is greater than 0.3 at the spatial frequency of 227.27 lp/mm. Therefore, the imaging quality of the VL system can meet the design requirements.

Fig. 4. MTF curves of the VL system at different working distance: (a) WD = 290 mm; (b) WD = 330 mm; (c) WD = 370 mm; (d) WD = 410 mm.

3.2 Design and evaluation of the IL system

Figure 5 shows the cross-sectional schematic of the IL system. The IL system consists of a single lens and three doublet lenses. The maximum size of the IL system is 40 mm, the total optical length is 76 mm with a back focal length of 17.5 mm and a half-image height of 6.252 mm, and the ratio of the total optical length to the focal length is 1.52. The image quality evaluation results are presented in Fig. 6.

Fig. 5. Cross-sectional schematic of the IL system.

Fig. 6. The image quality evaluation results of the IL system: (a) Diffraction MTF values, the cutoff frequency is 16.67 lp /mm; (b) Field curvature and distortion diagram; (c) Spot diagram; (d) Energy envelope diagram; (e) Relative illuminance (RI) map; (f) Wavefront map.

Fig. 7. MTF curves of the IL system at different working distance: (a) WD = 290 mm; (b) WD = 320 mm; (c) WD = 350 mm; (d) WD = 380 mm.

At the cutoff frequency of 16.67 lp/mm, the MTF values of the IL system under the on-axis, 0.3, 0.5, 0.707, and full fields of view are 0.66749, 0.67508, 0.60246, 0.53978, and 0.51976, respectively, as shown in Fig. 6(a). The MTF values of each field of view are greater than 0.3, and the design results of the IL system can meet the design requirements. Figure 6(b) shows the field curvature and distortion diagram of the IL system. The maximum field curvature value of the system is only 0.11861, and the spacing between the field curvature curves at each working wavelength is quite small, indicating that the astigmatism is small and does not affect the clarity of the image received on the sensor. Meanwhile, the maximum distortion of the IL system is -0.18%, which fully meets the design requirements. Figure 6(c) is the spot diagram of the IL system, where different colors represent different working wavelengths. The pixel size of the system is 30 µm, the relative changes of the spot size under different fields of view are small, and the root mean square values of the spot under the on-axis, 0.3, 0.5, 0.707, and full fields of view are 13.965 µm, 11.957 µm, 19.555 µm, 24.362 µm, and 23.639 µm, respectively, which are smaller than the pixel size. The energy envelope distribution curve of the IL system under each field of view is shown in Fig. 6(d). More than 95% of the energy in each field of view is concentrated within the pixel size of 30 µm. Figure 6(e) is the relative illuminance curve of the IL system, and its value is greater than 97.26%, which enables a detection image with uniform illumination on the sensor. Figure 6(f) presents the wavefront map of the IL system; the PV value is 0.1518 λ, and the root mean square value is 0.0338 λ.

Similar to the analysis of the VL system, multiple structures are used to optimize the imaging performance of the IL system at different working distances. Figure 7 shows the MTF curves at 16.67 lp/mm for working distances of 290 mm, 320 mm, 350 mm, and 380 mm, respectively. At a working distance of 290 mm, the MTF values in the on-axis, 0.3, 0.5, 0.707 and 1.0 fields are 0.38538, 0.38881, 0.30678, 0.28783 and 0.31452, respectively. When the working distance is 320 mm, the MTF values in the on-axis, 0.3, 0.5, 0.707 and 1.0 fields are 0.55320, 0.56952, 0.51271, 0.49752 and 0.56724, respectively. The MTF values for on-axis, 0.3, 0.5, 0.707, and 1.0 fields at a working distance of 350 mm are 0.68119, 0.73182, 0.60962, 0.52807 and 0.55161, respectively. The MTF values for on-axis, 0.3, 0.5, 0.707, and 1.0 fields at a working distance of 380 mm are 0.56276, 0.63524, 0.50171, 0.40770 and 0.37986, respectively. The MTF values of the entire IL system are greater than 0.3 at the spatial frequency of 16.67 lp/mm. Obviously, the IL system can maintain good imaging quality within a large working distance range.

Considering that the designed IL system is used in the visual inspection of SLM, a thermal analysis of the optical system is carried out at normal and elevated working temperatures, and a passive athermalization design is adopted with six working-temperature configurations of the imaging system: T = 20°C, T = 30°C, T = 40°C, T = 50°C, T = 60°C, and T = 70°C. As shown in Fig. 8, when the working temperature is 20°C, 30°C, 40°C, 50°C, 60°C, and 70°C, the corresponding wavefront PV values are 0.1518 λ, 0.1680 λ, 0.1359 λ, 0.1546 λ, 0.2013 λ, and 0.2185 λ, respectively, all less than λ/4. Therefore, the designed IL system can maintain good imaging performance within the working temperature range of 20°C to 70°C.

Fig. 8. Wavefront map of the IL system at each working temperature: (a) T = 20°C (b) T = 30°C (c) T = 40°C (d) T = 50°C (e) T= 60°C (f) T = 70°C.

3.3 Design and evaluation of the PL system

To simplify the optical structure of the PL system and improve the image quality, an aspherical structure is investigated in the design process. When the traditional power-series aspheric fitting exceeds 10 terms, the associated Gram matrix becomes ill-conditioned and the surface representation fails [35]. To ensure the yield of aspheric surface processing, it is necessary to control the slope of the lens during the design process. Therefore, the image quality of the PL system is optimized based on the Q-type aspheric surface, as the Qbsf aspheric surface can control the root mean square slope of the mirror surface. With the development of ultra-precision machining and inspection technology, the machining and inspection of various aspheric surfaces, including Q-type aspheric surfaces, have become possible [35,36]. Presently, optical system designs with Q-type aspheric surfaces mainly focus on camera lenses [37], ultraviolet lithography lenses [38], and panoramic rings [39], but there is no related application in a polarization imaging system.

The Q-type aspheric surface is formed with a Q-type polynomial as the auxiliary polynomial and is divided into the mild aspheric surface Qbsf and the strong aspheric surface Qcon [40]. The reference quadratic surface of the Qbsf aspheric surface is the best-fit sphere, that is, the sphere obtained by fitting the vertex of the aspheric surface and its point at the maximum clear aperture. The curvature of the best-fit sphere can be expressed as:

$${\rho _{\textrm{bsf}}} = \frac{{2f({{h_{\textrm{max}}}} )}}{{h_{\textrm{max}}^2 + f{{({{h_{\textrm{max}}}} )}^2}}}$$
where ${h_{\textrm{max}}}$ is the maximum radial height (semi-aperture) of the aspheric surface and $f({{h_{\textrm{max}}}} )$ is the sag at that height. The deviation of the Qbsf aspheric surface from the best-fit sphere, denoted by $\varDelta Z$, can be expressed as
$$\varDelta Z(r )= \frac{{{r^2}({1 - {r^2}} )}}{{\sqrt {1 - \rho _{\textrm{bsf}}^2{h^2}} }}\mathop \sum \nolimits_{m = 0}^M {a_m}Q_m^{\textrm{bsf}}({r^2})$$
where ${a_m}$ is the coefficient of the aspherical term, m = 0, 1, 2 … M, M is the highest number of terms of the aspherical surface, h is the radial height on the aspherical surface, ${h^2} = {x^2} + {y^2}$, ${\rho _{\textrm{bsf}}}$ is the curvature of the best-fit sphere, and $r = h/{h_{\textrm{max}}}$ is the normalized radial height. The expression of the Qbsf aspheric surface is then:
$$Z(h )= \frac{{{\rho _{\textrm{bsf}}}{h^2}}}{{1 + \sqrt {1 - \rho _{\textrm{bsf}}^2{h^2}} }} + \frac{{{r^2}({1 - {r^2}} )}}{{\sqrt {1 - \rho _{\textrm{bsf}}^2{h^2}} }}\mathop \sum \nolimits_{m = 0}^M {a_m}Q_m^{\textrm{bsf}}({r^2})$$
where $Q_m^{\textrm{bsf}}({{r^2}} )$ is a set of Jacobi polynomials, and its first six terms are:
$$\left\{ {\begin{array}{l} {Q_0^{\textrm{bsf}}({{r^2}} ) = 1}\\ {Q_1^{\textrm{bsf}}({{r^2}} ) = \frac{1}{{\sqrt {19} }}({13 - 16{r^2}} )}\\ {Q_2^{\textrm{bsf}}({{r^2}} ) = \sqrt {\frac{2}{{95}}} \left( {29 - 4{r^2}({25 - 19{r^2}} )} \right)}\\ {Q_3^{\textrm{bsf}}({{r^2}} ) = \sqrt {\frac{2}{{2545}}} \left( {207 - 4{r^2}\left( {315 - {r^2}({577 - 320{r^2}} )} \right)} \right)}\\ {Q_4^{\textrm{bsf}}({{r^2}} ) = \frac{1}{{3\sqrt {131831} }}\left( {7737 - 16{r^2}\left( {4653 - 2{r^2}\left( {7381 - 8{r^2}({1168 - 509{r^2}} )} \right)} \right)} \right)}\\ {Q_5^{\textrm{bsf}}({{r^2}} ) = \frac{1}{{3\sqrt {6632213} }}\left( {66657 - 32{r^2}\left( {28338 - {r^2}\left( {135325 - 8{r^2}\left( {35884 - {r^2}({34661 - 12432{r^2}} )} \right)} \right)} \right)} \right)} \end{array}} \right.$$
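
To make the construction concrete, the following Python sketch evaluates the Qbsf sag from the best-fit-sphere term and the departure term defined above, using the first three Q_m^bsf polynomials; the coefficient values in the example are placeholders, not the S7/S9 data of Table 3.

```python
import numpy as np

def qbsf_basis(r2):
    """First three Q_m^bsf polynomials (m = 0, 1, 2) as functions of r^2."""
    return [
        np.ones_like(r2),
        (13.0 - 16.0 * r2) / np.sqrt(19.0),
        np.sqrt(2.0 / 95.0) * (29.0 - 4.0 * r2 * (25.0 - 19.0 * r2)),
    ]

def qbsf_sag(h, rho_bsf, h_max, a):
    """Sag Z(h) of a Qbsf asphere with departure coefficients a = [a_0, a_1, a_2]."""
    h = np.asarray(h, dtype=float)
    r2 = (h / h_max) ** 2
    sqrt_term = np.sqrt(1.0 - rho_bsf**2 * h**2)
    base = rho_bsf * h**2 / (1.0 + sqrt_term)            # best-fit sphere term
    departure = (r2 * (1.0 - r2) / sqrt_term) * sum(
        a_m * q for a_m, q in zip(a, qbsf_basis(r2))     # truncated Q-series
    )
    return base + departure

# Illustrative call with placeholder values (not the paper's S7/S9 data)
z = qbsf_sag(h=3.0, rho_bsf=0.02, h_max=10.0, a=[1e-3, -5e-4, 2e-4])
```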

The interference and precision loss among the coefficients of the power series can be avoided, which is beneficial to improving the efficiency of optimization [35]. Figure 9 shows the cross-sectional schematic of the PL system. The PL system consists of a doublet lens and two single lenses, with a maximum size of 33.4 mm, a total optical length of 70 mm, a back focal length of 17.5 mm, a half-image height of 5.71 mm, and a ratio of total optical length to focal length of 1.4. The seventh optical surface S7 and the ninth optical surface S9 are Qbsf aspheric surfaces, and their surface shapes are shown in Fig. 9. The maximum sag heights at the edges of S7 and S9 are 1.07 mm and 0.826 mm, respectively, and the corresponding coefficients of the two Qbsf aspheric surfaces are shown in Table 3.

Fig. 9. Cross-sectional schematic of the PL system.

Table 3. Surface Parameters of Q-type Aspheric Coefficients in the PL System

The image quality evaluation results of the PL system are presented in Fig. 10. At the cutoff frequency of 144.93 lp/mm, the MTF values of the PL system under the on-axis, 0.3, 0.5, 0.707, and full fields of view are 0.54034, 0.51694, 0.37505, 0.37505, and 0.32172, respectively, as shown in Fig. 10(a). The MTF values of each field of view are greater than 0.3, which shows that the design results of the PL system can meet the design requirements. Figure 10(b) presents the field curvature and distortion diagram of the PL system. The maximum field curvature value of the system is only 0.02216, and the spacing between the field curvature curves at each working wavelength is quite small, which indicates that the astigmatism is small and does not affect the clarity of the detected image. Moreover, the maximum distortion of the PL system is 0.50%, which fully meets the design requirements.

Fig. 10. The image quality evaluation results of the PL system: (a) Diffraction MTF values, the cutoff frequency is 144.93 lp/mm; (b) Field curvature and distortion diagram; (c) Spot diagram; (d) Energy envelope diagram; (e) Relative illuminance (RI) map; (f) Wavefront map.

Figure 10(c) is the spot diagram of the PL system, where different colors represent different working wavelengths. The pixel size is 3.45 µm. The root mean square values of the spot size in the on-axis, 0.3, 0.5, 0.707, and full fields of view are 1.845 µm, 1.963 µm, 2.284 µm, 2.698 µm, and 4.281 µm, respectively. The energy is concentrated within the pixel area. Although the spot in the edge field of view is relatively scattered, its root mean square value is still less than twice the pixel size, which can meet the design requirements. Figure 10(d) shows the energy envelope distribution curve of the PL system in each field of view. More than 81.512% of the energy in the on-axis, 0.3, 0.5, and 0.707 fields of view is concentrated within the pixel size of 3.45 µm; under the 1.0 field of view, the energy within a single pixel accounts for 73.337%, and that within twice the pixel size accounts for 90.221%. Figure 10(e) shows the relative illuminance curve of the PL system, whose value is greater than 99.88%, which enables a detection image with uniform illumination on the sensor. Figure 10(f) shows the wavefront map of the PL system; the PV value is 0.2242 λ, and the root mean square value is 0.0605 λ.

Figure 11 shows the MTF values at 144.93 lp/mm for working distances of 290 mm, 320 mm, 350 mm, and 370 mm. At a working distance of 290 mm, the MTF values at the on-axis, 0.3, 0.5, 0.707, and 1.0 fields are 0.51342, 0.45054, 0.39650, 0.38469, and 0.36569, respectively. When the working distance is 320 mm, the MTF values in the on-axis, 0.3, 0.5, 0.707 and 1.0 fields are 0.57565, 0.51094, 0.44492, 0.40815 and 0.36989, respectively. At a working distance of 350 mm, the MTF values at the on-axis, 0.3, 0.5, 0.707, and 1.0 fields are 0.51844, 0.47676, 0.42980, 0.39042 and 0.34102, respectively. When the working distance is 370 mm, the MTF values in the on-axis, 0.3, 0.5, 0.707 and 1.0 fields are 0.46837, 0.43580, 0.39455, 0.35526, and 0.31528, respectively. Obviously, the MTF values of the entire PL system are greater than 0.3 at the spatial frequency of 144.93 lp/mm, thus the designed system can maintain good imaging quality within a large working distance range.

Fig. 11. MTF curves of the PL system at different working distance: (a) WD = 290 mm; (b) WD = 320 mm; (c) WD = 350 mm; (d) WD = 370 mm.

4. Tolerance analysis

The designed optical systems show good imaging quality according to the preceding simulations and analyses. However, deviations from the designed performance would arise from the parameters of the optical materials, the machining errors of the optical parts, and the assembly errors introduced during installation. These errors would affect the system's performance. Therefore, it is necessary to comprehensively consider the influence of the above error factors on the system performance, that is, to carry out a tolerance analysis [41,42]. In the tolerance analysis of this design, initial tolerance values are set for the optical and structural parameters of the optical system and compensation parameters are added. Meanwhile, Monte Carlo analysis is performed, and the average MTF value is used as the evaluation criterion. The tolerances of individual parameters are appropriately tightened or loosened to obtain tolerance analysis results that meet the requirements of machining and assembly.
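
The following Python sketch illustrates, in a highly simplified way, the Monte Carlo tolerancing loop described above: each toleranced parameter is drawn uniformly within its band, a compensated system is evaluated, and the MTF exceeded with 90% probability is read off as the 10th percentile of the trials. The apply_perturbations and evaluate_average_mtf functions are hypothetical stand-ins for the ray-tracing engine (ZEMAX in this work), and the tolerance bands shown are illustrative, not those of Table 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the ray-tracing engine; the toy MTF model only
# serves to make the sketch run end to end and has no physical meaning.
def apply_perturbations(perturbations):
    return dict(perturbations)

def evaluate_average_mtf(perturbed_system):
    rms = np.sqrt(np.mean([v ** 2 for v in perturbed_system.values()]))
    return max(0.45 - 2.0 * rms, 0.0)   # nominal MTF degraded by perturbation RMS

def monte_carlo_mtf(tolerance_bands, n_trials=2000):
    samples = []
    for _ in range(n_trials):
        draw = {name: rng.uniform(-band, band) for name, band in tolerance_bands.items()}
        samples.append(evaluate_average_mtf(apply_perturbations(draw)))
    # the MTF value exceeded with 90% probability is the 10th percentile of the trials
    return float(np.percentile(samples, 10))

# Illustrative tolerance bands (mm / degrees), not the values of Table 4
bands = {"TRAD_mm": 0.04, "TTHI_mm": 0.04, "TSDX_mm": 0.02, "TIRX_deg": 0.02}
print(monte_carlo_mtf(bands))
```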

Table 4 shows the tolerance values of the MSDDS. Using sensitivity analysis, the MTF value at the cutoff frequency of each optical channel was selected as the evaluation criterion, and the results were obtained from 2000 Monte Carlo trials. Controlling the decrease of the average MTF value at the cutoff frequency to within 10%, the ten tolerance items with the greatest impact are obtained. The analysis results are shown in Fig. 12. TRAD is the radius of curvature tolerance, TSDY is the eccentricity tolerance of the surface in the Y direction, TIRX is the inclination tolerance of the surface in the X direction, TTHI is the thickness tolerance, TSDX is the eccentricity tolerance of the surface in the X direction, TIRY is the inclination tolerance of the surface in the Y direction, and TEDY is the eccentricity tolerance of the component in the Y direction. Figure 12(a) shows the tolerance analysis results of the VL system; the MTF value varies by a maximum of 9.079% due to the radius of curvature tolerance of +0.04 mm. The decreases in MTF value caused by the ten most influential tolerance items are 9.079%, 8.797%, 8.700%, 8.539%, 8.539%, 8.241%, 8.241%, 7.799%, 7.794%, and 7.556%. Furthermore, the Monte Carlo analysis results of the VL system are shown in Fig. 12(b). Under the on-axis, 0.3, 0.5, 0.707, and full fields of view, the MTF value has a 90% probability of being greater than 0.30559, 0.29460, 0.28021, 0.26965, and 0.26805, respectively. Based on the above tolerance analysis results, the design of the VL system can meet the processing and assembly requirements.

Fig. 12. (a) Tolerance analysis results of the VL system; (b) Monte Carlo analysis results of the VL system; (c) Tolerance analysis results of the IL system; (d) Monte Carlo analysis results of the IL system; (e) Tolerance analysis results of the PL system; (f) Monte Carlo analysis results of the PL system.

Table 4. Tolerance Values of the MSDDS

Figure 12(c) illustrates the tolerance analysis results of the IL system. The maximum change in MTF value is 9.671%, which is caused by the surface eccentricity tolerance of +0.14 mm. The decreases in MTF value caused by the ten most influential tolerance items are 9.671%, 9.645%, 7.866%, 7.866%, 7.827%, 7.827%, 7.050%, 6.865%, 6.813%, and 5.801%, respectively. Figure 12(d) shows the Monte Carlo analysis results of the IL system. Under the on-axis, 0.3, 0.5, 0.707, and full fields of view, the MTF value has a 90% probability of being greater than 0.58628, 0.57310, 0.5460, 0.49581, and 0.40276, respectively. Based on the above tolerance analysis results, the design of the IL system meets the requirements of processing and assembly. As shown in Fig. 12(e), the tolerance analysis results of the PL system indicate that the MTF value varies by a maximum of 9.643% due to the surface eccentricity tolerance of +0.14 mm. The decreases in MTF value caused by the ten most influential tolerance items are 9.643%, 9.468%, 8.629%, 8.355%, 7.561%, 6.665%, 6.665%, 6.511%, 6.217%, and 6.032%. Furthermore, Fig. 12(f) shows the Monte Carlo analysis results of the PL system; under the on-axis, 0.3, 0.5, 0.707, and full fields of view, the MTF values have a 90% probability of being greater than 0.44508, 0.40105, 0.36488, 0.30306, and 0.24891, respectively. Therefore, the tolerance analysis results show that the designed optical structure of the MSDDS can effectively meet the requirements.

5. Experimental studies and discussion

5.1 Experimental setup

Figure 13 shows the MSDDS designed and built in this paper. The MSDDS consists of a hardware system and a software system. The hardware system includes an illumination device, the VL system, the IL system, the PL system, the hub, the beam splitters, the polarizer, the filters, and the support structure. The software system includes surface defect image acquisition, fusion of multi-sensor detection images, super-resolution reconstruction of defect images, and defect feature extraction and analysis. The sensor resolution of the VL system is 7728×5368 and the size of a single pixel is 1.1 µm; the designed VL system has a focal length of 50 mm and an F-number of 2.5. The sensor resolution of the IL system is 320×256, the size of a single pixel is 30 µm, the focal length of the designed IL system is 50 mm, and the F-number is 1.5. The sensor resolution of the PL system is 2448×2048, the size of a single pixel is 3.45 µm, and the designed PL system has a focal length of 50 mm and an F-number of 2. After the MSDDS captures defect detection images, it completes the detection and analysis of surface defects of parts through multi-sensor image fusion, super-resolution reconstruction, and defect feature extraction. The computer used for image processing and analysis is a Lenovo ThinkPad S2 with an i5-8250U CPU and a maximum frequency of 1.8 GHz.

Fig. 13. (a) The MSDDS designed and constructed in this paper; (b) The internal structure of the MSDDS; (c) The external packaging model of the MSDDS.

5.2 Defect detection and feature extraction

In this paper, the MSDDS is used to study the defect status of three groups of samples: sample 1, sample 2, and sample 3. Figure 14 presents the schematic diagram of the main technical route of defect feature extraction and characterization based on multi-sensor fusion. The multi-sensor data fusion and defect feature extraction flows are shown in Fig. 15, Fig. 16, and Fig. 17. In the defect analysis, a region of interest (ROI) is first selected from the defect detection image processed by the image fusion algorithm, and the ROI is then subjected to super-resolution reconstruction and defect information extraction. As presented in Fig. 15, Fig. 16, and Fig. 17, when the MSDDS is used for defect detection of laser AM parts, the defect detection images captured by the VL system have higher resolution and richer defect details, but the high reflectivity of the part surface can easily annihilate key information in the defect area. The defect detection image of the IL system has high contrast and penetration ability, but its resolution is low and it is difficult to obtain detailed information about the defect. The defect detection image of the PL system can suppress the high-reflection phenomenon on the surface of the part and highlight the edge contour information of the defect area, which is beneficial to later defect extraction and characterization. Combining the characteristics of each optical detection channel, the image fusion algorithm is used to register and fuse the visible, infrared, and polarization defect detection images, which can effectively improve the richness of the detection information and the defect detection ability under complex working conditions. The fused detection image has a stronger ability to distinguish the detailed information of the defect area, the contrast and clarity of the image are effectively improved, and the edge contours of defects such as porosity, cracking, and scratches are highlighted.
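
The following OpenCV-based Python sketch illustrates one possible implementation of the processing chain just described (registration, fusion, ROI cropping, upsampling, and edge/contour extraction). The weighted-average fusion, bicubic upsampling, and Canny edge step are simple stand-ins, not the fusion, super-resolution, or feature extraction algorithms used in this work; the file names, weights, and ROI coordinates are illustrative placeholders.

```python
import cv2
import numpy as np

def register_to(reference, moving):
    """Affine registration via ECC; assumes roughly aligned, same-size, float32 inputs."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(reference, moving, warp, cv2.MOTION_AFFINE, criteria)
    return cv2.warpAffine(moving, warp, reference.shape[::-1])

# Placeholder file names for the visible, infrared, and polarization frames
vis = cv2.imread("vl.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
irf = cv2.imread("il.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
pol = cv2.imread("pl.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Bring the lower-resolution channels onto the visible-image grid and register
irf = register_to(vis, cv2.resize(irf, vis.shape[::-1]))
pol = register_to(vis, cv2.resize(pol, vis.shape[::-1]))

fused = 0.5 * vis + 0.25 * irf + 0.25 * pol                  # naive weighted fusion
roi = fused[100:400, 200:500]                                # illustrative ROI crop
roi_up = cv2.resize(roi, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

edges = cv2.Canny(np.clip(roi_up, 0, 255).astype(np.uint8), 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```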

Fig. 14. Schematic diagram of the main technical route of defect feature extraction and characterization based on multi-sensor fusion.

Fig. 15. Multi-sensor data fusion and defect feature extraction of sample 1.

Fig. 16. Multi-sensor data fusion and defect feature extraction of sample 2.

Fig. 17. Multi-sensor data fusion and defect feature extraction of sample 3.

To objectively and quantitatively evaluate the quality improvement of the defect detection images after multi-sensor data fusion, and to compare and analyze the visible, infrared, and polarization detection images, the Average Gradient (AG), Entropy (E), Spatial Frequency (SF), Edge Intensity (EI), and Standard Deviation (SD) are used. The multi-sensor image fusion evaluation results are shown in Fig. 18. These evaluation indicators are defined as follows.

Fig. 18. Multi-sensor image fusion evaluation results.

The AG, also known as the gray-scale change rate, reflects the changes in image details and sharpness. It is a measure of the image's ability to express the contrast of details and texture information [43]. The larger the AG value, the richer the gradient information and the more detail the image contains. The AG is defined as:

$$AG = \frac{1}{{({X - 1} )({Y - 1} )}}\mathop \sum \nolimits_{i = 0}^{X - 1} \mathop \sum \limits_{j = 0}^{Y - 1} \sqrt {\frac{{I_x^2 + I_y^2}}{2}} $$
where ${I_x} = I({i + 1,j} )- I({i,j} )$ represents the horizontal gradient information at the image $({i,j} )$, ${I_y} = I({i,j + 1} )- I({i,j} )$ represents the vertical gradient information at the image $({i,j} )$.

The E is an index to measure the richness of image information. The larger the E value, the greater the contrast of the image, the greater the amount of information, and the better the effect of image fusion. It is defined as:

$$E ={-} \mathop \sum \nolimits_{i = 0}^{L - 1} {P_i}\textrm{lo}{\textrm{g}_2}({{P_i}} )$$
where L represents the total gray level of the image, ${P_i}$ is the proportion of pixels with the gray level of i in the image to the total pixels.

The SF reflects the overall activity of the image in the spatial domain. The larger the SF value, the better the quality of the fused image. It is defined as:

$$\left\{ {\begin{array}{l} {SF = \sqrt {C{F^2} + R{F^2}} }\\ {RF = \sqrt {\frac{1}{{X({Y - 1} )}}\mathop \sum \nolimits_{x = 1}^X \mathop \sum \nolimits_{y = 2}^Y {{({{I_{x,y}} - {I_{x,y - 1}}} )}^2}} }\\ {CF = \sqrt {\frac{1}{{({X - 1} )Y}}\mathop \sum \nolimits_{x = 2}^X \mathop \sum \nolimits_{y = 1}^Y {{({{I_{x,y}} - {I_{x - 1,y}}} )}^2}} } \end{array}} \right.$$
where $SF$ represents the spatial frequency, $CF$ is the spatial column frequency, and $RF$ is the spatial row frequency.

The EI is essentially the magnitude of the image edge point gradient, that is, the local variation intensity of the image along the edge normal direction. The larger the edge strength value is, the more obvious the edge effect of the image is, which is of great significance in defect identification and extraction. For an image $I({i,j} )$, the Canny operator detects edges and the edge strength of the image at a point $({i,j} )$ is expressed as:

$$\left\{ {\begin{array}{{l}} {EI({i,j} )= \sqrt {E_i^2 + E_j^2} }\\ {{E_i} = \frac{{\partial G}}{{\partial i}}\ast I({i,j} )}\\ {{E_j} = \frac{{\partial G}}{{\partial j}}\ast I({i,j} )}\\ {G({i,j} )= \frac{1}{{2\pi {\sigma^2}}}exp\left( { - \frac{{{i^2} + {j^2}}}{{2{\sigma^2}}}} \right)} \end{array}} \right.$$
where $G({i,j} )$ represents the Gaussian smoothing kernel, $\frac{{\partial G}}{{\partial i}}$ and $\frac{{\partial G}}{{\partial j}}$ are its partial derivatives in the i and j directions, respectively, and $\mathrm{\ast }$ represents the convolution operation.

The SD reflects the gray-level difference information of the image and measures the difference between the source image and the fused image, allowing a more intuitive comparison and evaluation of the fusion quality. It is defined as:

$$\left\{ {\begin{array}{{l}} {SD = \sqrt {\frac{1}{{XY}}\mathop \sum \nolimits_{i = 0}^{X - 1} \mathop \sum \nolimits_{j = 0}^{Y - 1} {{[{I({i,j} )- \bar{I}} ]}^2}} }\\ {\bar{I} = \frac{1}{{XY}}\mathop \sum \nolimits_{i = 0}^{X - 1} \mathop \sum \nolimits_{j = 0}^{Y - 1} I({i,j} )} \end{array}} \right.$$
where $\bar{I}$ represents the mean value.
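
For completeness, the five indices defined above can be computed for a grayscale image with a few lines of Python, as sketched below; AG, E, SF, and SD follow the definitions directly, while the EI step approximates the derivative-of-Gaussian operator of the edge-intensity definition with a Gaussian blur followed by Sobel derivatives, and the Gaussian scale sigma is an assumed value.

```python
import numpy as np
import cv2

def average_gradient(I):
    """AG of a 2-D grayscale array: mean of sqrt((I_x^2 + I_y^2)/2)."""
    I = I.astype(np.float64)
    d0 = I[1:, :-1] - I[:-1, :-1]          # finite difference along axis 0
    d1 = I[:-1, 1:] - I[:-1, :-1]          # finite difference along axis 1
    return np.mean(np.sqrt((d0**2 + d1**2) / 2.0))

def entropy(I, levels=256):
    """Shannon entropy of a uint8 image over its gray-level histogram."""
    hist = np.bincount(I.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(I):
    """SF = sqrt(RF^2 + CF^2) from row- and column-wise differences."""
    I = I.astype(np.float64)
    rf = np.sqrt(np.mean((I[:, 1:] - I[:, :-1]) ** 2))
    cf = np.sqrt(np.mean((I[1:, :] - I[:-1, :]) ** 2))
    return np.sqrt(rf**2 + cf**2)

def edge_intensity(I, sigma=1.0):
    """Mean gradient magnitude of the Gaussian-smoothed image (DoG approximation)."""
    I = cv2.GaussianBlur(I.astype(np.float64), (0, 0), sigma)
    gx = cv2.Sobel(I, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(I, cv2.CV_64F, 0, 1)
    return np.mean(np.sqrt(gx**2 + gy**2))

def standard_deviation(I):
    """SD of the image gray levels about their mean."""
    return float(np.std(I.astype(np.float64)))
```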

In terms of the AG index, the average improvement rates of the multi-sensor fusion defect detection images of sample 1, sample 2, and sample 3 are 116.824%, 136.659%, and 131.404%, respectively, compared with the single-channel detection images before fusion. The fused detection image has richer gradient information and a stronger ability to express the contrast of key details and texture information in the defect area. Meanwhile, in terms of the E index, the average improvement rates are 18.244%, 5.241%, and 7.511%, respectively, indicating that the multi-sensor fusion defect detection image has greater contrast than single-channel detection and contains more information. Then, from the perspective of the SF index, the average improvement rates are 58.487%, 50.430%, and 53.737%, respectively, indicating that the quality of the fused image has been effectively improved. Moreover, for the EI index, the average improvement rates are 84.576%, 148.492%, and 155.783%, respectively. The edge contour contrast and resolution of the key porosity, cracking, and scratch areas in the fusion-processed defect detection image are high, and the edges are highlighted, which is of great significance for subsequent defect identification and feature extraction. For the SD index, the average improvement rates are 59.859%, 30.800%, and 23.703%, respectively. Based on the above results, the defect detection image based on the fusion of visible light, infrared, and polarization contains more information than the single-channel detection images, the details of the defect area are clearer, and the contrast is higher. The defects can be highlighted and extracted, which further verifies the effectiveness of the designed MSDDS and provides technical support for the analysis of defects in the laser AM processes.

Furthermore, geometric parameters such as the area and perimeter of the inspected defects were measured with image processing. As shown in Fig. 19, the defects extracted in #1 ROI1 and #1 ROI2 are typical pores. Three pores were extracted in #1 ROI1, namely #1 ROI1-1, #1 ROI1-2, and #1 ROI1-3, of which the pore numbered #1 ROI1-1 has the largest area and perimeter, 1513.6 µm2 and 141.8 µm, respectively. The width of the defect marker rectangle is close to the short axis of the circumscribed rectangle, and the height of the defect marker rectangle is similar to the long axis of the circumscribed rectangle. The maximum length of #1 ROI1-1 is 67.1 µm, while the maximum width of #1 ROI1-3 is 39.3 µm. The direction factors of #1 ROI1-1 and #1 ROI1-2 are 0.58 and 0.77, respectively, both less than 1, indicating that the main direction of these pores is vertical, while the direction factor of #1 ROI1-3 is 1.07 and its main direction is horizontal.
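
As an illustration of how such descriptors can be computed, the OpenCV-based sketch below extracts area, perimeter, rotated-bounding-box length and width, length-width ratio, inclination, and a width-to-height "direction factor" from a binary defect mask. The paper does not give its extraction code, so the definitions here, in particular the direction factor as the ratio of the axis-aligned marker-box width to its height, are assumed interpretations, and the pixel scale is a user-supplied parameter.

```python
import cv2
import numpy as np

def defect_descriptors(mask, um_per_px):
    """mask: binary uint8 image with defect pixels set to 255 (OpenCV 4 assumed)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    results = []
    for c in contours:
        area = cv2.contourArea(c) * um_per_px**2
        perimeter = cv2.arcLength(c, closed=True) * um_per_px
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)       # rotated bounding box
        length, width = max(w, h) * um_per_px, min(w, h) * um_per_px
        x, y, bw, bh = cv2.boundingRect(c)                 # axis-aligned marker box
        results.append({
            "area_um2": area,
            "perimeter_um": perimeter,
            "length_um": length,
            "width_um": width,
            "aspect_ratio": length / width if width else np.inf,
            "inclination_deg": angle,
            "direction_factor": bw / bh if bh else np.inf,  # assumed: marker-box w/h
        })
    return results
```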

Fig. 19. Defect feature extraction and 3D distribution of Sample 1.

In terms of defect shape, the length-width ratios of #1 ROI1-1, #1 ROI1-2, and #1 ROI1-3 are 2.06, 1.57, and 0.85, respectively; #1 ROI1-1 has the largest length-width ratio and that of #1 ROI1-3 is close to 1, but all three defects are irregular circular pores. Judging the approximate direction of each defect from its degree of inclination, #1 ROI1-1, #1 ROI1-2, and #1 ROI1-3 are distributed from the lower left to the upper right with different inclinations of -69.15°, -45.00°, and -15.53°, respectively. Two pores were extracted in #1 ROI2, namely #1 ROI2-1 and #1 ROI2-2, of which the pore numbered #1 ROI2-1 has the largest area and perimeter, 9817.4 µm2 and 439.1 µm, respectively. The marked rectangles of #1 ROI2-1 and #1 ROI2-2 are similar in size to the circumscribed rectangles; the maximum length of #1 ROI2-2 is 392.4 µm, while the maximum width of #1 ROI2-1 is 98.8 µm. The direction factors of #1 ROI2-1 and #1 ROI2-2 are 0.28 and 0.15, respectively, which are far less than 1, so the main direction of these pores is clearly vertical. The length-width ratios of #1 ROI2-1 and #1 ROI2-2 are 3.60 and 6.89, respectively, which are relatively large, indicating that the pores are approximately long, narrow strips, i.e., elongated pores.

As shown in Fig. 20, two defects, #2 ROI1 and #2 ROI2, are extracted from Sample 2. Their areas and perimeters are similar in size, with areas of 3341.1 µm² and 3495.8 µm² and perimeters of 227.4 µm and 189.3 µm, respectively. The marked rectangles of #2 ROI1 and #2 ROI2 have the same width of 388.6 µm, and the short axes of their circumscribed rectangles are similar. The maximum length of #2 ROI1 is 412.3 µm, while the maximum width of #2 ROI2 is 62.9 µm. The direction factors of #2 ROI1 and #2 ROI2 are 2.62 and 3.83, respectively, both greater than 1, indicating that the main direction of the defects is horizontal. The length-width ratios of #2 ROI1 and #2 ROI2 are 6.68 and 6.33, respectively, and the defect shapes are approximately narrow strips, which are typical cracking defects. Judging the approximate direction of each defect from the inclination results, #2 ROI1 is distributed from the upper left to the lower right with an inclination of 25.94°, while #2 ROI2 is distributed from the lower left to the upper right with an inclination of -8.13°.

Fig. 20. Defect feature extraction and 3D distribution of Sample 2.

As shown in Fig. 21, two defects in #3 ROI, numbered #3 ROI-1 and #3 ROI-2, are extracted from Sample 3. The #3 ROI-2 has the largest area and perimeter, 12314.1 µm² and 243.1 µm, respectively, with a maximum length of 512.1 µm and a maximum width of 189.1 µm. Both #3 ROI-1 and #3 ROI-2 have a direction factor of 1.05, which is close to 1, so the main body of each defect is oriented at roughly 45°. The length-width ratios of #3 ROI-1 and #3 ROI-2 are 3.00 and 2.71, respectively, and the defect areas are approximately long strips; the defects are slender but relatively wide, which is typical of scratch defects. Judging the approximate direction of the defects from the inclination results, #3 ROI-1 and #3 ROI-2 are both distributed from the lower left to the upper right, with inclinations of -43.27° and -43.20°, respectively. For the larger scratch defect #3 ROI-2, the fitted equations of its two scratch edges are y = -0.9391x + 245.31 and y = -0.9391x + 411.52, from which a scratch width of 189.1 µm is obtained after conversion. Therefore, the experimental results indicate that the proposed MSDDS can effectively detect the defects of laser AM workpieces, perform multi-sensor information fusion, and extract the defects for analysis after image processing. This provides a helpful and potential solution for defect detection and processing parameter optimization in laser AM. Meanwhile, the design scheme of the MSDDS is also applicable to other visual detection systems, such as welding and laser cutting.
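As a numerical check on the scratch-width calculation described above, the width follows from the perpendicular distance between the two fitted (parallel) edge lines, |b2 − b1|/√(1 + m²), followed by a unit conversion. The sketch below assumes the fitted intercepts are in pixel coordinates and uses a placeholder pixel scale chosen only so that the result lands near the reported 189.1 µm; the real scale must come from the system calibration.

```python
import math

m = -0.9391              # common slope of the two fitted scratch edges (from the text)
b1, b2 = 245.31, 411.52  # intercepts of the fitted edge lines, assumed in pixel coordinates

# Perpendicular distance between the two parallel edge lines, in pixels.
width_px = abs(b2 - b1) / math.sqrt(1.0 + m ** 2)

PIXEL_SIZE_UM = 1.56     # placeholder pixel scale (um/px); use the calibrated system value
width_um = width_px * PIXEL_SIZE_UM
print(f"scratch width ~ {width_px:.1f} px ~ {width_um:.1f} um")
```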

Fig. 21. Defect feature extraction and 3D distribution of Sample 3.

6. Conclusions

Defect detection of laser AM parts with traditional methods such as high-speed cameras, infrared thermal imagers, and photodiodes can only measure a single sensing signal (sound, light, or heat), which suffers from low signal accuracy, severe limitations, and insufficient information. To address these problems, this study proposed a design scheme for a multi-sensor defect detection system (MSDDS) for laser AM. Firstly, according to the optical performance requirements, the design parameters of the visible light (VL), infrared light (IL), and polarization light (PL) subsystems of the MSDDS were calculated, and the optical imaging system was designed and optimized using the optical design software ZEMAX. Then, the image quality evaluation indexes of each optical channel were analyzed, including the MTF, field curvature and distortion, spot diagram, relative illuminance, and wavefront map. Moreover, to meet the requirements of processing and assembly, tolerance analysis and Monte Carlo analysis were carried out on the optical structures of the MSDDS. The analysis results show that the MSDDS can obtain clear image center and edge detail information, uniform illumination distribution, and small distortion and field curvature, providing a guarantee for multi-sensor data fusion and defect feature extraction. Finally, an experimental setup of the MSDDS was built for verification tests, in which defect detection and feature extraction were carried out. The experimental results show that the MSDDS can obtain high-contrast and clear key information after multi-sensor image fusion processing, super-resolution reconstruction, and feature extraction of defects in laser AM. High-quality detection images of surface defects such as cracking, scratches, and porosity can be effectively extracted by the MSDDS. This provides a helpful and potential solution for defect detection and processing parameter optimization in laser AM. Furthermore, the proposed method and system can also be extended to other inspection applications, such as welding and laser cutting processes.

Funding

National Key Research and Development Program of China (2017YFA0701200); National Natural Science Foundation of China (52075100); Fudan University-CIOMP Joint Fund (FC2020-006).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. P. M. Pragana, R. F. V. Sampaio, I. M. F. Bragança, C. M. A. Silva, and P. A. F. Martins, “Hybrid metal additive manufacturing: A state–of-the-art review,” Advances in Industrial and Manufacturing Engineering 2, 100032 (2021). [CrossRef]  

2. Y. Chen, X. Peng, L. B. Kong, G. X. Dong, A. Remani, and R. Leach, “Defect inspection technologies for additive manufacturing,” Int. J. Extrem. Manuf. 3(2), 022002 (2021). [CrossRef]  

3. E. Luis, H. M. Pan, S. L. Sing, A. K. Bastola, and W. Y. Yeong, “Silicone 3D printing: process optimization, product biocompatibility, and reliability of silicone meniscus implants,” 3D Printing and Additive Manufacturing 6(6), 319–332 (2019). [CrossRef]  

4. L. A. Hockaday, K. H. Kang, N. W. Colangelo, P. Y. C. Cheung, B. Duan, E. Malone, J. Wu, L. N. Girardi, L. J. Bonassar, C. C. Chu, and J. T. Butcher, “Rapid 3D printing of anatomically accurate and mechanically heterogeneous aortic valve hydrogel scaffolds,” Biofabrication 4(3), 035005 (2012). [CrossRef]  

5. G. D. Goh, W. Toh, Y. L. Yap, T. Y. Ng, and W. Y. Yeong, “Additively manufactured continuous carbon fiber-reinforced thermoplastic for topology optimized unmanned aerial vehicle structures,” Composites, Part B 216, 108840 (2021). [CrossRef]  

6. T. D. Ngo, A. Kashani, G. Imbalzano, K. T. Q. Nguyen, and D. Hui, “Additive manufacturing (3D printing): A review of materials, methods, applications, and challenges,” Composites, Part B 143, 172–196 (2018). [CrossRef]  

7. R. Mercado and A. Rojas, “Additive manufacturing methods: techniques, materials, and closed-loop control applications,” Int J Adv Manuf Technol 109(1-2), 17–31 (2020). [CrossRef]  

8. N. Pengda, L. Ruidi, Z. Shuya, W. Minbo, C. Chao, and Y. Tiechui, “Hot cracking, crystal orientation and compressive strength of an equimolar CoCrFeMnNi high-entropy alloy printed by selective laser melting,” Opt. Laser Technol. 127, 106147 (2020). [CrossRef]  

9. Y. H. Wang, X. Z. Chen, S. Konovalov, C. C. Su, A. N. Siddiquee, and N. Gangil, “In-situ wire-feed additive manufacturing of Cu-Al alloy by addition of silicon,” Appl. Surf. Sci. 487, 1366–1375 (2019). [CrossRef]  

10. Y. Mostafa, M. A. Elbestawi, and S. C. Veldhuis, “A Review of Metal Additive Manufacturing Technologies,” Solid State Phenom. 278, 1–14 (2018). [CrossRef]  

11. I. Echeta, X. Feng, B. Dutton, R. Leach, and S. Piano, “Review of defects in lattice structures manufactured by powder bed fusion,” Int J Adv Manuf Technol 106(5-6), 2649–2668 (2020). [CrossRef]  

12. X. Peng and L. B. Kong, “A review of in situ defect detection and monitoring technologies in selective laser melting,” 3D Printing and Additive Manufacturing 00, 1–28 (2022). [CrossRef]  

13. M. Król, L. A. Dobrzański, L. Reimann, and I. Czaja, “Surface Quality in Selective Laser Melting of Metal Powders,” Arch. Mater. Sci. Eng. 60, 87–92 (2013).

14. S. L. Sing and W. Y. Yeong, “Laser powder bed fusion for metal additive manufacturing: perspectives on recent developments,” Virtual and Physical Prototyping 15(3), 359–370 (2020). [CrossRef]  

15. V. N. Lednev, P. A. Sdvizhenskii, R. D. Asyutin, R. S. Tretyakov, M. Y. Grishin, A. Y. Stavertiy, A. N. Fedorov, and S. M. Pershin, “In situ elemental analysis and failures detection during additive manufacturing process utilizing laser induced breakdown spectroscopy,” Opt. Express 27(4), 4612–4628 (2019). [CrossRef]  

16. R. G. Louca and V. H. Brecht, “A virtual sensing approach for monitoring melt-pool dimensions using high speed coaxial imaging during laser powder bed fusion of metals,” Addit. Manuf. 40, 101923 (2021). [CrossRef]  

17. L. H. Yang, L. Lo, S. J. Ding, and T. Ozel, “Monitoring and detection of meltpool and spatter regions in laser powder bed fusion of super alloy Inconel 625,” Prog Addit Manuf 5(4), 367–378 (2020). [CrossRef]  

18. V. K. Nadipalli, S. A. Andersen, J. S. Nielsen, and D. B. Pedersen, “Considerations for interpreting in-situ photodiode sensor data in pulsed mode laser powder bed fusion,” in Joint Special Interest Group meeting between EUSPEN and ASPE. Advancing Precision in Additive Manufacturing, 2019.

19. M. M. Pavlov and I. S. Doubenskaia, “Pyrometric analysis of thermal processes in SLM technology,” Phys. Procedia 5, 523–531 (2010). [CrossRef]  

20. D. S. Ye, K. P. Zhu, J. Y. H. Fuh, Y. Zhang, and H. G. Soon, “The investigation of plume and spatter signatures on melted states in selective laser melting,” Opt. Laser Technol. 111, 395–406 (2019). [CrossRef]  

21. F. Caltanissetta, M. Grasso, S. Petrò, and B. M. Colosimo, “Characterization of in-situ measurements based on layerwise imaging in laser powder bed fusion,” Addit. Manuf. 24, 183–199 (2018).

22. W. S. Land, B. Zhang, J. Ziegert, and A. Davies, “In-Situ Metrology System for Laser Powder Bed Fusion Additive Process,” Procedia Manufacturing 1, 393–403 (2015). [CrossRef]

23. Y. J. Zhang, G. S. Hong, D. S. Ye, K. P. Zhu, and J. Y. H. Fuh, “Extraction and evaluation of melt pool, plume and spatter information for powder-bed fusion AM process monitoring,” Mater. Des. 156, 458–469 (2018). [CrossRef]

24. M. Yakout, I. Phillips, M. A. Elbestawi, and Q. Y. Fang, “In-situ monitoring and detection of spatter agglomeration and delamination during laser-based powder bed fusion of Invar 36,” Opt. Laser Technol. 136, 106741 (2021). [CrossRef]  

25. J. J. Blecher, C. M. Galbraith, V. Van, T. A. Palmer, J. M. Fraser, P. J. L. Webster, and T. DebRoy, “Real time monitoring of laser beam welding keyhole depth by laser interferometry,” Sci. Technol. Weld. Joining 19(7), 560–564 (2014). [CrossRef]  

26. T. Craeghs, F. Bechmann, S. Berumen, and J. P. Kruth, “Feedback control of Layerwise Laser Melting using optical sensors,” Phys. Procedia 5, 505–514 (2010). [CrossRef]  

27. F. Tatsuaki, E. Kyota, M. Kenta, and A. Satoshi, “Experimental investigation of melt pool behaviour during selective laser melting by high-speed imaging,” CIRP Ann. 67(1), 253–256 (2018). [CrossRef]  

28. B. Sebastian, B. Florian, L. Stefan, K. J. Pierre, and C. Tom, “Quality control of laser- and powder bed-based Additive Manufacturing (AM) technologies,” Phys. Procedia 5, 617–622 (2010). [CrossRef]  

29. G. Aniruddha, G. Brian, M. G. Gabriel, F. J. Baptiste, J. M. Manyalibo, and R. Prahalada, “Heterogeneous sensing and scientific machine learning for quality assurance in laser powder bed fusion– A single-track study,” Addit. Manuf. 36, 101659 (2020). [CrossRef]  

30. A. V. Gusarov, A. A. Okun’kova, P. Y. Peretyagin, I. V. Zhirnov, and P. V. Podrabinnik, “Mean of optical diagnostics of selective laser melting with non-gaussian beams,” Meas. Tech. 58(8), 872–877 (2015). [CrossRef]  

31. B. Gould, S. Wolff, N. Parab, C. Zhao, M. C. Lorenzo-Martin, K. Fezzaa, A. Greco, and T. Sun, “In Situ Analysis of Laser Powder Bed Fusion Using Simultaneous High-Speed Infrared and X-ray Imaging,” JOM 73(1), 201–211 (2021). [CrossRef]  

32. A. Mikš and J. Novák, “Design of a double-sided telecentric zoom lens,” Appl. Opt. 51(24), 5928–5935 (2012). [CrossRef]

33. X. Peng and L. B. Kong, “Design of a real-time fiber-optic infrared imaging system with wide-angle and large depth of field,” Chin. Opt. Lett. 20(1), 011201 (2022). [CrossRef]  

34. D. Cheng, Y. T. Wang, L. Yu, and X. H. Liu, “Optical design and evaluation of a 4 mm cost-effective ultra-high-definition arthroscope,” Biomed. Opt. Express 5(8), 2697–2714 (2014). [CrossRef]  

35. M. Jia and C. X. Xue, “Design of Dual-Band Infrared Optical System with Q-type Asphere,” Acta Opt. Sin. 39(10), 1022001 (2019). [CrossRef]  

36. G. W. Forbes, “Shape specification for axially symmetric optical surfaces,” Opt. Express 15(8), 5218–5226 (2007). [CrossRef]

37. B. Ma, K. Sharma, K. P. Thompson, and J. P. Rolland, “Mobile device camera design with Q-type polynomials to achieve higher production yield,” Opt. Express 21(15), 17454–17463 (2013). [CrossRef]  

38. B. Ma, L. Li, K. P. Thompson, and J. P. Rolland, “Applying slope constrained Q-type aspheres to develop higher performance lens,” Opt. Express 19(22), 21174–21179 (2011). [CrossRef]  

39. X. D. Zhou and J. Bai, “Small Distortion Panoramic Annular Lens Design with Q-type Asphere,” Acta Opt. Sin. 35(7), 0722003 (2015). [CrossRef]  

40. Y. Tian, W. Yang, and J. Wang, “Image Fusion Using a Multi-level Image Decomposition and Fusion Method,” Appl. Opt. 60(24), 7466–7479 (2021). [CrossRef]  

41. R. B. Li, X. Du, and Z. H. Zhang, “Design, Machining and Measurement Technologies of Ultra-precision Freeform Optics,” (China Machine Press, 2015), p. 9.

42. S. He, Y. Meng, and M. Gong, “Freeform lens design to eliminate retro reflection for optical systems,” Appl. Opt. 57(5), 1218–1224 (2018). [CrossRef]  

43. M. J. Li, B. D. Yu, and L. W. Xiao, “Image Fusion Algorithm Based on Wavelet Transform and Laplacian Pyramid,” Adv. Mater. Res. 860-863, 2846–2849 (2013). [CrossRef]  

Figures (21)

Fig. 1.
Fig. 1. Schematic diagram of the MSDDS for laser AM: PL: Polarization channel imaging system; IL: Infrared channel imaging system; VL: Visible channel imaging system; DM1: Beamsplitter 1; DM2: Beamsplitter 2; FT: Filters; ID: Infrared channel image sensor; PD: Polarization channel image sensor; VD: Visible channel image sensor; PR: polarizer; PC: computer.
Fig. 2.
Fig. 2. Cross-sectional schematic of the VL system.
Fig. 3.
Fig. 3. The image quality evaluation results of the VL system: (a) Diffraction MTF values, the cutoff frequency is 227.27 lp/mm; (b) Field curvature and distortion diagram; (c) Spot diagram; (d) Energy envelope diagram; (e) Relative illuminance (RI) map; (f) Wavefront map.
Fig. 4.
Fig. 4. MTF curves of the VL system at different working distance: (a) WD = 290 mm; (b) WD = 330 mm; (c) WD = 370 mm; (d) WD = 410 mm.
Fig. 5.
Fig. 5. Cross-sectional schematic of the IL system.
Fig. 6.
Fig. 6. The image quality evaluation results of the IL system: (a) Diffraction MTF values, the cutoff frequency is 16.67 lp/mm; (b) Field curvature and distortion diagram; (c) Spot diagram; (d) Energy envelope diagram; (e) Relative illuminance (RI) map; (f) Wavefront map.
Fig. 7.
Fig. 7. MTF curves of the IL system at different working distance: (a) WD = 290 mm; (b) WD = 320 mm; (c) WD = 350 mm; (d) WD = 380 mm.
Fig. 8.
Fig. 8. Wavefront map of the IL system at each working temperature: (a) T = 20°C (b) T = 30°C (c) T = 40°C (d) T = 50°C (e) T= 60°C (f) T = 70°C.
Fig. 9.
Fig. 9. Cross-sectional schematic of the PL system.
Fig. 10.
Fig. 10. The image quality evaluation results of the PL system: (a) Diffraction MTF values, the cutoff frequency is 144.93 lp/mm; (b) Field curvature and distortion diagram; (c) Spot diagram; (d) Energy envelope diagram; (e) Relative illuminance (RI) map; (f) Wavefront map.
Fig. 11.
Fig. 11. MTF curves of the PL system at different working distance: (a) WD = 290 mm; (b) WD = 320 mm; (c) WD = 350 mm; (d) WD = 370 mm.
Fig. 12.
Fig. 12. (a) Tolerance analysis results of the VL system; (b) Monte Carlo analysis results of the VL system; (c) Tolerance analysis results of the IL system; (d) Monte Carlo analysis results of the IL system; (e) Tolerance analysis results of the PL system; (f) Monte Carlo analysis results of the PL system.
Fig. 13.
Fig. 13. (a) The MSDDS designed and constructed in this paper; (b) The internal structure of the MSDDS; (c) The external packaging model of the MSDDS.
Fig. 14.
Fig. 14. Schematic diagram of the main technical route of defect feature extraction and characterization based on multi-sensor fusion.
Fig. 15.
Fig. 15. Multi-sensor data fusion and defect feature extraction of sample 1.
Fig. 16.
Fig. 16. Multi-sensor data fusion and defect feature extraction of sample 2.
Fig. 17.
Fig. 17. Multi-sensor data fusion and defect feature extraction of sample 3.
Fig. 18.
Fig. 18. Multi-sensor image fusion evaluation results.
Fig. 19.
Fig. 19. Defect feature extraction and 3D distribution of Sample 1.
Fig. 20.
Fig. 20. Defect feature extraction and 3D distribution of Sample 2.
Fig. 21.
Fig. 21. Defect feature extraction and 3D distribution of Sample 3.

Tables (4)

Table 1. Design Parameters of the MSDDS
Table 2. Surface Parameters of Even-order Aspheric S8 in the VL System
Table 3. Surface Parameters of Q-type Aspheric Coefficients in the PL System
Table 4. Tolerance Values of the MSDDS

Equations (21)

$2\omega = 2\arctan\left(\frac{y}{f}\right)$
$2\omega_{1} = 2\arctan\left(\frac{(8.50 + 0.15)/2}{50}\right) \approx 9.89^{\circ}$
$2\omega_{2} = 2\arctan\left(\frac{(5.90 + 0.15)/2}{50}\right) \approx 6.92^{\circ}$
$2\omega_{3} = 2\arctan\left(\frac{\sqrt{(8.50 + 0.15)^{2} + (5.90 + 0.15)^{2}}/2}{50}\right) \approx 12.05^{\circ}$
$2Y = \frac{2y \cdot WD}{f}$
$2Y_{1} = \frac{2 \times (8.50 + 0.15)/2 \times 300}{50} = 51.90\ \mathrm{mm}$
$2Y_{2} = \frac{2 \times (5.90 + 0.15)/2 \times 300}{50} = 36.30\ \mathrm{mm}$
$2Y_{3} = \frac{2 \times \sqrt{(8.50 + 0.15)^{2} + (5.90 + 0.15)^{2}}/2 \times 300}{50} \approx 63.33\ \mathrm{mm}$
$\alpha = f\,\frac{1.22\lambda}{D} = 1.22\lambda F$
$\alpha = 1.22\lambda F = 1.22 \times 0.587\ \mu\mathrm{m} \times 2.2$
$E = \frac{\pi}{4}\,\tau L\left(\frac{D}{f}\right)^{2}$
$Z(h) = \frac{c h^{2}}{1 + \sqrt{1 - (1 + K)c^{2}h^{2}}} + \sum_{m=2}^{M} A_{2m} h^{2m}$
$\rho_{\mathrm{bsf}} = \frac{2 f(h_{\max})}{h_{\max}^{2} + f(h_{\max})^{2}}$
$\Delta Z(r) = \frac{r^{2}(1 - r^{2})}{\sqrt{1 - \rho_{\mathrm{bsf}}^{2} h^{2}}} \sum_{m=0}^{M} a_{m} Q_{m}^{\mathrm{bsf}}(r^{2})$
$Z(h) = \frac{\rho_{\mathrm{bsf}} h^{2}}{1 + \sqrt{1 - \rho_{\mathrm{bsf}}^{2} h^{2}}} + \frac{r^{2}(1 - r^{2})}{\sqrt{1 - \rho_{\mathrm{bsf}}^{2} h^{2}}} \sum_{m=0}^{M} a_{m} Q_{m}^{\mathrm{bsf}}(r^{2})$
$\begin{cases} Q_{0}^{\mathrm{bsf}}(r^{2}) = 1 \\ Q_{1}^{\mathrm{bsf}}(r^{2}) = \frac{1}{\sqrt{19}}\,(13 - 16 r^{2}) \\ Q_{2}^{\mathrm{bsf}}(r^{2}) = \sqrt{\frac{2}{95}}\,\left(29 - 4 r^{2}(25 - 19 r^{2})\right) \\ Q_{3}^{\mathrm{bsf}}(r^{2}) = \sqrt{\frac{2}{2545}}\,\left(207 - 4 r^{2}(315 - r^{2}(577 - 320 r^{2}))\right) \\ Q_{4}^{\mathrm{bsf}}(r^{2}) = \frac{1}{3\sqrt{131831}}\,\left(7737 - 16 r^{2}(4653 - 2 r^{2}(7381 - 8 r^{2}(1168 - 509 r^{2})))\right) \\ Q_{5}^{\mathrm{bsf}}(r^{2}) = \frac{1}{3\sqrt{6632213}}\,\left(66657 - 32 r^{2}(28338 - r^{2}(135325 - 8 r^{2}(35884 - r^{2}(34661 - 12432 r^{2}))))\right) \end{cases}$
$AG = \frac{1}{(X - 1)(Y - 1)} \sum_{i=0}^{X-1} \sum_{j=0}^{Y-1} \sqrt{\frac{I_{x}^{2} + I_{y}^{2}}{2}}$
$E = -\sum_{i=0}^{L-1} P_{i}\log_{2}(P_{i})$
$\begin{cases} SF = \sqrt{CF^{2} + RF^{2}} \\ RF = \sqrt{\frac{1}{X(Y - 1)} \sum_{x=1}^{X} \sum_{y=2}^{Y} \left(I_{x,y} - I_{x,y-1}\right)^{2}} \\ CF = \sqrt{\frac{1}{(X - 1)Y} \sum_{x=2}^{X} \sum_{y=1}^{Y} \left(I_{x,y} - I_{x-1,y}\right)^{2}} \end{cases}$
$\begin{cases} EI(i,j) = \sqrt{E_{i}^{2} + E_{j}^{2}} \\ E_{i} = G_{i} \ast I(i,j) \\ E_{j} = G_{j} \ast I(i,j) \\ G(i,j) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{i^{2} + j^{2}}{2\sigma^{2}}\right) \end{cases}$
$\begin{cases} SD = \sqrt{\frac{1}{XY} \sum_{i=0}^{X-1} \sum_{j=0}^{Y-1} \left[I(i,j) - \bar{I}\right]^{2}} \\ \bar{I} = \frac{1}{XY} \sum_{i=0}^{X-1} \sum_{j=0}^{Y-1} I(i,j) \end{cases}$
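For reference, the field-of-view and object-space coverage relations listed above can be evaluated directly from the sensor half-sizes, the focal length f = 50 mm, and the working distance WD = 300 mm. The short sketch below only reproduces that arithmetic and is not part of the authors' design workflow.

```python
import math

f_mm = 50.0              # focal length used in the relations above
wd_mm = 300.0            # working distance
sensor_w = 8.50 + 0.15   # image width plus margin (mm)
sensor_h = 5.90 + 0.15   # image height plus margin (mm)
sensor_d = math.hypot(sensor_w, sensor_h)  # image diagonal (mm)

def full_fov_deg(size_mm):
    # 2*omega = 2*arctan(y / f), with y the image half-size
    return math.degrees(2.0 * math.atan((size_mm / 2.0) / f_mm))

def coverage_mm(size_mm):
    # 2Y = 2*y*WD / f, the corresponding object-space coverage
    return 2.0 * (size_mm / 2.0) * wd_mm / f_mm

for label, s in [("horizontal", sensor_w), ("vertical", sensor_h), ("diagonal", sensor_d)]:
    print(f"{label}: 2w = {full_fov_deg(s):.2f} deg, 2Y = {coverage_mm(s):.2f} mm")
# Expected output: roughly 9.89/6.92/12.05 deg and 51.90/36.30/63.33 mm, as listed above.
```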