
Intelligent recognition of composite material damage based on deep learning and infrared testing

Open Access

Abstract

Composite materials are widely used in aircraft, and their integrity affects both flight performance and safety. Infrared nondestructive testing plays an important role in detecting damage in aircraft composite materials. Traditional manual detection methods are inefficient, and intelligent detection methods can effectively improve detection efficiency. Because composite materials can sustain diverse types of damage, the damage type is difficult to distinguish from infrared images alone. The introduction of infrared signals, which are temporal signals, makes it possible to judge the type of damage. In this paper, a 1D-YOLOv4 network is established; it is based on the YOLOv4 network with a modified neck and an added 1D-CNN. Testing shows that the algorithm can identify both infrared images and infrared signals of composite materials. Its recognition accuracy is 98.3%, with an AP of 91.9% and a kappa of 0.997. Comparing our network with YOLOv3, YOLOv4 and YOLOv4+Neck shows that the proposed network is more effective. We also study the detection performance of the original data, the fitted data, the first derivative data and the second derivative data, and the first derivative data give the best results.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Composite materials combine different materials through a composite process [1]. Composite materials have excellent properties, such as good insulation, strong heat resistance, and good corrosion resistance, and they are widely used in aircraft fuselages, wings, interior parts, radomes and other structures. For example, the European A400M military logistics aircraft uses composite wing covers [2]. Composite materials account for more than 35% of the materials used in the F-22 fighter [3], 50% of the materials used in the Boeing 787 [4], and up to 52% of the materials used in the Airbus A350 [5].

During the preparation and service of composite materials, various types of damage, such as internal delamination and debonding, inevitably occur. The location and extent of most of this damage are difficult to determine, which can present serious aircraft safety hazards [6,7]. With the extensive application of composite materials in aircraft, detecting damage in aircraft composite materials has become a key issue for ensuring flight safety [8,9].

At present, the commonly used nondestructive testing methods for composite materials include X-ray [10], ultrasonic [11] and acoustic emission [12] testing. These conventional methods have shortcomings, such as a small single detection area and slow detection speed; on the whole, they are not suitable for the rapid detection of damage in larger components. Active pulse thermal imaging, a form of infrared nondestructive testing, has many advantages: a large single detection area, fast detection speed, a noncontact process, simple operation, and suitability for on-site detection, and it has a wide range of research applications. Traditional infrared nondestructive testing research has mainly focused on the problems of blurred edges, uneven heating and low resolution in thermal images, and a variety of image enhancement techniques have been proposed to improve the contrast of thermal images [13,14]. At present, infrared thermal images are still mainly recognized manually: the operator uses these images to judge the damage of a material, which requires a certain level of professional knowledge [15,16].

With the rapid development of deep learning networks, it is now possible to intelligently identify damage in various materials [17–20]. Khan et al. [21] used a CNN to classify structural vibration signals of composite materials, reaching a classification accuracy of 90.1%. Zhang et al. [22] proposed an intelligent diagnosis algorithm based on a convolutional neural network, with a detection accuracy of 90.5%, which realizes the automatic diagnosis of bearing faults. Tripathi et al. [23] combined designed features with a k-nearest neighbour classifier to classify piezoelectric ceramic ultrasonic signals, providing up to 98% classification accuracy. Schmidt et al. [24] used convolutional neural networks to classify and detect CFRP (carbon fibre reinforced plastic) thermal images; the detection accuracy on known prepregs reached 96.2%. Meng et al. [25] identified multiple types of ultrasound A-scan signals of CFRP by combining the wavelet transform and a CNN, reaching a recognition accuracy of 98.15%. Luo et al. [26] proposed an automatic thermal image damage detection method based on a hybrid spatial and temporal deep learning architecture, whose damage recognition POD value reached 0.667; however, both networks are used only for segmenting damaged areas. Wei et al. [27] proposed using a CNN to quickly predict the effective thermal conductivity of composite materials, with a predicted RMSE of 1.9%. Bang et al. [28] used RTM and TIM methods to prepare composite materials and applied a faster R-CNN to intelligently identify damage such as delaminations and cracks; the identified AP value reached 75.05%.

An infrared image reflects the temperature field distribution of all points in a plane. Infrared images can be used to determine the location of damage, but they have difficulty determining its type. An infrared signal reflects the temperature change of a fixed point over time. Infrared signals can be used to determine the type of internal material because thermal conductivity is related to the type of material.

We propose an infrared image and signal fusion algorithm (1D-YOLOv4) that detects damaged areas through images and damage types through signals. The main innovations are as follows:

  • (1) Improve YOLOv4 by modifying the neck to enhance the small object detection ability and robustness of the model.
  • (2) Design an infrared signal recognition network based on a 1D-CNN to distinguish different types of damage.
  • (3) Compare original data, fitted data, first derivative data, and second derivative data to obtain the best data processing method.

2. Method

2.1 Infrared image detection network

2.1.1 YOLOv4 Network

The YOLOv4 network [29] was proposed on the basis of YOLOv3 [30]. This network is a typical one-stage object detection algorithm and cannot identify temporal signals. The YOLOv4 network is mainly composed of a backbone, neck and head. The backbone uses CSPDarknet53, which is based on Darknet53 and introduces the CSPNet [31] block concept; it contains 29 convolutional layers with a receptive field of 725×725. The neck is composed of spatial pyramid pooling (SPP) [32] and a path aggregation network (PAN) [33]. SPP broadens the receptive field of the model and isolates more important contextual information without decreasing model inference speed. The PAN has a structure that repeatedly extracts features, which improves small object detection accuracy. The head follows the anchor-based head in YOLOv3. In addition, YOLOv4 combines a variety of tuning techniques, such as mosaic data augmentation, weighted residual connections (WRC), self-adversarial training (SAT), and CIoU as a loss function. It exhibits excellent performance compared with other current object detection algorithms. The overall structure of YOLOv4 is shown in Fig. 1.

Fig. 1. Schematic diagram of YOLOv4

2.1.2 Neck improvements

In the YOLOv4 model, the images are downsampled by CSPDarknet53 to extract effective feature maps at three different scales, corresponding to three YOLO head prediction modules at different scales. The shallow feature maps contain more location information, while the deep feature maps obtain stronger semantic information through continued downsampling. In the Resblock_5 layer, damaged objects smaller than 32×32 are “compressed” to less than one pixel, which can lead to feature disappearance. After calculation, most damaged objects in this paper are smaller than 32×32, so the corresponding YOLO head module responsible for predicting large objects is of little significance; its presence also causes model parameter redundancy and consumes computing resources. Similarly, in the Resblock_3 layer, damaged objects smaller than 8×8 are “compressed” to less than one pixel, so the model's ability to locate small objects is limited. In summary, the improvements to the YOLOv4 algorithm are as follows (a brief sketch of the underlying scale arithmetic follows the list):

  • (1) The neck output of the Resblock_5 layer feature map is deleted to reduce model memory and save computing resources.
  • (2) The Resblock_2 layer feature map is introduced to the neck to enhance the model's detection effect on objects with a defect size smaller than 8×8 and improve model accuracy.
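The motivation above is pure stride arithmetic. As a minimal sketch, the 32× factor for Resblock_5 and 8× for Resblock_3 follow the text; the 4× factor for Resblock_2 is our inference from the usual CSPDarknet53 stage layout:

```python
# Assumed feature-map strides for the three CSPDarknet53 stages discussed
# in the text; an object smaller than the stride occupies less than one
# cell on that feature map, so its features effectively vanish.
feature_levels = {"Resblock_2": 4, "Resblock_3": 8, "Resblock_5": 32}

for name, stride in feature_levels.items():
    print(f"{name}: objects smaller than {stride}x{stride} px "
          f"collapse to <1 cell at stride {stride}")
```

Since most defects here are smaller than 32×32, the stride-32 head contributes little, while a stride-4 level adds resolution for defects smaller than 8×8.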

2.2 Infrared signal recognition network

According to the essential characteristics of infrared signals, this paper proposes an infrared signal feature extraction and recognition network based on a 1D-CNN. A 1D-CNN can directly extract features from the time series data of infrared signals without extensive preprocessing [34], avoiding the complicated manual feature extraction that otherwise demands expert domain knowledge. The network simplifies the feature extraction steps for individual sampled data, reduces the difficulty of data processing, and improves the performance of the recognition system. At the same time, its processing speed is faster than that of a 2D-CNN processing two-dimensional images [35].

The 1D-CNN model can extract infrared signal features through a convolutional layer, activation function layer, pooling layer, etc., to determine the type of signal. Figure 2 shows a schematic diagram of an infrared signal recognition network, where Input is an infrared signal with a scale of 256×1. Conv1D, BN and MaxPooling layers together form a signal feature extraction network. Flatten converts multidimensional characteristic signals into one-dimensional characteristic signals. Linear is a fully connected layer. Softmax is a classifier used to classify the infrared signals.
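To make the structure concrete, here is a minimal PyTorch sketch of such a signal classifier. The Conv1D+BN+MaxPooling → Flatten → Linear → Softmax layout follows Fig. 2, but the layer counts, channel widths, and kernel sizes are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SignalNet(nn.Module):
    """Hypothetical 1D-CNN infrared signal classifier (3 classes)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),      # 256 -> 256
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),  # -> 128
            nn.Conv1d(16, 32, kernel_size=5, padding=2),     # 128 -> 128
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),  # -> 64
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # (batch, 32 * 64)
            nn.Linear(32 * 64, num_classes),   # logits for 3 classes
        )

    def forward(self, x):                      # x: (batch, 1, 256)
        return torch.softmax(self.classifier(self.features(x)), dim=1)

probs = SignalNet()(torch.randn(4, 1, 256))   # (4, 3) class probabilities
```

In practice the softmax would usually be folded into a cross-entropy loss during training; it is shown explicitly here only to mirror the schematic.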

 figure: Fig. 2.

Fig. 2. Schematic infrared signal recognition network

Download Full Size | PDF

Figure 3 shows the process of the Conv1D operation. ${x_l} = \{{x_1^l,x_2^l,x_3^l \cdots x_n^l} \}$ is the input characteristic signal. $x_{g({a:a + k - 1} )}^l = \{{x_a^l,x_{a + 1}^l \cdots x_{a + k - 1}^l} \}$ is the gth feature window of the lth-layer signal. w is a one-dimensional convolution kernel of size $1 \times k$. ${x_{l + 1}} = \{{x_1^{l + 1},x_2^{l + 1} \cdots x_m^{l + 1}} \}$ is the output characteristic signal, and $x_h^{l + 1}$ is its hth element in the $(l+1)$th layer. ${\ast }$ represents the convolution operation, b represents the bias, and $f({\cdot} )$ represents the ReLU activation function. See formulas (1) and (2) for the detailed calculation:

$$x_h^{l + 1} = f(w \ast x_{g(a:a + k - 1)}^l + b)$$
$$f(z )= \left\{ \begin{array}{cc} z &z > 0 \\ 0 &z \le 0 \end{array} \right.$$
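A tiny numeric example of formulas (1) and (2), with made-up values (and, as in deep learning frameworks, the kernel applied without flipping):

```python
import numpy as np

x_l = np.array([0.2, 0.5, -0.1, 0.8, 0.3])   # input characteristic signal
w   = np.array([1.0, -1.0, 0.5])             # 1x3 convolution kernel
b   = 0.1                                    # bias

relu = lambda z: np.maximum(z, 0.0)          # f(z), formula (2)
x_next = np.array([
    relu(np.dot(w, x_l[a:a + 3]) + b)        # formula (1) per window
    for a in range(len(x_l) - 3 + 1)
])
print(x_next)                                # [0.  1.1 0. ]
```

Each output element comes from one k-length window of the input, so a length-n signal and a length-k kernel yield m = n − k + 1 outputs (5 − 3 + 1 = 3 here).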

Fig. 3. Conv1D calculation process diagram

2.3 1D-YOLOv4 model

Figure 4 shows our network model, which improves on the YOLOv4 network. The original infrared image is transformed into a 416×416 image after preprocessing, and features are extracted by the CSPDarknet53 network. Then, SPP and the PAN merge the low-level feature maps with the high-level feature maps, which significantly increases the receptive field of the classification network. The Resblock_2 input is introduced, and the Resblock_5 output is deleted. The fused feature maps are processed by the YOLO head module to obtain the damaged areas of the infrared image. The signal recognition network then identifies the infrared signals extracted from each area, and by comparing the proportions of the three signal classes, the class with the highest proportion in the area is judged to be the type of damage there. Finally, the inspection results, including the area and type of damage, are presented on the infrared image.
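The fusion step reduces to a majority vote over per-pixel signal classifications inside each detected box. A minimal sketch, where `signal_labels` and `classify_region` are hypothetical names for illustration:

```python
import numpy as np

def classify_region(box, signal_labels):
    """Majority vote of per-pixel 1D-CNN predictions inside a detected box.
    signal_labels: (H, W) array of classes (0=Normal, 1=Iron, 2=Plastic);
    box: (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    region = signal_labels[y1:y2, x1:x2].ravel()
    counts = np.bincount(region, minlength=3)   # per-class pixel counts
    return counts.argmax()                      # class with highest proportion

labels = np.random.randint(0, 3, size=(300, 400))   # stand-in predictions
print(classify_region((50, 60, 90, 100), labels))
```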

Fig. 4. 1D-YOLOv4 Schematic

3. Experiment

3.1 Data preparation

In this experiment, a total of 50 damaged composite material plates are prepared. The prepreg (preimpregnated material), an intermediate material for manufacturing composites, uses a medium-temperature epoxy resin, and the plates are formed by a moulding process. Each plate is 30 cm long, 20 cm wide, and 0.5 cm thick; it comprises 50 layers of prepreg and is moulded at 1000 MPa and 120°C. Multiple damage points are randomly placed inside the material during prepreg layup. The embedded materials are iron and plastic, which simulate multiple types of composite material damage.

Active pulse thermal imaging in infrared nondestructive testing uses an external heat source to act on the monitored object. The temperature response of the object surface is monitored with a thermal imager, and a corresponding temperature map is constructed for further analysis [36]. The experimental equipment mainly includes a computer system, two pulsed flash lamps, and a thermal imager. In this experiment, the pulsed flash lamps illuminate the composite material, and infrared images are then collected by the thermal imager. Figure 5(a) shows the experimental schematic.

Fig. 5. Experimental equipment (a) Experimental schematic, (b) Infrared image, (c) Infrared signal

As shown in Figs. 5(b) and (c), the infrared image is a 400×300×1 greyscale image, which represents the temperature field distribution at a certain time. The infrared signal is a 1×256 temporal signal, which represents the temperature change of a certain point over the whole time period. This experiment extracts the infrared signal of the material surface within 20 s after excitation, comprising a total of 256 infrared images, so each group of infrared signals includes 256 data points.
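The relationship between the two data types can be seen by stacking the frames; the array shapes below follow the dimensions above, while the example pixel is arbitrary:

```python
import numpy as np

# 256 frames of 400x300 greyscale images stack into (time, rows, cols);
# the infrared signal at a pixel is its 256-point temperature curve.
frames = np.random.rand(256, 300, 400).astype(np.float32)  # stand-in data

signal = frames[:, 150, 200]        # 1x256 infrared signal at one pixel
t = np.linspace(0, 20, 256)         # acquisition times over 20 s
print(signal.shape, t[1] - t[0])    # (256,) and ~0.078 s between frames
```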

We adopt a variety of schemes to increase the amount of data collected: the flash power is set to 60%, 80%, and 100%, and the material is heated to 10°C, 30°C, and 50°C. Finally, 2,000 infrared images and 20,000 infrared signals are selected as the data set. This paper divides the data into a training set and a test set at a ratio of 7:3.
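As a one-line sketch of the 7:3 split (assuming scikit-learn is available; the random seed is our addition for reproducibility, not stated in the paper):

```python
from sklearn.model_selection import train_test_split

samples = list(range(2000))   # stand-in indices for the 2,000 images
train, test = train_test_split(samples, test_size=0.3, random_state=0)
print(len(train), len(test))  # 1400 600
```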

3.2 Data preprocessing

The fitting, compression and reconstruction of thermal wave image sequence data is a core technology in infrared thermal wave nondestructive testing. This approach can significantly reduce the time-domain noise between frames, reduce the influence of uneven heating, and improve the contrast of damage. Moreover, the storage space can be greatly compressed, and the accuracy and speed of detection can be improved. Therefore, this paper proposes several data preprocessing methods based on polynomial fitting [36].

Polynomial fitting is a general data fitting method, and the least squares method is a commonly used approach. The basic concept of the least squares method is to minimize the sum of the squared distances between the observed points and the estimated points, balancing the error across all equations so that no single error becomes extreme.

$$min\sum\nolimits_{i = 1}^n {{r_i}^2} = min\sum\nolimits_{i = 1}^n {{{[{\varphi ({{x_i}} )- {y_i}} ]}^2}}$$
where $\varphi ({{x_i}} )$ is the approximate curve. The polynomial of the fitting function is:
$${p_n}(x) = \sum\nolimits_{k = 0}^n {{a_k}{x^k}}$$

In our experiments, we found that taking the logarithm of the original data effectively reduces the amount of data while changing the recognition results very little. When selecting the polynomial fitting function, we found that among 6th-, 7th- and 8th-order fits, the 7th-order polynomial performs best. Therefore, this experiment takes the logarithm of the original data to reduce the data volume and uses a 7th-order polynomial as the fitting function; the first and second derivatives of the fitted function are then calculated. The details are shown in formulas (5)–(8), followed by a short preprocessing sketch:

$$data = {\log _{10}}raw$$
$${p_7}(x) = {a_7}{x^7} + {a_6}{x^6} + {a_5}{x^5} + {a_4}{x^4} + {a_3}{x^3} + {a_2}{x^2} + {a_1}x + {a_0}$$
$$p^{\prime} _7(x) = 7{a_7}{x^6} + 6{a_6}{x^5} + 5{a_5}{x^4} + 4{a_4}{x^3} + 3{a_3}{x^2} + 2{a_2}x + {a_1}$$
$$p^{\prime \prime} _7(x) = 42{a_7}{x^5} + 30{a_6}{x^4} + 20{a_5}{x^3} + 12{a_4}{x^2} + 6{a_3}x + 2{a_2}$$
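A minimal numpy sketch of formulas (5)–(8); the stand-in signal and the choice of fitting over raw frame times (rather than, say, log-time) are our assumptions:

```python
import numpy as np

raw = np.random.rand(256) + 1.0      # stand-in raw temperature signal
t = np.linspace(0.01, 20, 256)       # frame times in seconds

data = np.log10(raw)                 # formula (5)
coeffs = np.polyfit(t, data, deg=7)  # formula (6): a7 ... a0

fitted = np.polyval(coeffs, t)                        # fitted data
first_deriv = np.polyval(np.polyder(coeffs, 1), t)    # formula (7)
second_deriv = np.polyval(np.polyder(coeffs, 2), t)   # formula (8)
```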

Figure 6 shows a fitting comparison chart. The first row shows the 100th frame of the infrared image, and the second row shows the infrared signal: (a) is the original data, (b) is the fitted data, (c) is the first derivative data, and (d) is the second derivative data. Compared with the original image, the fitted image is blurred, while the first and second derivative maps are clearer, and the first derivative image has the highest degree of discrimination. The discrimination between the original signal and the fitted signal is small, whereas the discrimination of the first and second derivative signals is significantly improved, with the first derivative signal the highest. Therefore, the first derivative data are selected as the research object.

Fig. 6. Comparison of 4 types of data (a) Original, (b) Fitted, (c) First derivative, (d) Second derivative

3.3 Experimental environment

The operating system is Ubuntu 18.04, the CPU is an Intel Core i7-9500, the GPU is an NVIDIA GeForce RTX 3090, PyTorch is version 1.7, and CUDA is version 11.1.

The parameters of the infrared image recognition model are shown in Table 1. Input is the size of the input image. Batch_size is the number of samples selected for one training step. Momentum is an acceleration technique in the gradient descent method. Learning_rate is an important hyperparameter in deep learning that determines whether and when the objective function converges to a local minimum. An epoch is one pass over all training images. The learning rate is reduced by factors of 10 and 100 at epochs 16 and 22, respectively. We choose the last trained model and evaluate it on the test set.

Table 1. Model parameter information table

The infrared signal classification model is trained for 50 epochs. Input is 1×256. Batch_size is 512. Momentum is 0.949. Learning_rate is 0.0001. We choose the last trained model and evaluate it on the test set.
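These schedules map directly onto standard PyTorch tooling. A sketch under stated assumptions: SGD is our guess at the optimizer, the momentum and learning rate are those given for the signal model, and the milestone decay follows the image model's epochs 16 and 22:

```python
import torch

model = torch.nn.Linear(256, 3)   # placeholder for the actual network
opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.949)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[16, 22],
                                             gamma=0.1)  # /10 at 16 and 22

for epoch in range(24):           # epoch count illustrative
    # ... one pass over the training set would go here ...
    opt.step()
    sched.step()                  # lr: 1e-4 -> 1e-5 -> 1e-6
```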

3.4 Evaluation function

In the infrared image detection of composite materials, we quantitatively evaluate related methods based on statistical results. Our classification results have two categories, positive and negative, yielding four cases: true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). The following evaluation indicators can then be obtained:

$$Precision = \frac{{TP}}{{TP + FP}}$$
$$Recall = \frac{{TP}}{{TP + FN}}$$
$$Accuracy = \frac{{TP + TN}}{{TP + FP + FN + TN}}$$

Precision is the proportion of predicted positive samples that are truly positive. Recall is the proportion of truly positive samples that are correctly predicted. Accuracy is the proportion of all predictions that are correct.
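These three indicators follow directly from the four counts; a small helper with illustrative counts only:

```python
def detection_metrics(tp, fp, fn, tn):
    """Formulas (9)-(11): precision, recall, accuracy from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

print(detection_metrics(tp=90, fp=5, fn=10, tn=95))  # ~(0.947, 0.9, 0.925)
```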

$$AP = \int_0^1 {p(r)dr}$$
$$mAP = \frac{{\sum\nolimits_{i = 1}^Q {AP(i)} }}{Q}$$
where p stands for precision, r stands for recall, and p is a function of r. Since image detection here involves only a single class, the AP value equals the mAP value, so AP is used as the evaluation index.

In the classification of infrared signals of composite materials, we use the kappa coefficient to judge the model's ability to classify infrared signals. The kappa coefficient is a statistical measure of consistency with values in [−1, 1]. Its calculation is based on a confusion matrix, and it reflects the classification performance of a model on an imbalanced data set more accurately than accuracy does.

The confusion matrix is a summary table of the prediction results of a classification model: the records in the data set are tallied by true category and predicted category, with the rows of the matrix representing the true values and the columns representing the predicted values. Kappa is calculated as follows:

$$Kappa = \frac{{{p_o} - {p_e}}}{{1 - {p_e}}}$$
$${p_o} = \frac{\text{sum of diagonal elements}}{\text{sum of all elements of the matrix}}$$
$${p_e} = \frac{\sum\nolimits_i (\text{sum of elements in row } i \times \text{sum of elements in column } i)}{{\left( \sum \text{all elements of the matrix} \right)}^2}$$
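Formulas (14)–(16) compute directly from the matrix; a compact implementation with an illustrative 3-class matrix:

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa per formulas (14)-(16); rows true, columns predicted."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    p_o = np.trace(c) / total                                # formula (15)
    p_e = (c.sum(axis=1) * c.sum(axis=0)).sum() / total**2   # formula (16)
    return (p_o - p_e) / (1 - p_e)                           # formula (14)

print(kappa([[50, 1, 0], [2, 47, 1], [0, 0, 49]]))   # ~0.96
```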

3.5 Results

According to Table 2, the model used in this paper provides good detection results for damage in the data set. Its accuracy and AP values are 98.3% and 91.9%, respectively.

Table 2. Infrared image detection model performance

Figure 7(a) shows the model training loss curve. According to the curve, the loss changes significantly in the first 5 epochs and then decreases slowly until the end of training. We stop after 50 epochs to prevent overfitting and choose the last model for testing. Figure 7(b) shows the model confusion matrix, where 1 represents “Normal”, 2 represents “Iron”, and 3 represents “Plastic”; the ordinate is the true class, and the abscissa is the predicted class. The figure shows that the model's classification accuracy for the three types of signals is 1.00. After calculation, the final kappa value is 0.997, which shows that the model can accurately distinguish the three types of signals.

Fig. 7. Performance of the infrared signal classification model (a) Model training loss curve, (b) Model confusion matrix

The damage detection result for the composite material is shown in Fig. 8. The model accurately identifies the infrared damage areas and types in the composite materials, and its detection results are essentially consistent with the manually determined damage areas and types. The test results show that the method in this paper identifies composite material damage well and has high detection quality, providing a feasible approach to the identification and localization of composite material damage.

Fig. 8. Model test results

4. Discussion

4.1 Comparative experiment

We designed four sets of experiments for comparative analysis to demonstrate the advantages of our method. Experiment 1 is the YOLOv3 model. Experiment 2 is the original YOLOv4 model. Experiment 3 is the YOLOv4 model with our improved neck. Experiment 4 is our 1D-YOLOv4 model. The quantitative evaluation is shown in Table 3, and the damage detection images are shown in Fig. 9.

Fig. 9. Comparison of test results (a) Original, (b) Ground Truth, (c) YOLOv3, (d) YOLOv4, (e) YOLOv4+Neck, (f) 1D-YOLOv4

Table 3. Model performance comparison

In Table 3, accuracy represents the percentage of correct predictions in the model's results, AP represents the accuracy of infrared image detection, kappa represents the classification accuracy of infrared signals, and FPS (frames per second) is the detection speed of the model. The accuracy of YOLOv3 is 94.3% and its AP is 84.1%, the worst among the four models. YOLOv4 has an AP of 90.8% and an FPS of 48.8, the fastest detection speed. YOLOv4+Neck achieves an AP of 91.6% with an FPS of 46.3. Compared with YOLOv4, the AP of 1D-YOLOv4 is increased by 3.1%, while the FPS decreases by only 2.1; detection accuracy improves while a high detection speed is maintained, which is conducive to identifying composite material damage. Only 1D-YOLOv4 has a kappa value (0.997), because only 1D-YOLOv4 has signal recognition capability.

Figure 9(a) shows the original image, (b) the ground-truth damage annotations, and (c)–(f) the detection results of each model. In Fig. 9(b), the blue boxes represent iron, and the red boxes represent plastic. YOLOv3 has poor detection capability for small objects and produces repeated detections; it is the model with the worst detection effect. YOLOv4 detects all objects, but its detection confidence is not high. The detection accuracy of YOLOv4+Neck is high. The 1D-YOLOv4 detection results have the highest accuracy, meet the required performance, and show good robustness. Moreover, 1D-YOLOv4 identifies damage types, such as the “Normal”, “Iron”, and “Plastic” labels marked in the figure, which the other models cannot provide.

4.2 Data comparison experiment

To show that the data processing method of taking the logarithm of the original data and applying a 7th-order polynomial fit is effective, we conducted a data comparison experiment covering three alternative settings: no logarithm, a 6th-order fit, and an 8th-order fit. The quantitative evaluation is shown in Table 4.

Table 4. Comparison of data

According to the indicators in the table, the kappa value of ‘Logarithm + 7-order’ is the highest among the four experiments. With the logarithm, the training time is 8 minutes; without it, the training time is 56 minutes. Therefore, the data processing method of taking the logarithm of the original data and applying a 7th-order polynomial fit is effective.

We also conduct experiments on each data type one by one to illustrate the effectiveness of our data processing, covering the original data, fitted data, first derivative data, and second derivative data. The quantitative evaluation is detailed in Table 5.

Table 5. Data performance comparison

In Table 5, the detection speeds of the four models are almost the same, but the AP and kappa values differ considerably. The AP and kappa values of the original data are 83.0% and 0.892, respectively. Those of the fitted data are 75.4% and 0.811, the worst among the four data types. Those of the first derivative data are 91.9% and 0.997, and those of the second derivative data are 88.6% and 0.990. The results show that the first derivative data give the highest infrared image detection accuracy and the strongest signal classification ability; signal classification in particular is greatly affected by the data type.

Figure 10 shows the detection results for (a) the original data, (b) the fitted data, (c) the first derivative data, and (d) the second derivative data. In panel (a), there are many missed detections in image detection, and the signal detection effect is relatively poor. In panel (b), there are many missed detections, and some signals are classified incorrectly. In panel (c), all objects are detected, and the signal classification is accurate. In panel (d), there are a few missed detections, and the signal classification is accurate. Among the four data types, the first derivative results have the highest accuracy, meet the required performance, and show good robustness.

Fig. 10. Comparison of test results (a) Original, (b) Fitted, (c) First derivative, (d) Second derivative

5. Conclusion

In this article, we first prepared composite material damage specimens and used infrared nondestructive testing methods to detect composite materials. Then, the 1D-YOLOv4 network was designed to realize the intelligent fusion detection of infrared images and infrared signals. Finally, we compared our model with a variety of models, and the results showed that our model has the best detection capabilities.

After improvement, the accuracy, AP and kappa of the 1D-YOLOv4 network reached 98.3%, 91.9% and 0.997, respectively. The algorithm can detect infrared images and infrared signals at the same time. It can identify the area of composite material damage and can judge the type of damage, which provides a new concept for composite material detection.

Funding

National Natural Science Foundation of China (51975583).

Acknowledgement

We acknowledge support from the National Natural Science Foundation of China (51975583).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Hull and T. W. Clyne, An Introduction to Composite Materials (Cambridge University Press, 1994).

2. M. Maria, “Advanced composite materials of the future in aerospace industry,” INCAS Bulletin 5(3), 139–150 (2013). [CrossRef]

3. D. C. Aronstein, M. J. Hirschberg, and A. C. Piccirillo, Advanced Tactical Fighter to F-22 Raptor: Origins of the 21st Century Air Dominance Fighter (AIAA, 1998).

4. P. A. Toensmeier, “Advanced composites soar to new heights in Boeing 787,” Plastics Engineering -Connecticut 61(8), 8 (2005).

5. L. Ma, J. Zhang, G. Yue, J. Liu, and J. Xue, “Application of composites in new generation of large civil aircraft,” Fuhe Cailiao Xuebao/Acta Mater. Compos. Sin. 32(2), 317–322 (2015). [CrossRef]

6. T. Jollivet, C. Peyrac, and F. Lefebvre, “Damage of composite materials,” Procedia Eng. 66, 746–758 (2013). [CrossRef]

7. R. C. Pavan, G. J. Creus, and S. Maghous, “A simplified approach to continuous damage of composite materials and micromechanical analysis,” Compos. Struct. 91(1), 84–94 (2009). [CrossRef]  

8. S. Gholizadeh, “A review of non-destructive testing methods of composite materials,” Procedia Struct. Integr. 1, 50–57 (2016). [CrossRef]  

9. J. M. Arenas, C. Alía, J. Narbón, R. Ocaña, and C. González, “Considerations for the industrial application of structural adhesive joints in the aluminium–composite material bonding,” Composites, Part B 44(1), 417–423 (2013). [CrossRef]  

10. B. M. Galkin, “Method and apparatus for testing radiographic film processors,” 1991.

11. J. Krautkrämer and H. Krautkrämer, “Ultrasonic testing of materials,” J. Appl. Mech. 51(1), 225 (1984). [CrossRef]  

12. M. R. Gorman, “Plate wave acoustic emission,” J. Acoust. Soc. Am. 90(1), 358–364 (1991). [CrossRef]  

13. J. S. Lee, “Digital Image Enhancement and Noise Filtering by Use of Local Statistics,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2(2), 165–168 (1980). [CrossRef]  

14. X. Bai, F. Zhou, and B. Xue, “Infrared image enhancement through contrast enhancement by using multiscale new top-hat transform,” Infrared Phys. Technol. 54(2), 61–69 (2011). [CrossRef]  

15. P. G. Cielo, X. Maldague, and J. C. Krapez, “Device for subsurface flaw detection in reflective materials by thermal transfer imaging,” 1991.

16. R. Mulaveesala and S. V. Ghali, “Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics,” Rev. Sci. Instrum. 82(5), 054902 (2011). [CrossRef]  

17. W. Choi and Y.-J. Cha, “SDDNet: Real-time crack segmentation,” IEEE Trans. Ind. Electron. 67(9), 8016–8025 (2020). [CrossRef]

18. V. Munoz, P. Valez, M. Perrin, M.L. Pastor, H. Welemane, A. Cantarel, and M. Karama, “Damage detection in CFRP by coupling acoustic emission and infrared thermography,” Composites, Part B 85, 68–75 (2016). [CrossRef]  

19. G. Li, Y. Yang, and X. Qu, “Deep Learning Approaches on Pedestrian Detection in Hazy Weather,” IEEE Trans. Ind. Electron. 67(10), 8889–8899 (2020). [CrossRef]

20. N. Saeed, N. King, Z. Said, and M. A. Omar, “Automatic defects detection in CFRP thermograms, using convolutional neural networks and transfer learning,” Infrared Phys. Technol. 102, 103048 (2019). [CrossRef]  

21. A. Khan, D.-K. Ko, S. C. Lim, and H. S. Kim, “Structural vibration-based classification and prediction of delamination in smart composite laminates using deep learning neural network,” Composites, Part B 161, 586–594 (2019). [CrossRef]  

22. J. Zhang, Y. Sun, L. Guo, H. Gao, and H. Song, “A new bearing fault diagnosis method based on modified convolutional neural networks,” Chin. J. Aeronaut. 33(2), 439–447 (2020). [CrossRef]  

23. G. Tripathi, H. Anowarul, K. Agarwal, and D. K. Prasad, “Classification of Micro-Damage in Piezoelectric Ceramics Using Machine Learning of Ultrasound Signals,” Sensors 19(19), 4216 (2019). [CrossRef]  

24. C. Schmidt, T. Hocke, and B. Denkena, “Artificial intelligence for non-destructive testing of CFRP prepreg materials,” Prod. Eng. Res. Devel. 13(5), 617–626 (2019). [CrossRef]  

25. M. Meng, Y. J. Chua, E. Wouterson, and C. P. K. Ong, “Ultrasonic signal classification and imaging system for composite materials via deep convolutional neural networks,” Neurocomputing 257, 128–135 (2017). [CrossRef]  

26. Q. Luo, B. Gao, W. L. Woo, and Y. Yang, “Temporal and spatial deep learning network for infrared thermal defect detection,” NDT&E Int. 108, 102164 (2019). [CrossRef]  

27. H. Wei, S. S. Zhao, Q. Y. Rong, and H. Bao, “Predicting the effective thermal conductivities of composite materials and porous media by machine learning methods,” Int. J. Heat Mass Transfer 127, 908–916 (2018). [CrossRef]  

28. H. T. Bang, S. Park, and H. Jeon, “Defect identification of composites via thermography and deep learning techniques,” Compos. Struct. 246, 112405 (2020). [CrossRef]  

29. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv:2004.10934 (2020).

30. J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv:1804.02767 (2018).

31. C.-Y. Wang, H.-Y. M. Liao, I.-H. Yeh, Y.-H. Wu, P.-Y. Chen, and J.-W. Hsieh, “CSPNet: A New Backbone that can Enhance Learning Capability of CNN,” arXiv:1911.11929 (2019).

32. K. He, X. Zhang, S. Ren, and J. Sun, “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015). [CrossRef]  

33. S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path Aggregation Network for Instance Segmentation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8759–8768.

34. H. Cho and Y. Sang, “Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening,” Sensors 18(4), 1055 (2018). [CrossRef]  

35. S. Huang, J. Tang, J. Dai, and Y. Wang, “Signal Status Recognition Based on 1DCNN and Its Feature Extraction Mechanism Analysis,” Sensors 19(9), 2018 (2019). [CrossRef]  

36. K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quart. Appl. Math. 2(2), 164–168 (1944).
