Convolutional neural network based displacement gradients estimation for a full-parameter initial value guess of digital image correlation

Open Access

Abstract

The selection of the initial value in digital image correlation (DIC) has a significant influence on the search efficiency of the image subpixel displacement and on the convergence speed of the algorithm. An accurate and reasonable initial value can reduce the number of iterations of the subsequent IC-GN optimization, accelerate the convergence of the results, and avoid divergence of the algorithm during the iterative process. This paper proposes a full-parameter initial value estimation method based on a regression convolutional neural network with multithreaded calculation. The proposed method sequentially uses integer-pixel estimation based on neighborhood search, subpixel estimation based on surface fitting, and first-order displacement gradient estimation based on a regression convolutional neural network to obtain the initial value for the inverse compositional Gauss-Newton (IC-GN) iteration. Experimental results show that the number of iterations of the proposed method is reduced by about 30% compared with the integer-pixel initial value estimation method. In the IC-GN iteration, the computational efficiency of CPU multithreaded calculation is nearly twice that of the single-thread method. The method not only improves the accuracy of the initial value estimation but also has high adaptability, accommodating different subset sizes and different speckle patterns. This study provides a reference for the effect of iterative initial value optimization on efficiency and accuracy in DIC.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a non-contact optical deformation measurement method, digital image correlation (DIC) [1–3] has been widely used in experimental mechanics and other related scientific and engineering fields to measure shape, motion, and deformation, owing to its simple optical path, strong anti-interference ability, and suitability for full-field measurement [4–10]. In order to track the position of a physical point from a reference image to the deformed image, a square subset centered on the point is usually selected as the reference subset. Since the shape of the reference subset changes after deformation, a shape function is used to estimate the shape and position of the image subset after deformation. The zero-mean normalized sum-of-squared-differences (ZNSSD) criterion, which quantifies the similarity of the image subsets before and after deformation, is a nonlinear equation with respect to the desired deformation parameter vector [11]; it is solved by the classic iterative Newton-Raphson (NR) algorithm [11,12], the forward additive Newton-Raphson (FA-NR) algorithm [11,13,14], the inverse compositional Gauss-Newton (IC-GN) algorithm [15], or other nonlinear iterative local optimization algorithms. One common feature of these algorithms is that they are all local optimization algorithms, which require an initial estimate of the deformation sufficiently close to the true value to start the iterative calculation, and appropriate convergence criteria to end the iteration [16]. Therefore, an accurate initial value is the premise of obtaining a reliable displacement field. When the deformation of the target subset is small relative to the reference subset, the initial value obtained by an integer-pixel correlation search algorithm can be used as the initial iteration value of the FA-NR or IC-GN algorithm. When the surface of the measured object undergoes a large deformation, however, the iteration easily falls into local convergence [16].

At present, some initial value estimation methods rely on traditional feature extraction algorithms [1,13,16–19]. Traditional SIFT (scale-invariant feature transform) [20] feature matching and SURF (speeded-up robust features) [21] are based on linear Gaussian pyramid multiscale decomposition, which sacrifices local accuracy. These traditional methods also produce certain matching errors in feature point matching, which reduces the efficiency of the subsequent optimization algorithm. On the other hand, these methods use the FA-NR algorithm to optimize the solution after estimating the initial value, whereas the IC-GN algorithm is superior to the traditional FA-NR algorithm in convergence time and calculation speed. Therefore, this paper uses the IC-GN algorithm for optimization after estimating the initial value, which greatly improves the effectiveness of the algorithm.

In the past decade, convolutional neural networks have developed rapidly in the field of optics [22–25]. Especially when combined with optical flow [26], the performance of such networks has been greatly improved [24,27–29], and the displacement accuracy of trained artificial neural networks (ANNs) can already reach the sub-pixel level [30,31]. Although the displacement accuracy achieved by Ma et al. [32] was as high as 0.01 pixel, efficiency was not discussed. Nevertheless, their research on displacement estimation based on neural networks breaks the limits of traditional thinking in the field of initial value estimation. In this paper, using a neural network for initial value estimation not only speeds up the initial value calculation but also reduces the computational cost of the subsequent IC-GN optimization.

Besides, according to the results of existing studies on subset-based DIC, the selection of subset size has a crucial influence on measurement accuracy, whichever subpixel registration algorithm is adopted. Generally, the appropriate size for a given subset is determined by the image quality and the local deformation gradient. Whether a uniform subset size is selected for all POIs in the whole image according to the user's experience and intuition [33], or the optimal subset size is determined for each POI in a self-adaptive manner [34], support for a variable subset size is a basic requirement of subset-based DIC.

In this paper, a full-parameter initial value estimation method based on a regression neural network with multithreaded calculation is proposed to provide more accurate initial values and reduce the number of IC-GN iterations. In Section 2, the definition of the full parameters and their corresponding calculation strategies are described in detail. In Section 3, the adaptability to different subset sizes and different types of speckle is shown, and the reasons for recommending 41 × 41 pixel subsets are analyzed in detail. In Section 4, the efficiency and accuracy of the proposed method are analyzed. Finally, the algorithm is verified by actual experimental results.

2. Full-parameter initial value estimation


As shown in Fig. 1, the initial value ${p_0}$ is particularly important in the IC-GN algorithm. In the original IC-GN algorithm, the initial values were only the integer-pixel displacement parameters.

$${p_0} = \{{u,v,{u_x},{u_y},{v_x},{v_y}} \}$$
where, u and v represent the displacement components in the horizontal and vertical directions, respectively, and ${u_x},{u_y},{v_x},{v_y}$ represent the first-order displacement gradients.

Fig. 1. Full-parameter initial value estimation with IC-GN algorithm.

The accuracy of the initial value affects not only the accuracy of the displacement estimation but also the number of iterations of the IC-GN algorithm. In the proposed method, u and v are each divided into two parts, as shown in Eq. (2).

$${p_0} = \{{{u_{int}} + {u_{sub}},{v_{int}} + {v_{sub}},{u_x},{u_y},{v_x},{v_y}} \}$$
where, ${u_{int}}$ and ${v_{int}}$ represent the integer pixel values in horizontal and vertical directions, respectively. ${u_{sub}}$ and ${v_{sub}}$ represent the sub-pixel values in horizontal and vertical directions, respectively. The flow chart of the proposed full-parameter initial value estimation algorithm is shown in Fig. 2.
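For illustration, a minimal Python sketch of how the three estimation stages assemble the initial guess of Eq. (2), and of the first-order shape function that these six parameters describe, is given below. The helper names are ours, and the numerical example simply reuses the deformation parameters of Section 4.1.

```python
import numpy as np

def assemble_p0(u_int, v_int, u_sub, v_sub, grads):
    """Assemble the full-parameter initial guess of Eq. (2):
    p0 = [u_int + u_sub, v_int + v_sub, ux, uy, vx, vy]."""
    ux, uy, vx, vy = grads
    return np.array([u_int + u_sub, v_int + v_sub, ux, uy, vx, vy])

def warp_first_order(p, dx, dy):
    """Displacement of a subset point offset (dx, dy) from the subset center
    under the first-order shape function parameterized by p."""
    u, v, ux, uy, vx, vy = p
    return u + ux * dx + uy * dy, v + vx * dx + vy * dy

# Example using the deformation parameters of Section 4.1.
p0 = assemble_p0(3, -3, 0.75, -0.32, (0.01, -0.01, -0.005, 0.01))
```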

Fig. 2. The flow chart of the proposed full-parameter initial value estimation algorithm.

2.1 Initial value estimation of integer-pixel displacement components

The neighborhood search method was applied to the estimation of the integer-pixel displacement components in the proposed method; it should be noted that the step size of this search is one pixel. The algorithm performs a zero-mean normalized cross-correlation (ZNCC) evaluation on each subset within a defined search region, so that the exact integer-pixel displacement can be found. Firstly, a seed point $({{x_0},{y_0}} )$ was selected on the reference image and regarded as the center of the reference subset. Secondly, a point $({x,y} )$ was selected on the deformed image, and a deformed subset of the same size, centered on this point, was taken from the deformed image. The reference subset and the deformed subset were then compared using Eq. (3) to evaluate their degree of correlation.

$${C_{ZNCC}} = \frac{{\mathop \sum \nolimits_{({i,j} )\in s} [{f({{x_{refi}},{y_{refj}}} )- {f_m}} ][{g({{x_{curi}},{y_{curj}}} )- {g_m}} ]}}{{\sqrt {\left[ {\mathop \sum \nolimits_{({i,j} )\in s} {{[{f({{x_{refi}},{y_{refj}}} )- {f_m}} ]}^2}} \right]\left[ {\mathop \sum \nolimits_{({i,j} )\in s} {{[{g({{x_{curi}},{y_{curj}}} )- {g_m}} ]}^2}} \right]} }}$$
where, $f({{x_{refi}},{y_{refj}}} )$ and $g({{x_{curi}},{y_{curj}}} )$ represent the grayscale values at the point $({x,y} )$ in the reference and the deformed subset, respectively. ${f_m}$ and ${g_m}$ represent the mean grayscale values of the reference and the deformed subset, respectively.
$${f_m} = \frac{{\mathop \sum \nolimits_{({i,j} )\in s} f({{x_{refi}},{y_{refj}}} )}}{{N(s )}}$$
$${g_m} = \frac{{\mathop \sum \nolimits_{({i,j} )\in s} g({{x_{curi}},{y_{curj}}} )}}{{N(s )}}$$
where, $N(s )$ represents the number of elements in s. Then a point adjacent to point $({x,y} )$ was selected and the above process was repeated. In order to reduce the amount of calculation, the search range was limited to a suitable region whose size was determined by the expected deformation displacement. Finally, the coordinates of the point with the highest correlation were taken as the best match of the seed point.
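A minimal sketch of this exhaustive integer-pixel search over a bounded neighborhood is given below. The subset and search half-widths are illustrative, and implementation details such as border handling or early termination are assumptions not specified in the paper.

```python
import numpy as np

def zncc(f_sub, g_sub):
    """Zero-mean normalized cross-correlation of two equally sized subsets, Eq. (3)."""
    f = f_sub - f_sub.mean()
    g = g_sub - g_sub.mean()
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return (f * g).sum() / denom if denom > 0 else 0.0

def integer_pixel_search(ref, cur, x0, y0, half=20, search=30):
    """Exhaustive integer-pixel neighborhood search around seed point (x0, y0).

    half   : half-width of the (2*half+1)^2 subset (20 for a 41 x 41 subset)
    search : half-width of the search region in the deformed image
    Returns (u_int, v_int) maximizing the ZNCC; border checks are omitted.
    """
    f_sub = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    best, u_int, v_int = -1.0, 0, 0
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            y, x = y0 + dv, x0 + du
            g_sub = cur[y - half:y + half + 1, x - half:x + half + 1]
            c = zncc(f_sub, g_sub)
            if c > best:
                best, u_int, v_int = c, du, dv
    return u_int, v_int
```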

2.2 Initial value estimation of sub-pixel displacement components

In order to improve the iterative efficiency of IC-GN, quadratic surface fitting was applied to the estimation of the sub-pixel displacement components. The correlation coefficients of the 3 × 3 points centered on the integer-pixel displacement estimated in the previous step were fitted with a quadratic surface. The fitting expression of the bivariate quadric surface is as follows:

$$C({{x_i},{y_j}} )= {a_0} + {a_1}{x_i} + {a_2}{y_j} + {a_3}x_i^2 + {a_4}{x_i}{y_j} + {a_5}y_j^2$$
where, $C({{x_i},{y_j}} )$ represents the correlation coefficient at $({{x_i},{y_j}} )$; ${a_0}$, ${a_1}$, ${a_2}$, ${a_3}$, ${a_4}$ and ${a_5}$ are the fitting coefficients, which are extracted by the least squares method. The sub-pixel displacement components are then as follows:
$${u_{sub}} = \frac{{2{a_1}{a_5} - {a_2}{a_4}}}{{a_4^2 - 4{a_3}{a_5}}}$$
$${v_{sub}} = \frac{{2{a_2}{a_3} - {a_1}{a_4}}}{{a_4^2 - 4{a_3}{a_5}}}$$
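The following sketch fits Eq. (6) to the 3 × 3 correlation patch by least squares and evaluates Eqs. (7) and (8); the local coordinates of the patch are taken as -1, 0, 1 about the integer-pixel peak, which is an assumption consistent with, but not stated in, the text.

```python
import numpy as np

def subpixel_from_correlation_surface(C):
    """Fit Eq. (6) to a 3 x 3 ZNCC patch C centered on the integer-pixel peak
    and return the sub-pixel offsets (u_sub, v_sub) from Eqs. (7) and (8)."""
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    # design matrix columns match a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
    A = np.column_stack([np.ones(9), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, (xs * ys).ravel(), ys.ravel() ** 2])
    a0, a1, a2, a3, a4, a5 = np.linalg.lstsq(A, C.ravel(), rcond=None)[0]
    denom = a4 ** 2 - 4.0 * a3 * a5
    u_sub = (2.0 * a1 * a5 - a2 * a4) / denom
    v_sub = (2.0 * a2 * a3 - a1 * a4) / denom
    return u_sub, v_sub
```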

2.3 Initial value estimation of first-order displacement gradients

In this paper, a regression convolutional neural network was used to calculate the four parameters of the first-order displacement gradient. As shown in Fig. 2, the network is composed of three convolutional layers, two pooling layers, two fully connected layers and one output layer, with the ReLU activation function applied throughout. In order to avoid an overly small feature collection surface, and to perform cross-channel interaction and information integration, the image subset was set to 41 × 41 pixels and 1 × 1 convolution was used. In Fig. 2, the reference input of the network is the reference subset obtained by frequency-domain phase shifting [35], whose phase-shift parameters are given by the integer-pixel displacement components ${u_{int}}$, ${v_{int}}$ and the sub-pixel displacement components ${u_{sub}}$, ${v_{sub}}$ estimated in the previous steps; the deformed input of the network is the original deformed subset. The output is the initial value estimate of the first-order displacement gradients computed by the regression convolutional neural network.
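A PyTorch sketch of such a network is given below. Only the overall structure (three convolutional layers, two pooling layers, two fully connected layers, one four-unit output, ReLU activations, a 1 × 1 convolution, and a 41 × 41 two-channel input) comes from the paper; the channel counts, the remaining kernel sizes and the pooling type are assumptions.

```python
import torch
import torch.nn as nn

class GradientRegressionNet(nn.Module):
    """Regression CNN sketch predicting (ux, uy, vx, vy) from a stacked
    reference/deformed subset pair of size 41 x 41."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 41 -> 20
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 20 -> 10
            nn.Conv2d(32, 32, kernel_size=1), nn.ReLU(),    # 1 x 1 cross-channel mixing
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 10 * 10, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 4),                               # ux, uy, vx, vy
        )

    def forward(self, x):                                   # x: (N, 2, 41, 41)
        return self.regressor(self.features(x))
```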

Besides, instead of the traditional mean square loss function, this paper proposed a custom loss function based on first-order shape function theory.

$$loss = \frac{{\sqrt {\mathop \sum \nolimits_{i = 1}^n {{[{({{{\hat{u}}_{{x_i}}} - {u_{{x_i}}}} )dx + ({{{\hat{u}}_{{y_i}}} - {u_{{y_i}}}} )dy} ]}^2} + \mathop \sum \nolimits_{i = 1}^n {{[{({{{\hat{v}}_{{x_i}}} - {v_{{x_i}}}} )dx + ({{{\hat{v}}_{{y_i}}} - {v_{{y_i}}}} )dy} ]}^2}} }}{n}$$
where, ${\hat{u}_{{x_i}}}$, ${\hat{u}_{{y_i}}}$, ${\hat{v}_{{x_i}}}$ and ${\hat{v}_{{y_i}}}$ represent the predicted value of first-order displacement gradient at the point $({{x_i},{y_i}} )$ in horizontal and vertical directions, respectively. ${u_{{x_i}}}$, ${u_{{y_i}}}$, ${v_{{x_i}}}$ and ${v_{{y_i}}}$ represent the true value of first-order displacement gradient at the point $({{x_i},{y_i}} )$ in horizontal and vertical directions, respectively. $dx$ and $dy$ represent the distance matrices in horizontal direction and vertical direction from the point on the subset to the center of the subset, respectively.
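A sketch of one possible implementation of Eq. (9) is shown below. It reads $dx$ and $dy$ as the pixel offsets of every subset point from the subset center, so that an error in a gradient component is penalized by the subset displacement it would induce under a first-order shape function; the exact batching and summation convention is not fully specified in the paper, so this reading is an assumption.

```python
import torch

def shape_function_loss(pred, target, subset_size=41):
    """One reading of the custom loss of Eq. (9).

    pred, target: (n, 4) tensors ordered as (ux, uy, vx, vy)."""
    half = subset_size // 2
    dy, dx = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=pred.dtype, device=pred.device),
        torch.arange(-half, half + 1, dtype=pred.dtype, device=pred.device),
        indexing="ij")
    e = pred - target                                          # (n, 4) gradient errors
    # displacement error induced at every subset pixel by the gradient errors
    du = e[:, 0, None, None] * dx + e[:, 1, None, None] * dy   # (n, H, W)
    dv = e[:, 2, None, None] * dx + e[:, 3, None, None] * dy
    n = pred.shape[0]
    return torch.sqrt((du ** 2).sum() + (dv ** 2).sum()) / n
```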

2.4 Multi-thread DIC based on CPU

In a typical DIC calculation, the rectangular ROI contains thousands of POIs whose displacement components and first-order displacement gradients are to be calculated. Based on the continuous-deformation assumption and observations from many previous studies [36], only the initial value of the first POI (the seed point) needs to be determined by the initial value estimation method; the converged parameters of the previous point can then be propagated as the initial value for the current point, and this initial value propagation scheme is adopted in this study. After the initial value of the seed point is determined, a bidirectional S-shaped calculation path is used for the rectangular region of interest containing A × B points, as shown in Fig. 3(a). With the seed point as the center, on the row containing the seed point, the correlation calculation proceeds from left to right for the POIs after the seed point and from right to left for the POIs before it. For the rows before the seed point, the correlation calculation proceeds from left to right on odd rows and from right to left on even rows; the opposite holds for the rows after the seed point. The advantages of this path-extension strategy are: (1) in the initial value estimation stage, only the seed point requires calculation of the first-order shape function parameters, which saves a large amount of calculation time; (2) the initial value of each point is passed from the previously processed point, so fewer iterations are needed in the sub-pixel iterative calculation.
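The ordering itself can be sketched as follows; the odd/even sweep convention is taken from the description above, and "propagation" simply means feeding the converged result of the previous POI in this order to the next one.

```python
from itertools import chain

def serpentine_order(rows, cols, seed_r, seed_c):
    """Simplified sketch of the bidirectional S-shaped calculation path.

    The seed row is swept rightward from the seed and then leftward; rows
    before the seed are swept left-to-right on odd rows and right-to-left on
    even rows, and rows after the seed the opposite way, so that each POI can
    inherit the converged parameters of a previously processed neighbor."""
    order = [(seed_r, c) for c in range(seed_c, cols)]
    order += [(seed_r, c) for c in range(seed_c - 1, -1, -1)]
    for r in chain(range(seed_r - 1, -1, -1), range(seed_r + 1, rows)):
        left_to_right = (r % 2 == 1) if r < seed_r else (r % 2 == 0)
        cs = range(cols) if left_to_right else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order
```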

Fig. 3. Schematic diagram of four thread bidirectional S-shaped extension algorithm.

In order to improve the calculation speed, parallel computing was used for the initial guesses of the POIs [37]. Multithreaded computing combined with the bidirectional S-shaped calculation path is applied to further increase the calculation speed. Four seed points are selected in the rectangular ROI and four parallel threads are created. Each thread processes one POI at a time and expands its sub-ROI according to the bidirectional S-shaped path; the threads terminate once all points in their sub-ROIs have been processed. This ensures that the sub-ROIs growing from the seed points are contiguous and that the computation of the entire image is distributed relatively evenly across the threads. Figure 3(b) gives a brief illustration of the four-thread computation for a rectangular ROI, which is applied in this study.
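A conceptual sketch of the four-thread scheme, built on the ordering function above, is given below. It only illustrates the work split; the paper's CPU implementation presumably uses native threads, whereas Python threads mainly help when the per-POI solver releases the GIL (e.g. NumPy or C extensions). The `compute_poi` callable stands in for the IC-GN solver and is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def process_sub_roi(rows, cols, seed_r, seed_c, compute_poi):
    """Grow one sub-ROI from its seed along the serpentine path, passing each
    converged parameter vector on as the next POI's initial guess."""
    p, results = None, {}
    for poi in serpentine_order(rows, cols, seed_r, seed_c):
        p = compute_poi(poi, initial_guess=p)   # user-supplied IC-GN solver call
        results[poi] = p
    return results

def run_four_threads(sub_rois, compute_poi):
    """Process four contiguous sub-ROIs in parallel, one seed point each.
    sub_rois: iterable of (rows, cols, seed_r, seed_c) tuples."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(process_sub_roi, *sr, compute_poi) for sr in sub_rois]
    merged = {}
    for f in futures:
        merged.update(f.result())
    return merged
```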

3. Adaptive scheme for processing parameter changes

3.1 Adaptive approach to subset size selection

The proposed first-order displacement gradient estimation algorithm based on the convolutional neural network needs to adapt to the selected subset size, so as to minimize the effect on measurement accuracy. The problem is that when the size of the input image changes, the convolutional neural network fails to work because the matrix dimensions of the fully connected layers no longer match. A forced sampling method for the seed-point subset is proposed to deal with this problem: the input image size is unified by forced sampling so that the fixed network parameter model can be reused. For the seed point requiring initial value estimation, whatever subset size the user selects, the size of the input image to the convolutional neural network is forced to 41 × 41 pixels.

That is, when the selected subset size is less than or equal to 41 × 41 pixels, subset image pairs of 41 × 41 pixels are extracted with the seed point as center and used as the input of the convolutional neural network. When the selected subset size is larger than 41 × 41 pixels, the reference and deformed image subsets are down-sampled to 41 × 41 pixels to fit the network input. It should be pointed out that for the POIs other than the seed point, the DIC calculation uses the user-selected subset size combined with the initial value propagation method described above.
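A minimal sketch of this forced sampling is shown below; the choice of interpolation for the down-sampling (cv2.INTER_AREA) is an assumption, since the paper does not specify it.

```python
import cv2
import numpy as np

def network_input_subset(image, cx, cy, subset_size, net_size=41):
    """Force the seed-point subset to the fixed 41 x 41 network input.

    If the user-selected subset is <= 41 pixels, a 41 x 41 patch centered on
    the seed point is extracted directly; if it is larger, the selected patch
    is down-sampled to 41 x 41."""
    half = max(subset_size, net_size) // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(np.float32)
    if subset_size <= net_size:
        c, h = patch.shape[0] // 2, net_size // 2
        return patch[c - h:c + h + 1, c - h:c + h + 1]   # central 41 x 41 crop
    return cv2.resize(patch, (net_size, net_size), interpolation=cv2.INTER_AREA)
```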

3.2 Adaptive approach of speckle pattern changes

As the carrier of deformation information, speckle patterns have different gray-level distribution characteristics, and it cannot be guaranteed that the type of speckle will remain the same in every measurement. Given the large number of samples and the time cost of deep learning, it is clearly unreasonable to retrain on different types of speckle one by one. Unlike traditional machine learning, transfer learning focuses on storing the solution model of an existing problem and applying it to different but related problems; by sharing weights, repeated network training can be avoided and the training of the network can be accelerated [29,38]. Hence, transfer learning was applied to the network for the initial value estimation of the first-order displacement gradients. Based on the proposed regression convolutional neural network, Gaussian speckle can be learnt without a large number of samples [32]. In order to improve the efficiency of transfer learning while ensuring the training quality, the network parameters were fine-tuned to transfer between different types of speckle patterns. This transfer learning approach enables the initial values of the first-order displacement gradients to be evaluated accurately and efficiently by the network even when the number of training samples is halved.
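A sketch of this fine-tuning strategy (the best of the strategies compared in Fig. 9) is shown below, reusing the network and loss sketched in Section 2.3. The learning rate of 0.0001 and the 110-epoch budget follow Section 3.2; the optimizer choice (Adam) and the checkpoint file name are assumptions.

```python
import torch

def fine_tune(model, train_loader, epochs=110, lr=1e-4):
    """Fine-tune the pretrained gradient-regression network on a new speckle
    type: start from the weights learned on the original speckle and continue
    training every layer with a small learning rate, instead of re-initializing
    any layer."""
    model.load_state_dict(torch.load("pretrained_original_speckle.pt"))  # hypothetical checkpoint
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for subsets, gradients in train_loader:
            optimizer.zero_grad()
            loss = shape_function_loss(model(subsets), gradients)
            loss.backward()
            optimizer.step()
    return model
```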

4. Verifications of the proposed method

4.1 Efficiency analysis of the proposed initial value estimation method

The full-parameter initial value estimation algorithm proposed in this paper improves computational efficiency. Combining the initial value estimation of the first-order displacement gradients with the multithreaded, seed-point-diffusion calculation further improves the operating efficiency.

In order to exclude the influence of experimental conditions on the number of iterations, image pairs of 125 × 125 pixels were generated for two types of speckle pattern, using the Boolean speckle model [39] and a Gaussian speckle model. The deformed images, with initial values ${u_{int}} = 3$, ${u_{sub}} = 0.75$, ${v_{int}} ={-} 3$, ${v_{sub}} ={-} 0.32$, ${u_x} = 0.01$, ${u_y} ={-} 0.01$, ${v_x} ={-} 0.005$ and ${v_y} = 0.01$, were generated following Ref. [40], and the convergence criterion was set to $10^{-4}$ pixels [41]. A region of 81 × 81 pixels in the middle of the reference image was selected as the ROI, and the numbers of IC-GN iterations for full-parameter initial value estimation, integer-pixel initial value estimation and zero initial value estimation were compared. The calculation results are as expected: the average number of iterations for the full-parameter initial value estimation was only 27, compared with 38 for the integer-pixel initial value estimation and 54 for the zero initial value estimation. As shown in Fig. 4(a), the number of iterations of the full-parameter initial value estimation was about 33% lower than that of the integer-pixel initial value estimation, and half that of the zero initial value estimation. Therefore, the full-parameter initial value estimation method proposed in this paper can effectively reduce the number of iterations of the IC-GN algorithm. Figure 4(b) shows the number of iterations for the different types of initial value estimation with Gaussian speckle.
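For reference, one common form of the IC-GN stopping test implied by the $10^{-4}$-pixel criterion is sketched below (cf. Ref. [16]): the norm of the parameter increment is evaluated with the displacement-gradient terms scaled by the subset half-width. The exact norm used in the paper is not stated, so this form is an assumption.

```python
import numpy as np

def icgn_converged(dp, half_width, tol=1e-4):
    """Return True when the IC-GN parameter increment dp = (du, dv, dux, duy,
    dvx, dvy) is small enough; gradient increments are weighted by the subset
    half-width so the whole quantity is expressed in pixels."""
    du, dv, dux, duy, dvx, dvy = dp
    increment = np.sqrt(du ** 2 + dv ** 2 +
                        (dux * half_width) ** 2 + (duy * half_width) ** 2 +
                        (dvx * half_width) ** 2 + (dvy * half_width) ** 2)
    return increment < tol
```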

Fig. 4. The number of iterations with different parameter initial value; (a) The number of iterations by original network; (b) The number of iterations by transfer learning.

4.2 Accuracy verification of the proposed initial value estimation method

The proposed full-parameter initial value estimation consists of three parts: initial value estimation of the integer-pixel displacement components, initial value estimation of the sub-pixel displacement components, and initial value estimation of the first-order displacement gradients. The accuracy of these three parts is verified separately below.

Four 501 × 501 pixel Boolean-model speckle images were used as reference images, and the corresponding deformed images were generated with Gaussian speckle, each assigned the deformation parameters listed in Table 1 with the image center as the deformation center. The ROI was set as the 261 × 261 pixel middle area, the search subset was set to 60 × 60 pixels, the subset was set to 41 × 41 pixels, and the step size was 5 pixels. The test results are shown in Fig. 5(a). Figures 5(b) and 5(c) show that the integer-pixel initial value can be estimated accurately, with a standard deviation error of 0.

Fig. 5. The result of integer-pixel displacement test.

Table 1. Detailed deformation parameters of integer pixel

In order to verify the accuracy of the sub-pixel displacement estimation, several groups of simulated deformation experiments with different deformation forms were performed to assess the error of the quadratic surface fitting. The reference images were again generated by the Boolean model, and Gaussian speckle was used to generate the deformed images; the detailed parameters are given in Table 2. The subset was set to 41 × 41 pixels, and all points in the subset were calculated by quadratic surface fitting. The calculated displacement fields are shown in Fig. 6(a); the displacement error in the v direction is higher than that in the u direction in Figs. 6(b) and 6(c). Besides, the error distributions of pure tension and pure shear are similar in Fig. 6. In line with the characteristics of quadratic function fitting, the magnitude of the displacement error in each group is consistent with the displacement parameters in Table 2, and all displacements were less than 0.25 pixels.

Fig. 6. The result of sub-pixel displacement test.

Table 2. Detailed deformation parameters of subpixel

The regression convolutional neural network was used to calculate the four parameters of the first-order displacement gradient, and the image pairs were generated in the same way as before. The generated image size was 125 × 125 pixels, the subset was set to 41 × 41 pixels, and the data set samples are listed in Table 3.

Table 3. The parameters of data set samples

As shown in Fig. 7, the maximum deviation of prediction was 0.0024 and the mean deviation was about 0.00049.

Fig. 7. Absolute deviation of first-order displacement gradient: (a) Absolute deviation of parameter ${u_x}$; (b) Absolute deviation of parameter ${u_y}$; (c) Absolute deviation of parameter ${v_x}$; (d) Absolute deviation of parameter ${v_y}$.

4.3 Performance of the adaptive scheme for processing parameter changes

When the selected subset was not 41 × 41 pixels, the method of controlling variables was used to verify the four first-order displacement gradient parameters estimated by the proposed method. The image pairs were generated in the same way as before, and the value of each parameter to be verified in the deformed image ranged from -0.02 to 0.02 in increments of 0.002. Fifty images were generated for each value, with the other three parameters set randomly within the range of -0.02 to 0.02. In order to verify the performance of the proposed adaptive approach to changing the subset size, the subset was set to 41 × 41 pixels, 61 × 61 pixels, 81 × 81 pixels and 101 × 101 pixels, respectively. As shown in Fig. 8, the best estimates were obtained with the 41 × 41 pixel subset, because the down-sampling changes some of the deformation characteristics of the image relative to the training set of the neural network. The mean error for the 41 × 41 pixel subset was about 0.0005, and the mean error grows as the subset size increases. Moreover, for a given large subset, the smaller the deformation, the smaller the mean error; thus a small deformation combined with a large subset yields a more accurate estimate, while for larger deformations the network can still capture the deformation trend.

Fig. 8. The first-order displacement gradients estimated by different subset sizes.

For a different speckle type, and again to exclude the influence of experimental conditions, image pairs were generated by a Gaussian speckle model. Before transfer learning on the new speckle, 10,000, 20,000, and 30,000 pairs of images were tried as training sets, but the training results failed to meet expectations. For the new speckle pattern, the training set was therefore halved relative to the original sample, and Table 4 gives the details of the datasets used in transfer learning. The initial learning rate was set to 0.0001, and the network was set to stop training after 110 epochs. Not every transfer learning method suits the proposed network: Fig. 9 compares the efficiency of three transfer learning strategies against random initialization of all parameters, and the best result is obtained by fine-tuning the parameters of the original model.

Fig. 9. The comparison of transfer learning efficiency (Test1: initialize only the last full connection layer, Test2: initialize all full connection layers, Test3: fine tuning network parameters, Test4: randomly initialize full-parameters).

Table 4. Data sets in transfer learning

The two types of speckle, generated by the Boolean model and the Gaussian model, are shown in Fig. 10, and Fig. 11 compares the parameter errors estimated from the two types of speckle data. Compared with the Boolean model, the accuracy obtained with the Gaussian speckle was worse, because its speckle quality and boundaries made the gray values at sub-pixel positions difficult to distinguish. The same test as in Section 4.1 was carried out, and the numbers of iterations for the three estimation methods are shown in Fig. 4(b), indicating that the different speckle types do not affect the computational efficiency of the proposed method.

Fig. 10. Two types of speckles generated by (a) Boolean speckle and (b) Gaussian speckle.

Fig. 11. The comparison of parameter error estimated by the two types of speckle.

5. Real experiments


To further demonstrate the practicality of the proposed method, images acquired in a three-point bending experiment were processed to extract the full-field deformation. The three-point bending test is shown in Fig. 12(a), and the camera view of the specimen surface is shown in Fig. 12(b). In the DIC analysis, a rectangular ROI of 1280 × 1024 pixels was specified and discretized into 12343 regularly distributed calculation points with a grid step of 5 pixels. For each measurement point, a subset size of 41 × 41 pixels was adopted. Undeformed images of the specimen surface were taken before loading, and deformed images were recorded after applying compressive loads of 1500 N, 2000 N and 2500 N.

Fig. 12. (a) The setup of three-point bending; (b) The camera view of the specimen surface.

In the proposed algorithm, the convolutional neural network was trained on the speckle pattern taken from the reference image of the experiment. A total of 6 × 31 = 186 non-overlapping images of 43 × 43 pixels were cropped from the reference image. For data augmentation, the following steps were applied (a sketch of this pipeline follows the list):

  • (1) The speckle pattern is cropped at 15-pixel intervals in the horizontal and vertical directions, giving a total of 1162 basic images named sample A.
  • (2) The speckle pattern is rotated by 15°, 30° and 45°, and the complete speckle regions are cropped from the rotated images, giving a total of 1800 basic images named sample B.
  • (3) The brightness of all samples in the basic sample B is adjusted to 1.2 times and 0.8 times the original, giving a total of 2324 basic images named sample C.
  • (4) The basic samples A, B and C are rotated by 90°, 180° and 270° to obtain 15858 (5286 × 3) new images, named sample D.
  • (5) The training set consists of the basic samples A, B, C and D.
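A sketch of this augmentation pipeline is given below, following the listed steps literally. The exact crop bookkeeping differs from the paper (for instance, crops from the rotated patterns here include border regions), so the sample counts will not match exactly.

```python
import cv2
import numpy as np

def augment_reference_pattern(pattern, crop=43, step=15):
    """Build the augmented training samples A-D from one reference speckle pattern."""
    h, w = pattern.shape
    # (1) sample A: overlapping crops on a 15-pixel grid
    sample_a = [pattern[y:y + crop, x:x + crop]
                for y in range(0, h - crop + 1, step)
                for x in range(0, w - crop + 1, step)]
    # (2) sample B: crops from the pattern rotated by 15, 30, 45 degrees
    sample_b = []
    for angle in (15, 30, 45):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(pattern, M, (w, h))
        sample_b += [rotated[y:y + crop, x:x + crop]
                     for y in range(0, h - crop + 1, step)
                     for x in range(0, w - crop + 1, step)]
    # (3) sample C: brightness scaled to 1.2x and 0.8x
    sample_c = [np.clip(img * s, 0, 255).astype(pattern.dtype)
                for img in sample_b for s in (1.2, 0.8)]
    base = sample_a + sample_b + sample_c
    # (4) sample D: 90/180/270 degree rotations of all basic samples
    sample_d = [np.rot90(img, k) for img in base for k in (1, 2, 3)]
    # (5) the training set is the union of the basic samples and sample D
    return base + sample_d
```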

In order to show that the proposed initial estimation method does not affect the accuracy of DIC, the experimental results of the three-point bending test were compared with those obtained by the Ncorr software [42]. The open-source software Ncorr, which has been widely used in studies of the deformation mechanics of materials, implements a sophisticated approach to calculating the full-field displacement: the integer-pixel initial value based on normalized cross-correlation is used as the initial guess prior to the application of the IC-GN algorithm. As shown in Fig. 13, there is no significant difference between the two types of initial value. Figure 14 shows that, compared with the results obtained from Ncorr, the maximum error is 0.0049 and the mean error is about 0.0032; with increasing load, the mean error and standard deviation increase slightly. This demonstrates that the proposed full-parameter initial value estimation achieves high accuracy for the displacement field while requiring fewer iterations, as shown above. The results confirm the feasibility of the full-parameter initial value estimation method, which achieves essentially the same accuracy as the digital image correlation results obtained from the open-source software Ncorr and provides new opportunities in experimentation and algorithms for studying the deformation mechanics of materials.

Fig. 13. The x- and y-directional displacement fields at different loading computed by DIC with integer pixel initial value and full parameter initial value. (units: mm).

Fig. 14. The mean bias and standard deviation at different loading.

6. Conclusion

A full-parameter initial value estimation method based on a regression convolutional neural network was proposed in this paper, and the applicability of the algorithm to different subset sizes and different speckle patterns was considered. The proposed method sequentially uses integer-pixel estimation based on neighborhood search, subpixel estimation based on surface fitting, and first-order displacement gradient estimation based on a regression convolutional neural network to obtain the initial value for the inverse compositional Gauss-Newton (IC-GN) algorithm. The simulation results show that the number of iterations of the proposed full-parameter initial value estimation method is reduced by about 33% compared with the integer-pixel initial value estimation method. At the same time, in the IC-GN iteration, the computational efficiency of CPU multithreaded calculation is nearly twice that of the single-thread method. The practicality of the proposed full-parameter initial value estimation method was verified by a three-point bending experiment.

Funding

Fundamental Research Funds for the Central Universities (2021ZY67).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. B. Pan, Z. Wang, and Z. Lu, “Genuine full-field deformation measurement of an object with complex shape using reliability-guided digital image correlation,” Opt. Express 18(2), 1011–1023 (2010). [CrossRef]  

2. Y. Zhou, B. Pan, and Y. Q. Chen, “Large deformation measurement using digital image correlation: a fully automated approach,” Appl. Opt. 51(31), 7674–7683 (2012). [CrossRef]  

3. F. Hild and S. Roux, “Digital image correlation: from displacement measurement to identification of elastic properties - a review,” Strain 42(2), 69–80 (2006). [CrossRef]  

4. L. Chen, Y. Wang, X. Dan, X. Ying, and L. Yang, “Experimental research of digital image correlation system in high temperature test,” in Seventh International Symposium on Precision Mechanical Measurements (2016).

5. Z. Wang, J. Zhao, L. Fei, Y. Jin, and D. Zhao, “Deformation monitoring system based on 2D-DIC for cultural relics protection in museum environment with low and varying illumination,” Math. Probl. Eng. 2018, 1–13 (2018). [CrossRef]  

6. R. A. Galantucci and F. Fatiguso, “Advanced damage detection techniques in historical buildings using digital photogrammetry and 3D surface analysis,” J. Cult. Herit. 36, 51–62 (2019). [CrossRef]  

7. V. Srivastava and J. Baqersad, “An optical-based technique to obtain operating deflection shapes of structures with complex geometries,” Mech. Syst. Signal Process. 128, 69–81 (2019). [CrossRef]  

8. Y. Pang, B. K. Chen, S. F. Yu, and S. N. Lingamanaik, “Enhanced laser speckle optical sensor for in-situ strain sensing and structural health monitoring,” Opt. Lett. 45(8), 2331–2334 (2020). [CrossRef]  

9. M. S. Dizaji, M. Alipour, and D. K. Harris, “Subsurface damage detection and structural health monitoring using digital image correlation and topology optimization,” Eng. Struct. 230, 111712 (2021). [CrossRef]  

10. J. Li, Z. Guo, D. Ai, J. Yang, and Z. Wei, “Nonlinear characteristics of granite after high-temperature treatment captured by digital image correlation and acoustic emission technology,” Nat. Resour. Res. 31(3), 1307–1327 (2022). [CrossRef]  

11. H. A. Bruck, S. R. Mcneill, M. A. Sutton, and W. H. Peters, “Digital image correlation using Newton-Raphson method of partial differential correction,” Exp. Mech. 29(3), 261–267 (1989). [CrossRef]  

12. S. Baker and I. Matthews, “Lucas-Kanade 20 years on: a unifying framework,” Int. J. Comput. Vis. 56(3), 221–255 (2004). [CrossRef]  

13. G. Vendroux and W. G. Knauss, “Submicron deformation field measurements: Part 2. Improved digital image correlation,” Exp. Mech. 38(2), 86–92 (1998). [CrossRef]  

14. W. Chen, Z. Jiang, L. Tang, Y. Liu, and Z. Liu, “Equal noise resistance of two mainstream iterative sub-pixel registration algorithms in digital image correlation,” Exp. Mech. 57(6), 979–996 (2017). [CrossRef]  

15. B. Pan, K. Li, and W. Tong, “Fast, robust and accurate digital image correlation calculation without redundant computations,” Exp. Mech. 53(7), 1277–1289 (2013). [CrossRef]  

16. B. Pan, “An evaluation of convergence criteria for digital image correlation using inverse compositional Gauss–Newton algorithm,” Strain 50(1), 48–56 (2014). [CrossRef]  

17. M. A. Sutton, W. J. Wolters, W. H. Peters, W. F. Ranson, and S. R. Mcneill, “Determination of displacements using an improved digital correlation method,” Image Vis. Comput. 1(3), 133–139 (1983). [CrossRef]  

18. Z. Wang, M. Vo, H. Kieu, and T. Pan, “Automated fast initial guess in digital image correlation,” Strain 50(1), 28–36 (2014). [CrossRef]  

19. Z. F. Zhang, Y. L. Kang, H. W. Wang, Q. H. Qin, Y. Qiu, and X. Q. Li, “A novel coarse-fine search scheme for digital image correlation method,” Measurement 39(8), 710–718 (2006). [CrossRef]  

20. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004).

21. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Comput. Vis. Image Underst. 110(3), 346–359 (2008). [CrossRef]  

22. Y. Lee, H. Yang, and Z. Yin, “PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry,” Exp. Fluids 58(12), 171 (2017). [CrossRef]  

23. S. Cai, J. Liang, Q. Gao, C. Xu, and R. Wei, “Particle image velocimetry based on a deep learning motion estimator,” IEEE Trans. Instrum. Meas. 69(6), 3538–3554 (2020). [CrossRef]  

24. E. Ilg, N. Mayer, T. Saikia, M. Keuper, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1647–1655 (2017).

25. L. Kong and J. Yang, “FDFlowNet: Fast Optical Flow Estimation using a Deep Lightweight Network,” in International Conference on Image Processing, 1501 (2020).

26. P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2758–2766 (2016).

27. T. W. Hui, X. Tang, and C. C. Loy, LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation (IEEE, 2018).

28. D. Sun, X. Yang, M. Y. Liu, and J. Kautz, “PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018).

29. S. Boukhtache, K. Abdelouahab, F. Berry, B. Blaysat, M. Grédiac, and F. Sur, “When Deep Learning Meets Digital Image Correlation,” Opt. Lasers Eng. 136, 106308 (2021). [CrossRef]  

30. M. Pitter, C. W. See, and M. Somekh, “Subpixel microscopic deformation analysis using correlation and artificial neural networks,” Opt. Express 8(6), 322–327 (2001). [CrossRef]  

31. X. Liu and Q. Tan, “Subpixel in-plane displacement measurement using digital image correlation and artificial neural networks,” in Symposium on Photonics and Optoelectronics, Chengdu, 1–4 (2010).

32. C. Ma, Q. Ren, and J. Zhao, “Optical-numerical method based on a convolutional neural network for full-field subpixel displacement measurements,” Opt. Express 29(6), 9137–9156 (2021). [CrossRef]  

33. X. Y. Liu, X. Z. Qin, R. L. Li, Q. H. Li, and X. L. Wu, “A Self-Adaptive Selection of Subset Size Method in Digital Image Correlation Based on Shannon Entropy,” IEEE Access 8, 184822–184833 (2020). [CrossRef]  

34. Y. Yuan, J. Huang, X. Peng, C. Xiong, J. Fang, and F. Yuan, “Accurate displacement measurement via a self-adaptive digital image correlation method based on a weighted ZNSSD criterion,” Opt. Lasers Eng. 52, 75–85 (2014). [CrossRef]  

35. H. W. Schreier, J. R. Braasch, and M. A. Sutton, “Systematic errors in digital image correlation caused by intensity interpolation,” Opt. Eng. 39(11), 2915–2921 (2000). [CrossRef]  

36. P. Cheng, M. A. Sutton, H. W. Schreier, and S. R. McNeill, “Full-field speckle pattern image correlation with B-Spline deformation function,” Exp. Mech. 42(3), 344–352 (2002). [CrossRef]  

37. X. Shao and X. He, “Noise robustness and parallel computation of the inverse compositional Gauss–Newton algorithm in digital image correlation,” Opt. Lasers Eng. 71, 9–19 (2015). [CrossRef]  

38. S. J. Pan and Q. Yang, “A Survey on Transfer Learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010). [CrossRef]  

39. F. Sur, B. Blaysat, and M. Grédiac, “Rendering deformed speckle images with a Boolean model,” J. Math. Imaging Vis. 60(5), 634–650 (2018). [CrossRef]  

40. P. Zhou and K. E. Goodson, “Subpixel displacement and deformation gradient measurement using digital image/speckle correlation (DISC),” Opt. Eng 40(8), 1613–1620 (2001). [CrossRef]  

41. D. Atkinson and T. Becker, “A 117 line 2D digital image correlation code written in MATLAB,” Remote Sens. 12(18), 2906 (2020). [CrossRef]  

42. J. Blaber, B. Adair, and A. Antoniou, “Ncorr: open-source 2D digital image correlation Matlab software,” Exp. Mech. 55(6), 1105–1122 (2015). [CrossRef]  
