
Image fusion based on multiscale transform and sparse representation to enhance terahertz images

Open Access

Abstract

High-quality terahertz (THz) images are vital to integrated circuit (IC) manufacturing. Due to the unique sensitivity of THz waves to different materials, the images obtained from the point-spread function (PSF) model have fewer image details and less texture information in some frequency bands. This paper presents an image fusion technique to enhance the resolution of THz IC images. The source images obtained from the PSF model are processed by a fusion method combining a multiscale transform (MST) and sparse representation (SR). The low-pass band is handled by sparse representation, and the high-pass band is fused by the conventional “max-absolute” rule. From both objective and visual perspectives, four popular multiscale transforms—the Laplacian pyramid, the ratio of low-pass pyramids, the dual-tree complex wavelet transform and the curvelet transform—are thoroughly compared at different decomposition levels ranging from one to four. This work demonstrates the feasibility of using image fusion to enhance the resolution of THz IC images.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Terahertz (THz) imaging is vital to integrated circuit (IC) manufacturing. The main causes of IC failures, which include bond wire melting, metallization diffusion and dielectric breakdown, are attributed to the thermal shocks induced by electrical overstress. A dielectric breakdown inside an IC also generates internal defects, such as cracks, delamination, and voids inside the packaging materials [1]. Therefore, nondestructive inspection of the internal structure is of great significance for improving the quality and reliability of ICs. At present, the main detection technologies are optical inspection, ultrasonic testing [2], X-ray imaging and THz imaging [3], but each of these technologies has limitations. Optical detection cannot observe the inside of an IC because light cannot penetrate the packaging and substrate. Because acoustic couplants are required in the testing procedure, the ultrasonic technique is time consuming, costly and messy. The X-ray technique only detects metal and cannot reveal cracks, delamination, or voids inside the IC [1]. In addition, the ionizing properties of X-rays are likely to damage the inner circuit configuration and are harmful to humans. In contrast, only a small amount of energy is lost when a THz wave penetrates IC packaging composed of nonmetallic materials. Thus, the THz imaging technique plays a significant role in observing the circuit configuration of ICs [4]. At present, there are two ways to improve the resolution of THz imaging: using high-performance THz spectral devices and using imaging algorithms. Wen et al. proposed a high-performance all-optical spatial THz modulator based on silicon micropyramid arrays, which not only enhances light harvesting across a broad wavelength range but also greatly increases the active area for THz modulation [5]. This device effectively improves the modulation depth and the quality of THz images. However, as the scale of imaging samples continues to expand, the time and cost of THz imaging grow accordingly. Thus, it is urgent and important to develop imaging algorithms.

Trofimov et al. used a correlation function to achieve high contrast when detecting certain objects in THz images [6]. Schildknecht et al. then proposed a new approach for THz image quality enhancement that applies blind deconvolution with a point-spread function (PSF) to the THz image [7]. Ahi presented a comprehensive theory to perform cohesive mathematical modeling of the point-spread function and to simulate the THz imaging process [3]. Although these methods have made great contributions to improving THz imaging resolution, they are still not sufficient for IC defect detection [8]. Due to the unique sensitivity of THz waves to different materials [9], the images obtained from the PSF model exhibit fewer image details and less texture information in some frequency bands.

In recent years, image fusion has played a significant role in the development of image processing. Image fusion generates a composite image by extracting complementary information from multiple source images of the same scene [10,11]. The multiscale transform (MST) is the most common tool in image fusion tasks, with applications in computer vision, surveillance, medical imaging, and remote sensing. Classical fusion methods based on MST include the Laplacian pyramid (LP) [12], ratio of low-pass pyramids (RP) [13], dual-tree complex wavelet transform (DTCWT) [14] and curvelet transform (CVT) [15]. Conventionally, MST fusion decomposes the source images into a multiscale transform domain, merges the transform coefficients according to a selected fusion rule, and reconstructs the fused image by inverting the transform. These MST methods assume that the potentially significant information in the source images can be extracted from the decomposed coefficients. The fusion rules chosen for the high-pass and low-pass bands have a great impact on the fused results. In general, the largest absolute value of the high-pass coefficients is selected at each pixel position, and in the majority of situations the fused low-pass band is obtained by averaging all source inputs. However, MST-based methods cannot handle misregistration well, and the edges of the fused image may be blurred.
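To make the conventional MST pipeline concrete, the following is a minimal sketch of Laplacian-pyramid fusion with the two rules just described (max-absolute for the high-pass bands, averaging for the low-pass residual). It assumes NumPy and OpenCV as the toolchain; the function names are ours, and the sketch is illustrative rather than the implementation used in this work.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Decompose an image into `levels` high-pass detail bands plus one low-pass residual."""
    gauss, pyr = img.astype(np.float32), []
    for _ in range(levels):
        down = cv2.pyrDown(gauss)
        up = cv2.pyrUp(down, dstsize=(gauss.shape[1], gauss.shape[0]))
        pyr.append(gauss - up)   # high-pass detail band
        gauss = down
    pyr.append(gauss)            # low-pass residual
    return pyr

def fuse_lp(img_a, img_b, levels=3):
    """Conventional LP fusion: max-absolute on detail bands, averaging on the residual."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # "max-absolute" rule
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))             # averaging rule for the low-pass band
    out = fused[-1]                                    # reconstruct by inverting the pyramid
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return out
```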

Elad used the sliding-window technique to make the fusion process more robust to noise [16]. Sparse representation (SR) emulates the physiological characteristics of the human visual system, which processes signals in a naturally sparse manner. The fundamental assumption of SR is that the signal $x \in {R^n}$ can be approximately expressed by a linear combination of elements from an overcomplete dictionary $D \in {R^{n \times m}}(n < m)$, where n is the dimension of the signal and m is the size of the dictionary [17]. Thus, the signal can be expressed as $x \approx D\alpha ({\alpha \in {R^m}} )$, where $\alpha $ is an unknown sparse coefficient vector. Because the dictionary is overcomplete, this underdetermined system has many solutions. The goal of SR is to find the feasible solution with the smallest number of nonzero terms; this process is called sparse coding. To improve the stability and efficiency of the algorithm, sparse coding is usually applied to local image blocks in SR-based image processing methods [18]. When two source images differ little in a certain area, the max-L1 fusion rule cannot accurately judge which one carries more detail, which increases the uncertainty of the fusion result. In this case, the fused result is very sensitive to noise, and the resulting randomness can produce a grayscale discontinuity effect in the fused image. Liu used an image fusion method combining MST and SR to simultaneously overcome the inherent defects of both the MST- and SR-based fusion methods [19].
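As an illustration of how sparse coding and the max-L1 rule interact, the following minimal sketch fuses two vectorised patches over a given dictionary. It assumes scikit-learn's orthogonal matching pursuit as the sparse coder and uses its tolerance parameter as a stand-in for the reconstruction error; the function name and parameter choices are ours, not the authors'.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_patches_sr(x_a, x_b, D, tol=0.1):
    """Fuse two vectorised patches with the "max-L1" rule on their sparse codes.

    x_a, x_b : (n,) patch vectors; D : (n, m) overcomplete dictionary with n < m.
    """
    alpha_a = orthogonal_mp(D, x_a, tol=tol)   # sparse code of patch A
    alpha_b = orthogonal_mp(D, x_b, tol=tol)   # sparse code of patch B
    # Keep whichever code has the larger L1 norm, assumed to carry more activity
    alpha_f = alpha_a if np.abs(alpha_a).sum() >= np.abs(alpha_b).sum() else alpha_b
    return D @ alpha_f                          # fused patch, back in pixel space
```

When the two L1 norms are nearly equal, the selection is effectively arbitrary, which is exactly the noise sensitivity and grayscale discontinuity described above.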

To further improve the quality of THz IC images, image fusion is adopted to extract the characteristics of THz images in different frequency bands. This research considers several types of IC packaging, including quad flat no-lead, quad flat-pack, shrink double-in-line, and no-lead chip carrier schemes. The image fusion technique is applied to enhance the resolution of THz IC images. First, the THz source images are acquired from the PSF model. Second, the source images are fused by MST-SR methods built on four popular MSTs: LP, RP, DTCWT and CVT. The effectiveness of the MST-SR methods is compared both subjectively and objectively at decomposition levels ranging from one to four. The method combining the DTCWT and SR at three decomposition levels performs best in improving the restoration of THz IC images.

The rest of this article is arranged as follows: We briefly describe the mathematical PSF model and the major fusion steps based on MST-SR in Section 2. In Section 3, the source images resulting from the PSF model are analyzed and shown. The comparison results of fused IC images are recorded in Section 4. Section 5 gives the conclusions of this research.

2. Theoretical method

To better explain the advantages of the proposed method for THz image fusion, we briefly present the corresponding steps in this section.

2.1 Fusion method based on MST and SR

In this work, we employ a basic fusion framework based on MST and SR. A fusion method based on SR is adopted for low-pass band fusion, while the traditional “max-absolute” rule is adopted for high-pass band fusion [19]. Specifically, because the THz source images have a low signal-to-noise ratio, the low-pass band images obtained from the multiscale decomposition suffer from a serious block effect. Therefore, unlike the traditional fusion framework, max-absolute fusion is applied to all bands, including both the high-pass and low-pass bands. The fusion steps are as follows:

  • (1) The two source images A and B are decomposed by the multiscale transform. It is worth noting that “high-pass” and “low-pass” are relative terms. Taking image A of size 280${\times} $108 as an example, the high-pass bands and the low-pass band are denoted {HA1, HA2, HA3, LA4}, with sizes 140${\times} $54, 70${\times} $28, 36${\times} $14 and 18${\times} $7, respectively.
  • (2) The low-pass bands are fused using sparse representation. LA4 and LB4 are first converted to low-pass vectors. For example, LA4 can be expressed as ${L_{\textrm{A4}}} \approx D{\alpha _{A4}}({{\alpha_{A4}} \in {R^m}} )$, where ${\alpha _{A4}}$ is an unknown sparse coefficient vector and D is the unified dictionary. The sparse coefficient vectors ${\alpha _{A4}}$ and ${\alpha _{B4}}$ are merged with the “max-L1 value” rule to obtain the fused sparse coefficient vector ${\alpha _M}$. The fused vector G is calculated as $G \approx D{\alpha _M}$ and then converted to the low-pass fusion matrix M. The sizes of the vector G and the matrix M are 1${\times} $126 and 18${\times} $7, respectively.
  • (3) HAi and HBi, together with LA4 and LB4, are fused with the “max-absolute” rule over source patches within a 3 × 3 window to obtain the all-pass fusion matrices {E(1), E(2), E(3), E(4)}.
  • (4) The low-pass fusion matrix M and the all-pass fusion matrices E(i) are combined to obtain the final fused image. The reconstruction formula is M(i) = (MT(i) + eps) .* E(i), where MT(i) is the expanded matrix of M and eps is 1e-6.
The final fused image is reconstructed by performing the corresponding inverse MST over the fused high-pass and low-pass bands. The fusion method can be directly applied to more than two images. A minimal code sketch of this pipeline is given below, and an example of the fusion algorithm for two source images is shown in the lower schematic diagram in Fig. 1.
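The sketch below strings together the helpers from the two earlier sketches (laplacian_pyramid and fuse_patches_sr) into the general MST-SR framework of [19]: SR fusion on the low-pass residual and max-absolute fusion on the high-pass bands. It omits the all-pass weighting of step (4) and, as in step (2), treats the whole low-pass band as a single vector, so the dictionary D is assumed to be sized to that vector. This is an assumption-laden illustration, not the authors' code.

```python
import cv2
import numpy as np

def fuse_mst_sr(img_a, img_b, D, levels=3):
    """MST-SR fusion in the spirit of [19], reusing laplacian_pyramid() and
    fuse_patches_sr() from the earlier sketches."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    # High-pass detail bands: keep the coefficient with the larger absolute value
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    # Low-pass residual: vectorise, fuse the sparse codes, reshape back
    la, lb = pa[-1], pb[-1]
    fused.append(fuse_patches_sr(la.ravel(), lb.ravel(), D).reshape(la.shape))
    # Reconstruct by inverting the Laplacian pyramid
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return out
```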

Fig. 1. Schematic diagram of the fusion method based on MST and SR.

2.2 Objective evaluation metrics

Quantitative evaluation of the fused THz IC images is a comprehensive task. Objective indicators are used to quantitatively evaluate the quality of the fused images and the performance of the fusion methods. They are usually divided into two categories. The first category comprises objective evaluation methods that require a standard reference image, such as the phase-congruency-based fusion metric (QP), the gradient-based fusion metric (QG) [20], the mean square error (MSE) [21], the mean average error (MAE), and the universal image quality index (UIQI) [22]. The second category comprises objective evaluation methods that do not require a standard reference image, such as the standard deviation (SD), entropy (EN) [23], average gradient (AG), mutual information (MI) and space frequency (SF) [24,25]. It is usually essential to employ several indicators for a comprehensive evaluation. Because real IC images containing all features cannot be obtained, the following four indicators are used to quantitatively evaluate the performance of the different fusion methods; a code sketch computing them is given after their definitions.

  • 1. Standard deviation (SD). The definition of SD is primarily applied to measure the overall contrast ratio of the fused image.
    $$SD = \sqrt {\frac{1}{{H \times W}}\mathop \sum \nolimits_{x = 1}^H \mathop \sum \nolimits_{y = 1}^W {{(F({x,y} )- \mu )}^2}} $$
    where H and W are the dimensions of the fused image and $\mu $ is the mean value of the fused image.
  • 2. Entropy (EN). The definition of EN is mainly applied to measure the amount of information in the fused image.
    $$EN ={-} \mathop \sum \nolimits_{l = 0}^{L - 1} {p_F}(l ){log _2}{p_F}(l )$$
    where L is the gray level and is set to 256, and ${p_F}(l )$ is a standardized histogram of the fused image.
  • 3. Average gradient (AG). The AG is mainly used to describe the image details and texture transformation. The AG of the fused image is defined as
    $$AG = \frac{1}{{H \times W}}\mathop \sum \nolimits_{i = 1}^H \mathop \sum \nolimits_{j = 1}^W \sqrt {\frac{{f_x^2({i,j} )+ f_y^2({i,j} )}}{2}} $$
    where H and W are the dimensions of the fused image, and $f_x({i,j} )$ and $f_y({i,j} )$ are the first-order differences of the fused image in the row and column directions, respectively. The larger the average gradient, the clearer the image and the better the quality of the fused image.
  • 4. Space frequency (SF). The SF of the fused image reflects the active degree of the image in the spatial domain. The SF of the fused image is defined as
    $$SF = \sqrt {R{F^2} + C{F^2}} $$
    $$RF = \sqrt {\frac{1}{{H \times W}}\mathop \sum \nolimits_{i = 1}^H \mathop \sum \nolimits_{j = 2}^W {{[F({i,j} )- F({i,j - 1} )]}^2}} $$
    $$CF = \sqrt {\frac{1}{{H \times W}}\mathop \sum \nolimits_{i = 2}^H \mathop \sum \nolimits_{j = 1}^W {{[F({i,j} )- F({i - 1,j} )]}^2}} $$
    where H and W are the dimensions of the fused image. The larger the space frequency, the better the quality of the fused image.
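The four indicators above translate almost directly into NumPy. The sketch below is a straightforward transcription under two stated assumptions: the fused image is an 8-bit grayscale array (so the entropy histogram uses 256 bins over [0, 256)), and the boundary normalisation of AG, RF and CF is simplified to a mean over the available difference terms. The function name is ours.

```python
import numpy as np

def fusion_metrics(F):
    """Return (SD, EN, AG, SF) for a 2-D fused image F, following Eqs. (1)-(6)."""
    F = F.astype(np.float64)
    sd = np.sqrt(np.mean((F - F.mean()) ** 2))                        # standard deviation, Eq. (1)
    hist, _ = np.histogram(F, bins=256, range=(0, 256))               # assumes 8-bit gray levels
    p = hist / hist.sum()
    en = -np.sum(p[p > 0] * np.log2(p[p > 0]))                        # entropy, Eq. (2)
    fx = np.diff(F, axis=1)                                           # row-direction differences
    fy = np.diff(F, axis=0)                                           # column-direction differences
    ag = np.mean(np.sqrt((fx[:-1, :] ** 2 + fy[:, :-1] ** 2) / 2.0))  # average gradient, Eq. (3)
    rf = np.sqrt(np.mean(fx ** 2))                                    # row frequency, Eq. (5)
    cf = np.sqrt(np.mean(fy ** 2))                                    # column frequency, Eq. (6)
    sf = np.sqrt(rf ** 2 + cf ** 2)                                   # space frequency, Eq. (4)
    return sd, en, ag, sf
```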

3. Image source for the PSF model

The T-Gauge 5000 manufactured by Advanced Photonix was employed in our research. THz imaging of the ICs was performed in an ultraclean laboratory to effectively ensure temperature and humidity stability. The ICs were fixed on a two-dimensional moving stage in a THz time-domain spectroscopy (THz-TDS) system operating in transmission mode. We adopted point-by-point raster scanning with the THz beam, in which the scanning step length and scanning speed of the moving stage were set to 2.25 mm and 50 mm/s, respectively. The spectral bandwidth ranged from 0.01 to 5 THz, and the signal-to-noise ratio was greater than 60 dB. The focal length of our system and the diameter of the lens were 150 mm and 38 mm, respectively. The value of the f-number was calculated to be 3.95.
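This value follows directly from the quoted focal length and lens diameter:
$$f/\# = \frac{f_{\textrm{focal}}}{D_{\textrm{lens}}} = \frac{150\ \textrm{mm}}{38\ \textrm{mm}} \approx 3.95$$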

The spectral information of each pixel was captured, and the intensity of its THz pulse was recorded when the THz beam passed through the ICs. Four sampling points were randomly selected in certain ICs. For example, Col (36) Row (137) represents the pixel in the 36th column and 137th row from the top left. Figure 2(a) compares the THz time-domain pulses of the different sampling points with a reference signal. The peaks of these time-domain signals are 0.4845, 0.1605, 0.1165, 0.0538 and 0.0184. The amplitude attenuation ratio (AT/A0) is calculated by dividing the frequency-domain amplitude of each sampling point by that of the reference signal, as shown in Fig. 2(b). At approximately 0.88 THz, the AT/A0 values of Col (36) Row (137) and Col (41) Row (141) show different tendencies. AT/A0 is related to the amplitude attenuation factor and the absorption coefficient. Thus, the value of AT/A0 is a significant parameter in the following PSF model.
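Curves such as those in Fig. 2(b) can be reproduced from the recorded pulses with a single FFT per pixel. The following is a minimal sketch of that ratio computation; the function name and the assumption of a uniform time grid with step dt are ours.

```python
import numpy as np

def attenuation_ratio(sample_pulse, reference_pulse, dt):
    """Amplitude attenuation ratio AT/A0 versus frequency from two time-domain pulses.

    sample_pulse, reference_pulse : 1-D arrays on the same time grid with step dt (seconds).
    """
    freqs = np.fft.rfftfreq(len(reference_pulse), d=dt)   # frequency axis in Hz
    a_t = np.abs(np.fft.rfft(sample_pulse))               # spectral amplitude of the sampling point
    a_0 = np.abs(np.fft.rfft(reference_pulse))            # spectral amplitude of the reference
    return freqs, a_t / a_0                               # AT/A0 spectrum
```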

Fig. 2. (a) Comparison of the THz time-domain pulses at different pixels with the reference signal; (b) amplitude attenuation ratio (AT/A0) at different pixels.

Jepsen showed that the beam can be approximated by a Gaussian illumination distribution [26]. As Ahi reported, the two-dimensional THz beam scan can be mathematically modeled as the convolution of the PSF and the object function [3,27,28]. To model the PSF mathematically, the system spectrum and transmission parameters are combined with Gaussian beam theory, including the correlated spectral intensity $I({x,y,z} )$, the amplitude extinction coefficient $\alpha ({z,f} )$ of the object, the beam divergence $\rho $ and the depth of focus z. The THz beam passes through the interior of the object along the z-axis, yielding images of different layers in different z-planes of the object. The PSF model is given by Eq. (8), and according to Eqs. (7) and (8), the object function is obtained by deconvolving the THz image with the PSF.

$$o({x,y,z} )= i({x,y,z} )\ast PSF{(x,y,z)^{ - 1}}$$
$$PSF({z,f} )= {I_{ref}}({0,z,f} )exp ( - z\alpha - 2{\rho ^2}/\left( {\frac{{0.565}}{{\sqrt {2\ln 2} }}\frac{k}{{NA}}\frac{c}{f}\sqrt {1 + {{(\frac{{2\ln 2}}{{c\pi }}{{(\frac{{NA}}{{0.565k}})}^2}fz)}^2}} } \right))$$
where $i({x,y,z} )$ is the THz image, k is determined by the truncation ratio and irradiance level, NA is the numerical aperture of the THz system, f is the frequency of the THz beam and c is the speed of light.
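As a check on Eq. (8), the expression can be evaluated directly once the transmission parameters are fixed. The sketch below is a literal transcription with SI units assumed and $I_{ref}$ reduced to a scalar; the parameter names mirror the symbols in the text, the function name is ours, and building the full 2-D kernel and the deconvolution of Eq. (7) are left out.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def psf_value(rho, z, f, alpha, k, NA, I_ref=1.0):
    """Evaluate Eq. (8) for the parameter rho, depth z (m) and frequency f (Hz).

    alpha : amplitude extinction coefficient of the object (1/m)
    k     : constant set by the truncation ratio and irradiance level
    NA    : numerical aperture of the THz optics
    """
    w0 = (0.565 / np.sqrt(2 * np.log(2))) * (k / NA) * (C / f)       # focal-spot term
    spread = np.sqrt(1 + ((2 * np.log(2) / (C * np.pi))
                          * (NA / (0.565 * k)) ** 2 * f * z) ** 2)    # defocus broadening
    return I_ref * np.exp(-z * alpha - 2 * rho ** 2 / (w0 * spread))
```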

Because the THz beam is treated as a monochromatic beam, object images can be obtained over the entire frequency band. In this paper, a defective STC89C52RC IC and a qualified STC89C52C IC are used as examples to demonstrate the effect of the enhancement methods. Images containing different characteristic information are reconstructed at the corresponding frequencies, and relatively clear images are selected from the full spectrum. Figures 3(a), 3(b), and 3(c) show images of the qualified STC89C52C IC, labeled #1, at 0.9 THz, 0.75 THz and 0.6 THz, respectively. The bond wire and counterbore are unclear at 0.9 THz, whereas their characteristics are distinct at 0.6 THz and 0.75 THz. In particular, the groove outside the packaging pillar is blurry at 0.9 THz and clear at 0.6 THz. Figures 3(d), 3(e), and 3(f) show images of the defective STC89C52RC IC, labeled #2, at 0.9 THz, 0.75 THz and 0.6 THz, respectively. The features shown at different frequencies differ from each other. The marked positions, which are the bond wire, counterbore and defect, are still blurry at 0.9 THz, while these texture details are observed in the images at other frequencies. To present this information more intuitively and conveniently, the source images are further processed by image fusion. The objective assessment of the source images obtained with the PSF model is listed in Table 5. The low EN, AG and SF values of these images indicate that they carry few details and little texture information.

Fig. 3. THz image of a qualified STC89C52C IC: (a) 0.9 THz, (b) 0.75 THz, (c) 0.6 THz. Images of a defective STC89C52RC IC: (d) 0.9 THz, (e) 0.75 THz, (f) 0.6 THz.

4. Image fusion with four popular MSTs

Among the MST-SR methods, four common MSTs are used to improve the quality of image fusion, namely, the Laplacian pyramid (LP), ratio of low-pass pyramids (RP), dual-tree complex wavelet transform (DTCWT) and curvelet transform (CVT). The composite images generated by these methods retain the details of the source images that are most relevant to visual perception. To accurately evaluate the performance of the image fusion algorithms, we use four objective indicators for each multiscale transform and various decomposition levels. The number of decomposition levels of the multiscale transform increases successively from 1 to 4. We compare the optimal experimental results under the above four transforms. Among the MST-SR fusion methods, the best-performing fusion algorithm is identified, and the corresponding type of multiscale transform and decomposition level are given in the form MST-SR-x. For instance, RP-SR-3 indicates that the decomposition level of the method combining RP with SR is 3.

Because the decomposition level is related to the size of the THz source image, the maximum decomposition level is set to 4, and the “max-absolute” rule is adopted when merging the high-pass bands of the source image patches within a 3 × 3 window. In this experiment, the unified dictionary for sparse coding is learned by the K-SVD algorithm reported in [17]. In all tables, the recorded values are the average results of the specific type of MST fusion method, and the values marked in bold indicate the best performance using the corresponding fusion method. The impact of the image filter on the fused results is weaker than that of the MST type and the decomposition level. Accordingly, in this article, the image filter is set to [1 4 6 4 1]/16, following the related results reported in [19]. The value of the sparse reconstruction error is set to 0.1. The high-frequency spatial information of the fused high-pass bands is obtained by applying the MST and extracted with the max-absolute rule. Each decomposition of the source high-pass bands is processed twice with convolution. The size of the source image patches is set to 8 × 8, and the number of overlapping pixels between two neighboring patches is set to 6, since different overlap values have only a minor effect; the patch-extraction step implied by these settings is sketched below. The subsequent experiments verify that fusion strategies combining MST and SR consistently outperform methods using either MST or SR alone.
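For reference, an 8 × 8 window with a 6-pixel overlap corresponds to a stride of 2 pixels. The following is a minimal sketch of the resulting sliding-window vectorisation of a band into patch columns (the K-SVD dictionary learning of [17] that consumes such a matrix is not reproduced here); the function name is ours.

```python
import numpy as np

def extract_patches(band, patch=8, overlap=6):
    """Vectorise a 2-D band into 8x8 patch columns with a 6-pixel overlap (stride 2)."""
    stride = patch - overlap                    # 8 - 6 = 2 pixels between neighboring patches
    H, W = band.shape
    cols = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            cols.append(band[i:i + patch, j:j + patch].ravel())
    return np.stack(cols, axis=1)               # (64, number_of_patches) matrix
```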

4.1 RP-SR

The objective assessments of the RP, SR and RP-SR methods are given in Table 1. For IC #1 in Table 1, the values of both the SD and EN increase, but the AG and SF generally decrease as the decomposition level rises. Conversely, when the decomposition level increases for IC #2, the values of the SD and EN decrease, and the AG and SF simultaneously increase. After decomposition, there is considerable redundancy and correlation between the data of the different layers, and the correlation between different resolutions can easily make the algorithm unstable. The robustness of RP-SR is not sufficiently strong, which ultimately results in the objective assessments of the two images showing inconsistent trends. The corresponding assessment curves are shown in Fig. 4, and some noise is introduced into the fused images, as shown in Fig. 5. Thus, the RP and RP-SR methods may not be suitable for THz IC image fusion.

Fig. 4. The assessment of the RP, SR and RP-SR methods: (a) SD and SF, (b) AG and EN.

Fig. 5. Images of IC #1 using methods (a) RP, (b) SR, (c) RP-SR-1, (d) RP-SR-2, (e) RP-SR-3, and (f) RP-SR-4; images of defective IC #2 using methods (g) RP, (h) SR, (i) RP-SR-1, (j) RP-SR-2, (k) RP-SR-3, and (l) RP-SR-4.

Table 1. Objective assessments of the RP, SR and RP-SR methods.

4.2 CVT-SR

Table 2 lists the objective assessments of the CVT, SR and CVT-SR methods. The performance of the CVT and SR fusion strategies is close to that reported in Table 1, and the corresponding curves are shown in Fig. 6. According to the experimental results, the four indicators are very sensitive to the decomposition level, and their behavior is irregular. The corresponding images in Fig. 7 show that the CVT-SR strategies are unable to deal with the task of THz IC image fusion.

Fig. 6. The assessment of the CVT, SR and CVT-SR methods: (a) SD and SF, (b) AG and EN.

Fig. 7. Images of IC #1 using methods (a) CVT, (b) SR, (c) CVT-SR-1, (d) CVT-SR-2, (e) CVT-SR-3 and (f) CVT-SR-4; images of defective IC #2 using methods (g) CVT, (h) SR, (i) CVT-SR-1, (j) CVT-SR-2, (k) CVT-SR-3 and (l) CVT-SR-4.

Table 2. Objective assessments of the CVT, SR and CVT-SR methods.

4.3 LP-SR

The objective evaluation of the LP, SR and LP-SR methods is given in Table 3, and the corresponding curves are shown in Fig. 8. The experimental results of the LP-SR fusion strategies are entirely distinct from those of RP-SR and CVT-SR. Compared with the other combinations, LP-SR-2 performs better in THz IC image fusion. The dependence of the four indicators on the decomposition level is also very regular: the assessment values improve as the decomposition level increases, reach their maximum at LP-SR-2, and gradually decline as the decomposition level continues to increase. The corresponding images in Fig. 9 show that the LP-SR-2 method has potential in the field of THz IC image fusion.

Fig. 8. Assessments of the LP, SR and LP-SR methods: (a) SD and SF, (b) AG and EN.

Fig. 9. Images of IC #1 using methods (a) LP, (b) SR, (c) LP-SR-1, (d) LP-SR-2, (e) LP-SR-3, and (f) LP-SR-4; images of defective IC #2 using methods (g) LP, (h) SR, (i) LP-SR-1, (j) LP-SR-2, (k) LP-SR-3, and (l) LP-SR-4.

Table 3. Objective assessments of the LP, SR and LP-SR methods.

4.4 DTCWT-SR

The objective evaluation of the DTCWT, SR and DTCWT-SR methods is listed in Table 4, and the corresponding curves are shown in Fig. 10. In accordance with [29], the image filters used for the first-level and higher-level decompositions are LeGall 5-3 and Qshift-06, the latter being a quarter-sample-shift orthogonal 10-10 tap filter with 6-6 nonzero taps. As seen in Table 4, the methods combining DTCWT with SR are clearly better than either the SR or DTCWT method alone on all four indicators. The corresponding images in Fig. 11 show that the DTCWT-SR-3 and DTCWT-SR-4 methods perform better.

Fig. 10. Assessments of the DTCWT(DTC), SR and DTCWT-SR(DTC-SR) methods: (a) SD and SF, (b) AG and EN.

Fig. 11. Images of IC #1 using methods (a) DTCWT, (b) SR, (c) DTCWT-SR-1, (d) DTCWT-SR-2, (e) DTCWT-SR-3, and (f) DTCWT-SR-4; images of defective IC #2 using methods (g) DTCWT, (h) SR, (i) DTCWT-SR-1, (j) DTCWT-SR-2, (k) DTCWT-SR-3, and (l) DTCWT-SR-4.

Table 4. Objective assessments of the DTCWT, SR and DTCWT-SR methods.

4.5 Comprehensive comparison

To further evaluate the efficiency of the proposed fusion method, the above three fusion strategies are comprehensively compared, with the mean assessment values of the source input images added for reference. The objective assessment of the source images and of the fused images obtained from the LP-SR-2, DTCWT-SR-3 and DTCWT-SR-4 methods is listed in Table 5. The fused results clearly outperform the source images at 0.9 THz, 0.75 THz and 0.6 THz. The results of the fused THz images show that each method has its advantages, and the corresponding curves are given in Fig. 12. The DTCWT-SR-4 method obtains the best values of SD and EN, but its AG and SF are not higher than those of the other two methods. The reverse is true for the LP-SR-2 method: its AG and SF are the best, while its SD and EN are not optimal. In other words, the LP-SR-2 method has a stronger ability to extract image details and texture information, but its ability to improve the image contrast is weaker. The DTCWT-SR-4 method is weaker than DTCWT-SR-3 in extracting image details and texture information. The DTCWT-SR-3 method balances robustness and the ability to extract image details.

Fig. 12. Assessment of PSF model at 0.9 THz, 0.75 THz, 0.6 THz and LP-SR-2, DTCWT-SR-3(DTC-SR-3), DTCWT-SR-4(DTC-SR-4): (a) SD and SF, (b) AG and EN.

Table 5. Objective assessment of the LP-SR-2, DTCWT-SR-3 and DTCWT-SR-4 methods.

In addition, several sets of source images and fused images obtained with the above three methods are shown in Fig. 13. From a visual perspective, the DTCWT-SR-3 method has good performance in extracting the spatial information and texture. The internal details of the IC are clearly shown in the result of image fusion. The shape and position of the defect inside the defective STC89C52RC IC are also accurately pinpointed. The localization of defects is important in integrated circuit manufacturing.

Fig. 13. Source images of PSF model #1 at (a) 0.9 THz, (b) 0.75 THz, and (c) 0.6 THz; images of IC #1 using methods (d) LP-SR-2, (e) DTCWT-SR-3, and (f) DTCWT-SR-4. Source images of PSF model #2 at (g) 0.9 THz, (h) 0.75 THz, and (i) 0.6 THz; images of IC #2 using methods (j) LP-SR-2, (k) DTCWT-SR-3, and (l) DTCWT-SR-4.

5. Conclusion

This paper presents an image fusion technique to enhance the resolution of THz IC imaging. The source images obtained from the PSF model are processed by a fusion method combining MST and SR. The low-pass band is handled by SR, and the high-pass band is fused by the conventional “max-absolute” rule. In the above experiments, four popular multiscale transforms, RP, CVT, LP and DTCWT, are compared at decomposition levels ranging from one to four. From both objective and visual perspectives, DTCWT-SR-3 retains the most details and texture information. This fusion method has great potential for enhancing the resolution of THz IC images. The fusion methods not only improve the resolution of THz IC images but can also be applied in other scenarios, including solid-state, chemical and biological systems and body security screening.

Funding

National Key Research and Development Program of China (2018YFB1004004, 2018YFB1702701).

Disclosures

The authors declare no conflicts of interest.

References

1. E. Keenan, R. G. Wright, R. Mulligan, and L. V. Kirkland, “Terahertz and laser imaging for printed circuit board failure detection,” in IEEE Systems Readiness Technology Conference (IEEE, 2004), pp. 563–569.

2. E. Martin, C. Larato, A. Clément, and M. Saint-Paul, “Detection of delaminations in sub-wavelength thick multi-layered packages from the local temporal coherence of ultrasonic signals,” NDT&E Int. 41(4), 280–291 (2008). [CrossRef]  

3. K. Ahi, N. Asadizanjani, S. Shahbazmohamadi, M. Tehranipoor, and M. Anwar, in Conference on Terahertz Physics, Devices, and Systems IX - Advanced Applications in Industry and Defense, (SPIE, 2015), 94830K.

4. K. Fan, J. Y. Suen, X. Liu, and W. J. Padilla, “All-dielectric metasurface absorbers for uncooled terahertz imaging,” Optica 4(6), 601 (2017). [CrossRef]  

5. Q. Y. Wen, Y. L. He, Q. H. Yang, P. Yu, Z. Feng, W. Tan, T. L. Wen, Y. X. Zhang, Z. Chen, and H. W. Zhang, “High-Performance Photo-Induced Spatial Terahertz Modulator Based on Micropyramid Silicon Array,” Adv. Mater. Technol. 5(6), 1901058 (2020). [CrossRef]  

6. V. A. Trofimov, V. V. Trofimov, and I. E. Kuchik, “Resolution enhancing of commercially available passive THz cameras due to computer processing,” Proc. SPIE 9199, 91990P (2014). [CrossRef]  

7. C. Schildknecht, T. Kleine-Ostmann, P. Knobloch, E. Rehberg, and M. Koch, “Numerical image enhancement for THz time-domain spectroscopy,” in IEEE Tenth International Conference on Terahertz Electronics, (IEEE, 2002), 157–160.

8. K. Ahi, S. Shahbazmohamadi, and N. Asadizanjani, “Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging,” Opt. Lasers Eng. 104, 274–284 (2018). [CrossRef]  

9. Z. Zhang, Y. Zhang, G. Zhao, and C. Zhang, “Terahertz time-domain spectroscopy for explosive imaging,” Optik 118(7), 325–329 (2007). [CrossRef]  

10. S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Trans. on Image Process. 22(7), 2864–2875 (2013). [CrossRef]  

11. A. Ardeshir Goshtasby and S. Nikolov, “Image fusion: Advances in the state of the art,” Inform. Fusion 8(2), 114–118 (2007). [CrossRef]  

12. P. J. Burt and E. H. Adelson, “The Laplacian Pyramid as a Compact Image Code,” IEEE Trans. Commun. 31(4), 532–540 (1983). [CrossRef]  

13. A. Toet, “Image fusion by a ratio of low-pass pyramid,” Pattern Recogn. Lett. 9(4), 245–253 (1989). [CrossRef]  

14. V. S. Petrovic and C. S. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Trans. on Image Process. 13(2), 228–237 (2004). [CrossRef]  

15. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graph. Models Image Process. 57(3), 235–245 (1995). [CrossRef]  

16. M. Elad and I. Yavneh, “A Plurality of Sparse Representations Is Better Than the Sparsest One Alone,” IEEE Trans. Inform. Theory 55(10), 4701–4714 (2009). [CrossRef]  

17. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006). [CrossRef]  

18. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. on Image Process. 15(12), 3736–3745 (2006). [CrossRef]  

19. Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inform. Fusion 24, 147–164 (2015). [CrossRef]  

20. C. Xydeas and V. Petrovic, “Objective image fusion performance measure,” Electron. Lett. 36(4), 308–309 (2000). [CrossRef]  

21. L. Gang and L. Xue-qin, “Performance measure for image fusion considering region information,” J. Zhejiang Univ. - Sci. A 8(4), 559–562 (2007). [CrossRef]  

22. J. Zhao, R. Laganiere, and L. Zheng, “Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement,” International Journal of Innovative Computing, Information and Control 3(6) (2006).

23. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett. 9(3), 81–84 (2002). [CrossRef]  

24. G. Piella and H. J. A. M. Heijmans, “A new quality metric for image fusion,” in international conference on image processing, (2003), 173–176.

25. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

26. P. U. Jepsen and S. R. Keiding, “Radiation patterns from lens-coupled terahertz antennas,” Opt. Lett. 20(8), 807 (1995). [CrossRef]  

27. K. Ahi, “Mathematical Modeling of THz Point Spread Function and Simulation of THz Imaging Systems,” IEEE Trans. THz Sci. Technol. 7(6), 747–754 (2017). [CrossRef]  

28. K. Ahi, “A method and system for enhancing the resolution of terahertz imaging,” Measurement 138, 614–619 (2019). [CrossRef]  

29. S. Li, B. Yang, and J. Hu, “Performance comparison of different multi-resolution transforms for image fusion,” Inform. Fusion 12(2), 74–84 (2011). [CrossRef]  
