
Fast lithographic source optimization method of certain contour sampling-Bayesian compressive sensing for high fidelity patterning

Open Access

Abstract

Fast source optimization (SO) is urgently needed for on-line holistic lithography at the 14–5 nm nodes. Our earlier fast compressive sensing (CS) SO methods sampled monitoring pixels on the layout patterns randomly, which sometimes causes SO to fail and yields poorer image fidelity than gradient-based SO with complete sampling (SD-SO). This paper proposes a novel certain contour sampling-Bayesian compressive sensing SO (CCS-BCS-SO) method to achieve fast SO and high fidelity patterns simultaneously. The CCS makes the optimized source unique and reduces the computational complexity significantly. To the best of our knowledge, the BCS theory is applied to resolution enhancement techniques (RETs) in lithography systems for the first time to ensure high fidelity patterns. The results demonstrate that CCS-BCS-SO simultaneously achieves fast SO like CS-SO and high fidelity patterns like SD-SO.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As the critical dimension (CD) of integrated circuits continues to shrink, computational lithography techniques [1,2] are increasingly important to compensate for the imaging distortion in lithography systems. Lithographic light source optimization (SO) is a significant computational lithography technique that enhances resolution and improves image fidelity by modulating the intensity distribution and incidence angles of the lithographic light source.

After the emergence of pixelated sources realized by freeform diffractive optical elements (DOEs) [3,4], several pixelated SO methods were proposed to improve the image fidelity [5–7], computation efficiency [8,9], and process robustness [10]. Besides, SO is also used as a part of inverse lithography techniques (ILTs) to increase the degrees of freedom of the optimization for better imaging performance, such as source mask optimization (SMO) [11–19], source mask polarization optimization (SMPO) [20,21], and source mask numerical aperture optimization (SMNO) [22]. Current gradient-based SO methods need to calculate the aerial images and the gradients of the cost functions on a dense mesh in each iteration, which is the most time-consuming step during the optimization. Recently, the emergence of holistic lithography [23] has created a need for fast SO to modulate the source pattern. It is therefore necessary to develop more advanced strategies that speed up SO while maintaining image fidelity.

In our previous works [24–26], compressive sensing (CS) theory [27,28] was first introduced to accelerate the SO procedure. By sampling monitoring pixels on the layout patterns with a random sampling method [24] or a blue noise sampling method [25], the aerial images and the gradients of the cost functions can be calculated on a sparse mesh, thus reducing the computational complexity. The SO procedure was then achieved by an underdetermined l1-norm reconstruction algorithm. To further improve the performance of these CS-SO methods, an lp-norm (0 < p < 1) reconstruction algorithm, which has a more accurate global minimum than the above l1-norm algorithm, was used in our previous work [26]. However, these methods may be impractical in industrial applications because the random sampling and blue noise sampling methods introduce uncertainty into the optimization results. This means that researchers may need to optimize the source pattern several times and pick the best result, increasing both time and storage costs. To avoid this randomness, we proposed a downsampling method to accelerate the optical proximity correction (OPC) [29] and SMO procedures [30]. However, simulation results showed that this downsampling method achieves high image fidelity only when there are enough sampling measurements, which may increase the computational complexity.

In addition, the results have also revealed that, for computational lithography applications, the above l1-norm and lp-norm algorithms cannot achieve the same image fidelity as the gradient-based method with complete sampling. In other words, these CS-SO methods accelerate the SO procedure by sacrificing pattern fidelity. Fortunately, a new framework named Bayesian compressive sensing (BCS) or sparse Bayesian learning (SBL) has been proposed recently [31,32], which has been proved to share the same global minimum as the l0-norm algorithm while having fewer local minima [33]. Benefiting from this superior performance, the BCS method has been widely applied in synthetic aperture radar (SAR) imaging [34], speech enhancement [35], thermal imaging [36], biological signal transmission [37], and other areas [38].

In this paper, we propose a novel certain contour sampling-Bayesian compressive sensing SO (CCS-BCS-SO) method to achieve fast SO with high fidelity patterns. To reduce the computational complexity and avoid randomness, a novel certain contour sampling method is proposed, which first locates the contour regions of the target layout and then selects the sampling pixels by systematically sampling the contour regions. To the best of our knowledge, the BCS theory is applied to resolution enhancement techniques (RETs) in lithography systems for the first time. The proposed CCS-BCS-SO framework formulates the SO procedure as a series of re-weighted l1-norm reconstruction problems, which can achieve high imaging performance with few sampling measurements. Subsequently, a proximal gradient descent (PGD) algorithm is used to realize CCS-BCS-SO. The simulations at the 14 nm technology node show that the CCS-BCS-SO method accelerates the SO procedure by a factor of 95 compared with the SD-SO method, and improves the image fidelity by 16% compared with the CS-SO method at similar runtime, which means that CCS-BCS-SO simultaneously achieves fast SO and high fidelity patterns. In addition, benefiting from the randomness-free CCS method, the proposed CCS-BCS-SO reduces the time and storage costs relative to current CS-SO methods.

The remainder of this paper is organized as follows. The fundamentals of Bayesian compressive sensing SO are formulated in Section 2. The certain contour sampling-Bayesian compressive sensing method is described in Section 3. Simulations and analysis are presented in Section 4. Conclusions are provided in Section 5.

2. Fundamentals of Bayesian compressive sensing

The BCS theory is concerned with the generative observation model

$$\overrightarrow {y} = {\mathbf {\Phi} }\overrightarrow x + \overrightarrow \varepsilon ,$$
where $\overrightarrow {y}$ is a known $M \times 1$ compressive measurement vector, $\overrightarrow {x}$ is an unknown $N \times 1$ signal, ${\mathbf {\Phi} }$ is a linear mapping between the signal and the measurement vector, and $\overrightarrow \varepsilon $ is Gaussian noise distributed as ${{\cal N}}(0,{\sigma ^2})$. In this situation, we seek an unknown signal $\overrightarrow {x}$ whose entries are mostly zero while still approximating the measurement $\overrightarrow {y}$ accurately. The BCS method imposes a zero-mean Gaussian distribution with individual variance hyperparameters as the prior of the unknown signal $\overrightarrow {x}$, which is given by [31]
$$p(\overrightarrow x ;\overrightarrow \gamma ) = {{\cal N}}(0,\overrightarrow \gamma ) = \prod\limits_{i = 1}^N {{{(2\pi {\gamma _i})}^{ - \frac{1}{2}}}\exp ( - \frac{{x_i^2}}{{2{\gamma _i}}})} ,$$
where $\overrightarrow x = {[{x_1},\ldots ,{x_N}]^T}$, and $\overrightarrow \gamma = {[{\gamma _1},\ldots ,{\gamma _N}]^T}$ is a vector governing the prior variance of each unknown coefficient. These hyperparameters can be estimated from the data by marginalizing over the $\overrightarrow {x}$ and then performing evidence maximization or type-II maximum likelihood [39,40]. This is equivalent to minimizing [31]
$$L(\overrightarrow \gamma ) ={-} \log \int {p(\overrightarrow y |\overrightarrow x )p(\overrightarrow x ;\overrightarrow \gamma )d\overrightarrow x = \log |{{\boldsymbol {\Sigma} }_y}|+ {{\overrightarrow y }^T}{\boldsymbol{\Sigma} }_y^{ - 1}\overrightarrow y } ,$$
where ${{\boldsymbol{\Sigma} }_y} = {\sigma ^2}{\textbf{E}} + {\mathbf {\Phi} \mathbf{\Gamma} }{{\mathbf {\Phi} }^T}$, ${\textbf{E}}$ is an identity matrix, and ${\mathbf{\Gamma}} = \textrm{diag}(\overrightarrow \gamma )$. Essentially, this BCS framework operates in the $\overrightarrow \gamma $-space, since the cost function is a function of $\overrightarrow \gamma $. Once the optimal ${\overrightarrow \gamma _\ast } = \arg \min L(\overrightarrow \gamma )$ is computed, the optimal estimate of the unknown signal can be obtained by
$${\overrightarrow x _\ast } = {{\mathbf {\Gamma} }_\ast }{{\mathbf {\Phi} }^T}{\boldsymbol {\Sigma} }_{{y_\ast }}^{ - 1}\overrightarrow y .$$
To operate directly in the $\overrightarrow x$-space, it has been proved [33] that the cost function in Eq. (3) can be minimized by iteratively solving the re-weighted convex l1-regularized cost function
$${\overrightarrow x _{opt}} = \arg \min ||\overrightarrow y - {\mathbf {\Phi} }\overrightarrow x ||_2^2 + 2{\sigma ^2}\sum\limits_{i = 1}^N {z_{i}^{1/2}|{x_i}|} ,$$
$${\gamma _i} = z_i^{{-1/2}}|{x_{opt,i}}|,$$
$$\overrightarrow z = diag({{\mathbf {\Phi} }^T}{\boldsymbol {\Sigma} }_y^{ - 1}{\mathbf {\Phi} }),$$
where $\overrightarrow z = {[{z_1},\ldots ,{z_N}]^T}$. The BCS method in $\overrightarrow x$-space iterates Eqs. (5)–(7) until convergence to some ${\overrightarrow x _\ast }$. In the following, we use the BCS method in $\overrightarrow x$-space to find the optimal source patterns of optical lithography systems.
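To make the iteration concrete, the following numpy sketch implements Eqs. (5)–(7) under simplifying assumptions: a dense ${\mathbf {\Phi} }$, a fixed noise variance ${\sigma ^2}$, and a plain ISTA solver for the inner weighted l1-norm problem. The function names and iteration counts are illustrative; this is not the implementation used in this paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_l1_ista(Phi, y, w, lam, n_iter=200):
    """Solve min_x ||y - Phi x||_2^2 + lam * sum_i w_i |x_i| with ISTA,
    via the change of variables u_i = w_i * x_i (so the penalty is a plain l1-norm)."""
    A = Phi / w                                      # scale column i by 1/w_i
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1/L for the quadratic term
    u = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ u - y)
        u = soft_threshold(u - step * grad, step * lam)
    return u / w

def bcs_x_space(Phi, y, sigma2, n_outer=20):
    """Re-weighted l1 iteration of Eqs. (5)-(7) in x-space."""
    M, N = Phi.shape
    x, gamma = np.zeros(N), np.ones(N)
    for _ in range(n_outer):
        Sigma_y = sigma2 * np.eye(M) + (Phi * gamma) @ Phi.T              # Sigma_y
        z = np.einsum('ij,ji->i', Phi.T, np.linalg.solve(Sigma_y, Phi))   # Eq. (7)
        w = np.sqrt(np.maximum(z, 1e-12))                                 # z_i^(1/2)
        x = weighted_l1_ista(Phi, y, w, lam=2.0 * sigma2)                 # Eq. (5)
        gamma = np.abs(x) / w                                             # Eq. (6)
    return x
```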

3. Method of certain contour sampling-Bayesian compressive sensing SO

3.1 The certain contour sampling method

In a computational lithography system, the total pixel number of the mask pattern or of the image on the wafer is usually much larger than that of the source pattern. To reduce the computational complexity and memory requirement, this paper proposes a novel certain contour sampling method. We first locate the contour regions of the target layout, which are defined as the areas between the boundaries of the target pattern and the outer contour one pixel away from the boundaries. It should be noted that the contour regions defined in this paper are only used to describe the proposed method conveniently, and researchers can redefine practical contour regions according to different applications. Then we sample one point out of every $K_c$ points in the contour regions, and these selected points compose the sampling regions, where $K_c$ is a user-defined parameter.
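As an illustration, the sketch below realizes one possible version of this sampling rule, assuming the contour region is taken as the one-pixel band straddling the target boundary and that pixels are column-scanned; the function name, the use of scipy.ndimage, and the default $K_c$ are assumptions of this sketch rather than the exact implementation of this paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def contour_sampling(target, Kc=4):
    """Certain contour sampling: select every Kc-th pixel of a one-pixel band
    around the target boundary. Returns flat (column-scanned) pixel indices."""
    Z = target.astype(bool)
    band = binary_dilation(Z) ^ binary_erosion(Z)         # one-pixel boundary band
    contour_idx = np.flatnonzero(band.ravel(order='F'))   # column-scanned indices
    return contour_idx[::Kc]                              # systematic: 1 out of Kc

# Usage sketch: restrict the imaging operator and target to the sampled pixels
# idx = contour_sampling(Z, Kc=4)
# Icc_s, Zs = Icc[idx, :], Z.ravel(order='F')[idx]
```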

For example, we design the layout pattern shown in Fig. 1(a) as the target pattern. The sampling regions based on the blue noise sampling method, the downsampling method, and the certain contour sampling method are presented in Figs. 1(b)–1(d), respectively, where the white points are the selected pixels and the orange regions are the pattern layout. Figure 1(c) shows that the downsampling method selects too many pixels outside the contour regions, which contribute less to the SO procedure. Figure 1(b) shows that the blue noise sampling method focuses only on the contour regions. However, the random sampling operation results in uncertainty of the optimization results, which means different target measurements for the same CS reconstruction, and sometimes even the failure of SO. Figure 1(d) shows that the proposed certain contour sampling method selects only a small number of sampling pixels and avoids randomness.

Fig. 1. Sampling regions based on different sampling methods.

3.2 The CCS-BCS-SO framework based on PGD algorithm

Our previous work [41] showed that, according to Abbe's method, the aerial image of the lithography system can be calculated as

$${\textbf {I}} = \frac{1}{{{J_{\textrm{sum}}}}}\sum\limits_{{x_S}} {\sum\limits_{{y_S}} {[{\textbf {J}}({x_S},{y_S}) \times \sum\limits_{p = x,y,z} {|{\textbf {H}}_p^{{x_S},{y_S}} \otimes ({{\textbf {B}}^{{x_S},{y_S}}} \odot {\textbf {B}}){|^2}} ]} } ,$$
where ${\textbf {B}} \in {{\mathbb R}^{N \times N}}$ denotes the mask pattern, and ${{\mathbb R}^{N \times N}}$ is the set of real matrices of size $N \times N$. The ${{\textbf {B}}^{{x_S},{y_S}}} \in {{\mathbb R}^{N \times N}}$ represents the phase shift resulting from the oblique incidence of the light rays. The ${\textbf {H}}_p^{{x_S},{y_S}}$, where p = x, y, or z, are the equivalent point spread functions (PSFs) of the lithography system with respect to the electric field components in the spatial coordinates $(x,y,z)$, respectively. The convolution of the PSF and the mask pattern, calculated as ${\textbf {H}}_p^{{x_S},{y_S}} \otimes ({{\textbf {B}}^{{x_S},{y_S}}} \odot {\textbf {B}})$, embodies the diffraction effect of the mask pattern in the lithography system. The matrix ${\textbf {J}} \in {{\mathbb R}^{{N_S} \times {N_S}}}$ represents the intensity distribution of the source pattern, where ${\textbf {J}}({x_S},{y_S})$ is the intensity of the source point at $({x_S},{y_S})$. The ${J_{\textrm{sum}}} = \sum\nolimits_{{x_S}} {\sum\nolimits_{{y_S}} {{\textbf {J}}({x_S},{y_S})} }$ is an illumination intensity normalization factor. The notations ${\otimes}$ and ${\odot}$ represent matrix convolution and the matrix Hadamard product, respectively. Equation (8) can be transformed into
$$\overrightarrow I = {{\textbf {I}}_{cc}}\overrightarrow J ,$$
where $\overrightarrow I \in {{\mathbb R}^{{N^2} \times 1}}$ and $\overrightarrow J \in {{\mathbb R}^{N_s^2 \times 1}}$ are the column-scanned aerial image and the column-scanned normalized source pattern, respectively. The ${{\textbf {I}}_{cc}}$ matrix, with dimension ${N^2} \times N_s^2$, represents the imaging process in Eq. (8): its column associated with the source point $({x_S},{y_S})$ is the column-scanned $\sum\limits_{p = x,y,z} {|{\textbf {H}}_p^{{x_S},{y_S}} \otimes ({{\textbf {B}}^{{x_S},{y_S}}} \odot {\textbf {B}}){|^2}}$.
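As an illustration of how Eq. (9) can be assembled numerically, the sketch below builds ${{\textbf {I}}_{cc}}$ column by column. The point spread functions and the oblique-incidence phase-shift matrices are assumed to be pre-computed from the vector imaging model of [41]; their data layout (dictionaries keyed by source-point coordinates) and the function name are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def build_Icc(B, H, B_shift):
    """Assemble the I_cc matrix of Eq. (9).
    B       : N x N mask pattern.
    H       : dict mapping (xs, ys) to {'x': Hx, 'y': Hy, 'z': Hz} PSF arrays.
    B_shift : dict mapping (xs, ys) to the phase-shift matrix B^{xs,ys}."""
    N = B.shape[0]
    src_points = sorted(H.keys())                  # column-scanned source points
    Icc = np.zeros((N * N, len(src_points)))
    for k, s in enumerate(src_points):
        Beff = B_shift[s] * B                      # Hadamard product B^{xs,ys} . B
        a = sum(np.abs(fftconvolve(Beff, H[s][p], mode='same')) ** 2
                for p in ('x', 'y', 'z'))          # sum_p |H_p conv (B^{xs,ys} . B)|^2
        Icc[:, k] = a.ravel(order='F')             # column-scanned image contribution
    return Icc

# Aerial image of Eq. (9): I_vec = Icc @ J_vec, with J_vec normalized by its sum.
```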

Using the above certain contour sampling method and following Eqs. (5)–(7), the SO procedure can be formulated as

$$\overrightarrow {{J_{opt}}} = \arg \mathop {\min }\limits_{\overrightarrow J } ||\alpha \overrightarrow {{Z_s}} - \overrightarrow {{I_s}} ||_2^2 + \beta \sum\limits_{i = 1}^{N_s^2} {{w_i}|{J_i}|} \mbox{ = }\arg \mathop {\min }\limits_{\overrightarrow J } ||\alpha \overrightarrow {{Z_s}} - {\textbf {I}}_{cc}^s\overrightarrow J ||_2^2 + \beta \sum\limits_{i = 1}^{N_s^2} {{w_i}|{J_i}|} ,$$
$${\gamma _i} = {w_i}^{ - 1}{J_{opt,i}},$$
$$\overrightarrow w = sqrt({diag({{{({\textbf {I}}_{cc}^s)}^T}{{({\beta {\textbf {E}} + {\textbf {I}}_{cc}^sdiag(\overrightarrow \gamma ){{({\textbf {I}}_{cc}^s)}^T}} )}^{ - 1}}{\textbf {I}}_{cc}^s} )} ),$$
where $\overrightarrow {{Z_s}} \in {{\mathbb R}^{M \times 1}}$ represents the target pattern on the selected sampling pixels, $\overrightarrow {{I_s}} \in {{\mathbb R}^{M \times 1}}$ represents the corresponding actual imaging intensities, and M is the number of selected pixels. The factor $\alpha$ is a constant used to modify the amplitude of the target pattern to improve convergence [42], and $\beta$ is the regularization coefficient. The ${\textbf {I}}_{cc}^s \in {{\mathbb R}^{M \times N_s^2}}$ matrix is composed of the M rows of the ${{\textbf {I}}_{cc}} \in {{\mathbb R}^{{N^2} \times N_s^2}}$ matrix corresponding to the selected pixels. The ${J_i}$, ${w_i}$, and ${\gamma _i}$ are the i-th elements of $\overrightarrow J$, the weight vector $\overrightarrow w$, and $\overrightarrow \gamma $, respectively. The matrix ${\textbf {E}}$ in Eq. (12) is an identity matrix. The proposed CCS-BCS-SO method achieves SO by iterating Eqs. (10)–(12) until convergence to an optimal $\overrightarrow {{J_\ast }}$.

Equation (10) is a re-weighted l1 cost function. To solve it conveniently, we transform Eq. (10) into the least absolute shrinkage and selection operator (LASSO) form [43] as

$$\overrightarrow {{u_\ast }} = \arg \mathop {\min }\limits_{\overrightarrow u } ||\alpha \overrightarrow {{Z_s}} - {\mathbf {\Phi} }\overrightarrow u ||_2^2 + \beta ||\overrightarrow u |{|_1},$$
where we make the parameter transformation of $\overrightarrow u = diag(\overrightarrow w ) \cdot \overrightarrow J$, $\overrightarrow J = {({diag(\overrightarrow w )} )^{ - 1}} \cdot \overrightarrow u$, and ${\mathbf {\Phi} } = {\textbf {I}}_{cc}^s \cdot {({diag(\overrightarrow w )} )^{ - 1}}$. Then we can define this cost function as
$$f(\overrightarrow u ) = g(\overrightarrow u ) + h(\overrightarrow u ) = ||\alpha \overrightarrow {{Z_s}} - {\mathbf {\Phi} }\overrightarrow u ||_2^2 + \beta ||\overrightarrow u |{|_1},$$
where $g(\overrightarrow u ) = ||\alpha \overrightarrow {{Z_s}} - {\mathbf {\Phi} }\overrightarrow u ||_2^2$ controls the image fidelity and $h(\overrightarrow u ) = \beta ||\overrightarrow u |{|_1}$ governs the sparsity of the source pattern. Equation (13) can be solved iteratively by the PGD algorithm [44,45] as
$$\overrightarrow v = {\overrightarrow u _k} - step \times \nabla g({\overrightarrow u _k}),$$
$${u_{k + 1,i}} = shrink({v_i},\beta /2) = \left\{ {\begin{array}{c} {{v_i} + \beta /2, {v_i} \le - \beta /2}\\ {0, |{v_i}|< \beta /2}\\ {{v_i} - \beta /2, {v_i} \ge \beta /2} \end{array}} \right.,$$
where step is the step length, $\nabla g({\overrightarrow u _k})$ is the gradient of cost function $g({\overrightarrow u _k})$ with respect to ${\overrightarrow u _k}$, $shrink({\cdot} , \cdot )$ is the soft threshold operator, and ${u_{k + 1,i}}$ and ${v_i}$ are the i-th elements of ${\overrightarrow u _{k + 1}}$ and $\overrightarrow v$, respectively.
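A compact sketch of this inner solver follows. It applies the variable change $\overrightarrow u = diag(\overrightarrow w ) \cdot \overrightarrow J$ of Eq. (13) and the PGD updates of Eqs. (15)–(16), with the soft threshold fixed at $\beta /2$ as written in Eq. (16); the function name, step-size handling, and iteration count are illustrative assumptions.

```python
import numpy as np

def shrink(v, t):
    """Soft-threshold operator of Eq. (16)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pgd_source_update(Icc_s, Zs, w, alpha, beta, step, n_iter=100):
    """Solve the re-weighted l1 problem of Eq. (10) for one set of weights w,
    via the LASSO form of Eq. (13) and the PGD updates of Eqs. (15)-(16)."""
    Phi = Icc_s / w                                    # Phi = Icc_s . diag(w)^-1
    u = np.zeros(Icc_s.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Phi.T @ (Phi @ u - alpha * Zs)    # gradient of g(u), Eq. (15)
        u = shrink(u - step * grad, beta / 2.0)        # Eq. (16)
    return u / w                                       # back to the source vector J
```

In the full CCS-BCS-SO loop, this solver is alternated with the $\overrightarrow \gamma$ and $\overrightarrow w$ updates of Eqs. (11)–(12) until the source vector converges.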

4. Simulation and analysis

This section presents a set of simulations to verify the superior performance of the proposed CCS-BCS-SO method at the 14 nm technology node. All of the computations are implemented in Matlab and carried out on a computer with an Intel(R) Core(TM) i5-7500 CPU, 3.4 GHz, and 8 GB of RAM.

In this paper, we use pattern error (PAE) to evaluate the image fidelity of the optimized source pattern, which is defined as

$$\mbox{PAE} = ||{\textbf {Z}} - resist({\textbf {I}})||_2^2,$$
where ${\textbf {Z}}$ is the target pattern, ${\textbf {I}}$ is the aerial image on wafer, and $resist({\cdot} )$ represents the resist effect. In this paper, a constant threshold resist (CTR) model is used to approximate the resist effect as
$$resist({\textbf {I}}) = \Gamma \{ {\textbf {I}} - {t_r}\} ,$$
where $\Gamma \{{\cdot} \} = 1$ when the argument is larger than 0, $\Gamma \{{\cdot} \} = 0$ otherwise, and ${t_r}$ is the process threshold. It should be noted that the CTR model is commonly used when developing computational lithography techniques, and industrial researchers can apply the proposed CCS-BCS-SO method by replacing this CTR model with more accurate resist models.
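For binary target patterns, Eqs. (17) and (18) reduce to counting the pixels where the thresholded aerial image disagrees with the target. A minimal sketch, with the threshold value taken from the simulations in Section 4.1:

```python
import numpy as np

def resist_ctr(I, tr=0.1):
    """Constant threshold resist (CTR) model of Eq. (18)."""
    return (I > tr).astype(float)

def pattern_error(Z, I, tr=0.1):
    """Pattern error (PAE) of Eq. (17); for binary patterns this equals the
    number of mismatched pixels between the target Z and the print image."""
    return np.sum((Z - resist_ctr(I, tr)) ** 2)
```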

4.1 Simulations on different SO methods

In this paper, the simulations are based on a 193 nm ArF immersion lithography system. An annular illumination source with a TE-polarized electric field is used as the initial source pattern, with inner and outer partial coherence factors of ${\sigma _{in}} = 0.82$ and ${\sigma _{out}} = 0.97$, respectively. The pixelated source pattern is represented by an ${N_S} \times {N_S}$ matrix, where ${N_S} = 41$. The numerical aperture (NA) on the wafer side is 1.35, the refractive index of the immersion medium is 1.44, and the demagnification factor of the projection optics is $R = 4$.
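For illustration, the initial annular source on the $41 \times 41$ grid can be generated as below, where each pixel is addressed by normalized pupil coordinates and switched on inside the annulus ${\sigma _{in}} \le \sigma \le {\sigma _{out}}$; the uniform unit intensity and the grid layout are assumptions of this sketch.

```python
import numpy as np

def annular_source(Ns=41, sigma_in=0.82, sigma_out=0.97):
    """Pixelated annular illumination on an Ns x Ns grid of normalized pupil
    coordinates in [-1, 1]; pixels inside the annulus are set to 1."""
    s = np.linspace(-1.0, 1.0, Ns)
    sx, sy = np.meshgrid(s, s)
    r = np.hypot(sx, sy)                       # radial pupil coordinate
    return ((r >= sigma_in) & (r <= sigma_out)).astype(float)

J0 = annular_source()   # initial source pattern, normalized later by J0.sum()
```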

The target layouts used in the following simulations are shown in Fig. 2. The target layouts in Figs. 2(a) and 2(b) are line-space patterns with a critical dimension (CD) of 14 nm, both with a duty ratio of 1:4. The target layouts in Figs. 2(c) and 2(d) are block layout patterns at the 14 nm technology node. All of the pixelated masks are represented by $N \times N$ matrices, where $N = 201$. The pixel size on these masks is $12.4\,\textrm{nm} \times 12.4\,\textrm{nm}$.

Fig. 2. Target layouts at 14 nm technology node.

To verify the superior performance of the proposed CCS-BCS-SO method for high image fidelity SO, this section compares the simulation results of the gradient-based SO framework [5] with complete sampling and the steepest descent algorithm (SD-SO method), the l1-norm based CS-SO framework [25] with the downsampling method and PGD algorithm (D-CS-SO method), the l1-norm based CS-SO framework [25] with the blue noise sampling method and PGD algorithm (BN-CS-SO method), and the proposed CCS-BCS-SO method with the PGD algorithm (CCS-BCS-SO method). We set $\alpha = 0.2$ in Eq. (10) and ${t_r} = 0.1$ in Eq. (18) in all three frameworks, and $\beta = 0.02$ in the D-CS-SO, BN-CS-SO, and CCS-BCS-SO methods in the following simulations.

Figure 3 illustrates the simulation results of the different SO methods based on the target layouts in Fig. 2. From top to bottom, the simulation results of the initial system, the SD-SO method, the D-CS-SO method, the BN-CS-SO method, and the CCS-BCS-SO method are shown, respectively. From the first column to the fifth column, the optimized source pattern and the print images of Targets 1–4 are shown, respectively. To make the results more intuitive, a histogram of the pattern error is shown in Fig. 4. The blue, green, orange, and yellow bars represent the results of the SD-SO, D-CS-SO, BN-CS-SO, and CCS-BCS-SO methods, respectively. From left to right, the five groups show the pattern error for Targets 1–4 and the average pattern error, respectively. The total runtimes of the above methods are shown in Fig. 5. It should be noted that the results of the BN-CS-SO method in Figs. 4 and 5 are averages over 100 simulations due to the randomness of the BN-CS-SO method. Besides, the BN-CS-SO result shown in Fig. 3 is a simulation result close to the average.

Fig. 3. Simulations of different SO methods at 14 nm technology node.

Fig. 4. Pattern error results of different SO methods at 14 nm technology node.

Fig. 5. The runtime of different SO methods at 14 nm technology node.

As Figs. 3–5 show, the D-CS-SO, BN-CS-SO, and CCS-BCS-SO methods accelerate the SO procedure relative to the SD-SO method by factors of 260, 273, and 95, respectively. Besides, the CCS-BCS-SO method reduces the average pattern error by 16% compared with the BN-CS-SO method and by 8.5% compared with the D-CS-SO method, while it only increases the average pattern error by 0.53% relative to the SD-SO method, which can be neglected. The results demonstrate that CCS-BCS-SO simultaneously achieves fast SO like BN-CS-SO and high fidelity patterns like SD-SO. In addition, benefiting from the randomness-free CCS method, the proposed CCS-BCS-SO reduces the time and storage costs compared with the BN-CS-SO method. It should be noted that the image fidelity of the SO results is limited by the degrees of freedom of SO. Consequently, the 16% improvement in PAE demonstrates the superior performance of the proposed CCS-BCS-SO method. In the future, we will extend the proposed method, combined with our previous nonlinear CS methods [29,30], to OPC and SMO applications.

In the above simulations, some parameters, such as the amplitude modification factor $\alpha$, the regularization coefficient $\beta$, and the process threshold ${t_r}$, are selected empirically to improve the convergence and image fidelity. In this paper, to apply the BCS theory, we omit the nonlinear resist model in Eq. (10) to keep a linear relationship between the print image and the source pattern; this simplification is commonly used in current CS-SO methods [24–26] and can improve the contrast of the optimized imaging results [24]. As a result, $\alpha$ is used to modify the amplitude of the target pattern to improve convergence. In addition, $\beta$ modifies the weight of the sparsity of the source pattern in the cost function: the larger $\beta$ is, the sparser the optimized source pattern will be. The value of ${t_r}$ does not affect the optimization in this paper. However, as shown in Eqs. (17) and (18), ${t_r}$ is a process parameter representing the resist effect, which directly affects the image fidelity. Some methods aim to reduce the sensitivity to ${t_r}$ to realize robust computational lithography, but this is beyond the scope of this paper. We present a comparison of the impact of these parameters on the optimization results. Table 1 lists the image fidelity obtained by the proposed CCS-BCS-SO method with different $\alpha$ and $\beta$. As shown in Table 1, the image fidelity deteriorates whenever $\alpha$ is too large or too small, because the value of $\alpha$ should match the contrast of the imaging results. As mentioned above, the larger $\beta$ is, the sparser the optimized source pattern will be, while the smaller $\beta$ is, the better the image fidelity will be. As a result, a balance between the sparsity of the source pattern and the image fidelity can be sought by adjusting the value of $\beta$.

Table 1. The image fidelity obtained by the proposed CCS-BCS-SO method with different $\alpha$ and $\beta$.

4.2 Simulations on different SO frameworks

In this section, to verify the superior performance of the proposed BCS-SO framework, we compare the simulation results of the CS-SO framework and the BCS-SO framework. All of the following simulations are based on the certain contour sampling method and the PGD algorithm, and the only difference among these methods is the cost function. The other parameters are set the same as those used in the above simulations. It has been shown that, in the sampling situation, the CS-SO method outperforms the gradient-based SO method in both runtime and image fidelity [24]. As a result, we omit the comparison with the SD-SO method in this section.

Figure 6 illustrates the simulation results of the different SO frameworks based on the CCS method. From top to bottom, the simulation results of the SD-SO method, the CS-SO method, and the BCS-SO method are shown, respectively. From the first column to the fifth column, the optimized source pattern and the print images of Targets 1–4 are shown, respectively. To make the results more intuitive, a histogram of the pattern error is shown in Fig. 7. The blue, orange, and yellow bars represent the results of the SD-SO, CS-SO, and BCS-SO methods, respectively. From left to right, the five groups show the pattern error for Targets 1–4 and the average pattern error, respectively. The total runtimes of the above three methods are shown in Fig. 8.

Fig. 6. Simulations of different SO methods based on CCS method at 14 nm technology node.

Fig. 7. Pattern error results of different SO methods based on CCS method.

Fig. 8. The runtime of different SO methods based on CCS method at 14 nm technology node.

As Figs. 6–8 show, the CS-SO and BCS-SO methods, both using the certain contour sampling method, accelerate the SO procedure relative to the SD-SO method by factors of 273 and 95, respectively. However, the CS-SO method increases the average pattern error relative to the SD-SO method by 14.75%, while the BCS-SO method increases it by only 0.53%, which can be neglected. In other words, the CS-SO method accelerates the SO procedure by sacrificing image fidelity, whereas the BCS-SO method accelerates the SO process while maintaining high image fidelity, benefiting from the re-weighted l1-norm reconstruction framework.

4.3 Simulations on different sampling methods

In this section, to verify the superior performance of the proposed certain contour sampling method, we compare the simulation results of the downsampling method, the blue noise sampling method, and the certain contour sampling method. All of the following simulations are based on the BCS-SO framework and the PGD algorithm, and the only difference among these three methods is the sampling method. The other parameters are set the same as those used in the above simulations.

To compare the simulation results as fairly as possible, we try to select the same number of sampling pixels for all three methods. However, we cannot achieve exactly the same sampling number because of the systematic sampling operation in the downsampling method and the certain contour sampling method. Besides, the simulation results of the downsampling method deteriorate sharply if we further reduce the number of sampling pixels. This is why the sampling numbers are not exactly the same for these three methods, as shown in Table 2.

Table 2. Number of sampling pixels in different sampling methods.

Figure 9 illustrates the simulation results of the different sampling methods based on the target layouts in Fig. 2. From top to bottom, the simulation results of the downsampling method, the blue noise sampling method, and the certain contour sampling method are shown, respectively. It should be noted that both 'good' and 'bad' results of the blue noise sampling method are shown, in Figs. 9(f)–9(j) and Figs. 9(k)–9(o), respectively, because the results of the blue noise sampling method may be good or bad owing to its randomness. The 'good' and 'bad' results are the simulation results with the minimum and maximum average PAE among 20 simulations. From the first column to the fifth column, the optimized source pattern and the print images of Targets 1–4 are shown, respectively. To make the results more intuitive, a histogram of the pattern error is shown in Fig. 10. The blue, orange, and yellow bars represent the results of the downsampling method, the blue noise sampling method, and the certain contour sampling method, respectively. From left to right, the five groups show the pattern error for Targets 1–4 and the average pattern error. It should be noted that the results of the blue noise sampling method in Figs. 10 and 11 are averages over 100 simulations. The total runtimes of the above three methods are shown in Fig. 11.

Fig. 9. Simulations of different sampling methods at 14 nm technology node.

Fig. 10. Pattern error results of different sampling methods at 14 nm technology node.

Fig. 11. The runtime of different sampling methods at 14 nm technology node.

As Figs. 9–11 show, benefiting from fewer sampling pixels, the blue noise sampling method and the certain contour sampling method, which cost the same time, accelerate the SO procedure relative to the downsampling method by a factor of 2.5. Besides, the certain contour sampling method reduces the pattern error relative to the downsampling method and the blue noise sampling method by 2.7% and 9.0%, respectively, because the certain contour sampling method concentrates the sampling regions on the contour regions of the target layout, which contribute significantly to the SO results.

Due to the nonlinear relationship between mask pattern and aerial image, the proposed method cannot be applied to OPC and SMO directly. In the future, we will study the application of the proposed method on OPC and SMO to further improve the lithography imaging performance.

5. Conclusion

This paper proposes a CCS-BCS-SO framework to achieve fast SO and high fidelity imaging results. By sampling image pixels only in the contour regions, the proposed certain contour sampling method makes the optimized source unique and reduces the computational complexity significantly. To the best of our knowledge, the BCS theory is applied to resolution enhancement techniques (RETs) in lithography systems for the first time to ensure high fidelity patterns. The proposed CCS-BCS-SO framework formulates the SO procedure as a series of re-weighted l1 problems, which can achieve high image fidelity patterns with few sampling measurements. The results demonstrate that CCS-BCS-SO simultaneously achieves fast SO like CS-SO and high fidelity patterns like SD-SO.

Funding

National Natural Science Foundation of China (61675026, 11627808); National Science and Technology Major Project (2017ZX02101006-001).

Acknowledgments

We thank KLA-Tencor for providing academic use of PROLITH.

References

1. A. K. Wong, Resolution Enhancement Techniques in Optical Lithography (SPIE, 2001).

2. A. K. Wong, Optical Imaging in Projection Lithography (SPIE, 2005).

3. Y. V. Miklyaev, W. Imgrunt, V. S. Pavelyev, D. G. Kachalov, T. Bizjak, L. Aschke, and V. N. Lissotschenko, “Novel continuously shaped diffractive optical elements enable high-efficiency beam shaping,” Proc. SPIE 7640, 764024 (2010). [CrossRef]  

4. J. T. Carriere, J. Stack, A. D. Kathman, and M. D. Himel, “Advances in DOE modeling and optical performance for SMO applications in immersion lithography at the 32 nm node and beyond,” Proc. SPIE 7640, 764025 (2010). [CrossRef]  

5. Y. Granik, “Source optimization for image fidelity and throughput,” J. Microlith. Microfab. 3(4), 509–522 (2004).

6. K. Tian, A. Krasnoperova, D. Melville, A. E. Rosenbluth, D. Gil, J. Tirapu-Azpiroz, K. Lai, S. Bagheri, C. C. Chen, and B. Morgenfeld, “Benefits and trade-offs of global source optimization in optical lithography,” Proc. SPIE 7274, 72740C (2009). [CrossRef]  

7. K. Iwase, P. D. Bisschop, B. Laenens, Z. Li, K. Gronlund, P. V. Adrichem, and S. Hsu, “A new source optimization approach for 2x node logic,” Proc. SPIE 8166, 81662A (2011). [CrossRef]  

8. J. C. Yu, P. Yu, and H. Y. Chao, “Fast source optimization involving quadratic line-contour objectives for the resist image,” Opt. Express 20(7), 8161–8174 (2012). [CrossRef]  

9. L. Wang, S. Li, X. Wang, G. Yan, and C. Yang, “Source optimization using particle swarm optimization algorithm in optical lithography,” Acta Opt. Sin. 35(4), 0422002 (2015). [CrossRef]  

10. H. Jiang and T. Xing, “A method of source optimization to maximize process window,” Laser Optoelectron. Prog. 52(10), 101101 (2015). [CrossRef]  

11. A. E. Rosenbluth, S. Bukofsky, C. Fonseca, M. Hibbs, K. Lai, A. Molless, R. N. Singh, and A. K. Wong, “Optimum mask and source patterns to print a given shape,” J. Microlith. Microfab. 1(1), 486 (2002). [CrossRef]  

12. A. Erdmann, T. Fühner, T. Schnattinger, and B. Tollkühn, “Towards automatic mask and source optimization for optical lithography,” Proc. SPIE 5377, 646–657 (2004). [CrossRef]  

13. X. Ma and G. R. Arce, “Pixel-based simultaneous source and mask optimization for resolution enhancement in optical lithography,” Opt. Express 17(7), 5783–5793 (2009). [CrossRef]  

14. J. Yu and P. Yu, “Gradient-based fast source mask optimization (SMO),” Proc. SPIE 7973, 797320 (2011). [CrossRef]  

15. J. Li, Y. Shen, and E. Y. Lam, “Hotspot-aware fast source and mask optimization,” Opt. Express 20(19), 21792–21804 (2012). [CrossRef]  

16. X. Ma, C. Han, Y. Li, L. Dong, and G. R. Arce, “Pixelated source and mask optimization for immersion lithography,” J. Opt. Soc. Am. A 30(1), 112–123 (2013). [CrossRef]  

17. J. Li, S. Liu, and E. Y. Lam, “Efficient source and mask optimization with augmented Lagrangian methods in optical lithography,” Opt. Express 21(7), 8076–8090 (2013). [CrossRef]  

18. C. Han, Y. Li, X. Ma, and L. Liu, “Robust hybrid source and mask optimization to lithography source blur and flare,” Appl. Opt. 54(17), 5291–5302 (2015). [CrossRef]  

19. T. Li and Y. Li, “Lithographic source and mask optimization with low aberration sensitivity,” IEEE Trans. Nanotechnol. 16(6), 1099–1105 (2017). [CrossRef]  

20. S. G. Hansen, “Source mask polarization optimization,” J. Micro/Nanolithogr., MEMS, MOEMS 10(3), 033003 (2011). [CrossRef]  

21. X. Ma, L. Dong, C. Han, J. Gao, Y. Li, and G. R. Arce, “Gradient-based joint source polarization mask optimization for optical lithography,” J. Micro/Nanolithogr., MEMS, MOEMS 14(2), 023504 (2015). [CrossRef]  

22. X. Guo, Y. Li, L. Dong, L. Liu, X. Ma, and C. Han, “Parametric source-mask-numerical aperture co-optimization for immersion lithography,” J. Micro/Nanolithogr., MEMS, MOEMS 13(4), 043013 (2014). [CrossRef]  

23. M. V. D. Brink, “Holistic lithography and metrology’s importance in driving patterning fidelity,” Proc. SPIE 9778, 977802 (2016). [CrossRef]  

24. Z. Song, X. Ma, J. Gao, J. Wang, Y. Li, and G. R. Arce, “Inverse lithography source optimization via compressive sensing,” Opt. Express 22(12), 14180–14198 (2014). [CrossRef]  

25. X. Ma, D. Shi, Z. Wang, Y. Li, and G. R. Arce, “Lithographic source optimization based on adaptive projection compressive sensing,” Opt. Express 25(6), 7131–7149 (2017). [CrossRef]  

26. X. Ma, Z. Wang, H. Lin, Y. Li, G. R. Arce, and L. Zhang, “Optimization of lithography source illumination arrays using diffraction subspaces,” Opt. Express 26(4), 3738–3755 (2018). [CrossRef]  

27. E. J. Candés, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]  

28. D. Donoho, “Compressive sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

29. X. Ma, Z. Wang, Y. Li, G. R. Arce, L. Dong, and J. G. Frias, “Fast optical proximity correction method based on nonlinear compressive sensing,” Opt. Express 26(11), 14479–14498 (2018). [CrossRef]  

30. Y. Sun, N. Sheng, T. Li, Y. Li, E. Li, and P. Wei, “Fast nonlinear compressive sensing lithographic source and mask optimization method using Newton-IHTs algorithm,” Opt. Express 27(3), 2754–2770 (2019). [CrossRef]  

31. D. P. Wipf and B. D. Rao, “Sparse Bayesian learning for basis selection,” IEEE T. Signal Proces. 52(8), 2153–2164 (2004). [CrossRef]  

32. S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE T. Signal Proces. 56(6), 2346–2356 (2008). [CrossRef]  

33. D. Wipf and S. Nagarajan, “A new view of automatic relevance determination,” Advancies in Neural Information Processing Systems 20 (2008).

34. J. Wu, F. Liu, L. Jiao, and X. Wang, “Compressive sensing SAR image reconstruction based on Bayesian framework and evolutionary computation,” IEEE T. Image Proces. 20(7), 1904–1911 (2011). [CrossRef]  

35. H. You, Z. Ma, W. Li, and J. Zhu, “A speech enhancement method based on multi-task Bayesian compressive sensing,” IEICE Tran. Inf. & Syst. E100.D(3), 556–563 (2017). [CrossRef]  

36. X. Gu, P. Zhou, and X. Gu, “Bayesian compressive sensing for thermal imagery using Gaussian-Jeffreys prior,” Infrared Phys. Techn. 83, 51–61 (2017). [CrossRef]  

37. Z. Zhang, T. Jung, S. Makeig, and B. D. Rao, “Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning,” IEEE Trans. Biomed. Eng. 60(2), 300–309 (2013). [CrossRef]  

38. K. Huang, S. Tan, Y. Luo, X. Guo, and G. Wang, “Enhanced radio tomographic imaging with heterogeneous Bayesian compressive sensing,” Pervasive and Mobile Computing 40, 450–463 (2017). [CrossRef]  

39. D. J. C. MacKay, “Bayesian interpolation,” Neural Comp. 4(3), 415–447 (1992). [CrossRef]  

40. M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” J. Mach. Learn. Res. 1, 211–244 (2001). [CrossRef]  

41. X. Ma, Y. Li, and L. Dong, “Mask optimization approaches in optical lithography based on a vector imaging model,” J. Opt. Soc. Am. A 29(7), 1300–1312 (2012). [CrossRef]  

42. X. Ma, Y. Li, X. Guo, and L. Dong, “Vectorial mask optimization method for robust optical lithography,” J. Micro/Nanolithogr., MEMS, MOEMS 11(4), 043008 (2012). [CrossRef]  

43. R. Tibshirani, “Regression shrinkage and selection via the Lasso,” J. Roy. Stat. Soc. B. Met. 58(1), 267–288 (1996).

44. P. Combettes and V. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Model. Simul. 4(4), 1168–1200 (2005). [CrossRef]  

45. S. Mosci, L. Rosasco, S. Matteo, A. Verri, and S. Villa, “Solving structured sparsity regularization with proximal methods,” Lect. Notes. Artif. Int. 6322, 418–433 (2010). [CrossRef]  
