
Multi-spectral holographic ellipsometry for a complex 3D nanostructure

Open Access

Abstract

We present an innovative ellipsometry technique called self-interferometric pupil ellipsometry (SIPE), which integrates self-interference and pupil microscopy techniques to provide the high sensitivity required for metrology of advanced semiconductor devices. Owing to its unique configuration, rich angle-resolved ellipsometric information can be extracted from a single-shot hologram, where the full spectral information corresponding to incident angles from 0° to 70° and azimuthal angles from 0° to 360° is obtained simultaneously. The performance and capability of the SIPE system were fully validated on various samples, including thin-film layers, complicated 3D structures, and on-cell overlay samples on actual semiconductor wafers. The results show that the proposed SIPE system can achieve a metrology sensitivity of 0.123 nm. In addition, it provides small-spot metrology capability by reducing the illumination spot diameter down to 1 µm, while the typical spot diameter of industry-standard ellipsometry is around 30 µm. Because a huge amount of angular spectral data is collected, undesirable correlation between multiple parameters can be significantly reduced, making SIPE ideally suited for solving several critical metrology challenges we currently face.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Spectroscopic ellipsometry (SE) has played a crucial role in semiconductor manufacturing because it provides rapid, sensitive, and nondestructive 3D measurement results [1-4]. In general, this technique allows us to characterize the critical dimensions (CDs) of 3D structures, the thickness of thin-film layers, and the optical constants of a film by analyzing the amplitude ratio Ψ and phase delay Δ between two orthogonally polarized lights reflected from the structures on wafers [3,4]. Recently, the performance of SE has been further improved by extending the spectral range of the system into the ultraviolet (UV), visible, and/or infrared (IR) regions [5-8]. It is important to note that broadband spectroscopic data sets of Ψ and Δ enable sub-nanometer accuracy and multi-parameter characterization, which are suitable for semiconductor metrology [9,10]. Despite the continuous development of SE over several decades, the optical metrology of recent semiconductor devices with design rules of single-digit nanometers faces several challenges: accurately measuring resist stochastic errors and local CD uniformity (LCDU), as well as breaking the spectral correlation between multiple CDs [11-13]. These issues can be solved if the physical spot diameter becomes at least a few times smaller than the latest memory cell dimension (approximately 30 µm × 40 µm) and additional ellipsometric information (i.e., Ψ and Δ as a function of angle of incidence for different wavelengths) becomes available. Unfortunately, the spot diameter of commercial SE with low-numerical-aperture (NA) illumination varies from 30 to 100 µm, while the analysis of LCDU in a dynamic random access memory (DRAM) cell block requires a spot size of a few micrometers [14].
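For context, the quantities Ψ and Δ for a simple air/film/substrate stack follow directly from the Fresnel coefficients and the standard film-summation (Airy) formula. The sketch below is illustrative only: the function names and the nominal refractive indices for SiO2 and Si are assumptions, not values from the paper.

```python
import numpy as np

def fresnel(ni, nj, cos_i, cos_j):
    """Fresnel reflection coefficients (s, p) for interface i -> j."""
    rs = (ni * cos_i - nj * cos_j) / (ni * cos_i + nj * cos_j)
    rp = (nj * cos_i - ni * cos_j) / (nj * cos_i + ni * cos_j)
    return rs, rp

def film_psi_delta(d_nm, wl_nm, aoi_deg, n0=1.0, n1=1.46, n2=3.88 - 0.02j):
    """Psi, Delta (degrees) of an air/film/substrate stack (Airy summation)."""
    th0 = np.deg2rad(aoi_deg)
    c0 = np.cos(th0)
    # Snell's law gives the (possibly complex) cosines in film and substrate
    c1 = np.sqrt(1 - (n0 * np.sin(th0) / n1) ** 2 + 0j)
    c2 = np.sqrt(1 - (n0 * np.sin(th0) / n2) ** 2 + 0j)
    beta = 2 * np.pi * d_nm * n1 * c1 / wl_nm          # film phase thickness
    rs01, rp01 = fresnel(n0, n1, c0, c1)
    rs12, rp12 = fresnel(n1, n2, c1, c2)
    ph = np.exp(2j * beta)
    rs = (rs01 + rs12 * ph) / (1 + rs01 * rs12 * ph)
    rp = (rp01 + rp12 * ph) / (1 + rp01 * rp12 * ph)
    rho = rp / rs                                       # ellipsometric ratio
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

psi, delta = film_psi_delta(d_nm=900, wl_nm=520, aoi_deg=65)
```

Sweeping `wl_nm` for the 900 nm oxide used later in the paper reproduces the strongly wavelength-dependent behavior that makes broadband Ψ/Δ data so informative.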

Microellipsometry has been developed in response to the demand for a smaller illumination spot diameter and more angular spectral information to improve accuracy [15-18]. It uses a high-NA objective lens, with an image sensor placed at the back-focal plane of the objective lens (i.e., the pupil plane). This configuration allows the ellipsometric parameters Ψ and Δ to be measured at various incident and azimuthal angles simultaneously. As a trade-off for these advantages, microellipsometry has several drawbacks, including slow measurement throughput and instability attributed to multiple data acquisitions for different wavelengths [19-22]. In general, multiple measurements with various combinations of polarization states are inevitable to retrieve the phase delay between orthogonally polarized lights [22,23]. Some recent studies have suggested a snapshot approach for microellipsometry using angular symmetry; however, such a technique can only be used for unpatterned film thickness measurement because it cannot obtain all the required phase information [24-26].

In this paper, we experimentally demonstrate a novel holographic ellipsometry technique, called self-interferometric pupil ellipsometry (SIPE), which integrates self-interference holography and pupil microscopy. The proposed holography-based reconstruction algorithm retrieves the amplitude and phase information of a semiconductor device on a wafer for incident angles from 0° to 70° and azimuthal angles from 0° to 360° simultaneously, from a single-shot measurement. In Section 2, we present the optical configuration together with the working principles and the reconstruction algorithm used to obtain Ψ and Δ as a function of incident angle and wavelength. Section 3 describes the performance of SIPE for various wafer samples, including a SiO2 thin layer and several critical layers in DRAM and logic devices. In addition to CD measurement, the capability of overlay metrology is demonstrated. The principle of SIPE was fully verified by comparing the results with theoretical calculations and numerical simulations. Since SIPE provides a very high measurement sensitivity of 0.123 nm together with the capability of overcoming the parameter correlation issue, it has great potential for solving several spectral metrology challenges we currently face in high-volume manufacturing (HVM). To the best of our knowledge, this study is the first demonstration of holographic ellipsometry for semiconductor device measurement.

2. System and principle

2.1 Optical system

An overview of the proposed SIPE system is given in Fig. 1. Figures 1(a) and 1(b) show a schematic diagram and a photograph of our system, which utilizes a pupil microscopy geometry with a high-NA objective lens (50×, 0.95 NA, Zeiss), where a customized Nomarski prism was installed for self-interference generation. For monochromatic illumination, a high-power supercontinuum laser (NKT Photonics) together with an acousto-optic tunable filter was used. The spectrally filtered illumination light has a Gaussian spectrum whose bandwidth varies from 1 to 3 nm depending on the wavelength. In this architecture, the monochromatic light passes through several optical components that condition it into a high-quality coherent beam: it passes through the polarizer and the beam splitter and is then directed to the objective lens. A wafer was illuminated by the monochromatic light covering the full NA of the objective lens, and the reflected light was collected through the same objective lens. It is very important to note that the collected light passing through the Nomarski prism is separated into two orthogonally polarized lights (i.e., p- and s-polarized lights) with different propagation angles, as described in Fig. 1(c). After passing through the linear polarizer, these two differently polarized lights have an identical polarization state, so they can interfere with each other, generating a spatially modulated hologram at the pupil plane in Fig. 1(c). In terms of obtaining the interference signal, this technique is similar to the geometry of an off-axis holographic imaging system [27,28]. The detailed principles will be explained in the next section. Here, the separation angle between the two differently polarized beams is approximately 0.42° at a wavelength of 400 nm.


Fig. 1. Overview of the SIPE system. (a) Schematic of the optics configuration for SIPE. Beam paths for signal acquisition and wafer alignment are described by green and yellow colors, respectively. In the overlapped region of the two beams, the yellow beam is omitted to avoid confusion. L1 – 10: lenses; A1, A2: apertures; P1, P2: polarizers; M1 – 5: mirrors; BS1 – 3: beam splitters; Obj.: Objective lens; NP: Nomarski prism. (b) A photograph of the SIPE system. (c) Schematic of generating self-interference pattern using the Nomarski prism with the polarizer. (d) Actual DRAM cell image with an illumination spot. The spot diameter can be tuned from 1 to 15 µm.


The optical resolution in the pupil plane is theoretically defined by the illumination spot size and the lens focal lengths according to the sampling theorem [30]. In the case of 400 nm illumination, a spot size of 15 µm and an objective lens focal length of 3.3 mm yield a diffraction-limited spot corresponding to 0.027 NA in the pupil plane, which is equivalent to π × (0.95 NA/0.027 NA)² = 3,898 independent angular measurements. The pixel resolution in the pupil plane is calibrated as (0.95 NA/950 pixels) = 0.001 NA/pixel regardless of the wavelength. Note that an additional light source and a camera are used to monitor the wafer plane to determine the correct lateral and vertical positions of a wafer, as described by the yellow beam path in Fig. 1(a).
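These pupil-sampling figures can be reproduced with a few lines of arithmetic. The λ/D estimate for the angular width of a finite illumination spot is a standard diffraction approximation (our assumption, not the paper's exact derivation), so the resulting independent-measurement count lands near, rather than exactly at, the quoted value.

```python
import numpy as np

wl = 400e-9          # illumination wavelength (m)
spot = 15e-6         # illumination spot diameter on the wafer (m)
na_max = 0.95        # objective numerical aperture
n_pixels = 950       # pupil radius on the camera, in pixels

# A finite spot of diameter D diffracts into an angular width of roughly
# lambda / D, i.e. the pupil-plane resolution expressed in NA units
d_na = wl / spot                         # ~0.027 NA

# Number of independent angular samples = pupil area / resolution-cell area
n_independent = np.pi * (na_max / d_na) ** 2   # ~3,900

# Pixel calibration: the full NA spans the pupil radius in pixels,
# independent of wavelength
na_per_pixel = na_max / n_pixels         # 0.001 NA/pixel
```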

The pupil geometry in this system provides two crucial benefits that are important for semiconductor metrology. First, angle-resolved information can be obtained without any mechanical movement, since each position on the pupil plane represents an individual illumination angle and direction on the wafer. Second, small-spot illumination can be achieved owing to the high NA of the objective lens combined with a spatial filter. In principle, a diffraction-limited spot size of approximately 300 nm can be achieved using the light from a single-mode fiber. In practical scenarios, variable spot diameters of 1–15 µm are available by employing a multimode fiber and an adjustable aperture, as displayed in Fig. 1(d). Since the typical size of a DRAM cell block is approximately 30–50 µm depending on the generation, the small-spot capability of SIPE makes it suitable for in-cell locality measurement of DRAM and VNAND devices.

2.2 Theoretical analysis

In our system configuration, the two orthogonally polarized lights reflected back from a wafer eventually have the same polarization state and then interfere with each other at the pupil plane. The resulting interference pattern, also called a hologram, is recorded by a scientific CMOS camera. The Jones matrix formalism is an appropriate representation to describe this interference [4]. Let us first assume that the polarizer on the illumination side is aligned at an angle P with respect to the x-axis. All the incident fields at the pupil plane then have the same polarization, as described by

$${E_0} = \left[ \begin{array}{l} \cos P\\ \sin P \end{array} \right]. $$
Let rs and rp be the reflection coefficients of a sample for s- and p-polarization. Here, s- and p-polarization in pupil microscopy are defined in polar coordinates, whereas the incident field E0 is expressed in Cartesian coordinates. Therefore, a coordinate rotation should be included in the reflection matrix. As a result, the reflected field ER in Cartesian coordinates can be expressed as
$${E_R} = \left[ {\begin{array}{cc} {\cos \theta }&{ - \sin \theta }\\ {\sin \theta }&{\cos \theta } \end{array}} \right]\left[ {\begin{array}{cc} {{r_p}}&0\\ 0&{ - {r_s}} \end{array}} \right]\left[ {\begin{array}{cc} {\cos \theta }&{\sin \theta }\\ { - \sin \theta }&{\cos \theta } \end{array}} \right]\left[ \begin{array}{l} \cos P\\ \sin P \end{array} \right]\textrm{,}$$
where θ represents the azimuthal angle at the pupil position. The minus sign on rs has been introduced to compensate for the coordinate flip that occurs when the optical field is reflected. The reflected field then enters the prism and is separated into two orthogonally polarized fields, E1 and E2, according to the alignment angle N of the Nomarski prism. The field EN passing through the Nomarski prism can be written as
$$\begin{aligned} {E_N} &= \left[ {\begin{array}{cc} {{e^{i{\mathbf{k}_1}\mathbf{r}}}}&0\\ 0&{{e^{i{\mathbf{k}_2}\mathbf{r}}}} \end{array}} \right]\left[ {\begin{array}{cc} {\cos N}&{\sin N}\\ { - \sin N}&{\cos N} \end{array}} \right]{E_R}\\ &= \left[ \begin{array}{l} \{{{r_p}\cos ({N - \theta } )\cos ({\theta - P} )+ {r_s}\sin ({N - \theta } )\sin ({\theta - P} )} \}{e^{i{\mathbf{k}_1}\mathbf{r}}}\\ \{{ - {r_p}\sin ({N - \theta } )\cos ({\theta - P} )+ {r_s}\cos ({N - \theta } )\sin ({\theta - P} )} \}{e^{i{\mathbf{k}_2}\mathbf{r}}} \end{array} \right]\\ &= \left[ \begin{array}{l} {E_1}{e^{i{\mathbf{k}_1}\mathbf{r}}}\\ {E_2}{e^{i{\mathbf{k}_2}\mathbf{r}}} \end{array} \right]\textrm{,} \end{aligned}$$
where k1 and k2 are the spatial frequencies that represent the two different propagation angles. Finally, the separated lights (i.e., p- and s-polarized lights) pass through the analyzer (the polarizer after the sample is conventionally called the "analyzer") with an alignment angle of A, as described in Fig. 1(c). The angle between the analyzer and the prism is therefore A − N. The final field at the pupil plane can be expressed as
$${E_F} = \cos ({A - N} ){E_1}{e^{i{\mathbf{k}_1}\mathbf{r}}} + \sin ({A - N} ){E_2}{e^{i{\mathbf{k}_2}\mathbf{r}}}. $$

Finally, a camera records the intensity IF, which is the absolute square of the final optical field:

$$\begin{aligned} {I_F} &= {|{{E_F}} |^2} = {\cos ^2}({A - N} ){|{{E_1}} |^2} + {\sin ^2}({A - N} ){|{{E_2}} |^2}\\ &+ \cos ({A - N} )\sin (A - N)|{{E_1}{E_2}} |{e^{i\varphi }}{e^{i({{\mathbf{k}_1} - {\mathbf{k}_2}} )\mathbf{r}}}\\ &+ \cos ({A - N} )\sin (A - N)|{{E_1}{E_2}} |{e^{ - i\varphi }}{e^{ - i({{\mathbf{k}_1} - {\mathbf{k}_2}} )\mathbf{r}}}. \end{aligned}$$

Here, the phase difference between E1 and E2 is denoted by φ. Because of the modulation term ${e^{ {\pm} i({{\mathbf{k}_1} - {\mathbf{k}_2}} )\mathbf{r}}}$ in Eq. (5), the recorded interference image shows a spatially oscillating signal. This fringe-patterned image is generally called an interferogram or hologram. To achieve high visibility of the hologram, the optimal angles of P and A are N ± 45°. It should be noted that φ is the phase difference between the two orthogonally polarized fields, i.e., ∠(E1/E2). Therefore, the physical interpretation of φ is analogous to that of Δ. The only physical difference is the basis polarization, which is either x- and y-polarization (φ in SIPE) or s- and p-polarization (Δ in SE). According to Eq. (3), the relation between the electric fields in SIPE and the reflection coefficients is expressed as

$$\left[ \begin{array}{l} {E_1}\\ {E_2} \end{array} \right] = \left[ {\begin{array}{cc} {\cos ({N - \theta } )\cos ({\theta - P} )}&{\sin ({N - \theta } )\sin ({\theta - P} )}\\ { - \sin ({N - \theta } )\cos ({\theta - P} )}&{\cos ({N - \theta } )\sin ({\theta - P} )} \end{array}} \right]\left[ \begin{array}{l} {r_p}\\ {r_s} \end{array} \right]. $$
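Equations (1)-(6) together define a forward model that maps a sample's reflection coefficients to the recorded fringes. A minimal sketch of that model follows; the function names and the placeholder values for rp and rs are ours, chosen only for illustration.

```python
import numpy as np

def sipe_fields(rp, rs, theta, P, N):
    """E1, E2 after the Nomarski prism, per Eq. (6); all angles in radians."""
    M = np.array([[ np.cos(N - theta) * np.cos(theta - P),
                    np.sin(N - theta) * np.sin(theta - P)],
                  [-np.sin(N - theta) * np.cos(theta - P),
                    np.cos(N - theta) * np.sin(theta - P)]])
    return M @ np.array([rp, rs])

def sipe_intensity(E1, E2, A, N, dk_r):
    """Recorded intensity of Eq. (5) at spatial phase dk_r = (k1 - k2).r;
    the common carrier phase has been factored onto the E1 arm."""
    c, s = np.cos(A - N), np.sin(A - N)
    EF = c * E1 * np.exp(1j * dk_r) + s * E2
    return np.abs(EF) ** 2

# example: P = A = 45 deg, N = 0, the configuration used in Section 2.3;
# rp and rs below are arbitrary placeholder reflection coefficients
E1, E2 = sipe_fields(rp=0.3 + 0.1j, rs=-0.5, theta=np.deg2rad(30),
                     P=np.deg2rad(45), N=0.0)
I = sipe_intensity(E1, E2, A=np.deg2rad(45), N=0.0,
                   dk_r=np.linspace(0, 4 * np.pi, 200))
```

Sweeping `dk_r` traces the fringe pattern; choosing A = N ± 45° maximizes the product cos(A − N)sin(A − N) in Eq. (5) and hence the fringe visibility, consistent with the optimal angles stated above.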

2.3 SIPE reconstruction algorithm

The phase delay φ between the two orthogonally polarized fields is encoded in the holograms. To retrieve the ellipsometric information (the amplitude and phase φ) from the self-interference hologram, we use a modified off-axis holography reconstruction algorithm optimized for the SIPE system [27]. As described in Eq. (5), the phase information φ is contained only in the last two terms, the AC terms, whereas the first two intensity terms are DC terms carrying no phase information. Although φ cannot be directly extracted from the raw hologram, the AC terms can be separated via a 2D spatial Fourier transform because the signals are spatially modulated at the carrier frequency k1 − k2.
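The masking-and-demodulation procedure can be sketched end to end on a synthetic hologram with a known phase map. This is a generic off-axis reconstruction, not the authors' production code, and the carrier frequency and mask radius are illustrative choices.

```python
import numpy as np

def reconstruct(holo, fy, fx, radius):
    """Extract amplitude |E1 E2| and phase phi from an off-axis hologram.
    (fy, fx): fringe carrier frequency in FFT pixels; radius: mask size."""
    ny, nx = holo.shape
    F = np.fft.fftshift(np.fft.fft2(holo))
    Y, X = np.ogrid[:ny, :nx]
    # digital mask: keep a small window around the +1 (AC) lobe only
    mask = (Y - (ny // 2 + fy)) ** 2 + (X - (nx // 2 + fx)) ** 2 <= radius ** 2
    Fm = np.where(mask, F, 0)
    Fm = np.roll(Fm, (-fy, -fx), axis=(0, 1))        # demodulate the carrier
    ac = np.fft.ifft2(np.fft.ifftshift(Fm))
    return np.abs(ac), np.angle(ac)

# synthetic hologram: DC term plus a carrier modulated by a known phase map
ny = nx = 256
y, x = np.mgrid[:ny, :nx]
phi_true = 1.2 * np.sin(2 * np.pi * y / ny)           # ground-truth phase
fy, fx = 0, 32                                         # carrier: 32 cycles in x
holo = 2 + np.cos(2 * np.pi * fx * x / nx + phi_true)

amp, phi = reconstruct(holo, fy, fx, radius=12)
err = np.abs(np.angle(np.exp(1j * (phi - phi_true))))  # wrap-aware phase error
```

On this noiseless synthetic input the recovered phase matches the ground truth almost exactly; real holograms additionally require the bare-silicon calibration described in Section 3.1.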

Figure 2 illustrates the reconstruction procedure for the amplitude and phase information in the SIPE system shown in Fig. 1, where a system configuration of P = A = 45° and N = 0° was used.


Fig. 2. Reconstruction procedures for off-axis geometry using the SIPE shown in Fig. 1. (a) The raw hologram of SiO2 wafer was recorded by the pupil camera. The interference fringes of off-axis geometry are shown in the expanded version, where the fringe patterns are bent depending on the local phase difference between E1 and E2. (b) Fourier spectrum of this hologram, which includes spatial frequencies of AC and DC terms. (c) The AC signal is selectively filtered by applying an appropriate digital mask. (d) The reconstructed amplitude and phase images. FT and iFT represent Fourier transform and inverse Fourier transform, respectively.


To demonstrate the capability of phase and amplitude reconstruction, we first measured a SiO2 thin layer with a thickness of 900 nm. The raw hologram of the SiO2 wafer was recorded as shown in Fig. 2(a). Note that the interference fringes between the reflected p-polarized and s-polarized waves are clearly visible within the expanded frame of Fig. 2(a). The corresponding 2D Fourier spectrum is shown in Fig. 2(b). Here, three strong signal peaks are observed in the Fourier domain. The central peak represents the unmodulated DC signal, |E1|² + |E2|², whereas the two side peaks represent the AC signal and its complex conjugate, respectively. Because the DC and AC terms are spatially separated in the Fourier plane, one of the AC signals can be selectively extracted using a digital mask. By applying the inverse Fourier transform to the selectively masked AC signal, we can reconstruct pupil maps containing only the interference signal |E1E2|e^{iφ}. Because this AC signal is a complex number, we can separate the amplitude |E1E2| and phase φ. Although the retrieved amplitude and phase signals from the hologram are not directly identical to the conventional SE parameters, the SIPE signals carry information equivalent to Ψ and Δ, as defined in Eq. (6). It is important to note that the reconstruction technique of SIPE is essentially very similar to the method used in off-axis holography; the only tangible difference is that SIPE does not need to employ digital back-propagation.

3. Results

3.1 Performance Verification of SIPE for thin film and pattern wafers

To verify SIPE, a SiO2-monolayer wafer with a thickness of 900 nm was used as a thin-film test sample. The measured holograms for three different wavelengths (480, 520, and 560 nm) are shown in Fig. 3(a). Each raw hologram shows a unique interference pattern depending on the reflection properties at each wavelength.


Fig. 3. SIPE results of a SiO2-monolayer wafer at three representative wavelengths of 480, 520, and 560 nm. (a) Raw holograms recorded by the pupil camera. (b) Experimentally reconstructed amplitude and phase images from the recorded holograms. (c) Theoretically calculated amplitude and phase images. (d) Comparison between the experimental and calculated amplitude and phase values along the dashed line presented in (b) and (c).


Characteristic curved fringes representing the local amplitude and phase information are shown in the enlarged images in Fig. 3(a). The amplitude and phase distributions in the pupil plane can be reconstructed by applying the holographic reconstruction algorithm to the individual holograms, and the results are shown in Fig. 3(b). To eliminate the effects of the optical system, the reconstructed complex fields were calibrated using the reconstructed fields of a bare silicon wafer. The reconstructed amplitude and phase signals were compared with the theoretically expected distributions, as presented in Fig. 3(c). For better comparison, we plotted the sectional profiles of the signals along the diagonal direction. As shown in Fig. 3(d), the experimentally reconstructed signals are highly consistent with the theoretical values. The small remaining discrepancy between theory and experiment comes from the residual errors of imperfect optical components. This discrepancy can be minimized by adding calibration terms to our model or by using higher-quality optical components.

In addition to the thin-film metrology application, we also validated the metrology capability for a patterned wafer. We first used a crystalline-Si line-space (LS) pattern sample with a width, height, and pitch of 130, 115, and 280 nm, respectively. Figure 4 presents the measured holograms together with the reconstructed amplitude and phase images recorded at a wavelength of 400 nm. To verify the reconstructed signals, a numerical simulation of the same structures was performed using rigorous coupled-wave analysis (RCWA). Here, the electric field of each polarization component was first calculated, and then the corresponding self-interference hologram was generated by considering the orientations of the polarizer, the prism, and the analyzer. A comparison between the experimental and simulation results is shown in Fig. 4, demonstrating excellent agreement for both amplitude and phase. It is worth mentioning that the optical singularity points, where the field vanishes, are found at similar positions, corresponding to incident angles of 48.5° and 47.8° for the experiment and the simulation, respectively. The consistency of the signal distributions, including the singularity positions, establishes the validity of our proposed system.
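Given a reconstructed amplitude pupil map, locating such a singularity amounts to finding the amplitude minimum and converting its pupil radius to an incident angle via NA = sin(AOI). The sketch below uses a synthetic amplitude map with a zero placed at NA = 0.75 (real maps would come from the experiment or from RCWA); the function name is ours, and the 0.001 NA/pixel calibration follows Section 2.1.

```python
import numpy as np

def singularity_aoi(amp, na_per_px=0.001):
    """Incident angle (deg) of the amplitude minimum in a pupil map,
    assuming the pupil centre coincides with the image centre."""
    ny, nx = amp.shape
    iy, ix = np.unravel_index(np.argmin(amp), amp.shape)
    na = np.hypot(iy - ny / 2, ix - nx / 2) * na_per_px
    return np.degrees(np.arcsin(min(na, 1.0)))

# synthetic pupil amplitude: a conical zero placed 750 px from the centre,
# i.e. at NA = 750 * 0.001 = 0.75, corresponding to an AOI of ~48.6 deg
ny = nx = 1900
y, x = np.mgrid[:ny, :nx]
amp = np.hypot(y - (ny / 2 + 750), x - nx / 2)

aoi = singularity_aoi(amp)
```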


Fig. 4. Comparison of the experimental and simulation results for the LS structure at 400 nm wavelength. (a) Holograms from the experiment (left) and simulation (right). (b) Reconstructed amplitude images. (c) Reconstructed phase images. Dashed circles in (b) and (c) represent phase singularity, where the amplitude is zero, and they are observed at similar locations in the pupil from both experiment and simulation results. Shaded areas represent the ± 1st diffraction order signals from the LS structure with pitch of 280 nm.


In the upper and lower regions of the images, the signals of the ±1st diffraction orders from the horizontally aligned LS structures overlap, which is not considered in our simulation. However, this overlap will not be a concern for the metrology of advanced semiconductor devices, as the majority of such devices have structures with pitches below 100 nm.

3.2 System stability

Because there are no mechanically moving components during data acquisition, the SIPE system records highly stable measurement data, resulting in low fluctuations of the reconstructed amplitude and phase signals. In addition, the single-shot measurement in a common-path geometry greatly enhances the repeatability of the system. To quantitatively investigate the signal fluctuation of the system, 30 holograms of a bare silicon wafer were measured at a wavelength of 600 nm with an acquisition time of 50 ms. The analysis of the repeatability for both the amplitude and phase signals is shown in Fig. 5. For the 30 reconstructed amplitude and phase pupil images, the standard deviation over the mean value of the amplitude and the standard deviation of the phase were calculated. Figures 5(a) and 5(b) show that average standard deviations of 0.091% and 0.036° were achieved for the amplitude and phase, respectively. Note that the phase fluctuation level of most state-of-the-art SE equipment is approximately 0.03° to 0.05° at a fixed incident angle; the current stability of the proposed SIPE is therefore comparable to the performance of SE tools, making the SIPE system well suited as a next-generation metrology tool in HVM. If a 0.03° fluctuation in the SE Ψ is converted to a fluctuation of tanΨ = |rp/rs|, it corresponds to approximately a 0.13% fluctuation of tanΨ. It should also be noted that the components of the current prototype, such as the system hardware and algorithm, can be further optimized. For example, the fluctuation can be further suppressed by reducing the bandwidth of the illumination spectrum, which increases the temporal coherence length and improves the interference visibility. Alternatively, the prism design can be changed to a smaller separation angle, reducing the optical path length difference between the two polarized lights over the FOV.
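The repeatability statistics in Fig. 5 are per-pixel standard deviations across the 30 reconstructions, averaged over the pupil. A sketch of that computation with synthetic stand-in data follows; the noise levels are assumptions chosen only to mimic the reported magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep, ny, nx = 30, 64, 64

# stand-ins for 30 reconstructed pupil maps (amplitude; phase in degrees);
# real inputs would be 30 repeated hologram reconstructions
amp = 1.0 + 0.001 * rng.standard_normal((n_rep, ny, nx))
phase = 10.0 + 0.04 * rng.standard_normal((n_rep, ny, nx))

# per-pixel repeatability across repetitions, then averaged over the pupil
amp_fluct_pct = 100 * (amp.std(axis=0) / amp.mean(axis=0)).mean()
phase_fluct_deg = phase.std(axis=0).mean()
```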


Fig. 5. Fluctuations of the amplitude and the phase of a bare silicon wafer at the wavelength of 600 nm. (a) Average fluctuation of the amplitude over the pupil plane is 0.091%. (b) Average fluctuation of phase over the pupil plane is 0.036°. (c-d) Histograms of the signal fluctuation over the pupil plane for the amplitude (c) and phase (d), respectively.


3.3 Sensitivity improvement in semiconductor device metrology

The capability of SIPE for CD measurement was verified on one of the critical layers: a capacitor layer including deep trench structures in a DRAM device. Figure 6(a) shows a schematic of the target structure, where a masking layer with a height (HT) of 80–90 nm was deposited on top of the DRAM capacitor layer during the manufacturing process. To analyze the metrology sensitivity of SIPE for this structure, we measured several wafers with slightly different mask heights. For each measurement position, 41 pupil holograms were measured at wavelengths from 450 to 650 nm with a 5 nm step size. Because the phase signal is generally more sensitive than the amplitude signal to changing CDs, our analysis focused on the phase signal in this example [7,16,29]. By stacking the angle-resolved phase images for the various wavelengths, spectro-angular 3D phase signals can be generated, as shown in Fig. 6(b). It is very important to mention that this 3D phase image includes all the information about the measured structures as a function of wavelength, azimuthal angle, and incident angle; this rich angular information cannot be obtained from the conventional SE technique. We clearly observed from the 3D phase data that small CD variations produce strong phase changes. The phase difference for two different wafers with ΔHT = 3.59 nm is presented in Fig. 6(c). As a reference metrology for ΔHT, the actual CD measurement was performed using scanning electron microscopy (SEM).
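Assembling the spectro-angular cube is a simple stacking operation. In the sketch below only the 41-wavelength, 5 nm grid is taken from the text; the pupil array size and the sampled pixel are hypothetical placeholders.

```python
import numpy as np

wavelengths = np.arange(450, 651, 5)          # 41 wavelengths, 5 nm step
# placeholder pupil phase maps; in practice, one reconstruction per wavelength
phase_maps = [np.zeros((190, 190)) for _ in wavelengths]

# spectro-angular cube: axis 0 = wavelength, axes 1-2 = pupil (angle) coords
cube = np.stack(phase_maps)                   # shape (41, 190, 190)

# a conventional-SE-like subset: one fixed pupil position (single AOI and
# azimuth) traced across all wavelengths; (iy, ix) is a hypothetical pixel
iy, ix = 95, 160
spectrum = cube[:, iy, ix]                    # shape (41,)
```

This makes concrete why the pupil data are richer than a conventional SE spectrum: SE corresponds to a single `(iy, ix)` trace, whereas SIPE delivers the full cube.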


Fig. 6. Spectro-angular phase signals of a DRAM capacitor. (a) Simplified modeling of the structure under DRAM capacitor manufacturing. The height of the patterned mask is slightly different for each sample. (b) Representative spectro-angular 3D phases of the structure having a patterned mask of height 84.60 nm. The conventional SE data correspond to the data in the blue dashed box, which are the spectral data at a single illumination angle. (c) Difference in the spectro-angular phases for two slightly different structures having height difference of 3.59 nm.


To further analyze the metrology sensitivity, four wafer samples with nominal HT values of 84.60, 85.72, 87.49, and 88.19 nm were measured. Figure 7(a) shows the difference between the averaged phase map of the four wafers and each individual phase map at three different wavelengths, revealing distinct phase distributions depending on the mask height. Owing to the light–matter interaction of the individual samples, the phase signal at a given angle of incidence and wavelength depends strongly on the CD variation. Therefore, phase information for many illumination angles over a broad wavelength region is necessary for the current metrology application. For example, the variation in the phase images shows a quadrant-shaped pattern at a wavelength of 450 nm, whereas a radial pattern is observed at a wavelength of 630 nm in Fig. 7(a). Depending on the wavelength, certain regions in the pupil have strong correlations with the mask height variation, whereas other regions show weak correlation. Figure 7(b) shows both positively and negatively correlated regions, which represent the regions most sensitive to CD variation. In addition, the sensitive regions differ for each wavelength; therefore, an optimal sampling scheme that uses only the phase information from the sensitive regions needs to be developed. We extracted the phase value at the most sensitive illumination angle, and the results are summarized for the different wavelengths in Fig. 7(c). The slopes in this figure represent the signal sensitivity defined as Δphase/ΔCD, in units of °/nm. The averaged sensitivity to the mask height at a wavelength of 600 nm is approximately −0.88°/nm. Considering that the phase fluctuation (1σ) was 0.036° at a wavelength of 600 nm, as described in Fig. 5(b), the proposed SIPE system can provide a metrology sensitivity of 0.123 nm, calculated as the 3σ phase fluctuation divided by the signal sensitivity. By repeating the same analysis for all wavelengths, we summarize the CD sensitivity results of SIPE in Fig. 7(d).
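The quoted 0.123 nm figure follows directly from the reported numbers:

```python
# metrology sensitivity = 3-sigma phase noise / |signal sensitivity|
phase_noise_1s = 0.036        # deg, 1-sigma phase repeatability at 600 nm
slope = -0.88                 # deg/nm, d(phase)/d(mask height) at 600 nm

sensitivity_nm = 3 * phase_noise_1s / abs(slope)   # ~0.123 nm
```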


Fig. 7. Sensitivity analysis for a DRAM sample. (a) Phase difference images for 4 different heights at three selected wavelengths. The individual images represent the phase difference from the averaged image of the four samples at each wavelength. (b) Identification of highly sensitive regions for mask HT variation. The regions having positive and negative correlations with the mask height are presented in dark blue and dark red colors, respectively. (c) Signal sensitivity as a function of mask height for different wavelengths. The slope indicates Δphase/ΔCD. (d) The maximum CD sensitivity at each wavelength, calculated as signal fluctuation (3σ)/signal sensitivity.


In short, the CD sensitivity at the optimally selected illumination angle showed a three-fold improvement over the CD sensitivity obtained with the fixed 65° illumination angle used by conventional SE tools. Because the presented sensitivity was estimated from a single phase value at an optimal angle, the CD sensitivity can be further improved by utilizing the entire 3D spectro-angular dataset together with an appropriate algorithm, such as optimized model-based fitting or learning-based regression.
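As one possible realization of such learning-based regression, a least-squares model can map phase values at a few sensitive pupil pixels to the CD. Everything below is synthetic: the per-pixel slopes, noise level, and wafer count are assumptions used only to illustrate the fitting step, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 6                      # 20 synthetic wafers, 6 sensitive pixels

# hypothetical training data: phase at each sensitive pixel is roughly linear
# in mask height (slopes near -0.88 deg/nm) plus measurement noise
heights = rng.uniform(84.0, 89.0, n)                       # nm
slopes = rng.uniform(-1.0, -0.7, k)                        # deg/nm
X = heights[:, None] * slopes[None, :] + 0.036 * rng.standard_normal((n, k))

# least-squares linear regression from phase features to CD
A = np.hstack([X, np.ones((n, 1))])                        # intercept column
coef, *_ = np.linalg.lstsq(A[:15], heights[:15], rcond=None)
pred = A[15:] @ coef                                       # held-out wafers
err = np.abs(pred - heights[15:])
```

Even this simple model recovers the held-out heights to well under a nanometer on the synthetic data, because combining several sensitive pixels averages down the per-pixel phase noise.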

3.4 On-cell overlay metrology

Owing to the improved metrology sensitivity, the proposed system can be used to measure not only CDs but also on-cell overlay. The on-cell overlay represents a lateral shift between the upper and lower layers in an individual cell block. Unlike in a logic device, the patterns in DRAM layers are typically repeating, making it possible to extract the overlay value directly from the device. To demonstrate the capability of on-cell overlay metrology, we measured the pupil phase images of a DRAM wafer in which the upper layer is the gate-bit line (GBL) and the lower layer is the active layer (ACT), as shown in Fig. 8(a). The reconstructed phase images at a wavelength of 530 nm are shown in Fig. 8(b) for programmed on-cell overlay values of 0, ±0.5, and ±2.0 nm. These five overlay shift values were confirmed by SEM.


Fig. 8. On-cell overlay measurement results using SIPE. The overlay shifts between GBL and ACT were measured. (a) SEM images of the GBL and ACT structures in DRAM, where the upper and lower layers contain GBL and ACT, respectively. The red square box represents the unit cell structure for both layers. (b) Five pupil images representing the phase difference for the different overlay split samples at a wavelength of 530 nm. (c) Angular phase profile (phase difference vs. azimuthal angle at an incident angle of 44°) for four different overlay shifts. Different overlay values generate different phase profiles.


To easily identify the change in the phase signals with overlay shift, each phase image was subtracted from the average of the phase images for the five overlay shifts. As presented in Fig. 8(b), the pupil phase images for the positive overlay values of 0.5 and 2.0 nm show pupil shapes exactly opposite to those for the negative overlay values of -0.5 and -2.0 nm, respectively. In addition, the pupil phase image for the 0.5 nm overlay shift is clearly distinguished from that for the larger 2.0 nm shift, demonstrating that a larger overlay shift produces a larger phase signal. From the phase difference results in Fig. 8(b), we extracted the angular phase information along all azimuthal angles at the incident angle of 44° (NA = 0.69); the result is presented in Fig. 8(c). Opposite overlay shifts clearly produce phase difference profiles of opposite shape. Therefore, the magnitude and the direction of the overlay shift can be identified from the shape and the sign, respectively, of the phase information in the pupil plane. Since the maximum phase difference is approximately 1.5° for the 2.0 nm overlay shift and the 3σ phase noise is 0.11°, we expect the metrology sensitivity for the overlay application to be approximately 0.15 nm. Note that the performance of SIPE for the on-cell overlay application is comparable to that of recent techniques [31–33].
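The quoted overlay sensitivity of ~0.15 nm follows directly from the two numbers given above: 1.5° of phase per 2.0 nm of overlay, against a 0.11° 3σ noise floor.

```python
# Overlay sensitivity estimate from the values quoted in Sec. 3.4.
max_phase_deg = 1.5      # phase difference at a 2.0 nm overlay shift
overlay_nm = 2.0
noise_3sigma_deg = 0.11  # 3σ phase noise

slope = max_phase_deg / overlay_nm      # deg of phase per nm of overlay
sensitivity = noise_3sigma_deg / slope  # smallest resolvable overlay shift
print(round(sensitivity, 2))            # prints 0.15
```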

3.5 Breaking parameter correlation in small volume metrology

Ellipsometry and reflectometry are emerging as alternative methods to SEM. However, distinguishing the effect of a change in one CD parameter from that of another is challenging, especially for complex 3D structures, because multiple CD parameters must be determined from a single measured spectrum. Some CD parameters have very similar spectral behavior, which makes it difficult to identify them separately, especially when the parameters involve very small volumes located close to each other. Eliminating the correlation between parameters is therefore desirable. One known parameter correlation challenge in logic fin field-effect transistors (FinFETs) arises in the etch process, as described in Fig. 9(a). In this process, it is important to measure the depth and the width of the etched structures precisely in order to predict FinFET performance. However, it is difficult to measure the depth and width separately using current OCD instruments because the two parameters are strongly correlated in conventional SE spectra collected at a single incident angle. Breaking this correlation requires additional measurement data at various AOIs, azimuthal angles, and wavelengths. Fortunately, the proposed SIPE system collects thousands of angular spectra simultaneously.
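The correlation problem can be made concrete with a toy model (the sensitivity signatures below are entirely synthetic, not the paper's RCWA model): if the phase sensitivities of depth and width happen to coincide at one pupil angle, no single-angle spectrum can separate them, but pooling several angles restores separability. The cosine similarity between the two parameters' signature vectors quantifies this.

```python
import numpy as np

wl = np.linspace(250, 1000, 151)  # nm, wavelength grid

def d_depth(a_deg):
    # Hypothetical phase sensitivity dφ/d(depth) vs wavelength at pupil angle a
    return (np.sin(2 * np.pi * wl / 400)
            + 0.3 * np.sin(np.radians(a_deg)) * np.cos(2 * np.pi * wl / 300))

def d_width(a_deg):
    # Hypothetical phase sensitivity dφ/d(width): coincides with depth at a = 45°
    return (np.sin(2 * np.pi * wl / 400)
            + 0.3 * np.cos(np.radians(a_deg)) * np.cos(2 * np.pi * wl / 300))

def similarity(u, v):
    # |cosine| between signature vectors: 1.0 means fully correlated (inseparable)
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

single = similarity(d_depth(45), d_width(45))   # one fixed angle: ≈ 1.0
angles = range(0, 90, 10)
multi = similarity(np.concatenate([d_depth(a) for a in angles]),
                   np.concatenate([d_width(a) for a in angles]))
print(single, multi)  # multi drops below 1, so a joint fit can separate the two
```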


Fig. 9. Difference in phase image caused by variations in two parameters. (a) Modeling of the logic FinFET structure. The depth and width are crucial parameters for FinFET performance. (b) Variation in phase images for ±1.0 nm changes in both depth and width. (c) PCA analysis of the spectra at the best fixed angle. (d) PCA analysis of the spectra at the optimized angles for breaking the correlation between depth and width. PC1 and PC2 denote the principal component scores for the 1st and 2nd principal axes, respectively.


To validate our claim of parameter decorrelation, phase images for various combinations of depth and width within ±1.0 nm variation, corresponding to less than 2% of the nominal dimension, were numerically simulated in the wavelength range from 250 to 1000 nm using the calibrated RCWA model. The phase images for various depths and widths of the FinFET device at the wavelength of 600 nm are shown in Fig. 9(b). As expected, the angle-resolved information from a single SIPE measurement is capable of breaking the correlation between the two parameters: the width of the structure shows high sensitivity in the top-left and bottom-right areas of the pupil, whereas the depth shows high sensitivity in the top-right and bottom-left areas, as clearly seen in Fig. 9(b). To relate the structural parameters to the spectral signal changes, we performed principal component analysis (PCA) on the phase spectra in the 250–1000 nm wavelength range for two cases: using the best single fixed angle, and using the optimized angle for each wavelength. Figure 9(c) shows the analysis for the fixed-angle case (AOI = 65°, azimuthal angle = 45°). Ideally, the PCA scores of the spectra would behave orthogonally along the first two principal components, i.e., the width and the depth. Although we selected the angle for which the two parameters had the smallest correlation in Fig. 9(c), some correlation still remains; this is precisely why a multi-AOI spectral collection system is needed. In contrast, when the phase data are collected from the optimal angles at which the signals depend only on either the depth or the width, the PCA scores of the individual structures show discrete dependencies along the 1st and 2nd principal axes, respectively, as shown in Fig. 9(d). These results indicate that a set of spectral data from optimally selected illumination angles provides sufficient sensitivity to break the parameter correlation.
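The PCA check of Fig. 9(c,d) can be sketched on synthetic data (assumed sinusoidal signatures standing in for the calibrated RCWA model; the 1.5× depth weighting is arbitrary). When the two signatures are near-orthogonal, as in the optimized-angle case, PC1 tracks one parameter and PC2 the other.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(250, 1000, 76)
base_d = np.sin(2 * np.pi * wl / 350)   # assumed depth signature
base_w = np.cos(2 * np.pi * wl / 350)   # assumed width signature (near-orthogonal)

# Spectra for a 5x5 grid of (depth, width) offsets within ±1 nm, plus noise.
offsets = [-1.0, -0.5, 0.0, 0.5, 1.0]
specs, d_vals, w_vals = [], [], []
for d in offsets:
    for w in offsets:
        specs.append(1.5 * d * base_d + w * base_w
                     + 0.01 * rng.standard_normal(wl.size))
        d_vals.append(d)
        w_vals.append(w)
X = np.array(specs) - np.array(specs).mean(axis=0)

# PCA via SVD: rows of Vt are the principal axes; scores are projections.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1, pc2 = X @ Vt[0], X @ Vt[1]

# PC1 follows depth almost exclusively, PC2 follows width: decorrelated axes.
print(abs(np.corrcoef(pc1, d_vals)[0, 1]))  # close to 1
print(abs(np.corrcoef(pc1, w_vals)[0, 1]))  # close to 0
```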

4. Conclusion

We demonstrated a self-interferometric pupil ellipsometry technique that provides the small-spot metrology capability and high metrology sensitivity required for characterizing advanced device architectures. In addition, the proposed SIPE system provides an exceptionally broad spectro-angular data set, which breaks parameter correlations. Due to its angle-resolved geometry, massive ellipsometric information over the entire angular range, incident angles from 0° to 70° and azimuthal angles from 0° to 360°, is obtained simultaneously. To demonstrate the metrology capabilities of the technique, we tested various wafer samples including the SiO2 monolayer, LS patterns, DRAM, and FinFET devices. We confirmed that SIPE provides a CD sensitivity of 0.123 nm based on the mask height measurement in the DRAM capacitor layer. In addition to CD metrology, we demonstrated strong metrology performance for the on-cell overlay application, evaluated on a DRAM device. Due to the large amount of angular data, the parameter correlation issue can be resolved.

Despite the powerful metrology performance of SIPE, several hurdles remain before it can be used in high-volume manufacturing (HVM). First, maximizing measurement throughput is critically important, since SIPE uses a wavelength-scanning method to obtain the full 3D spectral data as a function of wavelength, AOI, and azimuthal angle. Although the wavelength-scanning scheme is much more stable and faster than angular scanning, the measurement speed can be improved further by developing a faster monochromator with a higher light transmission rate, and by using a higher-power light source together with an image sensor of high quantum efficiency. Second, compared to the SE technique, the SIPE signal is more vulnerable to aberrations because the system is based on a high-NA imaging microscope geometry rather than a point-detecting spectroscopic method; developing a high-quality ultra-broadband imaging system is necessary to extend the spectral range of SIPE into the deep UV and IR. Lastly, regarding the regression algorithms that convert SIPE data into metrology parameters such as CDs or overlay, model-based metrology demands heavy computing power due to the extended dimensionality of the data. However, the computational cost can be greatly reduced by an optimal sampling scheme that selectively utilizes the most sensitive data points instead of the entire spectro-angular dataset, as demonstrated in Fig. 7(b). In addition, real-time regression in HVM is straightforward because a library of SIPE data for various structural variations can be prepared in advance, so that simulations need not be run for every measurement. Machine learning-based regression is also applicable.
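The optimal sampling idea can be sketched as follows (toy data, not SIPE's actual pipeline): rank the spectro-angular points by sensitivity |dφ/dCD| and fit the CD from the top few percent only, at a fraction of the computational cost.

```python
import numpy as np

rng = np.random.default_rng(2)
n_points = 20000                      # size of the full spectro-angular dataset
sens = rng.standard_normal(n_points)  # assumed per-point sensitivity dφ/dCD
true_cd = 0.8                         # nm, ground truth for the toy fit
phase = sens * true_cd + 0.05 * rng.standard_normal(n_points)  # noisy signal

def fit_cd(idx):
    # One-parameter least squares over the selected points: CD = Σ s·φ / Σ s²
    s, p = sens[idx], phase[idx]
    return (s @ p) / (s @ s)

top = np.argsort(np.abs(sens))[-1000:]  # keep the 5% most sensitive points
print(fit_cd(np.arange(n_points)))      # full-data estimate, ≈ 0.8
print(fit_cd(top))                      # nearly identical, at 5% of the cost
```

Because the most sensitive points carry most of the Fisher information about the CD, the subset fit loses little precision while shrinking the regression problem twenty-fold.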

In short, the experimental demonstrations together with the theoretical analysis show that SIPE offers the unique advantage of higher sensitivity by utilizing the massive angular information obtained in a single-shot measurement. Considering its performance in metrology throughput, precision, and sensitivity, we believe SIPE is a promising metrology technique for the critical layers of advanced semiconductor devices, capable of overcoming several current challenges related to metrology sensitivity and parameter correlation encountered in traditional SE techniques.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. M. A. Azzam, N. M. Bashara, and S. S. Ballard, “Ellipsometry and Polarized Light,” Phys. Today 31(11), 72 (1978).

2. H. G. Tompkins and E. A. Irene, Handbook of Ellipsometry (William Andrew, 2005).

3. F. L. McCrackin, E. Passaglia, R. R. Stromberg, and H. L. Steinberg, “Measurement of the thickness and refractive index of very thin films and the optical properties of surfaces by ellipsometry,” J. Res. Natl. Bur. Stand., Sect. A 67A(4), 363–377 (1963).

4. H. Fujiwara, Spectroscopic Ellipsometry: Principles and Applications (John Wiley & Sons, Ltd, 2007).

5. D. E. Aspnes and A. A. Studna, “High Precision Scanning Ellipsometer,” Appl. Opt. 14(1), 220–228 (1975).

6. D. Chandler-Horowitz and G. A. Candela, “Principal angle spectroscopic ellipsometry utilizing a rotating analyzer,” Appl. Opt. 21(16), 2972–2977 (1982).

7. M. Losurdo, M. Bergmair, G. Bruno, D. Cattelan, C. Cobet, A. De Martino, K. Fleischer, Z. Dohcevic-Mitrovic, N. Esser, M. Galliet, R. Gajic, D. Hemzal, K. Hingerl, J. Humlicek, R. Ossikovski, Z. V. Popovic, and O. Saxl, “Spectroscopic ellipsometry and polarimetry for materials and systems analysis at the nanometer scale: State-of-the-art, potential, and perspectives,” J. Nanopart. Res. 11(7), 1521–1554 (2009).

8. T. Minamikawa, Y. D. Hsieh, K. Shibuya, E. Hase, Y. Kaneoka, S. Okubo, H. Inaba, Y. Mizutani, H. Yamamoto, T. Iwata, and T. Yasui, “Dual-comb spectroscopic ellipsometry,” Nat. Commun. 8(1), 610 (2017).

9. M. Schubert, Infrared Ellipsometry on Semiconductor Layer Structures, Springer Tracts in Modern Physics Vol. 209 (Springer Berlin Heidelberg, 2005).

10. E. Garcia-Caurel, A. De Martino, J. P. Gaston, and L. Yan, “Application of Spectroscopic Ellipsometry and Mueller Ellipsometry to Optical Characterization,” Appl. Spectrosc. 67(1), 1–21 (2013).

11. K. Bhattacharyya, “Tough road ahead for device overlay and edge placement error,” Proc. SPIE 10959, 1 (2019).

12. A. Shchegrov, P. Leray, Y. Paskover, L. Yerushalmi, E. Megged, Y. Grauer, and R. Gronheid, “On product overlay metrology challenges in advanced nodes,” Proc. SPIE 11325, 49 (2020).

13. M. Sendelbach, A. Vaid, P. Herrera, T. Dziura, M. Zhang, and A. Srivatsa, “Use of multiple azimuthal angles to enable advanced scatterometry applications,” Proc. SPIE 7638, 76381G (2010).

14. C. Yoon, G. Park, D. Han, S. I. Im, S. Jo, J. Kim, W. Kim, C. Choi, and M. Lee, “Toward realization of high-throughput hyperspectral imaging technique for semiconductor device metrology,” J. Micro/Nanopatterning Mater. Metrol. 21(02), 021209 (2022).

15. J. M. Leng, J. Opsal, and D. E. Aspnes, “Combined beam profile reflectometry, beam profile ellipsometry and ultraviolet-visible spectrophotometry for the characterization of ultrathin oxide-nitride-oxide films on silicon,” J. Vac. Sci. Technol., A 17(2), 380–384 (1999).

16. J. A. Woollam, B. D. Johs, C. M. Herzinger, J. N. Hilfiker, R. A. Synowicki, and C. L. Bungay, “Overview of variable-angle spectroscopic ellipsometry (VASE): I. Basic theory and typical applications,” Proc. SPIE 10294, 1029402 (1999).

17. G. D. Feke, D. P. Snow, R. D. Grober, P. J. De Groot, and L. Deck, “Interferometric back focal plane microellipsometry,” Appl. Opt. 37(10), 1796–1802 (1998).

18. J. Jung, Y. Hidaka, J. Kim, M. Numata, W. Kim, S. Ueyama, and M. Lee, “A breakthrough on throughput and accuracy limitation in ellipsometry using self-interference holographic analysis,” Proc. SPIE 11611, 116111J (2021).

19. B. H. Ibrahim, S. Ben Hatit, and A. De Martino, “Angle resolved Mueller polarimetry with a high numerical aperture and characterization of transparent biaxial samples,” Appl. Opt. 48(27), 5025–5034 (2009).

20. A. De Martino, S. Ben Hatit, and M. Foldyna, “Mueller polarimetry in the back focal plane,” Proc. SPIE 6518, 65180X (2007).

21. O. Arteaga, S. M. Nichols, and J. Antó, “Back-focal plane Mueller matrix microscopy: Mueller conoscopy and Mueller diffractrometry,” Appl. Surf. Sci. 421, 702–706 (2017).

22. X. Colonna de Lega and P. J. De Groot, “Characterization of materials and film stacks for accurate surface topography measurement using a white-light optical profiler,” Proc. SPIE 6995, 69950P (2008).

23. Y. Liu, C. W. See, and M. G. Somekh, “Common path interferometric microellipsometry,” Proc. SPIE 2782, 635–645 (1996).

24. G. Choi, S. W. Lee, S. Y. Lee, and H. J. Pahk, “Single-shot multispectral angle-resolved ellipsometry,” Appl. Opt. 59(21), 6296–6303 (2020).

25. L. Peng, D. Tang, J. Wang, R. Chen, F. Gao, and L. Zhou, “Robust incident angle calibration of angle-resolved ellipsometry for thin film measurement,” Appl. Opt. 60(13), 3971–3976 (2021).

26. S. H. Ye, S. H. Kim, Y. K. Kwak, H. M. Cho, Y. J. Cho, and W. Chegal, “Angle-resolved annular data acquisition method for microellipsometry,” Opt. Express 15(26), 18056–18065 (2007).

27. M. Lee, O. Yaglidere, and A. Ozcan, “Field-portable reflection and transmission microscopy based on lensless holography,” Biomed. Opt. Express 2(9), 2721–2730 (2011).

28. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982).

29. A. Nabok and A. Tsargorodskaya, “The method of total internal reflection ellipsometry for thin film characterisation and sensing,” Thin Solid Films 516(24), 8993–9001 (2008).

30. C. E. Shannon, “Communication in the presence of noise,” Proc. IRE 37(1), 10–21 (1949).

31. J. Mulkens, B. Slachter, M. Kubis, W. Tel, P. Hinnen, M. Maslow, H. Dillen, E. Ma, K. Chou, X. Liu, W. Ren, X. Hu, F. Wang, and K. Liu, “Holistic approach for overlay and edge placement error to meet the 5 nm technology node requirements,” Proc. SPIE 10585, 10585L (2018).

32. C. Messinis, T. T. M. van Schaijk, N. Pandey, V. T. Tenner, S. Witte, J. F. de Boer, and A. den Boef, “Diffraction-based overlay metrology using angular-multiplexed acquisition of dark-field digital holograms,” Opt. Express 28(25), 37419–37435 (2020).

33. M. Lee, W. Kim, J. Jung, and M. Ahn, “Imaging ellipsometry (IE)-based inspection method and method of fabricating semiconductor device by using IE-based inspection method,” US Patent App. 16/833,903 (2021).



Equations (6)

$$\mathbf{E}_0 = \begin{bmatrix} \cos P \\ \sin P \end{bmatrix}.$$

$$\mathbf{E}_R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} r_p & 0 \\ 0 & r_s \end{bmatrix} \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos P \\ \sin P \end{bmatrix},$$

$$\mathbf{E}_N = \begin{bmatrix} e^{ik_1 r} & 0 \\ 0 & e^{ik_2 r} \end{bmatrix} \begin{bmatrix} \cos N & \sin N \\ -\sin N & \cos N \end{bmatrix} \mathbf{E}_R = \begin{bmatrix} \left\{ r_p \cos(N-\theta)\cos(\theta-P) - r_s \sin(N-\theta)\sin(\theta-P) \right\} e^{ik_1 r} \\ \left\{ -r_p \sin(N-\theta)\cos(\theta-P) - r_s \cos(N-\theta)\sin(\theta-P) \right\} e^{ik_2 r} \end{bmatrix} = \begin{bmatrix} E_1 e^{ik_1 r} \\ E_2 e^{ik_2 r} \end{bmatrix},$$

$$E_F = \cos(A-N)\, E_1 e^{ik_1 r} + \sin(A-N)\, E_2 e^{ik_2 r}.$$

$$I_F = |E_F|^2 = \cos^2(A-N)\,|E_1|^2 + \sin^2(A-N)\,|E_2|^2 + \cos(A-N)\sin(A-N)\,|E_1 E_2^{*}|\, e^{i\varphi} e^{i(k_1-k_2)r} + \cos(A-N)\sin(A-N)\,|E_1 E_2^{*}|\, e^{-i\varphi} e^{-i(k_1-k_2)r}.$$

$$\begin{bmatrix} E_1 \\ E_2 \end{bmatrix} = \begin{bmatrix} \cos(N-\theta)\cos(\theta-P) & -\sin(N-\theta)\sin(\theta-P) \\ -\sin(N-\theta)\cos(\theta-P) & -\cos(N-\theta)\sin(\theta-P) \end{bmatrix} \begin{bmatrix} r_p \\ r_s \end{bmatrix}.$$
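As a sanity check, the Jones-calculus chain above can be verified numerically. The sign convention below is one self-consistent choice (the HTML extraction dropped the original minus signs), and the Fresnel coefficients, angles, and phase terms are arbitrary test values, not measured quantities.

```python
import numpy as np

# Numerical consistency check of the Jones-calculus model, under one
# self-consistent sign convention for the rotation matrices.
rng = np.random.default_rng(3)

def R(a):
    # Coordinate rotation by angle a
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

P, theta, N, A = rng.uniform(0, np.pi, 4)  # polarizer/sample/NP/analyzer angles
rp = 0.7 * np.exp(1j * 0.4)                # assumed complex Fresnel coefficients
rs = 0.5 * np.exp(1j * 1.1)
k1r, k2r = 0.3, 1.7                        # Nomarski-prism phase terms k1·r, k2·r

E0 = np.array([np.cos(P), np.sin(P)])
ER = R(-theta) @ np.diag([rp, rs]) @ R(theta) @ E0
EN = np.diag([np.exp(1j * k1r), np.exp(1j * k2r)]) @ R(N) @ ER
E1 = EN[0] * np.exp(-1j * k1r)             # carrier-free components E1, E2
E2 = EN[1] * np.exp(-1j * k2r)

# Closed-form E1, E2 (the matrix form of the last equation):
M = np.array([[np.cos(N - theta) * np.cos(theta - P), -np.sin(N - theta) * np.sin(theta - P)],
              [-np.sin(N - theta) * np.cos(theta - P), -np.cos(N - theta) * np.sin(theta - P)]])
E12 = M @ np.array([rp, rs])
assert np.allclose([E1, E2], E12)

# Analyzer output; the intensity matches the |EF|^2 expansion term by term,
# with the two conjugate cross terms combined into a single real cosine term.
EF = np.cos(A - N) * E1 * np.exp(1j * k1r) + np.sin(A - N) * E2 * np.exp(1j * k2r)
IF = (np.cos(A - N) ** 2 * abs(E1) ** 2 + np.sin(A - N) ** 2 * abs(E2) ** 2
      + 2 * np.cos(A - N) * np.sin(A - N)
      * (E1 * np.conj(E2) * np.exp(1j * (k1r - k2r))).real)
assert np.isclose(abs(EF) ** 2, IF)
print("Jones model self-consistent")
```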