Uniform illumination and rigorous electromagnetic simulations applied to CMOS image sensors

Open Access

Abstract

This paper describes a new methodology we have developed for the optical simulation of CMOS image sensors. Finite-Difference Time-Domain (FDTD) software is used to simulate light propagation and diffraction effects throughout the stack of dielectric layers. With the use of an incoherent summation of plane wave sources and Bloch periodic boundary conditions, this new methodology allows not only the rigorous simulation of a diffuse-like source that reproduces real operating conditions, but also a substantial gain in efficiency for 2D or 3D electromagnetic simulations. This paper presents a theoretical demonstration of the methodology as well as simulation results obtained with the FDTD software from Lumerical Solutions.

©2007 Optical Society of America

1. Introduction

The image sensor market has experienced considerable growth over recent years, driven by the increasing demand for digital still and video cameras, security cameras, webcams, and above all mobile-phone cameras [1, 2]. Charge-coupled devices (CCDs) have historically been the dominant image-sensor technology. Nevertheless, in recent years, Complementary Metal Oxide Semiconductor (CMOS) technology has shown competitive performance together with many advantages in on-chip functionality, power consumption, pixel readout, and cost, so that it has become the main actor in the image sensor industry.

The market trend is toward higher resolution (a larger number of pixels) while keeping sensors small. Thus, the pixel size and the photodiode area (where photons are collected) shrink. Moreover, since the thickness of the interconnect layers scales down more slowly than the planar dimensions, light has to travel through an ever narrower “tunnel” to reach the photodiode, which aggravates the problem of light collection (and of simulating it), mainly at oblique incidence [3, 4].

Optical simulations are essential both to characterize the sensor and to optimize its performance (quantum efficiency, crosstalk, angular response…), for example by modifying the stack’s geometry. We previously developed at STMicroelectronics ray-tracing-based simulations to optimize the microlens and photon collection inside pixels [5]. But for smaller pixel sizes, diffraction effects can substantially affect light propagation and thus photon collection [6]. A ray-tracing description is then no longer sufficient, and a more fundamental description is needed to simulate these diffraction effects. We chose an electromagnetic simulation tool based on the Finite-Difference Time-Domain (FDTD) method [7, 8], available from Lumerical Solutions [9], to describe light propagation and photon collection inside the pixels. At the time of writing, no electromagnetic tool provides a uniform illumination adapted to CMOS image sensor simulation, which makes the development of a diffuse light source compatible with periodic boundary conditions mandatory. The most efficient way is to sum tilted plane waves incoherently. We demonstrate that this approach is equivalent to uniform illumination at the focal plane of the CMOS camera.

The paper is organized as follows: section 2 describes the details of the problem to be solved and the objective of our simulation methodology, section 3 gives a theoretical demonstration of the methodology, section 4 presents the simulation results, and section 5 presents concluding remarks.

2. The simulation methodology

2.1 The problem: the light shape

A CMOS image sensor consists of an array of pixels on a silicon wafer, each containing a photodiode surrounded by readout circuitry. The ratio of the photodiode area to the whole pixel area is called the fill factor. The rest of the area is occupied by the transistors that collect, convert, and read out the photogenerated electrons.

Above the photodiode, several metal layers separated by dielectric layers form the interconnections between transistors. Color filters then allow color reconstruction, and microlenses are deposited on top of the pixels to focus the light on each photodiode and reduce optical losses in the stack (see Fig. 1). Finally, the dice are encapsulated in a module in which an integrated macroscopic objective-lens system focuses light onto the pixel array.

Fig. 1. CMOS image sensor: schematic (left) and SEM picture (center) of a CMOS pixel, and the final module (right).

To correctly evaluate the optical performance of the pixels, we must simulate a product-like illumination. Simulating the objective lens and the pixels together is difficult. The main problem is the scale: the lens is hundreds of times bigger than the pixel (several millimeters compared to several micrometers), so the computational requirements (speed and memory) of FDTD become impractical. The second problem is that many objective lenses can be used with the same sensor, which would mean as many simulations as objectives. Thus, we have to find a source that recreates the effect of the objective lens.

We first simulate a small group of pixels receiving the same uniform illumination (the spatial extent of the group is small compared to the spatial variation of the source). At the pixel level, light is distributed uniformly: spatially over the pixel area, and angularly inside a cone defined by the exit pupil of the objective and the pixel (see Fig. 2). The parameters that define the angular distribution are the f-number of the objective (the ratio of its focal length to its exit pupil diameter) and the chief-ray angle (the angle of the ray passing through the center of the exit pupil and the center of the pixel of interest).

Fig. 2. Light shape in the case of uniform pixel illumination provided by an objective lens.

The object to be imaged can be represented by a wide source far from the detector. Because it is wide, it can be decomposed into a sum of mutually incoherent point sources at different spatial locations. Each point emits a spherical wave, but since the source is far from the detector, each wave can be approximated by a plane wave that hits the detector with a random phase from a particular angle. Each plane wave then focuses on a different point of the pixel array, depending on its incidence angle.

Thus, any object to be imaged is simply an incoherent sum of point sources at different spatial locations. An image on the CMOS detector array can therefore be reconstructed by incoherently summing the electromagnetic fields Ei produced by these single point sources after they pass through the objective lens. Figure 3 shows a schematic representation of the light source seen by the pixel array.

Fig. 3. Schematic representation of the light source.
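In practice, summing incoherently means adding the intensities |Ei|² of the individual impulse responses rather than the complex fields, so that the relative phases between point sources drop out. A minimal NumPy sketch of this distinction (the grid, spot width, and source positions are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Incoherent image formation: the intensities of the individual impulse
# responses are summed; the relative phases between point sources drop out.
x = np.linspace(-5.0, 5.0, 2001)        # focal-plane coordinate (um), illustrative
positions = np.linspace(-2.0, 2.0, 9)   # image positions of the point sources (um)

def impulse_response(x, x0, width=0.5):
    """Illustrative focused-spot amplitude centered at x0 (sinc-like)."""
    return np.sinc((x - x0) / width)

# Incoherent sum: add the intensities |E_i|^2.
I_incoherent = sum(np.abs(impulse_response(x, x0))**2 for x0 in positions)
# A coherent sum |sum E_i|^2 would instead produce interference fringes,
# which is wrong for an extended incoherent object.
```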

2.2 The different solutions for simulation

2.2.1 Impulse response superposition

In the Lumerical software, a source called “thin lens” allows the simulation of such an objective lens with a given f-number and chief-ray angle (see Fig. 4). This source consists of a sum of plane waves that creates either a Gaussian beam or a sinc-shaped spot at the focal point of the simulated “thin lens”. Thus, to correctly simulate our uniform light source, we have to incoherently sum “thin lens” sources at different spatial positions.

Fig. 4. The “thin lens” source simulated by the FDTD software from Lumerical.

We need to know how many sources are required to correctly simulate a uniform source. The simulations were performed on a two-dimensional structure of 5 pixels with a pitch of 2 μm (see Fig. 5 on the left). In the figure below, example outputs are shown for 5 and 19 Gaussian sources with spacings of 1 μm and 0.5 μm, respectively. The individual Gaussian waves are shown on the left and the sum of the waves on the right.

From these results, we see that 19 sources with a 0.5 μm separation are sufficient to uniformly illuminate the 5 microlenses (a simple numerical check of this convergence is sketched after Fig. 5). The 2D simulations in section 4 use 5 sources per pixel of 2 μm pitch.

Fig. 5. Layout of the simulated structure (left) and intensity in the focal plane for different numbers of Gaussian waves (right).
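The convergence toward uniformity can be checked with a simple model: incoherently sum shifted Gaussian intensity profiles and measure the residual ripple. A minimal NumPy sketch, assuming an illustrative beam waist and evaluating the ripple over the central three pixels (both assumptions are ours, not values from the paper):

```python
import numpy as np

# Uniformity check in the spirit of Fig. 5: incoherently sum shifted
# Gaussian intensity profiles and measure the residual ripple.
x = np.linspace(-5.0, 5.0, 4001)   # focal-plane coordinate (um)

def ripple(n_sources, spacing_um, waist_um=1.0):
    """Peak-to-valley ripple of the summed intensity, central three pixels."""
    centers = (np.arange(n_sources) - (n_sources - 1) / 2) * spacing_um
    I = sum(np.exp(-2 * (x - c)**2 / waist_um**2) for c in centers)
    core = np.abs(x) <= 3.0        # central 3 pixels of the 5-pixel, 2 um pitch layout
    return (I[core].max() - I[core].min()) / I[core].mean()

print(ripple(5, 1.0))    # few, widely spaced sources: visible non-uniformity
print(ripple(19, 0.5))   # 19 sources at 0.5 um: nearly flat illumination
```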

Nevertheless, one must note that the “thin lens” option cannot be used with Bloch periodic boundary conditions, which require a single incidence angle for the light source. In this case, we have to use absorbing boundary conditions and simulate a wide domain to take into account the crosstalk between neighboring pixels (here, 5 pixels are simulated to study the central one). For 3D simulations, the computational volume becomes large and the simulation time increases proportionately. The methodology we have developed to simulate this diffuse-like source reduces the size of the structure that must be simulated, and consequently the simulation time, by using Bloch periodic boundary conditions: in this case, only the central pixel is simulated.

2.2.2 Plane wave superposition

A second approach to simulating the incoherent illumination of the detector is to use plane waves instead of focused beams. Plane waves at non-normal incidence on periodic structures can be implemented in FDTD using Bloch periodic boundary conditions, also known as the sine-cosine method [8]. We demonstrate (see sections 3 and 4) that the incoherent sum of several focused beams characterized by a given f-number (paragraph 2.2.1) is equivalent to the incoherent sum of several plane waves with incidence angles given by that same f-number.

If the number of plane-wave angles N is comparable to the number of focused-beam positions to be simulated, more than an order of magnitude in simulation time can be gained by using plane waves and thus Bloch periodic conditions, as shown in Table 1; a driver-level sketch of the angle loop follows the table.

Table 1. Comparison of the two methodologies for 3D simulation. In both cases, two polarization states must be simulated to calculate the response to unpolarized light. The Bloch boundary conditions used for the second solution require complex-valued fields.
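The loop structure of the plane-wave approach can be sketched at the driver level: sample incidence angles uniformly inside the cone set by the aperture, run one Bloch-periodic FDTD simulation per angle, and add the resulting intensities. The sketch below is ours, not Lumerical’s API: `run_fdtd_plane_wave` is a hypothetical hook standing in for one solver run, and the angle count and dummy profile are illustrative.

```python
import numpy as np

def run_fdtd_plane_wave(theta_rad):
    """Hypothetical hook: run one Bloch-periodic FDTD simulation at the given
    incidence angle and return the Poynting profile Py(x) at the silicon
    interface. Replaced here by a dummy profile for illustration."""
    x = np.linspace(-1.0, 1.0, 201)            # one 2 um pixel pitch (um)
    return np.cos(np.pi * x / 2.0)**2 * np.cos(theta_rad)

# Uniform angular sampling inside the cone defined by the aperture.
NA = 0.26                                      # aperture used in section 4
chief_ray = np.deg2rad(10.0)                   # 0 deg on-axis, 10 deg off-axis case
half_cone = np.arcsin(NA)                      # NA = D/(2f) = sin(phi)

n_angles = 16                                  # 16 plane waves converge well (section 4)
angles = chief_ray + np.linspace(-half_cone, half_cone, n_angles)

# Incoherent sum: intensities from the individual runs are added with the
# uniform weight W(k) = P(k) inside the cone (see section 3).
Py = sum(run_fdtd_plane_wave(t) for t in angles) / n_angles
```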

3. Theoretical demonstration of the methodology

In this section, we demonstrate that the incoherent sum of several focused beams characterized by a given f-number is equivalent to the incoherent sum of several plane waves with incidence angles given by that f-number.

Let us consider a thin lens with diameter D and focal length f. Its transmission as a function of the coordinates inside the exit pupil is t(x,y), which may include vignetting and/or aberrations. This lens is illuminated at wavelength λ by a superposition of mutually incoherent tilted plane waves. Each wave has an amplitude A, and its tilt is described by its direction cosines $\alpha_0 = x_0/f$, $\beta_0 = y_0/f$, and $\gamma_0 = \sqrt{1-\alpha_0^2-\beta_0^2}$, where $x_0$ and $y_0$ are the coordinates of the impulse-response offset in the image plane (see Fig. 6).

The intensity $I_{x_0,y_0}(x_f,y_f)$ produced by each plane wave in the focal plane $(x_f,y_f)$ (corresponding to the electric field $E_i$ in Fig. 3) is given [10] by:

$$I_{x_0,y_0}(x_f,y_f)=\frac{A^2}{\lambda^2 f^2}\left|\iint t(x,y)\,e^{-\frac{2i\pi}{\lambda f}\left[x\,(x_f+x_0)+y\,(y_f+y_0)\right]}\,dx\,dy\right|^2 \tag{1}$$
Fig. 6. Schematic of the light source with the different parameters.

With a uniform distribution of tilted plane waves, we obtain a uniform intensity $I_f$ in the image plane, given by:

$$I_f=\iint I_{x_0,y_0}(x_f,y_f)\,dx_0\,dy_0 \tag{2}$$
$$I_f=A^2\iint\left|\iint t(x,y)\,e^{-\frac{2i\pi}{\lambda f}(x\,x_f+y\,y_f)}\,e^{-\frac{2i\pi}{\lambda f}(x\,x_0+y\,y_0)}\,dx\,dy\right|^2 d\!\left(\frac{x_0}{\lambda f}\right)d\!\left(\frac{y_0}{\lambda f}\right) \tag{3}$$
$$I_f=A^2\iint\left|\mathrm{FT}\left\{t(x,y)\,e^{-\frac{2i\pi}{\lambda f}(x\,x_f+y\,y_f)}\right\}\!\left(\frac{x_0}{\lambda f},\frac{y_0}{\lambda f}\right)\right|^2 d\!\left(\frac{x_0}{\lambda f}\right)d\!\left(\frac{y_0}{\lambda f}\right) \tag{4}$$

where FT denotes the Fourier transform.

Using Parseval’s theorem (energy conservation between the two domains conjugated by the Fourier transform):

$$\iint\left|\mathrm{FT}\{f(x,y)\}(u,v)\right|^2\,du\,dv=\iint\left|f(x,y)\right|^2\,dx\,dy \tag{5}$$

We obtain:

$$I_f=A^2\iint\left|t(x,y)\,e^{-\frac{2i\pi}{\lambda f}(x\,x_f+y\,y_f)}\right|^2\,dx\,dy \tag{6}$$
$$I_f=A^2\iint\left|t(x,y)\right|^2\,dx\,dy \tag{7}$$

If we transform this expression to introduce the wave vector $\vec{k}$:

$$\vec{k}=k_x\,\vec{x}+k_y\,\vec{y}=\frac{2\pi}{\lambda}\,\alpha\,\vec{x}+\frac{2\pi}{\lambda}\,\beta\,\vec{y} \tag{8}$$

with $\alpha = x/f$ and $\beta = y/f$, we then obtain:

$$I_f=A^2\left(\frac{\lambda f}{2\pi}\right)^2\iint\left|t\!\left(\frac{\lambda f}{2\pi}k_x,\frac{\lambda f}{2\pi}k_y\right)\right|^2\,dk_x\,dk_y \tag{9}$$

In the case of a perfect thin lens (with neither aberrations nor vignetting), the transmission of the lens $t(k_x,k_y)$ is simply the pupil function:

$$t(k_x,k_y)=P(k_x,k_y)=\begin{cases}1, & k_x^2+k_y^2\le\left(\mathrm{NA}\cdot k\right)^2\\ 0, & \text{otherwise}\end{cases} \tag{10}$$

with NA the numerical aperture, $\mathrm{NA}=\frac{D}{2f}=\sin\varphi$ (for example, an f/2 objective, D/f = 1/2, gives NA = 0.25 and a cone half-angle φ ≈ 14.5°).

Let us now consider a sum of incoherent tilted plane waves (direction wave vectors $k_x$, $k_y$ and amplitude A) weighted by the function $W(k_x,k_y)$. The total intensity $I_{PW}$ in the plane $(x_f,y_f)$ is:

$$I_{PW}=\iint\left|A\,e^{i(k_x x+k_y y)}\,W(k_x,k_y)\right|^2\,dk_x\,dk_y \tag{11}$$
$$I_{PW}=A^2\iint\left|W(k_x,k_y)\right|^2\,dk_x\,dk_y \tag{12}$$

This expression can be identified with Eq. (9) if $W(k_x,k_y)=P(k_x,k_y)$, i.e. for a uniform distribution of plane waves inside the cone defined by the exit pupil diameter D and the focal length f.
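The key step of the derivation, Eqs. (4)–(7), can be checked numerically in one dimension: integrating the focused-spot intensity over all impulse-response offsets must return the pupil energy integral of Eq. (7), independently of the focal-plane position. A minimal NumPy sketch, with an illustrative top-hat pupil and λf normalized to 1 (both our assumptions):

```python
import numpy as np

# 1-D check of Eqs. (4)-(7): the integral over the offset x0 of the squared
# Fourier transform equals the pupil energy integral (Parseval's theorem),
# whatever the focal-plane position x_f.
N, L = 4096, 40.0                             # samples and pupil-plane window
x = (np.arange(N) - N // 2) * (L / N)
dx = L / N
t = (np.abs(x) <= 5.0).astype(float)          # top-hat pupil transmission t(x)

for x_f in (0.0, 1.3):                        # the result must not depend on x_f
    g = t * np.exp(-2j * np.pi * x * x_f)     # t(x) e^{-2i pi x x_f}, with lambda*f = 1
    G = np.fft.fft(g) * dx                    # FT sampled at the offset frequencies
    lhs = np.sum(np.abs(G)**2) / L            # integral of |FT{...}|^2 over d(x0)
    rhs = np.sum(np.abs(t)**2) * dx           # Eq. (7): pupil energy integral
    print(x_f, lhs, rhs)                      # lhs == rhs up to round-off
```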

Note that although we have assumed here a perfect lens with no aberration and no vignetting, the demonstration remains valid for a lens with imperfect transmission, i.e. with $P(k_x,k_y)\neq 1$ and non-constant inside the cone defined by the pupil. In that case, the lens can still be simulated by a sum of incoherent plane waves, with weights chosen according to the wave vector $(k_x,k_y)$.

Finally, in the case of a non-uniform object, simulations can still be performed with the “thin lens” source, for example in order to image a finite-sized object.

4. Simulation results

The simulations were performed on a two-dimensional structure to test the methodology and determine the number of plane waves required (see the structure in Fig. 5). The results were then compared with those obtained using the thin-lens sources, in which case 5 thin-lens sources per period were used.

With the thin-lens sources, 5 pixels are simulated using absorbing boundary conditions, whereas with the plane-wave sources only one pixel is simulated with Bloch periodic conditions on the boundaries (the pixel has been replicated 4 times in post-processing for comparison with the other method).

The results are shown in Figs. 7, 8 and 9. The pitch of the sensor array is a = 2 μm, the wavelength is 550 nm, and the aperture is NA = 0.26 (see Fig. 4). The Poynting vector along the y direction, $P_y(x)$, is normalized to the source power per unit cell, such that the total transmission T, normalized to the source power, can be calculated as:

$$T=\int_{-a/2}^{a/2}\frac{1}{2}\,P_y(x)\,dx \tag{13}$$
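For reference, evaluating Eq. (13) from a sampled monitor profile is a one-line quadrature; the sketch below uses a dummy $P_y(x)$ as a stand-in for an FDTD monitor result (the profile shape is purely illustrative):

```python
import numpy as np

# Numerical evaluation of Eq. (13): integrate (1/2) Py(x) over one pixel
# pitch. Py below is a dummy profile standing in for an FDTD monitor result,
# assumed already normalized to the source power per unit cell.
a = 2.0                              # pixel pitch (um)
x = np.linspace(-a / 2, a / 2, 401)
Py = np.cos(np.pi * x / a)**2        # illustrative profile, not simulation data

dx = x[1] - x[0]
T = np.sum(0.5 * Py) * dx            # rectangle-rule estimate of Eq. (13)
print(T)
```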

Figure 7 shows the simulated Poynting vector Py for the thin-lens sources (left) and for the plane-wave sources (right) for on-axis pixels, i.e. pixels at the center of the sensor array. Figure 8 presents similar results for off-axis pixels, i.e. pixels at the edge of the sensor, with an angle shift of 10°.

Fig. 7. Poynting results of the simulated structure for on-axis pixels: propagation along the structure (top) and results at the silicon interface, y = 0 μm (bottom). On the left, the 5 pixels with the Gaussian (“thin lens”) sources. On the right, the 5 pixels with 16 plane-wave sources (top) and different numbers of plane waves (bottom).

Fig. 8. Poynting results of the simulated structure for off-axis pixels (10° shift): propagation along the structure (top) and results at the silicon interface, y = 0 μm (bottom). On the left, the 5 pixels with the Gaussian (“thin lens”) sources. On the right, the 5 pixels with 16 plane-wave sources (top) and different numbers of plane waves (bottom).

Figure 9 shows the Poynting vector at the silicon interface (the focal plane of the microlenses) for the central pixel in both cases and for the two methods.

Fig. 9. Comparison of the two methods: Poynting vector at the silicon interface for the central pixel, for on-axis pixels (left) and off-axis pixels with a 10° shift (right).

In these figures, some variations remain for 4 and 8 plane waves, whereas the results for 16, 32 and 64 plane waves appear reasonably converged. Nevertheless, a small number of plane waves (4 or 8) is still reasonably accurate and could be used for rapid initial optimization, to be verified with a larger number of plane waves once the optimal structure has been determined.

5. Conclusion

This paper has presented a new methodology to simulate a uniform light source, based on the FDTD software from Lumerical Solutions. We have demonstrated that the incoherent sum of several focused beams characterized by a given f-number is equivalent to a uniform sum of plane waves with incidence angles inside the cone defined by that f-number. This methodology yields a gain in simulation time of up to a factor of 10 in 3D by using Bloch periodic conditions at the structure’s boundaries.

This new light source is perfectly adapted to CMOS image sensor design verification and optimization, where the structures are complex and diffraction effects are no longer negligible.

Thus, we are able to predict optical performance metrics such as microlens gain, quantum efficiency, crosstalk, or angular response for different structures. We can then identify and understand the potential optical limitations of pixels, helping CMOS sensor design and process engineers optimize the pixel. Finally, we can anticipate these limitations by selecting design/process solutions and providing specifications to achieve good optical performance for CMOS image sensors.

References and links

1. A. El Gamal and H. Eltoukhy, “CMOS Image Sensors: An introduction to the technology, design, and performance limits, presenting recent developments and future directions,” IEEE Circuits & Devices Magazine (May/June 2005).

2. E. R. Fossum, “CMOS Image Sensors: Electronic Camera-On-A-Chip,” IEEE Trans. Electron Devices 44, 1689–1698 (1997).

3. P. B. Catrysse, X. Liu, and A. El Gamal, “QE Reduction due to Pixel Vignetting in CMOS Image Sensors,” Proc. SPIE 3965, 420–430 (2000).

4. P. B. Catrysse and B. A. Wandell, “Optical efficiency of image sensor pixels,” J. Opt. Soc. Am. A 19, 1610–1620 (2002).

5. J. Vaillant and F. Hirigoyen, “Optical simulation for CMOS imager microlens optimization,” Proc. SPIE 5459, 200–210 (2004).

6. H. Rhodes, G. Agranov, C. Hong, U. Boettiger, R. Mauritzon, J. Ladd, I. Karasev, J. McKee, E. Jenkins, W. Quinlin, I. Patrick, J. Li, X. Fan, R. Panicacci, S. Smith, C. Mouli, and J. Bruce, “CMOS Imager Technology Shrinks and Image Performance,” IEEE (2004).

7. K. S. Yee, “Numerical solution of initial boundary value problems involving Maxwell’s equations in isotropic media,” IEEE Trans. Antennas Propag. 14, 302–307 (1966).

8. A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 2nd ed. (Artech House, Boston, MA, 2000).

9. Lumerical Solutions, Inc., http://www.lumerical.com.

10. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, Englewood, CO, 2005), Chap. 5.
