Optica Publishing Group

Sensing fabrication errors in diffraction gratings using high dynamic range imaging

Open Access

Abstract

We describe the design of a simple instrument for the identification and characterization of fabrication errors in diffraction gratings. The instrument uses an uncooled charge-coupled device (CCD) camera and a high dynamic range imaging process to detect the light scattered off a grating under test in the focal plane of a lens. We demonstrate that the instrument can achieve a dynamic range of around nine orders of magnitude, and we show that we are able to clearly identify small, periodic fabrication errors in two test gratings that could not be detected with microscopic techniques.

1. Introduction

We describe a low-cost imaging system that is being developed at the National Institute of Standards and Technology (NIST) to identify fabrication errors in diffraction gratings [1, 2]. The construction of a simple instrument for error detection in gratings became necessary as part of an ongoing effort to improve the fabrication process of silicon immersion gratings for applications in high-resolution infrared spectrometry [3–5].

While Joseph Fraunhofer appears to have been the first to investigate diffraction gratings immersed in a refractive medium [6], immersed gratings have only in recent years achieved some currency for applications in spectrometry. The value of immersed gratings for high-resolution spectroscopy was recognized by Hulthén in the early 1950’s, who was first to demonstrate an increased resolving power in a reflection grating in contact with a solid refractive immersion medium [7, 8]. A similar grating was much later described by Dekker [9]. For collimated illumination, the angular relationship between incoming light rays and light rays diffracted at a grating with grating constant G (grooves per unit length) that is immersed in a layer of a refractive material with index of refraction n, is described by a grating equation:

$$n(\sin\alpha + \sin\beta_m) = mG\lambda, \qquad m = 0, \pm 1, \pm 2, \pm 3, \ldots \tag{1}$$
(the notation follows Palmer and Loewen [10]). Equation (1) applies to in-plane diffraction. α denotes the angle of incidence on the immersed grating, βm are the diffraction angles for light diffracted in order m, and λ is the wavelength of the light. Since $|\sin\beta_m| \le 1$, it follows that only a limited number of propagating diffraction orders exists:
$$\frac{n(\sin\alpha - 1)}{G\lambda} \le m \le \frac{n(\sin\alpha + 1)}{G\lambda}. \tag{2}$$

Diffraction orders outside this range become evanescent [11]. The effect of the immersion medium is to increase the number of propagating diffraction orders by a factor of n. It is the access to higher diffraction orders that enables gratings with proportionally higher dispersion and resolving power. This property of immersed gratings is of considerable interest for high-resolution spectrometry at infrared wavelengths because, in general, for a given resolving power the dimensions of diffraction gratings increase in proportion with the wavelength [10]. The large refractive indices of many infrared optical materials make it possible to counteract this effect. A compact immersed grating can achieve the same resolving power as a much larger front surface grating. The reduction in size also makes the gratings easier to fabricate. The appeal of immersion gratings made from silicon for space-based astronomical or remote sensing applications in the infrared at wavelengths larger than 1.1 μm was first noted by Wiedemann and Jennings who describe the fabrication of an immersed silicon test grating [12]. Single crystal silicon is widely available in high quality due to the dominance of the material for semiconductor manufacturing. Two pathways for the fabrication of immersed silicon echelle gratings have been demonstrated. One is to micro-machine the gratings [13]. The other more prevalent pathway is to deep-etch the grating grooves by exploiting the anisotropic etching of crystalline silicon in potassium hydroxide (KOH) [14] using a silicon nitride etch mask [3–5, 15–18].
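As a quick numerical illustration of Eq. (2), the propagating orders can simply be counted. The parameters below (a 50 groove/mm grating at 2.2 μm and an immersion index of 3.44, roughly that of silicon in the infrared) are illustrative assumptions, not values from this paper:

```python
import math

def propagating_orders(wavelength_m, grooves_per_m, alpha_deg, n=1.0):
    """List of propagating diffraction orders m satisfying Eq. (2):
    n*(sin(alpha) - 1)/(G*lambda) <= m <= n*(sin(alpha) + 1)/(G*lambda)."""
    s = math.sin(math.radians(alpha_deg))
    glam = grooves_per_m * wavelength_m
    lo = n * (s - 1.0) / glam
    hi = n * (s + 1.0) / glam
    return list(range(math.ceil(lo), math.floor(hi) + 1))

# Illustrative parameters: 50 grooves/mm, 2.2 um wavelength, 30 deg incidence
in_air = propagating_orders(2.2e-6, 50e3, 30.0, n=1.0)
in_si  = propagating_orders(2.2e-6, 50e3, 30.0, n=3.44)
```

The ratio of the two counts is close to n, illustrating the statement that immersion increases the number of propagating orders by a factor of n.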

A critical challenge for the fabrication of immersed silicon gratings, and the primary driver for the development of the instrument described in this paper, is that fabrication tolerances are tightened by a factor of 1/n because the wavefront error caused by groove placement errors increases in proportion to the diffraction order [19, 20]. This, for example, imposes a limit on the groove placement error for immersed silicon gratings that can be as low as 20 nm for gratings with dimensions on the order of 100 mm [3]. Periodic fabrication errors result in discrete spectral features, called “ghosts”, that occur at angles close to the diffraction angles βm. Random errors such as line edge roughness cause diffuse scattering of light. All fabrication methods for diffraction gratings, especially for the large gratings that are required for high-resolution spectrometry, can only pattern a grating area incrementally. Ruled gratings must be ruled groove-by-groove. Electron-beam lithography systems typically pattern only areas of a few mm², and larger areas must be patterned in mosaic fashion. Similarly, laser lithography systems generally pattern large areas through the sequential exposure of narrow strips, and full-area lithography methods such as contact lithography or projection lithography rely on photomasks that are themselves created through sequential exposure. Replica gratings are an exception, but they inherit the fabrication errors of the replica master. All incremental fabrication methods produce fabrication errors associated with the step increment, often periodic, which must be carefully characterized and controlled to ensure that spurious spectral artifacts in the application of the grating remain below acceptable levels. For the immersed silicon gratings mentioned in the preceding paragraphs the relative intensity of ghosts must not exceed 1×10⁻⁴ [3]. Harrison [21] gives a summary of the challenges faced by fabricators of ruled gratings. Comparable problems are encountered with all modern fabrication techniques.

Identifying fabrication errors in gratings is an equally difficult task, especially when the grating area is large, and microscopy is often unequal to the challenge as we will show in section 4.2. Microscopy at magnifications high enough to image small errors has a very limited field of view that is typically below the spatial period of fabrication errors. An alternative way of testing a plane diffraction grating is to illuminate it with monochromatic light and image the distribution of light reflected by the grating near the focus of a lens. Periodic fabrication errors modulate the wavefront reflected by the grating and manifest themselves as peaks in the focal plane light distribution that are spatially offset from the main reflection. This concept was realized in a focal plane scanner with a photomultiplier to certify gratings made by the Mount Wilson Observatory [22]. The same idea, in modernized form, was recently used by Heusinger et al. for the characterization of transmission gratings fabricated with electron-beam lithography [23, 24].

In this paper we describe a focal plane imager for the performance evaluation of diffraction gratings in which the photomultiplier is replaced by a solid state camera, as in [3], and high dynamic range (HDR) imaging that relies on changes in exposure time, camera gain, and laser power to provide the necessary dynamic range. We show that the instrument provides adequate angular range and resolution to identify fabrication errors. An advantage of focal plane imaging over linear focal plane scanning is that the complete, two-dimensional focal plane distribution of light in the vicinity of a selected diffraction order is measured. The design of our instrument is described in the following section 2. Section 3 summarizes our HDR imaging procedure, and two examples of grating measurements are presented in section 4.


Fig. 1 Schematic and photographs of the measurement setup with the grating oriented to retro-reflect the incoming beam (Littrow). Numbers identify corresponding components of the setup in schematic and photographs. The inset camera image shows the vicinity of a well-focused beam on the camera sensor.


2. Instrument design

The design of our instrument, shown in Fig. 1, is very similar to that of an autocollimator for angle measurements [25]. Light from a frequency stabilized helium-neon laser is coupled into a collimator via an optical fiber. The aspheric collimator has a clear aperture of 23.8 mm and generates a Gaussian beam with a 1/e² radius of 11 mm. The aperture of the collimator is sufficiently large to ensure that diffraction effects due to the truncation of the Gaussian beam remain undetectable. In the version of the instrument that was used to make the measurements described in section 4 the polarization state of the light exiting the collimator is indeterminate because the optical fiber is not polarization maintaining. A circular beam splitter made of silica glass with a diameter of 75 mm and a thickness of 25 mm directs the beam to the grating under test as shown in Fig. 1. The beam splitter coating has a reflectance close to 0.5. The back side of the beam splitter is coated with a v-type anti-reflection coating for the wavelength of the laser with a reflectance of about 3×10⁻³. The grating under test is mounted in a 5-axis mount such that the incoming light is retro-reflected for the desired diffraction order (Littrow configuration [10]). Light reflected by the grating then traverses the beam splitter and is focused by a spherical plano-convex lens with a diameter of 50 mm and a focal length of 100 mm onto the charge-coupled device (CCD) sensor of a camera. The inset image in the upper right corner of Fig. 1 shows the zoomed-in camera image of a well-focused beam on the camera sensor. The choice of focal length and the size of the image sensor together determine the range of angles the instrument can detect. Beam splitter and focusing lens are jointly mounted in a custom mount that was printed from a black plastic material.
The fraction of the incoming collimated beam transmitted by the beam splitter is focused by a second lens onto a photodiode that can be used to monitor the laser power. During most measurements this beam is blocked after the beam splitter to eliminate stray light caused by reflections from the lens surfaces and the photodiode. The entire measurement system is enclosed in a light-tight box made from black, rigid foam boards that block room light from entering the camera and also reduce the amount of stray light that enters the camera via reflection from the enclosure walls. As described previously [1, 2], the angular scale of the camera sensor is calibrated using a Ronchi grating with a known grating period of 100 μm. For the measurements presented in section 4 the scale factor is (1.97 ± 0.02) × 10⁻³ degrees per pixel (using a coverage factor k = 1), and the full angular range is approximately 4.5°.
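The Ronchi-grating calibration can be sketched as follows: the diffraction orders of a grating of known period appear at known angles, and a linear fit of angle against measured peak pixel position yields the angular scale. The peak positions below are hypothetical values, chosen only to reproduce a scale near the quoted 1.97 × 10⁻³ degrees per pixel:

```python
import math

def degrees_per_pixel(peak_px, wavelength_m=632.8e-9, period_m=100e-6):
    """Estimate the sensor's angular scale from pixel positions of the
    diffraction-order peaks (m = 0, 1, 2, ...) of a Ronchi grating.
    For normal incidence: sin(theta_m) = m * lambda / period."""
    angles = [math.degrees(math.asin(m * wavelength_m / period_m))
              for m in range(len(peak_px))]
    # Least-squares slope of diffraction angle versus pixel position
    n = len(peak_px)
    mx = sum(peak_px) / n
    my = sum(angles) / n
    num = sum((x - mx) * (y - my) for x, y in zip(peak_px, angles))
    den = sum((x - mx) ** 2 for x in peak_px)
    return num / den

# Hypothetical peak positions (pixels) for orders 0..3
scale = degrees_per_pixel([1224.0, 1408.1, 1592.2, 1776.4])
```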

2.1. Laser source

A stabilized He-Ne laser with a wavelength of 632.816 nm in air was chosen for our measurements because it is the lowest cost laser source that is sufficiently wavelength stable. The relative frequency stability of the laser is specified by the manufacturer as ±5×10⁻⁹ over 1 hour, which is also the approximate time it took to complete a typical measurement with high dynamic range. For the gratings we describe here, this wavelength change causes a change in diffraction angle no larger than ±0.4×10⁻⁶ degrees. This uncertainty is negligible compared to the angle resolution of about ±2×10⁻³ degrees that follows from the angular range covered by the camera sensor and the number of pixels. We also explored using an unstabilized external cavity diode laser for our measurements, but we found it to be unsuitable because it has a much larger wavelength spread that can only be made sufficiently small by, for example, locking the laser wavelength to a suitable molecular transition [26].
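The quoted angle change can be checked from the grating equation: with the grating held fixed at the Littrow angle, differentiating sin α + sin βm = mGλ at constant α gives dβ = 2 tan β · (dλ/λ). A minimal sketch of this estimate for the 35.8° Littrow configuration used in section 4:

```python
import math

def littrow_angle_drift_deg(beta_deg, rel_wavelength_change):
    """Shift of the diffracted-beam angle for a relative wavelength change,
    with the grating held fixed at the Littrow angle (surface grating, n = 1).
    From sin(alpha) + sin(beta) = m*G*lambda at constant alpha:
    cos(beta)*d(beta) = m*G*d(lambda), and with m*G*lambda = 2*sin(beta)
    this gives d(beta) = 2*tan(beta)*(d(lambda)/lambda) in radians."""
    beta = math.radians(beta_deg)
    return math.degrees(2.0 * math.tan(beta) * rel_wavelength_change)

# Littrow at 35.8 degrees with +/-5e-9 relative wavelength stability
drift = littrow_angle_drift_deg(35.8, 5e-9)
```

The result is of order 4×10⁻⁷ degrees, consistent with the bound stated above.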


Fig. 2 Laser power as function of beam stop (blade) position. Error bars have a length of two standard deviations.


The amount of light from the laser that is coupled into the optical fiber is controlled to ensure that the full dynamic range of the image sensor in the camera is used at the beginning of a measurement sequence. It can also be used to increase the dynamic range of a measurement by acquiring several images with increasing laser power in the manner described in section 3. The laser power is controlled with a razor blade that is mounted on a motorized translation stage and can be moved into the laser beam; this mechanism is shown in the inset on the bottom left of Fig. 1. The motorized stage has a motion resolution of 50 nm and a position repeatability of 0.1 μm, which allows sufficiently fine control of the beam stop (blade) position. Figure 2 shows the average of 30 repeated measurements of the laser power, measured at the exit of the optical fiber, as a function of beam stop position. The standard deviation of the normalized intensity measurements is 1.6×10⁻³. The inset in Fig. 2 shows the power repeatability.
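The measured power curve in Fig. 2 has the error-function shape expected for a Gaussian beam clipped by a knife edge. A sketch of the expected relationship, with an assumed (hypothetical) beam radius at the blade:

```python
import math

def transmitted_fraction(blade_pos_mm, beam_center_mm=0.0, w_mm=0.35):
    """Fraction of a Gaussian beam's power that passes a knife edge.
    The blade blocks everything below blade_pos_mm; w_mm is the 1/e^2
    intensity radius at the blade (an assumed value, not from the paper)."""
    u = math.sqrt(2.0) * (blade_pos_mm - beam_center_mm) / w_mm
    return 0.5 * math.erfc(u)

# Moving the blade across the beam sweeps the power smoothly from 1 to 0.
powers = [transmitted_fraction(x / 10.0) for x in range(-10, 11)]
```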

The photodiode that is used to measure the power of the light entering the system (item 13 in Fig. 1) can be used to calibrate the relationship of beam block position and laser power. The calibration can then be used to set the laser power to any desired value. For measurements in which the laser power is varied, the beam stop (item 10 in Fig. 1) is removed at the beginning of the measurement sequence to measure the laser power. Then the beam is blocked again and changes to the laser power are made using the previously established calibration curve. For the measurements described in section 4 we did not solely rely on the laser power calibration but the actual relative laser power was measured after each adjustment of the laser power.

The temporal fluctuations of the laser power were investigated using a photodiode placed at the fiber exit. Several times, the voltage at the output of the photodiode amplifier was sampled for about an hour at half-second intervals. The measurements show that long-term laser power changes during a one hour interval were negligible. Changes of approximately 1 % of the laser power could be observed in measurements that were made at a lower sampling rate for a duration of about 24 hours. Those changes occurred overnight and seemed correlated with variations in the laboratory room temperature. Short-term fluctuations of the laser power on the second scale had a standard deviation of 1.8 %. These fluctuations are likely the result of the long air path between the laser head and the fiber coupler that is required to accommodate the laser power controller (see Fig. 1). Pointing fluctuations of the laser [27] and refractive index fluctuations due to air currents in the laser beam path both translate into variations in the amount of light that is coupled into the optical fiber.

2.2. Camera

An uncooled monochrome camera with a charge coupled device (CCD) sensor was used for the measurements described in this paper. The sensor has 2448 × 2048 square pixels with a width of 3.45 μm. The camera returns images with 12 bits resolution. An integrated amplifier allows the camera amplification (gain) to be programmed in a range from 0.1 to 15, and an electronic shutter can be used to program the exposure time in a wide range from 100 μs to 32 s. The programmable sensitivity of the camera, together with control of the laser power, enabled the high dynamic range imaging described in section 3.


Fig. 3 Average over all pixels of the camera’s temporal dark noise in raw pixel values (a) and stray light caused by the camera sensor cover glass (b).


While the dynamic range of the camera is nominally 12 bits (4096 levels), in practice it is limited by the presence of noise. Figure 3(a) shows a measurement of the temporal dark noise of the camera for the full ranges of camera gain and exposure time. Except for the longest exposure times the dark noise is close to 6 bits, and it only weakly depends on the camera settings. For the longest exposure times and highest gains the dark noise increases to about 7 bits. The result in Fig. 3(a) allowed us to filter some of the noise because in most of the images acquired during the high dynamic range measurements described in section 4 the light is concentrated in small areas of the image sensor. Pixels with a value below a threshold of 120 were considered noise and marked invalid, except for the image in a sequence with the highest sensitivity. Similarly, pixels with values close to the saturation level of 4096 were considered invalid. The cutoff was set at around 4000 to avoid the significant nonlinearity in the camera response that can occur close to the saturation level.
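The pixel-validity rule described above can be expressed compactly. The threshold values are those stated in the text, while the function name and interface are our own sketch:

```python
import numpy as np

NOISE_THRESHOLD = 120     # raw counts, from the dark-noise measurement (Fig. 3(a))
SATURATION_CUTOFF = 4000  # below 12-bit full scale to stay in the linear range

def valid_mask(image, keep_low=False):
    """Boolean mask of pixels usable for the HDR combination.
    keep_low=True retains sub-threshold pixels; per the text this applies
    only to the highest-sensitivity image of a sequence."""
    image = np.asarray(image)
    not_saturated = image <= SATURATION_CUTOFF
    if keep_low:
        return not_saturated
    return not_saturated & (image >= NOISE_THRESHOLD)

mask = valid_mask([[50, 200, 4095]])
```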

The camera had two windows, one removable window in the camera housing and an uncoated cover glass bonded to the frame of the sensor chip package to protect the sensor. It became apparent early on that the cover glass was causing an unacceptable level of stray light. The CCD sensor reflects a significant fraction of the light falling on its surface (a polished silicon surface would reflect about 35 % at 633 nm). In addition, the light is diffracted because the CCD sensor is a grid of pixels. A fraction of this diffracted light is reflected back onto the sensor by the cover glass and causes a background signal over a large part of the sensor area, shown in Fig. 3(b), that far exceeds the light levels we were hoping to detect. This back-scattered light was eliminated by removing the sensor cover glass.

2.3. Stray light management

In addition to the modification of the camera described in section 2.2, several measures were taken to reduce the level of stray light in the instrument. A light baffle with multiple internal knife edges [28], that was printed from a black plastic material, was placed in front of the camera to restrict the viewing angles of the camera to light that is reflected by the grating under test. The baffle is the barrel-shaped structure between the beam splitter mount and the camera in Fig. 1. Any grating diffracts light into a range of angles and some of the light unavoidably strikes parts of the beam splitter and lens mount. The amount of light scattered off the beam splitter mount surfaces was attenuated below our detection limit by wrapping the four posts at the corners of the beam splitter mount with a commercially available foil that has an ultra-black low-reflectance coating.


Fig. 4 Stray light due to multiple reflections at optical element surfaces: secondary reflection at the beam splitter (a) and secondary reflection at the focusing lens (b).


The stray light caused by multiple reflections at the optical element surfaces proved to be the most difficult to attenuate. This includes light that is diffracted by the camera sensor and then reflected into the forward direction again by one of the lens surfaces or the grating. All possible double reflections were modeled and evaluated using commercial ray-tracing software. The two most significant cases are depicted in Fig. 4. Figure 4(a) shows that a fraction of light from the grating under test is reflected by the back surface of the beam splitter and then reflected again into the forward direction by the beam splitter surface. This generates a beam that is offset from the primary beam and, because it is a reflection from a surface with a large reflectance of 0.5, it carries an appreciable fraction of the power of the primary beam. In the current instrument, this secondary beam partially overlaps with the primary beam and a fraction of it enters the focusing lens and is focused on the camera sensor. This accounts for the spot with a relative intensity of about 1×10⁻⁴ that is visible in the example measurements discussed in section 4. The second significant case of stray light caused by multiple reflections is shown in Fig. 4(b), which shows the result of a double reflection at the surfaces of the focusing lens. The contribution of this reflection to the overall stray light is smaller than in the case of Fig. 4(a) because it is the result of a double reflection at anti-reflection coated surfaces and because the light is not focused onto the camera sensor. However, this stray beam can create a nearly uniform background signal at the camera sensor that potentially affects the noise floor of the measurement setup.

For gratings that are fabricated on a glass substrate and are tested in reflection, as is the case in one of the examples in section 4, the reflection of the test beam at the back surface of the grating substrate can become a contributor to stray light that can be difficult to attenuate. We found it sufficient to frustrate the back surface reflection with black adhesive tape that was bonded to the back surface. In cases where better suppression of the back surface reflection is required it may be necessary to bond a filter glass absorber to the back side of the substrate using an optical blocking adhesive.

3. High dynamic range imaging

High dynamic range (HDR) imaging has become a well-established method in digital photography for generating images with a dynamic range exceeding that of the camera sensor by combining several images with different exposures [29]. The success of HDR methods for the imaging of real-world scenes was the inspiration to use an HDR method to extend the dynamic range of the irradiance images acquired with our measurement setup. The HDR algorithm we use is a simple form of “exposure bracketing” in which several images are acquired with different exposure parameters, scaled to a common sensitivity scale, and combined, an idea that goes back to the early development of HDR imaging [30, 31]. For an HDR measurement a sequence of N images Ik (k = 1…N) with varying camera gains gk, exposure times τk, and relative laser powers rk is acquired. In a typical measurement between 20 and 30 images were acquired with exposure parameters gk and τk chosen such that the camera sensitivity σk = gkτk increases in approximately equal increments on a logarithmic scale. A relatively large number of images was chosen to improve the signal-to-noise ratio and to make it easier to, e.g., evaluate the linearity of the HDR imaging process. (The example measurements in section 4 used 28 images.) This ensured that almost all camera pixels had at least two valid measurements within the linear range of the camera (see Fig. 6) to enable a careful analysis of the HDR algorithm described in this section. At each sensitivity 10 images were averaged. The first image is always acquired at the lowest camera sensitivity (smallest g and τ). Before the first image is captured, the laser power is adjusted such that nearly the full dynamic range of the camera sensor is used. The corresponding laser power is used as a reference for changes in the relative laser power, so that for the first measurement in a sequence r1 = 1. Figure 1 includes, in the upper right corner, an image of the sensor illumination near the intensity peak at the beginning of a measurement sequence. The exposure times τk were then successively increased until the longest exposure time, 32 s, was reached. Then the camera gains gk were increased. For the measurements described in section 4 the relative laser power rk was also increased.

Once all images in a sequence are captured, each image is assigned a weight factor Ωk defined as follows:

$$\Omega_k = \frac{g_1 \tau_1 r_1}{g_k \tau_k r_k}. \tag{3}$$

The weight factor has a value of Ω1 = 1 for the initial image of a sequence, and the weights Ωk decrease with increasing camera sensitivity and laser power. The HDR image H is then calculated as the weighted mean of the valid measurements at each pixel over all images in the measurement sequence:

$$H_{ij} = \frac{1}{n_{ij}} \sum_{k=1}^{n_{ij}} \Omega_k I_{k;ij}. \tag{4}$$

In Eq. (4), i and j are the horizontal and vertical pixel coordinates of a pixel in the images H and Ik. Ik;ij is the raw image value of the pixel in measurement k, and nij is the number of valid measurements for pixel (i,j) in a measurement sequence that fall within the dynamic range of the camera. Pixels with values below the noise threshold or close to the saturation level are not included in the weighted mean. The examples described in section 4 illustrate that the HDR algorithm and the relatively simple measurement setup described here can achieve a dynamic range that covers nearly 10 orders of magnitude.
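A minimal sketch of the combination step, implementing the weights Ωk and the weighted mean of Eq. (4) under the validity rules of section 2.2 (the function name and interface are assumptions, not the paper's code):

```python
import numpy as np

def hdr_combine(images, gains, taus, rel_powers,
                noise_threshold=120, saturation_cutoff=4000):
    """Combine an exposure-bracketed sequence into one HDR image:
    each image k is weighted by Omega_k = (g1*tau1*r1)/(gk*tauk*rk)
    and the valid measurements at each pixel are averaged (Eq. (4))."""
    images = [np.asarray(im, dtype=float) for im in images]
    w0 = gains[0] * taus[0] * rel_powers[0]
    hdr = np.zeros_like(images[0])
    count = np.zeros_like(images[0])
    for k, (im, g, t, r) in enumerate(zip(images, gains, taus, rel_powers)):
        omega = w0 / (g * t * r)
        valid = im <= saturation_cutoff
        if k < len(images) - 1:
            # sub-threshold pixels are kept only at the highest sensitivity
            valid &= im >= noise_threshold
        hdr += np.where(valid, omega * im, 0.0)
        count += valid
    # pixels with no valid measurement are marked NaN
    return np.where(count > 0, hdr / np.maximum(count, 1), np.nan)

# Tiny example: one pixel seen in two images, the second at 10x the gain
h = hdr_combine([np.array([[2000.0]]), np.array([[2000.0]])],
                [1.0, 10.0], [1.0, 1.0], [1.0, 1.0])
```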

3.1. Linear camera model

We show in section 4 that HDR images obtained with the algorithm expressed through Eq. (4) are a sensitive test for fabrication errors in diffraction gratings. If, however, we wish to go further and measure relative irradiances within the area of the imaging sensor we face the considerable challenge that the linearity of the HDR imaging must be demonstrated over a very large dynamic range. In HDR imaging it is generally possible to recover the response function of the camera from a sequence of images with different exposures. We attempted to use the method described by Debevec and Malik [32] to calculate the camera response function, but this proved unsuccessful because the focal plane images are very different from the real-world scenes for which HDR imaging is typically developed. With the exception of object edges, the irradiance changes gradually in real-world scenes. In the case of our focal plane images, the signal is concentrated in few small areas with signal peaks, the irradiance gradients in the vicinity of the peaks are very large, and most of the image contains noise. An extreme example is the image acquired with the lowest camera sensitivity shown in Fig. 1, which contains only about 10 pixels with values significantly above the noise level. The small number of pixels and the large irradiance gradients made the method described in [32] for the calculation of the camera response function numerically unstable.

We, therefore, did the reverse and assumed a linear camera response. An HDR image sequence was then evaluated based on the assumption of a linear camera model, to detect deviations from the linear model. The simplest linear camera model is one in which the pixel values are proportional to the irradiance on the sensor:

$$I_{k;ij} = \beta E_{k;ij}\, g_k \tau_k = \beta E_{k;ij}\, \sigma_k. \tag{5}$$

Ik;ij is the raw value for pixel (i, j) in image k, as in Eq. (4), and Ek;ij is the corresponding irradiance at the pixel that results in the exposure. β is a scale factor describing the sensor response, and σk = gkτk is the sensitivity. When the laser power is not varied, the irradiance can be assumed to remain constant for all measurements in an HDR sequence, and it can be estimated from the valid pixel value measurements using Eq. (5):

$$\bar{E} = \frac{1}{n} \sum_{k=1}^{n} E_k. \tag{6}$$

Ek are the irradiances calculated from valid pixel values in images Ik of a measurement sequence (pixel indices (i,j) are omitted for clarity).

Figure 5(a) shows the 28 pixel values as a function of the normalized camera sensitivity setting $\hat{\sigma}_k = \sigma_k/\sigma_1$ for two pixels selected from the images in the grating measurement described in section 4.1. These two pixels were selected because their progressions of pixel values as the sensitivity is increased are extreme cases of pixels that conform to the linear camera model and pixels that show erratic behavior. Pixel values below the noise threshold, too close to saturation, or with significant deviation from the camera model are marked with crosses in Fig. 5(a). Pixel values within the acceptable range are marked with open circles. The solid curves show the expected behavior of the pixel values according to Eq. (5) and the estimated irradiance $\bar{E}$. The blue curves show a pixel that closely follows the linear camera model in Eq. (5). Even though this was the case for most of the pixels, many pixels show significant departures from the linear camera model, as illustrated by the red curves in Fig. 5(a).

It is helpful to rewrite Eq. (5) in logarithmic form by introducing normalized irradiances $\hat{E}_k = E_k/E_1^{\text{pk}}$, where $E_1^{\text{pk}}$ is the peak irradiance calculated from the peak pixel value $I_1^{\text{pk}}$ in the first image of the measurement sequence. Using the normalized irradiance and normalized sensitivity, the linear camera model in Eq. (5) can be rewritten in logarithmic form:

$$\log\hat{E}_k = \log I_k - \log\hat{\sigma}_k - \log I_1^{\text{pk}}. \tag{7}$$

When the normalized irradiances are each calculated from a data point in Fig. 5(a) and are plotted in a log-log plot as a function of normalized sensitivity, as suggested by Eq. (7), the plot in Fig. 5(b) is the result. The estimated pixel irradiance values on the left side of the graphs do not conform to the linear camera model because they are at the noise level. Pixel values that conform to the linear model must lie on a horizontal line because they each must yield the same value for the constant, normalized irradiance. The horizontal solid lines in Fig. 5(b) represent the mean irradiances at the two pixels estimated from the valid pixel data using Eq. (6). Figure 5(b) suggests a simple algorithm for the identification and rejection of outliers such as those in the red curves in Fig. 5. Starting with the lowest sensitivity, we search for the largest set of successive irradiance values such that the deviation of each value from the set average is less than ±10%. For the two pixels in Fig. 5 the valid pixel data that were identified using this algorithm are indicated with circles. Figure 6(a) shows the number of valid values at each pixel when this algorithm is applied to all pixels in the images of the measurement described in section 4.1. In the corners of the image, where the irradiance is lowest, only one or two valid observations are made even at the highest sensitivity, but at the center of the image the number of valid observations that conform to the linear camera model is around 6. The average number of valid observations for the entire image is 3.3. Figure 6(b) shows the central area of Fig. 6(a). A close look at the blue curve in Fig. 5(b) shows that the valid observations have an irradiance that is not constant but depends weakly on the sensitivity, as is evident from the slight downward slope in the data indicated with circles. This behavior was also observed in other pixels and indicates a non-linearity in the camera sensitivity.
It must be emphasized that our discussion of the camera linearity only extends to the linearity within the dynamic range of the camera at a given sensitivity setting. Work to demonstrate the linearity over the dynamic range of HDR images, e.g., using concepts outlined in [32], is ongoing.
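The outlier search described above (the largest run of successive irradiance values whose deviation from the run average stays within ±10%) can be sketched as follows; this is a direct reading of the stated rule, and the paper's exact implementation may differ:

```python
def longest_consistent_run(irradiances, tol=0.10):
    """Largest run of successive values whose deviation from the run
    average stays within +/- tol of that average.
    Returns (start_index, length) of the best run found.
    Runs are extended greedily from each starting index."""
    n = len(irradiances)
    best = (0, 0)
    for start in range(n):
        for end in range(start + 1, n + 1):
            run = irradiances[start:end]
            avg = sum(run) / len(run)
            if all(abs(v - avg) <= tol * avg for v in run):
                if end - start > best[1]:
                    best = (start, end - start)
            else:
                break  # adding this value broke the run; try the next start
    return best

# The first three values agree within 10%; the last two are outliers.
start, length = longest_consistent_run([1.0, 1.02, 0.98, 2.0, 2.1])
```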


Fig. 5 Pixel values for two pixels, shown in red and blue, from an image sequence in a HDR measurement. Raw camera pixel values (a) and normalized irradiances (b). Values conforming to the linear camera model are indicated by circles, non-conforming values are indicated by crosses.


The speckle-like variation in the number of valid observations, and the presence of features that are clearly the result of optical interference suggest that the erratic behavior of pixels is not caused by the imaging sensor but is primarily the result of coherent noise in the light field. In the current measurement setup this noise is the primary limit of the measurement sensitivity.


Fig. 6 Number of observations conforming to a linear camera model for the whole image sensor (a) and near the center of the sensor (b). The data are from the measurement described in section 4.1.


4. Measurement examples

We illustrate the capability of our instrument with measurements of two gratings. The first grating is an immersed silicon echelle grating that was fabricated at the University of Texas at Austin Astronomy Department using contact lithography with a commercial photomask [3, 16]. Since silicon is not transparent at 633 nm the grating had to be tested as a surface grating in reflection. The second grating is a computer-generated hologram (CGH) that was fabricated at NIST on a borosilicate glass substrate. The hologram has nearly straight lines and, in the first diffraction order, generates an elliptical wavefront for the form measurement of an elliptical mirror intended for the focusing of x-rays [33].

Fig. 7 Confocal microscope image of echelle grating topography (a), HDR focal image (b), horizontal section of the HDR image through 0° (c), and vertical section of the HDR image through 0° (d). Red bars in (d) indicate the standard deviation of 6 measurements below and above the mean.

4.1. Echelle grating

The photograph in Fig. 1 shows the measurement setup with the echelle grating. The grating was fabricated on a silicon slab of approximately 10 mm thickness and about 100 mm diameter using contact lithography as described by Marsh et al. [3]. The square grating area has a width close to 70 mm. It is mounted such that the grating grooves are oriented in the vertical direction. An attempt was made to measure the topography of the grating with confocal microscopy; the result is shown in Fig. 7(a). The grating lines have a trapezoidal cross section with two different slopes that result in blaze angles of 72.1° and 35.8°. The measurement results presented in Fig. 7 were made in Littrow configuration at an angle of incidence of 35.8° because the test beam overfills the grating area at the larger blaze angle.

Figure 7(b) shows the HDR image obtained with the algorithm described in section 3 without changes in the laser power. Only camera gain and exposure time were changed to obtain images at 28 different camera sensitivities that span the full dynamic range of the camera. The images were combined using Eq. (4). The peak at the center of the image in Fig. 7(b) is due to the primary reflection from the grating. It is taken as the origin of the relative angular scales. Figure 7(b) shows a peak at (0.5°, 0.3°) that is caused by the secondary reflection at the beam splitter discussed in section 2.3. This reflection is present in all measurements made with the current version of the instrument and it is a significant contributor to the stray-light background.

Figure 7(c) is a horizontal section through the center of the image in Fig. 7(b), corresponding to the direction orthogonal to the grating grooves, and Fig. 7(d) is a section in the vertical direction. The horizontal section in Fig. 7(c) clearly shows the presence of periodic errors, with a relative intensity of about 1×10⁻⁶, that are similar to Rowland ghosts, even though their spatial period is smaller than that of Rowland ghosts, which typically cluster near the primary peak. A pronounced periodic error is evident in the vertical direction of the HDR image in Fig. 7(b). The peaks in the vertical direction do not line up perfectly because the effect of non-paraxial diffraction becomes noticeable at the high angle of incidence [11]. The section shown in Fig. 7(d) is along the circle that best fits the locations of the peaks. This periodic error is likely a residue of the fabrication process. The fabrication process for the grating requires first patterning a silicon nitride layer on the silicon substrate before the grating lines are etched into the silicon with a potassium hydroxide (KOH) etch. The patterning of the nitride layer was accomplished with contact lithography using a commercial photomask. Photomasks, with the exception of those needed for leading-edge applications, are generally created with laser lithography tools that pattern a chromium layer on a glass substrate. All laser lithography tools pattern the chromium layer incrementally, and this step-by-step exposure of the photomask is the likely cause of the periodic error.

Fig. 8 HDR focal image of the echelle grating with laser power variation (a) and vertical section of the HDR image through 0° with (red line) and without (black line) increases in the laser power. Note that (a) is plotted with a compressed irradiance scale that allots most of the colorbar range to the low end of the scale.

When the dynamic range of the camera is exhausted, a further increase of the dynamic range in a HDR measurement can be achieved by successively increasing the laser power. This was done in the measurement shown in Fig. 8. Once the highest sensitivity setting of the camera was reached, the laser power at the exit of the optical fiber was increased in five steps up to 100 times the initial laser power. The resulting HDR focal image, calculated with Eq. (4), is shown in Fig. 8(a) (all values larger than 1×10⁻⁶ are plotted in the same color). A vertical conical section is shown in Fig. 8(b) together with the corresponding section from Fig. 7(b) without variation of the laser power. Increasing the laser power resulted in a small increase in dynamic range, primarily near the edges of the image where the light level is lower. This appears to confirm that the current measurement setup is primarily limited by stray light: an increase in laser power results in a nearly proportional increase in background.
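The combination of an exposure stack into one HDR image per Eqs. (3) and (4) can be sketched as follows. This is an illustrative implementation under assumed array names (gains g_k, exposure times τ_k, relative laser powers r_k, and a validity mask from the outlier rejection), not the authors' code: each raw image I_k is scaled by Ω_k = (g₁τ₁r₁)/(g_kτ_kr_k) and the valid observations are averaged pixel by pixel.

```python
import numpy as np

def hdr_combine(images, gains, exposures, powers, valid):
    """Combine a stack of raw images into one HDR image.
    images:    (K, H, W) raw pixel values I_k
    gains, exposures, powers: length-K arrays g_k, tau_k, r_k
    valid:     (K, H, W) boolean mask of observations that conform
               to the linear camera model."""
    images = np.asarray(images, dtype=float)
    s = gains * exposures * powers                 # g_k tau_k r_k
    omega = s[0] / s                               # Omega_k, Eq. (3)
    scaled = images * omega[:, None, None]         # Omega_k I_k
    n_valid = valid.sum(axis=0)                    # n_ij
    hdr = np.where(n_valid > 0,                    # Eq. (4): average of
                   (scaled * valid).sum(axis=0)    # the valid, scaled
                   / np.maximum(n_valid, 1),       # observations
                   0.0)
    return hdr, n_valid
```

Because Ω_k references the first (lowest-sensitivity, lowest-power) exposure, all valid observations of a pixel are brought onto a common irradiance scale before averaging, which is what allows the laser power steps in Fig. 8 to extend the dynamic range.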

Close inspection of Fig. 8(a) shows the presence of numerous weak spots of focused light that do not fall on either the horizontal or vertical section. While these spots must be caused by periodic fabrication errors, we were unable to identify the nature of the fabrication error.

Fig. 9 Micrograph of photon sieve hologram (a), HDR focal image (b), horizontal section in the HDR image through 0° (c), and vertical section in the HDR image through 0° (d). The scale bar in image (a) has a length of 10 μm. Red bars in (d) indicate the standard deviation of 6 measurements below and above the mean.

4.2. Computer-generated hologram

Our second example measurement is that of a photon sieve that was fabricated at NIST on a glass substrate using direct-write photolithography. Photon sieves are computer-generated holograms in which the Fresnel zones are replaced by circular apertures that are placed at random positions along the Fresnel zone [34]. The photon sieve shown in Fig. 9(a) was designed to generate a test wavefront in the first diffraction order for the form measurement of a shallow elliptical x-ray mirror [33]. It is a transmission phase hologram in which the areas outside the circular apertures are etched into the glass substrate, resulting in elevated circular areas. The full hologram covers an area of 20 mm × 40 mm. As in the case of the echelle grating described in section 4.1, the photon sieve was measured in reflection. As the glass substrate is transparent, it was necessary to attenuate the reflection off the back surface of the substrate. A piece of black adhesive tape bonded to the back surface was found to provide sufficient attenuation. The photon sieve was tested in Littrow configuration at the first diffraction order.

In the HDR image in Fig. 9(b), which was obtained without laser power variation, the most conspicuous difference from the echelle grating result in Fig. 7(b) is a noise floor that is approximately an order of magnitude higher. Two reasons account for this difference. One is that photon sieves scatter light over a wide range of angles, except for the ±1 diffraction orders, resulting in a diffuse background [34]. The other is that the photon sieve has a specular reflection (zeroth order) that is separated from the first order by just over two degrees. The light from this reflection can be scattered at the beam splitter and optical mounts; it contributes to the background and accounts for the background slope in Fig. 9(b).

The horizontal section in Fig. 9(c) shows no periodic errors above the noise level. The vertical section in Fig. 9(d), on the other hand, shows peaks that clearly indicate a periodic error in the grating. The error bars indicate the standard deviation in the peak heights calculated from 6 measurements. These peaks in the focal plane image are unambiguously traceable to the lithography system that was used to create the photon sieve pattern. The lithography tool [35] exposes a photoresist incrementally by sequentially exposing strips that are oriented horizontally in Fig. 9. A small exposure dose error where the strips abut results in a periodic fabrication error. The width of the exposed strips in our lithography system was 65 μm, which could be estimated from the spacing of the beamlets in the lithography tool [35]. When the spatial period of the exposure error is calculated from the spacing of the peaks in Fig. 9(d), the result is (66.5 ± 0.7) μm (k = 1). It is remarkable that this fabrication error is so small that it is not discernible in the optical microscopy image in Fig. 9(a), yet it can be clearly identified in the HDR focal image.
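The conversion from peak spacing to spatial period rests on the grating equation: a periodic error with period P produces satellite peaks at angles with sin θ_m = mλ/P, so P = λ/sin(Δθ) for the spacing Δθ between adjacent peaks. A minimal sketch, in which the 0.545° peak spacing is a hypothetical value chosen only to illustrate the magnitudes involved (λ = 633 nm, period near the 65 μm strip width):

```python
import math

def error_period(wavelength, peak_spacing_deg):
    """Spatial period of a periodic fabrication error from the angular
    spacing of its diffraction peaks: P = lambda / sin(delta_theta)."""
    return wavelength / math.sin(math.radians(peak_spacing_deg))

# Hypothetical example: a 0.545 deg peak spacing at 633 nm corresponds
# to a period of roughly 66.5 um, on the order of the 65 um strip width
# of the lithography tool.
p = error_period(633e-9, 0.545)
```

The uncertainty in the period follows directly from the uncertainty in the measured peak spacing, which is how a standard uncertainty such as the quoted ±0.7 μm (k = 1) can be obtained from repeated measurements.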

5. Conclusion

We have demonstrated that a simple focal plane imaging instrument in combination with high dynamic range imaging using a simple, uncooled camera can detect minute fabrication errors in diffraction gratings that are difficult to detect with conventional imaging methods such as microscopy. Even the prototype instrument we have described has achieved a dynamic range of about nine orders of magnitude, and it measures the full two-dimensional angular distribution of light scattered in the angular vicinity of the selected diffraction order. One clear shortcoming of the instrument as it is currently constructed is the noise level, which is primarily caused by stray light. Improved designs for the beam splitter and focusing optics, and more efficient trapping of scattered light, should result in a significant lowering of the stray light background. Laser speckle in the images can be reduced by lowering the spatial coherence of the test beam. Finally, once the instrument is no longer limited by stray light, a cooled camera would further increase the sensitivity.

The HDR algorithm used for our measurements is time-consuming because we are using image sequences with a large number of exposures. It is desirable to find optimal image sequences with exposure parameters that achieve a desired dynamic range and signal-to-noise ratio with the smallest number of images. Adaptive exposure estimation methods have been described (see, for example, reference [36] and references therein) and it may be possible to use an adaptive exposure method to improve the efficiency of the HDR imaging in our measurements.

In the longer term, the development of image sensors capable of single photon detection [37] may make it possible to construct a focal plane imaging system for the inspection of diffraction gratings with a sensitivity that can currently only be reached with photo-multipliers.

Funding

National Aeronautic and Space Administration (NASA) (APRA NNH15AB221); National Institute of Standards and Technology (NIST).

Acknowledgments

We gratefully acknowledge the extended loan of an immersed silicon grating for testing purposes by Dr. Cindy B. Brooks of the University of Texas at Austin, and helpful discussions with Benjamin T. Kidder, also of the University of Texas at Austin, of focal plane imaging for the detection of fabrication errors in gratings. The fabrication of the computer-generated hologram described in this paper was in part performed using tools, processes, and expertise provided by the Center of Nanoscale Science and Technology (CNST) at NIST. The grating topography shown in Fig. 7(a) was measured at NIST by Xiaoyu Alan Zheng.

References

1. M. I. Afzal, S. C. Corzo-Garcia, and U. Griesmann, “A focal plane imager with high dynamic range to identify fabrication errors in diffractive optics,” in Optical Fabrication and Testing (Optical Society of America, 2017), paper OW3B.2.

2. S. C. Corzo-Garcia, M. I. Afzal, B. T. Kidder, M. M. Grigas, and U. Griesmann, “A high dynamic range imaging method for the characterization of periodic errors in diffraction gratings,” in Reflection, Scattering, and Diffraction from Surfaces VI, vol. 10750 (International Society for Optics and Photonics, 2018), p. 1075009.

3. J. P. Marsh, D. J. Mar, and D. T. Jaffe, “Production and evaluation of silicon immersion gratings for infrared astronomy,” Appl. Opt. 46, 3400–3416 (2007). [CrossRef]   [PubMed]  

4. C. B. Brooks, B. T. Kidder, M. M. Grigas, U. Griesmann, D. W. Wilson, R. E. Muller, and D. T. Jaffe, “Process improvements in the production of silicon immersion gratings,” in Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation II, vol. 9912 (International Society for Optics and Photonics, 2016), p. 99123Z. [CrossRef]  

5. C. B. Brooks, B. T. Kidder, M. M. Grigas, and D. T. Jaffe, “Process and metrology developments in the production of immersion gratings,” in Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation III, vol. 10706 (2018), p. 1070654.

6. J. Fraunhofer, “Neue Modifikation des Lichtes durch gegenseitige Einwirkung und Beugung der Strahlen, und Gesetze derselben [New modification of light through mutual interaction and diffraction of rays, and its laws],” in Joseph von Fraunhofer’s Gesammelte Schriften, E. Lommel, ed. (Verlag der Königlichen Akademie, 1888), pp. 91–94.

7. E. Hulthén, “Refraction gratings,” Ark. Fys. 2, 439–441 (1950).

8. E. Hulthén and H. Neuhaus, “Diffraction gratings in immersion,” Nature 173, 442–443 (1954). [CrossRef]  

9. H. Dekker, “An immersion grating for an astronomical spectrograph,” in Instrumentation for Ground-Based Optical Astronomy. Santa Cruz Summer Workshops in Astronomy and Astrophysics, L. B. Robinson, ed. (Springer, 1988), pp. 183–188.

10. C. A. Palmer and E. G. Loewen, Diffraction grating handbook (Richardson Gratings, Newport Corp., 2014), 7th ed.

11. J. E. Harvey and C. L. Vernold, “Description of diffraction grating behavior in cosine space,” Appl. Opt. 37, 8158–8160 (1998). [CrossRef]  

12. G. Wiedemann and D. J. Jennings, “Immersion grating for infrared astronomy,” Appl. Opt. 32, 1176–1178 (1993). [CrossRef]   [PubMed]  

13. Y. Ikeda, N. Kobayashi, Y. Sarugaku, T. Sukegawa, S. Sugiyama, S. Kaji, K. Nakanishi, S. Kondo, C. Yasui, H. Kataza, T. Nakagawa, and H. Kawakita, “Machined immersion grating with theoretically predicted diffraction efficiency,” Appl. Opt. 54, 5193–5202 (2015). [CrossRef]   [PubMed]  

14. K. Williams and R. S. Muller, “Etch rates for micromachining processing,” J. Microelectromech. Syst. 5, 256–269 (1996). [CrossRef]  

15. W.-T. Tsang and S. Wang, “Preferentially etched diffraction gratings in silicon,” J. Appl. Phys. 46, 2163–2166 (1975). [CrossRef]  

16. W. Wang, M. Gully-Santiago, C. Deen, D. J. Mar, and D. T. Jaffe, “Manufacturing of silicon immersion gratings for infrared spectrometers,” in Modern Technologies in Space- and Ground-Based Telescopes, vol. 7739 (International Society for Optics and Photonics, 2010), p. 77394L.

17. J. Ge, B. Zhao, S. Powell, A. Fletcher, X. Wan, L. Chang, H. Jakeman, D. Koukis, D. B. Tanner, D. Ebbets, and P. J. Kuzmenko, “Silicon immersion gratings and their spectroscopic applications,” in Modern Technologies in Space- and Ground-Based Telescopes and Instrumentation II, vol. 8450 (International Society for Optics and Photonics, 2012), p. 84502U. [CrossRef]  

18. B. T. Kidder, C. B. Brooks, M. M. Grigas, and D. T. Jaffe, “Manufacturing silicon immersion gratings on 150 mm material,” in Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation III, vol. 10706 (International Society for Optics and Photonics, 2018), p. 1070626.

19. C. Pruss, S. Reichelt, H. J. Tiziani, and W. Osten, “Computer-generated holograms in interferometric testing,” Opt. Eng. 43, 2534–2540 (2004). [CrossRef]  

20. A. F. Fercher, “Computer-generated holograms for testing optical elements: Error analysis and error compensation,” Opt. Acta 23, 347–365 (1976). [CrossRef]  

21. G. R. Harrison, “The challenge of the ruled grating,” Phys. Today 3, 6–12 (1950). [CrossRef]  

22. H. B. Babcock and H. W. Babcock, “The ruling of diffraction gratings at the Mount Wilson Observatory,” J. Opt. Soc. Am. 41, 776–786 (1951). [CrossRef]  

23. M. Heusinger, M. Banasch, T. Flügel-Paul, and U.-D. Zeitner, “Investigation and optimization of Rowland ghosts in high efficiency spectrometer gratings fabricated by e-beam lithography,” in Advanced Fabrication Technologies for Micro/Nano Optics and Photonics IX, vol. 9759 (International Society for Optics and Photonics, 2016), p. 97590A.

24. M. Heusinger, T. Flügel-Paul, and U.-D. Zeitner, “Large-scale segmentation errors in optical gratings and their unique effect onto optical scattering spectra,” Appl. Phys. B 122, 222 (2016). [CrossRef]  

25. J. Z. Malacara, “Angle, distance, curvature, and focal length measurements,” in Optical Shop Testing, D. Malacara, ed. (John Wiley & Sons, 1992), pp. 715–741, 2nd ed.

26. H. R. Simonsen and A. Zarka, “Iodine stabilized extended-cavity diode lasers at λ = 633 nm: result of an international comparison,” Metrologia 35, 197–202 (1998). [CrossRef]  

27. J. Gray, P. Thomas, and X. D. Zhu, “Laser pointing stability measured by an oblique-incidence optical transmittance difference technique,” Rev. Sci. Instrum. 72, 3714–3717 (2001). [CrossRef]  

28. R. P. Heinisch and C. L. Jolliffe, “Light baffle attenuation measurements in the visible,” Appl. Opt. 10, 2016–2020 (1971). [CrossRef]   [PubMed]  

29. J. J. McCann and A. Rizzi, The art and science of HDR imaging (John Wiley and Sons, 2012).

30. B. C. Madden, “Extended intensity range imaging,” Tech. Rep. MS-CIS-93-96, University of Pennsylvania (1993).

31. S. Mann and R. W. Picard, “Being ’undigital’ with digital cameras: Extending dynamic range by combining differently exposed pictures,” Tech. Rep. 323, M.I.T. Media Lab Perceptual Computing Section (1994).

32. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, (ACM Press/Addison-Wesley Publishing Co., 1997), SIGGRAPH ’97, pp. 369–378.

33. U. Griesmann, Q. Wang, J. A. Soons, and L. Assoufid, “Figure metrology for x-ray focusing mirrors with Fresnel holograms and photon sieves,” in Optical Fabrication and Testing (2014), paper OTu4A.5.

34. L. Kipp, M. Skibowski, R. L. Johnson, R. Bernd, R. Adelung, S. Harm, and R. Seemann, “Sharper images by focusing soft x-rays with photon sieves,” Nature 414, 184–188 (2001). [CrossRef]   [PubMed]  

35. H. I. Smith, M. E. Walsh, F. Zhang, J. Ferrara, G. Hourihan, D. Smith, R. Light, and M. Jaspan, “An innovative tool for fabricating computer-generated holograms,” J. Phys.: Conf. Ser. 415, 012037 (2013).

36. M. A. Martínez, E. M. Valero, and J. Hernández-Andrés, “Adaptive exposure estimation for high dynamic range imaging applied to natural scenes and daylight skies,” Appl. Opt. 54, B241–B250 (2015). [CrossRef]  

37. J. J. Ma, S. Masodian, D. A. Starkey, and E. R. Fossum, “Photon-number-resolving megapixel image sensor at room temperature without avalanche gain,” Optica 4, 1474–1481 (2017). [CrossRef]  



Figures (9)

Fig. 1 Schematic and photographs of the measurement setup with the grating oriented to retro-reflect the incoming beam (Littrow). Numbers identify corresponding components of the setup in schematic and photographs. The inset camera image shows the vicinity of a well-focused beam on the camera sensor.

Fig. 2 Laser power as function of beam stop (blade) position. Error bars have a length of two standard deviations.

Fig. 3 Average over all pixels of the camera’s temporal dark noise in raw pixel values (a) and stray light caused by the camera sensor cover glass (b).

Fig. 4 Stray light due to multiple reflections at optical element surfaces: secondary reflection at the beam splitter (a) and secondary reflection at the focusing lens (b).

Equations (7)

n(\sin\alpha + \sin\beta_m) = m G \lambda, \qquad m = 0, \pm 1, \pm 2, \pm 3, \ldots

\frac{n(\sin\alpha - 1)}{G\lambda} \leq m \leq \frac{n(\sin\alpha + 1)}{G\lambda} \, .

\Omega_k = \frac{g_1 \tau_1 r_1}{g_k \tau_k r_k} \, .

H_{ij} = \frac{1}{n_{ij}} \sum_{k=1}^{n_{ij}} \Omega_k I_{k;ij} \, .

I_{k;ij} = \beta E_{k;ij} g_k \tau_k = \beta E_{k;ij} \sigma_k \, .

\bar{E} = \frac{1}{n} \sum_{k=1}^{n} E_k \, .

\log(\hat{E}_k) = \log(I_k) - \log(\hat{\sigma}_k) - \log(I_1^{\mathrm{pk}}) \, .