Optica Publishing Group

High-throughput, multiplexed pushbroom hyperspectral microscopy

Open Access

Abstract

We describe a high-throughput hyperspectral microscope. The system replaces the slit of conventional pushbroom spectral imagers with a static coded aperture mask. We present the theoretical underpinnings of the aperture-coded spectral engine and describe two proof-of-concept experimental implementations. Compared to a conventional pushbroom system, the aperture-coded systems have 32 times greater throughput. Both systems have approximately 1 nm spectral resolution over the spectral range of 550–665 nm. For the first design, the spatial resolution of the system is 5.4 μm, while the spatial resolution of the second system ranges from 7.7 μm to 1.54 μm. We describe experimental results from proof-of-concept applications of the imager to hyperspectral microscopy.

©2008 Optical Society of America

1. Introduction

In this paper, we describe the use of static mask coded-aperture spectroscopy to increase photon efficiency in hyperspectral microscopy (HM). In a hyperspectral image each spatial location contains a vector describing the spectral content of the light rather than a scalar value describing the overall irradiance (as in a traditional image). Currently, spectral imaging is more commonly applied to environmental remote sensing and military target discrimination [1, 2]. However, with the recent explosive growth of biomedical optics, hyperspectral imaging has come to be a major technique in microscopy as well [3, 4].

Straightforward application of traditional hyperspectral techniques to biomedical systems, however, can be problematic. The simplest type of hyperspectral imager combines a tomographic (rotational scanning) or pushbroom (linear scanning) front-end with a traditional slit-based dispersive spectrometer. In biomedical optics, however, the sources tend to be weak and spatially-incoherent. Standard dispersive spectrometers have extremely poor photon collection efficiency for incoherent sources. When the source is also weak, the absolute number of collected photons can be very small. Further, this small number of photons must be apportioned amongst the large number of “cells” in the data cube. As a result, a given spatio-spectral element tends to contain very few photons and hence has a poor signal-to-noise ratio (SNR).

In response to this problem, the hyperspectral imaging community developed a number of different direct-view designs that maximize the light gathering efficiency of the systems [5, 6, 7]. These systems do away with the spectrometer slit altogether and simply view the source through a rotating dispersive element. In this approach, the measurements taken at different rotation angles of the dispersive element are projective measurements through the data cube and can be tomographically reconstructed. While the photon efficiency of this type of approach is quite high, there is a drawback. The geometry of the system necessarily limits the range of angles over which projections are made, resulting in an unsampled region of Fourier space. Consequently, the estimate of the data cube is inexact. In the tomographic community this Fourier under-sampling is known as the missing cone problem, because the unsampled region is a conical volume in Fourier space. Algorithmic approaches have been developed which can sometimes “fill-in” this missing data during post-processing [8].

Over the years there have been several attempts to achieve high throughput while simultaneously avoiding the Fourier undersampling issue present in the tomographic methods. The two most notable are scanning-Michelson Fourier-transform spectrometers, and multiplexed pushbroom designs based on digital micro-mirror (DMM) technology [9, 10]. Both approaches are successful, however they also have some significant drawbacks. The Fourier-transform approach involves a scanning interferometer, and hence requires significant mechanical stability. The DMM approach has much in common with our pushbroom technique described below, but requires a DMM array rather than a simple translation stage.

Recently, we have developed a new class of coded-aperture spectroscopy designed for working with the weak, incoherent sources common to biomedical applications [11, 12]. In addition to working as a general spectrometer, this instrument can serve as the spectral engine in a hyperspectral imager. The instrument achieves a powerful compromise—there is no Fourier undersampling, yet the photon collection efficiency is only a factor of two less than the direct-view system (still orders of magnitude above that of traditional whiskbroom, pushbroom, or tunable-filter approaches). The primary drawback to the system architecture is that it is not a snapshot technique—it requires either a sequence of temporal measurements (thereby limiting its usefulness on time-varying signals) or a set of spatially-tiled measurements (sacrificing potential field-of-view to acquire all of the measurements in parallel). With the exception of some very recent work [13, 14], this is a drawback common to all spectral imagers.

This manuscript provides a theoretical treatment of how this new coded-aperture spectrometer can be used as the spectral engine in a hyperspectral imager. Further, we present experimental results from two proof-of-concept hyperspectral microscopes (HM) that we have constructed with these techniques.

2. System theory

This section addresses the underlying theory of the system. The discussion will focus on how our coded-aperture spectral engine can easily be transformed into a mechanically-robust, high-throughput HM. For full details on the theory and operation of the coded-aperture spectrometer, including design of the coded aperture and algorithmic methods for reconstruction, the reader is referred to Ref. [11].

2.1. Measurement diversity

To reconstruct a three-dimensional volume from measurements on a two-dimensional detector plane requires either a time series of measurements or the reduction of the reconstructed volume such that the full series of measurements can be tiled on the available detector array. We refer to these two methods as temporally-sequenced and spatially-sequenced respectively. In the remainder of this paper we discuss only temporally-sequenced systems, although the arguments also hold in general for spatially-sequenced systems. Of course, a series of non-varying measurements is no more informative than a single measurement (ignoring issues of SNR). To provide diversity between the measurements, the system is scanned (in the spatially-sequenced system, the tiled measurements are diverse by design). We consider linearly-scanned pushbroom systems.

2.2. Coded aperture spectral engine

We can model the intensity on the detector plane of our coded-aperture spectrometer as

$$I(x',y') = \int d\lambda\,dx\,dy\;\delta(y-y')\,\delta\!\big(x-x'+\alpha(\lambda-\lambda_c)\big)\,T(x,y)\,S(x,y;\lambda). \tag{1}$$

Here T(x,y) is the transmission pattern of the coded-aperture, S(x,y;λ) is the spectral density of the source at the input aperture (the data cube we hope to estimate), and the Dirac delta functions describe the propagation through a unity-magnification dispersive spectrometer with linear dispersion α and a center wavelength of λc for an aperture located at x=0. Performing the λ and y integrals results in

$$I(x',y') = \int dx\,T(x,y')\,S\!\left(x,y';\frac{x-x'}{\alpha}+\lambda_c\right). \tag{2}$$
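The detector model above has a simple shear-and-sum structure: the source data cube is masked by the code T, sheared along x by the dispersion, and summed over wavelength. The following sketch is our illustration only (array sizes and the integer dispersion step are assumptions, not the authors' software):

```python
import numpy as np

# Numerical sketch of the single-position forward model: mask the data
# cube with the code T, shear each wavelength channel along x by the
# dispersion, and sum. Sizes and the unit dispersion step are illustrative.

def forward_model(T, S, alpha=1):
    """T: (Nx, Ny) mask transmission; S: (Nx, Ny, Nlam) source data cube.
    Returns the (Nx + alpha*(Nlam-1), Ny) detector intensity."""
    Nx, Ny, Nlam = S.shape
    I = np.zeros((Nx + alpha * (Nlam - 1), Ny))
    for l in range(Nlam):                 # shear-and-sum over wavelength
        I[alpha * l:alpha * l + Nx, :] += T * S[:, :, l]
    return I
```

For a monochromatic source this reduces to a single shifted, masked image of the scene, consistent with the delta functions in the model.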

We then define E(x″,x′) as multiplication by an analysis function T̃(x″,y′) and integration over the full range of y′:

$$E(x'',x') = \int_{y'_{\min}}^{y'_{\max}} dy'\,\tilde{T}(x'',y')\,I(x',y') \tag{3}$$
$$= \int_{y'_{\min}}^{y'_{\max}} dy'\,\tilde{T}(x'',y')\int dx\,T(x,y')\,S\!\left(x,y';\frac{x-x'}{\alpha}+\lambda_c\right). \tag{4}$$

At this point we can go no further in our analysis unless the spectral content of the source S is approximately uniform in y,

$$S(x,y;\lambda) \approx I(y)\,S(x;\lambda). \tag{5}$$

When this condition holds, and T(x,y) and T̃(x,y) are properly designed (see Ref. [11]), E(x″,x′) becomes an estimate of the one-dimensional spectral density of the source

$$E(x'',x') \approx \beta\,S\!\left(x'';\frac{x''-x'}{\alpha}+\lambda_c\right). \tag{6}$$

The section below details how to use a series of measurements of this form to reconstruct the full data cube.

2.3. Pushbroom operation

In the pushbroom system design, we use a linear scan to provide measurement diversity. Between measurements, the source is shifted in y relative to the input aperture of the spectrometer. We can generalize Eq. 1 to include the shift,

$$I(x',y',\Delta) = \int d\lambda\,dx\,dy\;\delta(y-y')\,\delta\!\big(x-x'+\alpha(\lambda-\lambda_c)\big)\,T(x,y)\,S(x,y-\Delta;\lambda). \tag{7}$$

When considered over a range of Δ values, this becomes a three-dimensional data set. As before, we perform the λ and y integrals,

$$I(x',y',\Delta) = \int dx\,T(x,y')\,S\!\left(x,y'-\Delta;\frac{x-x'}{\alpha}+\lambda_c\right). \tag{8}$$

Unfortunately, for an arbitrary source, the approximation of Eq. 5 does not hold.

To proceed, we consider the two-dimensional intensity profile in a plane where y′−Δ takes on the constant value p.

$$I_p(x',y') = \int d\Delta\,\delta\!\big(\Delta-(y'-p)\big)\,I(x',y',\Delta) \tag{9}$$
$$= \int dx\,T(x,y')\,S\!\left(x,p;\frac{x-x'}{\alpha}+\lambda_c\right). \tag{10}$$

However, since p is a constant,

$$S\!\left(x,p;\frac{x-x'}{\alpha}+\lambda_c\right) = I(p)\,S\!\left(x;\frac{x-x'}{\alpha}+\lambda_c\right). \tag{11}$$

Thus, in these planes, the approximation of Eq. 5 not only holds, it is exact. We can therefore use the results of Eq. 6 to write

$$E_p(x'',x') = \beta\,S\!\left(x'';\frac{x''-x'}{\alpha}+\lambda_c\right). \tag{12}$$

If we then calculate an estimate of this form for all possible values of p, we can layer the results to form a three-dimensional data set

$$E(x'',p,x') = \beta\,S\!\left(x'',p;\frac{x''-x'}{\alpha}+\lambda_c\right), \tag{13}$$

which is directly proportional to the data cube we wished to measure. Heuristically, it is easy to understand how this procedure works. A source with structure in y cannot meet the uniformity requirement needed for processing. However, working in a plane where y′−Δ is constant is equivalent to piecing together an intensity profile by selecting rows from each measurement that correspond to only a single given y-coordinate in the source (the coordinate y=p in the image taken with no shift). Thus, by construction, these intensity profiles have no variation in y. These planes can then be processed according to the normal procedure and then layered to build up information about the y structure of the source.

3. First proof-of-concept experimental implementation

3.1. System design

To test these hyperspectral imaging ideas, we have constructed two proof-of-concept HM systems. This section describes the optical and mechanical features of the first experimental prototype. A system schematic is shown in Fig. 1 (Left). The microscope front end images the sample onto two intermediate image planes. A simple detector placed at the top port allows for focusing and alignment of the sample. The second image plane is coincident with the input aperture of the coded aperture spectral engine. During operation, the spectral engine is scanned vertically in this plane while a sequence of measurements is made. More detailed descriptions of the various subsystems are provided below. Figure 1 (Right) is a photograph of the first prototype system.

Fig. 1. First prototype HM system. (Left) System schematic. (Right) Photograph of operational prototype.

3.1.1. Microscope front-end

The microscope front end is the portion of the optical path that runs from the sample to the intermediate image planes. The front end begins with a Nikon 10x infinity-corrected microscope objective. The Nikon objective was chosen because it was fully achromatized and did not require a matched tube lens for color-free performance. Next in the optical path is a 250 mm focal-length achromat as the tube lens. A 90/10 beamsplitter is used to create two intermediate image planes. The low-power image plane is coupled to a detector array to provide easy focusing and alignment of the sample, and to allow collection of comparison images. The high-power image plane is coincident with the input plane of the coded aperture spectral engine.

3.1.2. Coded aperture spectral engine

The spectral engine is a compact version of the static multimodal multiplex spectrometer design described in [11, 12] and was built from on-hand optical components. An optical schematic of the system is shown in Fig. 2 (left). The system consists of two Petzval pairs arranged in an approximate 4-f imaging arrangement. A holographic transmission grating sits at the center of the optical system and provides the required dispersion. The output image plane is coincident with the detector plane of an SBIG ST-7XME CCD camera. The input aperture of the spectrograph is a quartz-on-chrome transmission pattern based on an order-64 S-matrix with the rows shuffled to avoid spurious correlations as described in [11]. The specific mask pattern that was used is shown in Fig. 2 (right). The mask features are 54 μm on a side—spanning exactly 6 pixels on the detector. The resulting spectrograph has a spectral range of approximately 550–665 nm with a spectral resolution of approximately 1 nm.

Fig. 2. (Left) Optical design of the spectrograph. (Right) Aperture mask used in the spectral engine. Black (white) indicates opaque (transparent) regions.

3.1.3. Support structure and scanning mechanism

The majority of the mechanical structure of the HM was designed in a 3D computer-aided-design (CAD) program and “printed” on a rapid-prototyping machine. The material used is a UV-cure polymer with acrylic-like mechanical and thermal properties. The individual subsystems (microscope front-end and coded aperture spectral engine) are mounted on several large aluminum support posts. The post for the spectral engine is placed on a computer-controlled translation stage. During operation of the HM, this stage is activated to scan the input aperture of the coded aperture spectral engine across the intermediate image plane formed by the microscope front end.

3.2. Calibration and processing

As discussed in the theory section, the measurements acquired during experimental tests cannot be processed directly, but instead must be used to synthesize intermediate images which are then processed. This section details the calibration and processing procedures used with the first prototype system. In almost all cases, we utilize extremely straightforward methods—more advanced calibration and reconstruction algorithms could no doubt be developed.

Prior to any experimental run, a calibration measurement is recorded. The need for this is driven by the fact that the spectrograph and primary mechanical structures are constructed of rapid-prototyping polymer. As a result, the mechanical alignment of the system is not as robust as if the system were constructed from metallic components. Temperature stability is a particular concern. Therefore, our reconstruction techniques are designed to use a calibration measurement acquired as part of each experimental run.

The calibration measurement is generated by using an Argon or Krypton discharge lamp to front-illuminate a sheet of paper lying in the object plane of the system. This produces a spatially-uniform source with a small number of narrow spectral features. As the source is spatially-uniform, there is no need to scan; the measurements from multiple scan positions are equivalent up to the shot and detector noise. The recorded image contains a small number of images of the spectrograph aperture code—one for each spectral line of the discharge lamp that falls within the spectral range of the instrument. From this image, we algorithmically extract two parameters: 1) the correspondence between rows of the aperture code and rows on the detector plane and 2) a parabolic fit to the “smile” curvature introduced into the image by the presence of the diffraction grating.
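The smile-calibration step can be sketched as follows; the centroid-based line finding and the quadratic fit are our assumptions about one reasonable implementation, not the authors' exact algorithm:

```python
import numpy as np

# Hypothetical sketch of the smile calibration: for an isolated discharge
# line, find the intensity-weighted column centroid in each detector row,
# then fit a parabola giving the line's column position versus row.

def fit_smile(calib_image):
    rows = np.arange(calib_image.shape[0])
    cols = np.arange(calib_image.shape[1])
    weights = calib_image.sum(axis=1)
    centroids = (calib_image * cols).sum(axis=1) / np.maximum(weights, 1e-12)
    coeffs = np.polyfit(rows, centroids, 2)   # parabolic "smile" model
    # per-row shifts relative to the central detector row
    shifts = np.polyval(coeffs, rows) - np.polyval(coeffs, rows[len(rows) // 2])
    return coeffs, shifts
```

The returned per-row shifts can then be applied to each acquired frame to straighten the line images before further processing.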

The experiment then proceeds by acquiring the full set of data at the multiple scan positions. Each acquired image is processed using the calibration parameters. First, the parabolic fit is used to generate row-dependent shifts that can be applied to each row in the image to correct for the smile curvature. Then, the correspondence between aperture code rows and detector rows is used to eliminate detector rows that fall outside the aperture code. Finally, the individual detector rows corresponding to a given aperture code row are binned to produce an overall response for each row of the aperture code. These processed images are then stacked to form I_{x′y′Δ}, a discrete representation of I(x′,y′,Δ) (see Eq. 7).
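The per-frame processing described above, shifting each row to undo the smile and then binning detector rows into aperture-code rows, can be sketched as follows. The 6-row binning factor comes from the mask feature size in the first prototype; the helper itself is our illustration:

```python
import numpy as np

# Sketch of the per-frame processing: undo the smile with the calibrated
# row shifts, then sum each group of detector rows (6 per mask feature in
# the first prototype) into one aperture-code row. Illustrative only.

def process_frame(frame, row_shifts, rows_per_code_row=6):
    corrected = np.empty_like(frame)
    for r in range(frame.shape[0]):       # row-dependent smile correction
        corrected[r] = np.roll(frame[r], -int(round(row_shifts[r])))
    n_code_rows = frame.shape[0] // rows_per_code_row
    trimmed = corrected[:n_code_rows * rows_per_code_row]
    return trimmed.reshape(n_code_rows, rows_per_code_row, -1).sum(axis=1)
```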

As mentioned previously, these images cannot be directly processed. The next step in the reconstruction is to form a set of synthetic images. The experimental system is designed so that the scan shift corresponds to precisely the height of a row in the aperture code. Thus, the first synthesized image is formed by taking the first row from the first Δ-slice of I xyΔ, the second row from the second Δ-slice, and so on, taking the n-th row from the n-th Δ-slice. In general, the n-th row of the k-th synthesized image is taken from the (n+k-1)-th row of the n-th Δ-slice.
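The row-selection rule above has a compact expression in 0-based indices: row n of synthesized image k is row n+k of Δ-slice n. A sketch (function and variable names are ours):

```python
import numpy as np

# Sketch of synthesized-image formation: with a scan step of exactly one
# code row, row n of synthesized image k is row (n + k) of Delta-slice n
# (0-based; the text states the same rule with 1-based indices).

def synthesize(I_stack, n_code_rows):
    """I_stack: (n_slices, n_det_rows, n_cols); requires
    n_slices >= n_code_rows. Returns (n_images, n_code_rows, n_cols)."""
    n_slices, n_det_rows, n_cols = I_stack.shape
    n_images = n_det_rows - n_code_rows + 1
    images = np.zeros((n_images, n_code_rows, n_cols))
    for k in range(n_images):
        for n in range(n_code_rows):
            images[k, n] = I_stack[n, n + k]
    return images
```

By construction, every row of a given synthesized image originates from the same source y-coordinate, which is exactly the constant-(y′−Δ) plane condition of Sec. 2.3.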

The individual synthesized images can then be processed by performing a column-wise non-negative least-squares fit between each of the measured columns and the columns of the aperture code. The result is E_{px″x′}, a discrete representation of E_p(x″,x′). These two-dimensional structures can then be layered to produce E_{x″px′}, a discrete representation of the full 3D reconstruction E(x″,p,x′), which is isomorphic to the source data cube as shown in Eq. 13.
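The column-wise non-negative least-squares step can be sketched with a standard solver (scipy.optimize.nnls here; the paper does not specify which solver was used):

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the column-wise reconstruction: each column of a synthesized
# image is decomposed as a nonnegative combination of the columns of the
# aperture-code matrix T. The solver choice is our assumption.

def reconstruct_plane(T, synth_image):
    """T: (n_code_rows, n_code_cols); synth_image: (n_code_rows, n_det_cols).
    Returns the (n_code_cols, n_det_cols) nonnegative estimate E_p."""
    E_p = np.zeros((T.shape[1], synth_image.shape[1]))
    for j in range(synth_image.shape[1]):
        E_p[:, j], _ = nnls(T, synth_image[:, j])
    return E_p
```

Stacking the returned planes over all p then yields the discrete data-cube estimate.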

3.3. Experimental results

3.3.1. Monochromatic source

As an initial test of the first prototype system, we used it to estimate the data cube from a very simple source—a chrome-on-glass transmission target back-illuminated with HeNe laser light that had been passed through a diffuser. The transmission pattern of the target is shown in Fig. 3. The pattern is constructed from a collection of square opaque and transparent regions approximately 3 μm on a side, arranged in a 64×64 grid. For a source of this type, the data cube is empty with the exception of a single plane that corresponds to the laser wavelength and contains the spatial structure of the transmission mask.

The prototype system was used to make 128 distinct measurements. Between measurements, the computer-controlled translation stage moved the spectral engine 54 μm (corresponding to one row on the aperture code) in y with respect to the intermediate image plane. The data were processed according to the description in Sec. 2.3. Results from the experiment are shown in Fig. 4. Figure 4 (left) is the image produced by summing through the datacube along the wavelength axis. The result therefore corresponds to an estimate of an intensity image of the target. Fine features are clearly visible, including the serifs on the letters, the jagged nature of the letterforms, and the fact that the back-illumination was not spatially uniform. Figure 4 (right) shows spectral estimations generated from the reconstructed data cube. Two spatial locations were selected (corresponding to an “on” and an “off” spatial pixel in the target), and the spectral data for each is shown in the figure. The top plot is for the “on” pixel, and clearly shows a strong monochromatic line at a spectral channel corresponding to the HeNe wavelength. The lower plot is for the “off” pixel and shows no strong spectral components (the vertical scale is identical to that of the upper plot). There is noise in both plots, but it exists at a magnitude that is not visible on these scales.

Fig. 3. Transmission pattern used in the test involving a monochromatic source.

Fig. 4. (Left) Intensity image of the source generated by summing all the spectral channels in the reconstructed data cube. (Right) Spectral estimates for “on” (top) and “off” spatial locations.

The primary visible artifact in the spatial reconstruction is some slight vertical banding. Currently, we believe that this banding arises from fluctuations in the source irradiance during the scanning process. If the source irradiance is not constant in time, this will be reflected in an overall modulation of the brightness from acquisition to acquisition. When these acquired images are used to synthesize the intermediate images, the acquisition-dependent brightness interacts with the spatial variation of the source and the aperture code to create a position-dependent brightness modulation in the synthesized image. When these synthesized images are reconstructed they produce two effects: 1) an overall fluctuation in the mean signal level between synthesized images that results in brightness variations between the reconstructed layers (producing the visible banding) and 2) random noise in the direction perpendicular to the banding. Ultimately, this result is not surprising, as the reconstruction method explicitly assumes object constancy during the acquisition process.

In addition, there is one further issue that is not easily shown in a manuscript. The image of DUKE in the reconstructed data cube is not perfectly orthogonal to the λ-axis. This is equivalent to a disagreement about the laser wavelength from the left to right side of the image. This effect likely results from slight errors in the calibration process and should be correctable with improved algorithms.

3.3.2. Quantum-dot agglomeration

The first prototype system was then tested on a more complicated source. A solution of CdTe quantum dots was prepared and deposited on a microscope slide and allowed to dry. A search of the slide with a regular microscope revealed that a pre-existing scratch on the surface of the slide had become the nucleation site for an agglomeration of the quantum dots. This region was selected as the spatial target. For spectral content, the dots were excited in the UV and violet through use of a small blacklight source. The resulting fluorescence was broadband, yielding an extended source both spatially and spectrally.

As in the previous experiment, the prototype was used to make 128 distinct measurements, with a 54 μm shift of the spectral engine occurring between measurements. An intensity image of the object, as recorded by the alignment/focusing detector, is shown in Fig. 5 (top left). In Fig. 5 (top right) the same image is shown, but spatially downsampled to a resolution that matches that expected from the HM. Figure 5 (bottom) is the fluorescence spectrum of the quantum dots as measured by an Ocean Optics USB2000 spectrometer.

Fig. 5. (Top left) Baseline image of the quantum dot agglomeration. (Top right) Baseline image, downsampled to expected spatial resolution of HM. (Bottom) Baseline fluorescence spectrum of the quantum dots.

The results from the experiment are shown in Fig. 6. Figure 6 (left) shows the intensity image of the scene generated by summing all of the spectral channels. The result is similar to Fig. 5 (top right), which is the downsampled baseline image. The spatial resolution of the reconstruction is slightly worse than expected. Again, there is noticeable banding in the spatial reconstruction, but of a larger magnitude than seen in the monochromatic experiment. This is consistent with our object-fluctuation theory, as the excitation source for this experiment was an inexpensive fluorescent blacklight with significant 60 Hz ripple. Figure 6 (right) shows the spectra associated with “on” and “off” locations in the reconstruction. The “on” reconstruction is a close match to Fig. 5 (bottom), the baseline spectrum of the quantum dots. The “off” spectrum is greatly reduced in magnitude compared to the “on” spectrum, but contains noise features that are significantly larger than were observed in the monochromatic experiment.

Fig. 6. (Left) Intensity image of the source generated by summing all the spectral channels in the reconstructed data cube. (Right) Spectral estimates for “on” (top) and “off” spatial locations.

4. Second proof-of-concept experimental implementation

While the first system prototype demonstrated an impressive SNR, the results were obtained from a structure that was almost entirely constructed of a rapid-prototyping polymer of less-than-ideal mechanical rigidity. In addition, compared to a laboratory grade microscope, the optical quality of the custom microscope and the effective spatial resolution were constrained by the single available objective. Further, the microscope and spectrograph subsystems were individually mounted to the optical table. During the scanning process, vibration and misalignment were sometimes observed. To begin to address these issues, we constructed a second prototype.

4.1. Modifications from first prototype system

In the second prototype HM, the aperture-coded spectrograph of the first prototype is optically-interfaced to an output port on a laboratory grade, Zeiss Axioplan 2 microscope. For mechanical stability, the scanning system for the spectrograph is directly connected to the microscope frame. Mechanical vibrations are therefore reduced, and largely limited to table vibrations.

The different optical system of the Zeiss required the use of coupling optics. The field size at the output ports of the microscope is approximately 25 mm, while the mask size at the entrance of the spectrograph is approximately 3 mm. An off-the-shelf Nikon 0.35× demagnifier was used to optically interface the two systems. A schematic and photograph of the second prototype system are shown in Fig. 7.

As the ultimate application for the system was to be laser-induced fluorescence microscopy, the sample plane of the Axioplan 2 was modified by removing the condenser and sample holder. A smaller sample holder was fabricated with a rapid prototyping machine and attached to a motion-controlled, multi-axis stage for easy focusing and transverse sample motion. As in the first prototype system, two detector arrays were used. The spectrograph again used the SBIG ST-7XME CCD array as its detector. For baseline image analysis a Diagnostic Instruments SPOT camera was coupled to another microscope exit port.

4.2. Calibration and processing

Although we have recently begun experimenting with more advanced nonlinear reconstruction techniques, the results from the second prototype system that are presented in this manuscript are formed using the same calibration and reconstruction process as described in Sec. 3.2.

Fig. 7. (Left) Second prototype system schematic. (Right) Second prototype system photograph.

Fig. 8. (Left) Baseline image of the microsphere as recorded by the SPOT camera. (Right) Baseline spectrum of the microspheres as recorded by the Ocean Optics USB2000 spectrometer.

4.3. Experimental results

4.3.1. Fluorescent microspheres

The second prototype was tested with a broadband spectral source. Fluorescent microspheres from Invitrogen were prepared in a 10:1 solution with deionized water and adhered to a microscope slide with a poly-l-lysine solution. The particular sample was created using 0.2 μm diameter, orange-fluorescent carboxylate-modified microspheres. Peak excitation occurs with 540 nm light, and peak emission occurs at 560 nm. For these experiments, the excitation source was a frequency-doubled YAG laser, operating at 532 nm, which was within the excitation profile of the microspheres. A region of the microscope slide containing a single microsphere was chosen as the test target. An image of the microscope field-of-view using the baseline SPOT camera is shown in Fig. 8 (left). The baseline fluorescence spectrum of these microspheres (using 532 nm excitation) was measured with an Ocean Optics USB2000 spectrometer, and is shown in Fig. 8 (right).

Results from the experiment are shown in Fig. 9. Figure 9 (left) is the intensity image of the source produced by summing through the datacube along the wavelength axis. Note that the image clearly shows a single microsphere corresponding to that in the baseline image. Figure 9 (right) shows spectral estimations generated from the reconstructed data cube. Two spatial locations were selected (corresponding to an “on” and an “off” spatial pixel in the target), and the spectral data for each is shown in the figure. The top plot is for the “on” pixel, and clearly shows a spectral feature resembling the baseline spectrum shown in Fig. 8 (right). The lower plot is for the “off” pixel and shows no strong spectral components (the vertical scale is identical to that of the upper plot).

Fig. 9. (Left) Intensity image of the source generated by summing all the spectral channels in the reconstructed data cube. (Right) Spectral estimates for “on” (top) and “off” spatial locations.

Again we see some significant banding artifacts, as well as some “ghosts” that were not seen in the results from the first prototype. At this point, we have not been able to determine the source of the ghost artifacts and further investigations are underway. Regardless, the overall spatial and spectral reconstructions strongly resemble the baseline images and spectra and add further proof-of-concept support to the general system architecture.

5. Conclusions

In this manuscript, we have proposed a coded aperture spectrometer as a spectral engine in a hyperspectral imager. Two proof-of-concept systems were constructed and tested. The experimental results provide an initial validation of the system architecture and theory. The development of the HM is a continuing effort—future work will focus on further improvements to the second prototype system and exploration of system application to laser-induced fluorescence microscopy.

Acknowledgments

This manuscript reports work conducted when all of the authors were on the staff of Duke University. This work was supported by AFOSR grant #313–6057.

References and links

1. W. Smith, D. Zhou, F. Harrison, H. Revercomb, A. Larar, A. Huang, and B. Huang, “Hyperspectral remote sensing of atmospheric profiles from satellites and aircraft,” Proc. SPIE 4151, 94–102 (2001).

2. C. Stellman, F. Olchowski, and J. Michalowicz, “WAR HORSE (wide-area reconnaissance: hyperspectral overhead real-time surveillance experiment),” Proc. SPIE 4379, 339–346 (2001).

3. T. Pham, F. Bevilacqua, T. Spott, J. Dam, B. Tromberg, and S. Andersson-Engles, “Quantifying the absorption and reduced scattering coefficients of tissuelike turbid media over a broad spectral range with noncontact Fourier-transform hyperspectral imaging,” Appl. Opt. 39, 6487–6497 (2000).

4. R. Schultz, T. Nielsen, J. Zavaleta, R. Ruch, R. Wyatt, and H. Garner, “Hyperspectral imaging: A novel approach for microscopic analysis,” Cytometry 43, 239–247 (2001).

5. M. Descour and E. Dereniak, “Computed-tomography imaging spectrometer: experimental calibration and reconstruction results,” Appl. Opt. 34, 4817–4826 (1995).

6. P. Bernhardt, “Direct reconstruction methods for hyperspectral imaging with rotational spectrotomography,” J. Opt. Soc. Am. A 12, 1884–1901 (1995).

7. J. Mooney, V. Vickers, M. An, and A. Brodzik, “High-throughput hyperspectral infrared camera,” J. Opt. Soc. Am. A 14, 2951–2961 (1997).

8. A. Brodzik and J. Mooney, “Convex projections algorithm for restoration of limited-angle chromotomographic images,” J. Opt. Soc. Am. A 16, 246–257 (1999).

9. C. Snively, G. Katzenberger, and J. Lauterbach, “Fourier-transform infrared imaging using a rapid-scan spectrometer,” Opt. Lett. 24, 1841–1843 (1999).

10. A. Wuttig and R. Riesenberg, “Sensitive Hadamard transform imaging spectrometer with a simple MEMS,” Proc. SPIE 4881, 167–178 (2003).

11. M. Gehm, S. McCain, N. Pitsianis, D. Brady, P. Potuluri, and M. Sullivan, “Static two-dimensional aperture coding for multimodal, multiplex spectroscopy,” Appl. Opt. 45, 2965–2974 (2006).

12. S. McCain, M. Gehm, Y. Wang, N. Pitsianis, and D. Brady, “Coded aperture Raman spectroscopy for quantitative measurements of ethanol in a tissue phantom,” Appl. Spectrosc. 60, 663–671 (2006).

13. M. Gehm, R. John, D. Brady, R. Willett, and T. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007).

14. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, 44–51 (2008).


