A novel hyperspectral imaging system has been developed that takes advantage of the tunable path delay between orthogonal polarization states of a liquid crystal variable retarder. The liquid crystal is placed in the optical path of an imaging system and the path delay between the polarization states is varied, causing an interferogram to be generated simultaneously at each pixel. A data set consisting of a series of images is recorded while varying the path delay; Fourier transforming the data set with respect to the path delay yields the hyperspectral data-cube. The concept is demonstrated with a prototype imager consisting of a liquid crystal variable retarder integrated into a commercial 640 × 480 pixel CMOS camera. The prototype can acquire a full hyperspectral data-cube in 0.4 s, and is sensitive to light over a 400 nm to 1100 nm range with a dispersion-dependent spectral resolution of 450 cm⁻¹ to 660 cm⁻¹. Similar to Fourier transform spectroscopy, the imager is spatially and spectrally multiplexed and therefore achieves high optical throughput. Additionally, the common-path nature of the polarization interferometer yields a vibration-insensitive device. Our concept allows the spectral resolution, imaging speed, and spatial resolution to be traded off in software to optimally address a given application. The simplicity, compactness, potential low cost, and software adaptability of the device may enable a disruptive class of hyperspectral imaging systems with a broad range of applications.
© 2015 Optical Society of America
1.1 Hyperspectral imaging
Hyperspectral imaging (HSI) refers to methods and devices for acquiring hyperspectral data sets or data-cubes, which typically comprise images where continuously sampled, finely resolved spectral information is provided at each pixel. The field had its genesis in the 1970s and 1980s after the invention of the charge-coupled device, and developed alongside the LANDSAT and other NASA remote-sensing programs.
While HSI has traditionally been applied to remote sensing for geology, mineral identification, forestry, and other tasks, the application space of HSI is now quite diverse. For example, HSI can be applied to defense sector problems such as camouflaged target detection or detection of vehicle tracks [3,4]. Medical tasks such as assessing tissue oxygenation and diagnosing melanoma and other cancers benefit from the analytical capability of HSI. In the emerging field of precision agriculture, especially in combination with the prevalence of unmanned aerial vehicles (UAVs), HSI provides a wealth of information with regard to plant and soil conditions and how they are affected by water, fertilizers, and other parameters.
While the value of HSI has been known for some time, its use has been restricted due to a number of inhibiting factors: the high cost of sensors, their bulkiness, and the lack of trained specialists with the competencies necessary to obtain and analyze hyperspectral data [1,2]. Novel approaches to HSI are needed that address these limitations to enable its more widespread adoption and to allow more researchers to participate in the maturation of HSI data analysis techniques.
In this article, we present an HSI sensor with potential for wide deployment. We take advantage of the tunable birefringence properties of a liquid crystal layer and perform interferometry between the polarization states of light traveling through the liquid crystal as the relative optical path length between the two polarizations is changed. In this manner we have constructed an HSI sensor similar to a Michelson interferometer, yet with vibration robustness due to its common path nature and lack of moving parts. The sensor’s cost, size, and simplicity may be amenable to integration anywhere image sensors are presently used.
The article provides a brief review of existing HSI sensors before delving into the details of our HSI sensor prototype, including key enabling features relating to the liquid crystal cell’s viewing angle and speed of operation. An experimental data set taken with a combination of monochromatic sources demonstrates the operating principle of the sensor. We provide a theoretical framework for our sensor and show its mathematical equivalence to a Michelson interferometer. We conclude with a comparison to the existing HSI techniques described at the beginning of the paper.
1.2 Existing HSI methods
Myriad technical approaches have been described in the literature to generate hyperspectral data; they are summarized in [5,7,8]. Briefly, these approaches break down into multiple distinct classes based on which axes of the data-cube are sampled at a given instant of time. For example, dispersive approaches to hyperspectral imaging can instantaneously sample along the spectral axis and along one spatial axis, but the other spatial axis must be scanned in time to build up a full data-cube. Typically, light that impinges on an entrance slit is dispersed through a grating or a prism, and the dispersed light is imaged onto a two-dimensional detector array. By scanning the slit relative to the scene in a pushbroom manner, the full data-cube is built up.
“Staring” hyperspectral imagers based on wavelength-tunable filters are capable of instantaneously acquiring the 2D spatial information of a data-cube within a particular wavelength band. They rely on the presence of a wavelength-tunable filter in front of a 2D detector array. For example, a wavelength-tunable filter can be made out of liquid crystals, following a Lyot or Šolc design, or based on holographic polymer-dispersed liquid crystals [11,12]. Or, it may be based on a Fabry-Perot interferometer, an acousto-optical tunable filter (AOTF), or a motorized filter wheel.
“Snapshot” imagers attempt to sample the full data-cube instantaneously. For a given image sensor area, a fixed tradeoff between spatial and spectral resolution must be chosen. Cameras based on tiled or mosaic filter arrays are simply extensions of the common RGB Bayer mosaic, and are already commercially available.
Spectral imaging based on Fourier transform spectroscopy can combine the throughput or Jacquinot advantage of tiled-filter HSI and tunable-filter HSI with an additional spectral or Fellgett multiplexing advantage. The whole spectrum is sampled at different points in Fourier space, yielding an interferogram. The interferogram is Fourier transformed to yield the optical spectrum. Typically, this is accomplished using Michelson interferometry. The spectral/spatial resolving power and imaging speed can be easily traded off in software to suit the application without compromising optical throughput.
Spectral multiplexing can also be accomplished with compressed sensing [14,15]. Some approaches rely on deconvolution of overlapped spectral/spatial information, but deconvolution is an ill-conditioned problem requiring high signal-to-noise.
Our hyperspectral imaging system combines the potential low cost and compactness of snapshot HSI systems with the high optical throughput and software reconfigurability of Fourier-based systems, and is vibration insensitive. The foundation for such an imaging system, based on polarization interferometry through a liquid crystal variable retarder, was laid in 1990 by K. Itoh et al. That demonstration, however, was not extended to imaging, possibly because key issues related to the viewing angle and acquisition time were not resolved, and perhaps because, at the time, the computational burden was too great. Our contribution is to enable this promising concept by resolving these key issues.
2. The device
2.1 Basic concept
We demonstrate a concept for HSI based on the combination of a liquid crystal (LC) spectral encoder with a traditional monochrome image sensor, as shown in Fig. 1. The LC spectral encoder encodes the spectral information into an interferogram at each point in an image, just as a Michelson interferometer does. However, rather than interfering light that travels across two separate arms of an interferometer, the spectral encoder shown in Fig. 1(a) interferes light that travels over a common path but with two orthogonal polarizations.
A first polarizer polarizes the incoming light in an incident polarization direction, nominally 45 degrees to the rubbing (alignment) direction of the LC cell. The LC cell’s rubbing direction is indicated by an arrow on each electrode in Fig. 1(a) and is the preferred direction along which the LC molecules orient; therefore, light polarized in this direction (extraordinary ray or e-ray) will be retarded with respect to light polarized orthogonally (ordinary ray or o-ray). The LC cell then functions as an electrically tunable birefringent element—by varying the voltage across the LC cell, the LC molecules change their orientation, and it is possible to create a variable optical path delay between the e-ray and the o-ray. This path delay causes a wavelength-dependent phase shift between the two rays, thereby leading to a wavelength-dependent change in the polarization state. A second polarizer, or analyzer, oriented either parallel or perpendicular to the first polarizer, changes this wavelength-dependent polarization state into a wavelength-dependent intensity pattern by interfering the two rays.
As will be shown in Section 4, this intensity pattern (as a function of path delay) is equivalent to an interferogram generated by a Michelson interferometer. Thus, the intensity pattern corresponds to the cosine transform of the spectrum of the incident light. By recording a series of images as the voltage on the LC cell is changed, the interferograms at all points in an image can be sampled simultaneously, and the hyperspectral data-cube can be recovered by inverse cosine transform along the optical path delay axis.
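This recovery step can be sketched in a few lines, assuming the frames have already been resampled onto a uniform optical-path-delay grid (the array sizes and path-delay step below are illustrative, not the prototype's):

```python
import numpy as np

# Illustrative data-cube recovery: transform along the path-delay axis.
n_frames, h, w = 200, 4, 4            # small spatial size for illustration
dz = 0.11                             # assumed path-delay step, micrometers
delays = np.arange(n_frames) * dz

# Synthetic stack: every pixel sees a monochromatic 532 nm source,
# I(dz) = (1/4) * (1 + cos(2*pi*sigma*dz)) with sigma = 1/lambda.
sigma_true = 1.0 / 0.532              # wavenumber, 1/um
stack = np.broadcast_to(
    0.25 * (1 + np.cos(2 * np.pi * sigma_true * delays))[:, None, None],
    (n_frames, h, w)).copy()

ac = stack - stack.mean(axis=0)            # remove the DC level per pixel
cube = np.abs(np.fft.rfft(ac, axis=0))     # hyperspectral data-cube
sigmas = np.fft.rfftfreq(n_frames, d=dz)   # wavenumber axis, 1/um

lam_peak = 1.0 / sigmas[cube[:, 0, 0].argmax()]  # recovered wavelength, um
```

Taking the real part of the transform instead of the modulus corresponds to the inverse cosine transform described above; practical reconstructions additionally correct for phase errors.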
2.2 Enabled prototype
Our particular camera prototype, shown in Fig. 1(c), was made by incorporating our spectral encoder (consisting of a total of 100 μm of E7 liquid crystal) into a commercial monochrome CMOS imager. It is capable of acquiring a hyperspectral data-cube in 0.4 seconds, with an imaging speed limited by the 500 fps acquisition rate of the camera and not by the switching time of the LC variable retarder. It is compatible with wide-angle lenses that increase the field-of-view to approximately 80°, and the lens aperture can be opened to ~f/2.0 before appreciable decrease in the modulation of the interferogram occurs.
If we were to use a single-layer LC cell with antiparallel alignment layers, as in earlier work and shown in Fig. 2(a), we would not have an imaging device. This is because the relative optical path delay between the e-ray and the o-ray has a first-order dependence on the angle of incidence ψ of a given light ray with respect to the LC cell. This first-order dependence is most significant when the LC director θ is at 45° with respect to the surface of the LC cell; the optical path delay versus angle of incidence with θ = 45° is plotted as the blue line in Fig. 2(c). In order to maintain coherence (i.e., similar path delay) between the rays of light that form the interferogram at a given pixel, the lens aperture would have to be restricted so that the angle of incidence of the rays did not vary by more than about a degree; this would severely restrict the light throughput.
Rather, we chose to use a double-nematic structure originally used for a display application, whereby the top half of the cell is mirrored with respect to the bottom half as in Fig. 2(b). This arrangement causes the relative path delay of the e-ray and o-ray to increase as they travel through both halves of the cell, but the first-order angular dependence of the first half of the cell is negated by that of the second half. Thus, we achieve a second-order angular dependence of the path delay, as shown in Figs. 2(c) and 2(d), again plotted with the LC director θ held at 45° relative to the cell's surface. The comparison between path delays in Figs. 2(c) and 2(d) shows a ca. 45-fold reduction in path delay variation over incidence angles that result from realistic imaging optics (f/1.5, or ψ ≈ ±12° in the LC).
The symmetry of the double-nematic structure can also be achieved with a π-cell or Bos cell, where the top and bottom alignment layers are rubbed in the same direction. However, by breaking the liquid crystal cell into two layers of equal thickness, we decrease the relaxation time of the liquid crystal cell by a factor of 4, because the relaxation time depends on the thickness squared.
Ideally, we would have direct control over the optical path delay of the liquid crystal so that we could take images with respect to optical path delay. However, we only have control of the electrode voltage waveform and the timing of the acquisition of individual image frames while this voltage is changed. The liquid crystal director orientation can be described by the following partial differential equation:
Here, γ is the rotational viscosity; k1 and k3 are the splay and bend elastic constants of the LC; θ(z, t) is the director orientation at a certain point in time and space, with the ẑ-axis directed along the optical axis of the system; E is the ẑ-component of the electric field, also a function of time and space since we only have control over the electric displacement; and ε∥ and ε⊥ are the parallel and perpendicular components of the LC’s dielectric tensor with respect to the ẑ-axis.
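One standard form of this director equation (a sketch assuming a pure splay–bend geometry, with θ(z, t) measured from the plane of the cell and the field along ẑ) is:

```latex
\gamma \frac{\partial \theta}{\partial t}
= \left( k_1 \cos^2\theta + k_3 \sin^2\theta \right) \frac{\partial^2 \theta}{\partial z^2}
+ \left( k_3 - k_1 \right) \sin\theta \cos\theta \left( \frac{\partial \theta}{\partial z} \right)^{\!2}
+ \varepsilon_0 \left( \varepsilon_\parallel - \varepsilon_\perp \right) E^2 \sin\theta \cos\theta
```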
If the director distribution is known, the retardance at a given point in the LC cell with local thickness d can be calculated from the following:
Here, ne is the extraordinary refractive index and no the ordinary refractive index of the LC.
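In standard uniaxial notation (with θ again measured from the plane of the cell, so that θ = 0 yields the full birefringence and θ = 90° yields zero retardance), this can be written as:

```latex
\Delta z = \int_0^{d} \Big[ n_{\mathrm{eff}}\big(\theta(z)\big) - n_o \Big]\, dz,
\qquad
n_{\mathrm{eff}}(\theta) = \frac{n_e\, n_o}{\sqrt{\,n_e^2 \sin^2\theta + n_o^2 \cos^2\theta\,}}
```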
In theory, the optical path delay at a given voltage can be measured or calculated, and then a voltage series can be adiabatically traversed to create a uniform sampling grid in optical path delay. However, the calculated relaxation time of a 50 μm layer of E7 at 20 degrees Celsius (one layer of our double-nematic cell) works out to tdecay ≈ 6 s. A typical interferogram data-cube has 200 slices; if we wait 3tdecay per slice, a full data-cube would require about 1 hr of acquisition time. To acquire a full image stack in a reasonable amount of time (~1 s with precise control of the retardance vs. time), the liquid crystal must be driven dynamically rather than adiabatically. If necessary, the LC cell can be actively driven to its original state between data-cube acquisitions.
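This timing argument can be reproduced with the classic relaxation-time estimate tdecay = γ₁d²/(K₁₁π²); the E7 material constants below are assumed, order-of-magnitude values:

```python
import math

gamma1 = 0.25   # rotational viscosity of E7, Pa*s (assumed)
K11 = 11e-12    # splay elastic constant of E7, N (assumed)
d = 50e-6       # thickness of one layer of the double-nematic cell, m

t_decay = gamma1 * d**2 / (K11 * math.pi**2)  # relaxation time, s
slices = 200                                  # frames per data-cube
t_cube_h = slices * 3 * t_decay / 3600        # adiabatic acquisition, hours

print(f"t_decay = {t_decay:.1f} s, adiabatic data-cube = {t_cube_h:.2f} h")
```

With these assumed values, tdecay comes out near 6 s and the adiabatic data-cube acquisition near 1 h, matching the estimates in the text.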
Dynamic control of the liquid crystal presents its own problems: the parameters of Eqs. (1)–(3) all have a strong dependence on the order parameter S, which is dependent on temperature. A valid approximation of the order parameter well below the critical (clearing) temperature TC of the LC is the Haller approximation, S ≈ (1 − T/TC)^β,
where T is the temperature, and β is a material-dependent constant usually between 0.20 and 0.25 [21,22]. Birefringence and dielectric anisotropy are linearly proportional to S, elastic constants are proportional to S², and rotational viscosity is proportional to S but also has an exponential temperature dependence. A given voltage waveform will therefore produce a certain distribution of director orientations as a function of temperature, time, and cell position, which yields an optical path delay that is a function of temperature, time, cell position, and wavelength. If a waveform is chosen, at a given calibration temperature, to produce a linearly increasing path delay as a function of time, the interference fringes recorded from a monochromatic light source will be coherent, and the spectral resolution will be limited only by the number of fringes recorded. Temperature changes, if not accounted for in the waveform, alter the dynamics of the LC director, decreasing the coherence of the recorded interference fringes and hence the spectral resolving power. The operating temperature range of our system must lie within the nematic range of the LC, and sensitivity to temperature variation within that range can be minimized by using a liquid crystal with a high TC.
By incorporating a monochromatic light source at a known wavelength as a phase reference as in Fig. 1(b), the time dependence of the optical path delay for a given applied voltage waveform can be extracted. This information can be used to find the optimal driving waveform for a given exposure time and a given temperature in our HSI system.
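The phase-reference extraction can be sketched on synthetic data; the nonlinear delay-vs-time curve below and the exact processing chain are illustrative assumptions, not the prototype's calibration procedure:

```python
import numpy as np
from scipy.signal import hilbert

lam_ref = 0.850                            # reference wavelength, um
t = np.linspace(0.0, 1.0, 2000)            # normalized time within one scan
delay_true = 5.0 + 20.0 * t + 5.0 * t**2   # hypothetical nonlinear delay, um

# Interferogram recorded at the phase-reference pixel.
ref = 0.5 * (1.0 + np.cos(2 * np.pi * delay_true / lam_ref))

# Recover the delay-vs-time curve from the unwrapped fringe phase.
phase = np.unwrap(np.angle(hilbert(ref - ref.mean())))
delay_est = delay_true[0] + (phase - phase[0]) * lam_ref / (2 * np.pi)
# (The absolute offset must be known or measured separately; the fringe
# phase only gives the change in delay.)  delay_est tracks delay_true up
# to Hilbert-transform edge effects, and such a curve can be used to
# refine the drive waveform or to resample frames onto a uniform grid.
```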
3. Experiment and results
Two experimental data sets acquired with the prototype imager are presented here, one of a contrived scene to demonstrate the operating principle, and one of a natural scene. The first scene, shown as imaged by a standard RGB camera in Fig. 3, consists of three diffuse laser spots (445 nm, 532 nm, 635 nm) that were projected onto a piece of white paper, as well as the built-in phase reference at 850 nm. Laser emission spots were chosen as subjects for the contrived scene because they are sources of monochromatic light at known wavelengths. This allowed for direct measurement of the prototype imager’s spectral resolution at each laser wavelength, as well as comparison of the laser nominal wavelengths with their measured wavelengths.
The prototype imager was used to acquire a stack of 200 images of the scene while varying the optical path delay of the liquid crystal with an applied time-varying voltage, typically less than 5 V. As shown in the four lower plots of Fig. 4, each laser spot creates an interferogram that oscillates with respect to path delay at a rate inversely proportional to the wavelength. Note that these interferograms are oversampled, even at the shortest measured wavelengths.
Upon Fourier transformation along the optical path delay axis, the interferogram data set yields a hyperspectral data-cube. The spectrum of each laser spot as sampled from the hyperspectral data-cube is shown in the upper plot of Fig. 4. The measured center wavelength and FWHM of each of the spectral peaks was extracted by fitting to a Lorentzian curve and is shown below in Table 1. The theoretical FWHM was calculated using Eq. (16).
A notable feature of the interferogram of the 445 nm laser is the modulation depth or fringe contrast decreasing at increasing path delay. This is due to spontaneous emission from the 445 nm laser source, visible in Fig. 3 as a blue halo around the laser spot, and verified with our hyperspectral camera as well as a spectrometer (data not shown). The spontaneous emission is not strong enough to significantly modify the measured FWHM of the laser line although it appears as a spectrally broad background.
The second data set consists of an arrangement of two kinds of flowers (Rhododendron indicum and Lobelia erinus) and one leaf, as imaged by a standard RGB camera in Fig. 5(a). Because it is difficult to present a full hyperspectral data-cube in a paper, we have chosen to show the three spectral bands that appear to differ from each other the most. A notable feature is the relative brightness of the purple flowers around 680 nm; this is not discernible from the RGB image.
4.1 Comparison to Michelson interferometer
Here we describe the theory of operation of our device and show its mathematical equivalence to a Michelson interferometer under the following assumptions: all light rays travel along the optical axis in the ẑ-direction and are therefore normally incident on the LC cell; the incident light is unpolarized; and all polarizers are ideal.
The first step is to describe an incident field in terms of its intensity as a function of wavelength λ or wavenumber σ = 1/λ. If the spectral power density at wavenumber σ can be described as S(σ) (not to be confused with the order parameter S of the LC), the total intensity is given by
On average, 50 percent of the light will pass through the first polarizer. The polarization state of the light after the first polarizer can be represented by the normalized Jones vector (in a basis given by the horizontal and vertical polarizations of light)
The liquid crystal adds a path retardation Δz and therefore a phase retardation Γ = 2πσΔz to the second (vertical polarization) component of the Jones vector; therefore, after the liquid crystal, the Jones vector is given by
The second polarizer, parallel to the first polarizer, can be modeled mathematically by projecting the polarization state after the liquid crystal onto the initial polarization direction with the projection operator; this yields
Since the Jones vector represents the polarization of the electric field, the modulus squared represents the fraction of the initial spectral power density that passes through the second polarizer:
where the wavelength dependence of the retardance has been explicitly included. Therefore, the recorded intensity as a function of path delay in the liquid crystal is given by
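Collecting the steps of this derivation in standard Jones-calculus notation (a sketch, with P̂ = J₁J₁† the projector onto the first polarizer's axis and prefactors as assumed above):

```latex
\mathbf{J}_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\;\longrightarrow\;
\mathbf{J}_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ e^{i\Gamma} \end{pmatrix},
\qquad \Gamma = 2\pi \sigma \, \Delta z(\sigma)

\hat{P}\,\mathbf{J}_2 = \frac{1 + e^{i\Gamma}}{2}\,\mathbf{J}_1,
\qquad
\big|\hat{P}\,\mathbf{J}_2\big|^2 = \frac{1 + \cos\Gamma}{2}

I(\Delta z) = \frac{1}{4} \int_0^\infty S(\sigma)
\left[ 1 + \cos\!\big( 2\pi \sigma\, \Delta z(\sigma) \big) \right] d\sigma
```

The leading factor of 1/4 combines the 50% loss of unpolarized light at the first polarizer with the average transmission of the analyzer.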
The interferogram expression derived in Eq. (11) is identical to that derived for a Michelson interferometer, with the exception that the theoretical maximum average light throughput is 25% due to the presence of the two polarizers, rather than 50% for the Michelson. Note that both polarizers could be replaced with polarizing beamsplitters to increase the throughput to near 100%, but the geometry required to incorporate the beamsplitters would compromise the compactness of the device.
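The 25% average throughput can be verified numerically with Jones matrices (a self-contained sketch assuming ideal polarizers at 45° and retarder axes along the horizontal/vertical basis):

```python
import numpy as np

P45 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # ideal polarizer at 45 deg

def throughput(gamma):
    """Transmission of unpolarized light for phase retardation gamma."""
    R = np.diag([1.0, np.exp(1j * gamma)])   # LC cell as a variable retarder
    M = P45 @ R @ P45                        # polarizer -> retarder -> analyzer
    # Unpolarized input: average the response to two orthogonal polarizations.
    return 0.5 * (np.linalg.norm(M @ [1, 0]) ** 2
                  + np.linalg.norm(M @ [0, 1]) ** 2)

gammas = np.linspace(0.0, 20.0 * np.pi, 10001)
T = np.array([throughput(g) for g in gammas])
# T equals (1/4) * (1 + cos(gamma)): 50% peak transmission, 25% on average.
```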
The input optical spectrum can be recovered from this interferogram, in theory, by inverse cosine transform as a function of the optical path delay. In practice, a more involved reconstruction that uses the inverse Fourier transform and accounts for phase errors is typically used.
Each recorded interferogram at a given wavenumber has contributions from rays that travel at different positions and angles through the LC cell and therefore may experience slightly different path delays. If the magnitude of the path delay variations approaches the wavelength, these variations would be the limiting factor in determining the spectral resolution, just as would be the case for a Michelson interferometer. Therefore, uniformity in optical path delay is essential for the best spectral resolution.
4.2 Spectral resolution
In theory, two signals, at two different wavenumbers σ1 and σ2, can be cleanly separated if the number of oscillations generated by each signal at maximum optical path delay differs by at least 1. That is,
Assuming that the director distribution is uniform across the thickness of the cell, and that the cell thickness is a constant d, the maximum optical path delay is represented by an effective thickness
Here, Δn is the birefringence of the LC, given by Δn = ne – no, and N is the number of oscillations observed in the interferogram of a monochromatic source at wavelength λ with cell temperature T. The effective thickness is less than the actual cell thickness because not all points of the liquid crystal contribute maximally to the optical path delay; in addition, the optical path delay cannot be brought all the way to zero as there is a residual path delay at zero applied volts. It is reasonable to assume that the effective thickness is independent of wavelength and temperature.
The birefringence of E7 as a function of temperature and wavelength can be calculated from information available in the literature. With the four known wavelengths λ from Fig. 4, each with a known number of oscillations N, we calculate an effective LC cell thickness of 83.2 ± 0.8 μm. The known wavelength and temperature dependence of the birefringence therefore yields an explicit expression for the wavelength- and temperature-dependent maximum optical path delay:
Thus, the expression for the resolution in wavenumbers Δσ = |σ1 – σ2| is:
or, expressed in terms of wavelength:
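With the effective thickness defined this way, these resolution relations take the standard Fourier-spectroscopy form (a sketch consistent with the definitions above):

```latex
\big(\sigma_1 - \sigma_2\big)\, \Delta z_{\max} \ge 1,
\qquad
\Delta z_{\max} = \Delta n(\lambda, T)\, d_{\mathrm{eff}}

\Delta\sigma = \frac{1}{\Delta n(\lambda, T)\, d_{\mathrm{eff}}},
\qquad
\Delta\lambda \approx \lambda^2\, \Delta\sigma
= \frac{\lambda^2}{\Delta n(\lambda, T)\, d_{\mathrm{eff}}}
```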
These two equations are plotted below in Fig. 6.
Figure 6 illustrates that the resolution in terms of wavenumbers, at a given temperature, shows a wavelength dependence, especially at shorter wavelengths, because of the dispersion of the liquid crystal. However, the resolution in terms of wavelength varies strongly with the wavelength, as expected for an interferometer. The independently sampled spectral bands are indicated by alternating bars of lighter and darker color. In theory, our system can sample approximately 28 independent bands ranging in resolution from 450 cm⁻¹ to 660 cm⁻¹, or from 7.1 nm to 80 nm, at the extremes of 400 nm and 1100 nm. This calculated spectral resolution is compared to our experimental demonstration in Table 1.
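These figures can be checked numerically; the E7 birefringence endpoints below are assumed, approximate dispersion values, not fitted constants:

```python
d_eff = 83.2e-4   # effective LC cell thickness, cm (83.2 um, from the fit)

def resolution(lam_nm, delta_n):
    """Resolution in wavenumbers (1/cm) and in wavelength (nm) at lam_nm."""
    dz_max = delta_n * d_eff                    # maximum optical path delay, cm
    dsigma = 1.0 / dz_max                       # wavenumber resolution, 1/cm
    dlam = dsigma * (lam_nm * 1e-7) ** 2 * 1e7  # same resolution in nm
    return dsigma, dlam

ds_blue, dl_blue = resolution(400, 0.26)   # assumed E7 birefringence at 400 nm
ds_ir, dl_ir = resolution(1100, 0.19)      # assumed E7 birefringence at 1100 nm
# roughly 460 1/cm and 7 nm at 400 nm; roughly 630 1/cm and 77 nm at 1100 nm
```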
4.3 Comparison to existing techniques
Fourier spectroscopy techniques differ from techniques that measure the light spectrum directly in their combination of spatial and temporal multiplexing. Fourier systems require no entrance slit; therefore, the instantaneous light throughput is higher than when a slit is used, and generating an image does not require spatial scanning. For certain kinds of systems, especially those that are limited by detector noise, temporal multiplexing brings its own signal-to-noise advantages. In general, the advantages of temporal multiplexing depend on the relative contributions of different noise sources (photon shot noise, detector readout noise, dark current noise, digitization noise, and source and background fluctuations), as well as the shape or sparsity of the measured optical spectrum. Widespread adoption of any image-sensor-based technology, including HSI, relies on uncooled, lower-cost camera systems that are generally limited by readout and quantization noise, especially at low light levels. Therefore, a ubiquitous HSI sensor that records a Fourier encoding of the optical spectrum would maintain the Fellgett (or temporal) multiplexing advantage.
Another benefit of performing imaging spectroscopy in the Fourier domain is the ability to easily trade off spectral resolving power and imaging speed to suit the application at hand. Such a tradeoff can be made in software by varying the maximum optical path delay over which a data set is recorded. A shorter optical path delay can be recorded in less time than a longer one, but the tradeoff is a coarser spectral resolution. In addition, because spatial resolution can be traded off for imaging speed with the use of binning and skipping pixels on the image sensor, it is possible to trade off spatial and spectral resolution, again in software.
While the aforementioned advantages of Fourier imaging spectroscopy apply to HSI technologies based on both Michelson interferometers and liquid crystal polarization interferometers, the Michelson interferometer is significantly more vibration sensitive because the relative path length between the two arms has to be defined to a fraction of a wavelength. The precision, bulk, and sensitivity of this mechanical movement preclude traditional Fourier imaging spectroscopy from applications which our liquid crystal technology can otherwise enable.
While Fourier imaging spectroscopy systems are good general-purpose HSI instruments because of their software adaptability to different applications, they are not a replacement for purpose-built multispectral systems that must image at a number of discrete wavelength bands that are known ahead of time, for example for the determination of tissue oxygenation. Such snapshot multispectral or HSI systems are especially useful when an instantaneous snapshot of the data-cube is required and motion artifacts cannot be tolerated. However, they have a fixed tradeoff between spatial and spectral resolution because the number of pixels per band is set by the total number of pixels on the image sensor divided by the number of bands. If lower spectral resolution is to be traded for higher spatial resolution in software, for example by combining spatial information from the pixels under different spectral filters, optical throughput would be compromised because of the unnecessarily narrow spectral filter in front of each pixel.
While staring HSI systems are spatially multiplexed and are able to image instantaneously in a single wavelength band, they generally cannot trade off throughput and spectral resolution in software. For example, in order to change the spectral resolution of a Fabry-Perot, the reflectivity of the mirrors has to be changed. Also, the liquid crystal tunable filters (LCTFs) used in staring HSI systems are complex and multi-layered devices with significant cost. Fabry-Perot filters and LCTFs suffer from multiple orders and therefore have a free spectral range of an octave at best, from the fundamental to the second harmonic. AOTFs, also used in staring HSI systems, require significant power and are bulky.
Because dispersive HSI systems rely on the use of an entrance slit, they are useful for applications where motion is naturally present along the slit width, such as in satellite or airborne imagery. However, the presence of the slit implies a lack of spatial multiplexing. Furthermore, angular dispersion needs to be converted to spatial dispersion over some distance, necessitating a certain minimum footprint. For optimal light throughput at a given spectral resolution, the slit size must be adjusted; thus, there is no convenient software tradeoff between optical throughput and spectral resolution.
Other methods of imaging spectroscopy based on polarization interferometry through birefringent media have been proposed. For example, a method based on a stack of liquid crystal variable retarders was proposed, where the ratio in thickness between adjacent cells is a factor of two. A particular optical path delay can therefore be obtained by on-off switching of the cells in the stack. This approach has the advantage of being insensitive to liquid crystal dynamics, although it may be limited in switching speed because the liquid crystal layers must fully relax before an image is taken at a given optical path delay. Also, it may be complicated to fabricate because of the number of layers necessary, which is on the order of log2(2N), where N is the number of fringes at the minimum detectable wavelength.
Some approaches are based on mechanical scanning of Wollaston prisms to generate a variable optical path delay, although such mechanical scanning is not compatible with the cheapest, most robust devices [24,27]. There also exists a snapshot version of a polarization interferometer based on a combination of a microlens array and a birefringent optic that forms images at different optical path delays simultaneously .
Regardless of how a hyperspectral data-cube is obtained, the communications link between the camera and the data recording medium presents a significant bottleneck. Compressed sensing may have a role to play in the selection of an optimal subset of optical path delays for a given application, dramatically reducing the amount of data that must be acquired.
We have developed a hyperspectral camera technology based on the combination of a double-nematic liquid crystal cell and a dynamic driving scheme. This technology is based on a Fourier encoding of the optical wavelength via polarization interferometry, and therefore benefits from software selectable tradeoffs between imaging speed and spectral and spatial resolution without sacrificing optical throughput. Unlike traditional Michelson systems, it is robust to vibration, and it has the potential to be integrated anywhere image sensors are currently used. This may enable a new class of hyperspectral imaging applications that take advantage of this sensor’s potential for low cost and widespread adoption.
The authors gratefully acknowledge Jacob Ho and Prof. Hoi Sing Kwok of The Hong Kong University of Science and Technology for fabricating the liquid crystal cells used in the prototype imager.
References and links
1. E. Ben-Dor, T. Malthus, A. Plaza, and D. Schläpfer, “Hyperspectral remote sensing,” in Airborne Measurements for Environmental Research: Methods and Instruments (Wiley, 2013), pp. 419–465.
2. F. Kruse, “Advances in hyperspectral remote sensing for geologic mapping and exploration,” in Proceedings 9th Australasian Remote Sensing Conference, Sydney, Australia (1998).
3. W. Hua, X. Liu, and J. Yang, “On combining spectral and spatial information of hyperspectral image for camouflaged target detecting,” in International Conference on Optical Instruments and Technology (OIT2013), X. Lin and J. Zheng, eds. (International Society for Optics and Photonics, 2013), p. 90451A. [CrossRef]
4. D. Manolakis, D. Marden, and G. A. Shaw, “Hyperspectral image processing for automatic target detection applications,” Linc. Lab. J. 14, 79–116 (2003).
6. M. L. Whiting, S. L. Ustin, P. Zarco-Tejada, A. Palacios-Orueta, and V. C. Vanderbilt, “Hyperspectral mapping of crop and soils for precision agriculture,” Proc. SPIE 6298, 62980B (2006). [CrossRef]
7. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]
8. N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000). [CrossRef]
9. B. Lyot, “Optical apparatus with wide field using interference of polarized light,” C. R. Acad. Sci. (Paris) 197, 1593 (1933).
10. I. Šolc, “Birefringent chain filters,” J. Opt. Soc. Am. 55(6), 621–625 (1965). [CrossRef]
11. S. K. Shriyan, “Tunable electro-optic thin film stack for hyperspectral imaging,” PhD Thesis, Drexel University (2011).
12. S. K. Shriyan, E. Schundler, C. Schwarze, and A. K. Fontecchio, “Electro-optic polymer liquid crystal thin films for hyperspectral imaging,” J. Appl. Remote Sens. 6(1), 063549 (2012). [CrossRef]
14. M. Golub, N. Menachem, A. Amir, A. Kagan, V. Zheludev, and R. Malinsky, “Snapshot spectral imaging based on digital cameras,” U.S. patent US 20130194481 A1 (2013).
15. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef] [PubMed]
16. M. A. Golub, M. Nathan, A. Averbuch, E. Lavi, V. A. Zheludev, and A. Schclar, “Spectral multiplexing method for digital snapshot spectral imaging,” Appl. Opt. 48(8), 1520–1526 (2009). [CrossRef] [PubMed]
18. Y. Itoh, H. Seki, T. Uchida, and Y. Masuda, “Double-layer electrically controlled birefringence liquid-crystal display with a wide-viewing-angle cone,” Jpn. J. Appl. Phys. 30(Part 2, No. 7B), L1296–L1299 (1991). [CrossRef]
19. P. Yeh and C. Gu, Optics of Liquid Crystal Displays (John Wiley & Sons, 2010).
20. P. J. Bos and K. R. Koehler, “The pi-cell: a fast liquid-crystal optical-switching device,” Mol. Cryst. Liq. Cryst. (Phila. Pa.) 113, 329–339 (1984).
21. J. Li, C.-H. Wen, S. Gauza, R. Lu, and S.-T. Wu, “Refractive indices of liquid crystals for display applications,” J. Displ. Technol. 1(1), 51–61 (2005). [CrossRef]
22. H. Wang, “Studies of liquid crystal response time,” PhD Thesis, University of Central Florida (2005).
23. C. D. Porter and D. B. Tanner, “Correction of phase errors in Fourier spectroscopy,” Int. J. Infrared Millim. Waves 4(2), 273–298 (1983). [CrossRef]
24. W. Amos, “Imaging system and method for Fourier transform spectroscopy,” U.S. patent 6,519,040 (1997).
25. J. D. Winefordner, R. Avni, T. L. Chester, J. J. Fitzgerald, L. P. Hart, D. J. Johnson, and F. W. Plankey, “A comparison of signal-to-noise ratios for single channel methods (sequential and multiplex) vs multichannel methods in optical spectroscopy,” Spectrochim. Acta Part B At. Spectrosc. 31, 1–19 (1976).
26. T. H. Chao, “Electro-optic imaging Fourier transform spectrometer,” in IEEE Aerospace Conference Proceedings (IEEE, 2007) pp. 1–6.