
Light-guide snapshot imaging spectrometer for remote sensing applications


Abstract

A fiber-based snapshot imaging spectrometer was developed with a maximum of 31853 (~188 x 170) spatial samples and 61 spectral channels in the 450nm-750nm range. A compact, custom-fabricated fiber bundle was used to sample the object image at the input and create void spaces between rows at the output for dispersion. The bundle was built from multicore 6x6 fiber block ribbons. To avoid overlap between the cores in the direction of dispersion, we selected a subset of cores using two alternative approaches: a lenslet array and a photomask. To calibrate the >30000 spatial samples of the system, a rapid spatial calibration method was developed based on phase-shifting interferometry (PSI). System crosstalk and spectral resolution were also characterized. Preliminary hyperspectral imaging results of the Rice University campus landscape, obtained with the spectrometer, are presented to demonstrate the system’s spectral imaging capability for distant scenes. The spectra of plant species in different states of health, obtained with the spectrometer, were in accordance with reference instrument measurements. We also imaged Houston traffic to demonstrate the system’s snapshot hyperspectral imaging capability. Potential applications of the system include terrestrial monitoring, land use, air pollution, water resources, and lightning spectroscopy. The fiber-based system design potentially allows tuning between spatial and spectral sampling to meet specific imaging requirements.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Hyperspectral imaging (HSI) systems sample the spectral irradiance of an observed scene with high spectral resolution (typically better than 10 nm) [1–4]. The measured intensities over all spatial and spectral coordinates constitute a three-dimensional (3D) datacube, I(x, y, λ). In order to record the 3D data using a 2D detector, conventional HSI systems require scanning [5,6], typically with a “whiskbroom” (point-by-point) or “push-broom” (line-by-line) scan. In recent years, however, various snapshot systems have been developed with parallel acquisition techniques [2]. These methods can acquire spatial and spectral data instantaneously. For example, three snapshot systems were previously developed by our group using an image-slicing mirror [7], a lenslet array [8], and an optical fiber bundle [9]. Two snapshot imaging spectrometers based on linear variable filters have also been developed recently [10,11]. Compared with scanning techniques, snapshot imagers enable mitigation of motion artifacts, recording of fast-changing objects (gas leak detection, lightning spectroscopy), simplification of scene mosaicking, and high-speed analysis/data conditioning.

Optical fibers provide a high degree of design freedom and have previously been used in building imaging spectrometers [12–19], typically in the form of light-guiding fiber bundles. Compared with other techniques such as the image-slicing mirror [7] or lenslet array [8], optical fibers enable the building of imaging spectrometers with optimized compactness [2]. Although the systems based on linear variable filters are compact [10,11], their operating principle of duplicating images divides the light available for detection at any given wavelength. Moreover, in systems based on fiber bundles, the ability to arbitrarily reformat the fibers’ input/output configurations makes it possible to build tunable systems in the future, which will allow adjustment between spatial and spectral sampling to meet specific application requirements. Fiber-based snapshot imaging spectrometers have found successful applications across multiple domains [13–19], including surveillance and remote sensing, missile plume phenomenology [13,14], and combustion diagnostics [15]. In the area of biomedical applications, they have been used for retinal imaging [16,17], fluorescence imaging [18] and phantom tissue imaging [19]. Typical fiber-based imaging spectrometers utilize a 2D-to-1D dimension-reduction fiber bundle with a square-to-line geometry [13–19]. In systems with this type of fiber bundle, the spatial sampling (the number of fibers) is limited by the detector array length. Therefore, the highest reported spatial sampling in the literature with the 2D-to-1D fiber bundle structure is 44 x 40 = 1760, achieved by splitting and redirecting a single fiber bundle to four separate spectrometers [14]. The low spatial sampling prevents this class of spectrometers from finding broad application, e.g., in agriculture monitoring [20], ocean surface characterization [21], lunar surface sensing [22], coastal observation [23], urban imaging [24] and lightning characterization [25].

In previous work, we developed a fiber-based system with 96 x 42 = 4032 spatial samples using a custom-fabricated fiber bundle [9]. However, the system suffered from multiple issues caused by the large, elongated fiber bundle output area (95mm x 24mm). In order to accommodate the large output with the associated imaging optics, we had to introduce a 0.33x de-magnifying image taper, causing significant light loss (1.4% light throughput). Additionally, because the fiber groups were separated along one direction only, the fiber bundle output had a high aspect ratio (close to 4:1), which did not entirely match the image taper input area (circular, 60mm diameter). Therefore, only 96 x 42 out of 96 x 81 fibers at the output were coupled to the image taper and utilized in imaging. This severely reduced the overall number of samples.

This paper focuses on advancing fiber-based spectrometers (increasing the datacube size) to broaden their applications. We developed a fiber-based snapshot imaging spectrometer with a maximum of 31853 (approximately 188 x 170) spatial samples, which greatly exceeds all existing fiber-based systems. To the best of our knowledge, this is the highest spatial sampling reported in the literature. In addition, a total of 61 spectral bands was acquired for each spatial point in the 450nm-750nm spectral range. This sampling was achieved through the development of a compact, miniaturized, custom-fabricated fiber bundle (6x6mm input area, 25x13mm output area, and 100mm length). The input comprised 90 x 100 multi-core fibers (36 cores per fiber), which were re-organized into 45 x 200 at the output, with 500µm gaps between adjacent rows. To avoid overlap between the cores (6x6 in the direction of dispersion) in the multi-core fiber, a subset of cores in the fiber bundle was selected by coupling with a lenslet array or a photomask. A rapid spatial calibration technique based on phase-shifting, together with a spectral calibration, was developed to build a look-up table of the >30000 x 61 spatial-spectral samples. The system design, fabrication, calibration, and characterization are described in detail in Sections 2 through 5. Preliminary hyperspectral imaging results of the Rice University landscape are presented as a proof-of-concept in Section 6. Houston city traffic was also imaged to demonstrate the system’s snapshot hyperspectral imaging capability (dynamic scene), as described in the same section. Current experiments were focused on the visible (VIS) range due to the spectral response of the camera, though the shortwave-infrared (SWIR) range can be captured using a SWIR camera. The fiber-based light-guide system design potentially allows tuning between spatial and spectral sampling to meet specific application requirements. The system presented here has a range of remote sensing applications including lightning spectroscopy [25], coastal and estuarine observation [23], urban landscape imaging [24], and agriculture [20]. Beyond remote sensing, the system can also be used in biomedical imaging such as fluorescence imaging [26] and disease diagnosis [27].

2. Design of a light-guide snapshot imaging spectrometer

2.1. System principle

The principle of a light-guide snapshot imaging spectrometer is illustrated in Fig. 1. The object is imaged by an image relay lens and sampled by a custom designed and fabricated fiber bundle. Specifically, at the input end of the fiber bundle, the image is sampled by a matrix of densely packed fibers. At the output end, the fibers are re-aligned into rows with void spaces between the rows, instead of a single row of fibers as in conventional fiber-based systems. The output end is then imaged by a re-imaging system that introduces dispersion into the created gaps, so that the raw image captured by the detector includes both spatial and spectral content. The theoretical upper limit of the total datacube size is therefore the number of pixels on the detector. By adjusting the gap spacing between fiber rows at the fiber bundle output in the future, we could trade off between spectral and spatial sampling. After building a look-up table during the calibration process (assigning x, y, λ values to camera coordinates), it is possible to build a 3D datacube. The calibration process is discussed in Section 4.

Fig. 1 Schematic illustration of the principle of the fiber-based imaging spectrometer system.

2.2. Analysis of system design

The goal of the system design was a compact system that maintains high spatial sampling. Figure 2(a) illustrates a simple geometrical model of the coupling between the fiber bundle and the collector lens. To avoid coupling loss, the fiber bundle output area (diagonal length L), together with the fibers’ numerical aperture (NA), defines a minimum collector lens diameter (D), which in turn determines the volume of the whole dispersive re-imaging optical system (shown in Fig. 1). In our previous publication [9], the fiber bundle had a 95mm x 24mm output area that could not be coupled to any standard off-the-shelf collector lens. As a compromise, we de-magnified the output area using a 0.33x image taper, which introduced significant light loss (1.4% light throughput, due to the high taper NA). Additionally, the high aspect ratio of the output (95mm x 24mm, close to 4:1) did not fully match the image taper input area (circular, 60mm diameter). As a result, only 96 x 42 out of 96 x 81 fibers at the output were utilized in imaging.

Fig. 2 (a) Schematic illustration of the minimum collector lens diameter (D) determined by the fiber bundle output area (diagonal length L) and the fibers’ numerical aperture (NA). (b) A micrograph of the Schott multicore fiber ribbon’s cross-section, with annotations indicating dimensions. (c) A heat map of the required collector lens f/# calculated using different values of L and D, with fiber NA 0.65. (d) A heat map of the required collector lens f/# calculated using different values of L and D, with fiber NA 0.28.

Therefore, to avoid coupling loss, the keys to optimizing the system design were to: (1) miniaturize the fiber bundle output area with a proper aspect ratio (close to a square), and (2) select an optimal fiber NA.

2.2.1. Miniaturized fiber bundle output area

Based on the system principle (Section 2.1), assume that the core-to-core distance in the fiber bundle is of the same magnitude as the core size, so that the inter-core spaces are fully used for dispersion and the use of the detector pixels is maximized (no void space between dispersed fiber cores). Under this assumption, the fiber bundle output area is essentially the product of the fibers’ core area, the system’s spatial sampling, and the spectral sampling, as expressed in Eq. (1). Thus, in order to scale down the fiber bundle and improve the system’s spatial sampling while maintaining the spectral sampling, the fibers’ core size needs to be miniaturized:

output area = core area × spatial sampling × spectral sampling  (1)

Available small-diameter, unjacketed individual fibers are typically 125-250 microns in diameter, with cores ranging from 5 to 240 microns. This quickly drives up the size of the bundle device and prohibits the implementation of practical spectrometers. Based on the above criteria, we adopted a fiber ribbon with 10µm core size manufactured by Schott. Figure 2(b) shows a micrograph of the Schott fiber ribbon’s cross-section. The manufacturing conditions required multi-core assemblies to provide sufficient mechanical strength. Each fiber block contained 36 cores (10µm in diameter, fused silica) forming a 6x6 matrix (Fig. 2(b)). The cladding between the 36 cores was made of fused silica with a lower refractive index, giving the fiber a 0.65 numerical aperture (NA) and a 62.5 x 62.5µm cross-sectional area. Since the 36 cores in each fiber were rigid and inseparable, only a subset of cores was selected to sample the object, to avoid overlap between the cores after dispersion. Two methods, using a lenslet array and a photomask, respectively, are described in Section 2.4.
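
As a rough numerical check of Eq. (1), a short Python sketch using the system’s reported values (our illustration, not the authors’ code; it treats the effective area per core as the 10µm core size squared, so it is a lower-bound estimate):

    core_size = 10e-6            # m, Schott multicore fiber core diameter
    spatial_sampling = 31853     # maximum spatial samples (photomask design)
    spectral_sampling = 61       # spectral channels over 450-750 nm

    output_area = core_size**2 * spatial_sampling * spectral_sampling
    print(f"estimated output area: {output_area * 1e6:.0f} mm^2")
    # ~194 mm^2 -- the same order as the fabricated 25.3 x 12.5 mm
    # (~316 mm^2) output end, which also contains unselected cores and gaps.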

2.2.2. Fiber NA

According to Fig. 2(a), the required f/# (F/D) of the collector lens for no light loss can be calculated from the diagonal output length (L), the fiber NA (α), and the lens diameter (D) by the following equation:

f/# = (D − L) / (2D·tan(arcsin α))  (2)

With α = 0.65, as in the fiber ribbon originally manufactured by Schott, the required f/# was calculated for a set of values of L and D, shown in Fig. 2(c). All of these cases required a collector lens f/# below 0.5, which is impractical or infeasible to manufacture. To lower the fiber NA, the fiber ribbons were re-manufactured by Schott using a new refractive-index combination. The re-manufactured fiber ribbon had a 0.28 NA, which allowed a more practical collector lens f/#, enabling us to build a miniaturized system according to the calculation results shown in Fig. 2(d). The re-manufactured fiber ribbons had the same structural dimensions as described above (Fig. 2(b)) and served as the building block of the fiber bundle in the presented system. For future work, a more compact fiber ribbon will be custom fabricated in house by winding and tapering single fibers.
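
A minimal sketch of Eq. (2) reproducing the trend in Figs. 2(c)-2(d) (our illustration, not the authors’ code; the sampled lens diameters are arbitrary examples):

    import numpy as np

    def required_fnum(D, L, NA):
        """Minimum-loss collector lens f/# from Eq. (2).
        D: lens diameter, L: bundle output diagonal (same units), NA: fiber NA."""
        return (D - L) / (2 * D * np.tan(np.arcsin(NA)))

    for na in (0.65, 0.28):
        for D in (50, 75, 100):              # example lens diameters, mm
            f = required_fnum(D, 30, na)     # 30 mm output diagonal
            print(f"NA={na}, D={D} mm -> f/# = {f:.2f}")
    # NA = 0.65 demands f/# below ~0.5 for all practical diameters, while
    # NA = 0.28 brings the requirement into a realizable range for larger
    # diameters (cf. Fig. 2(d)).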

2.3. Fiber bundle dimensions

A single row of 200 multicore fibers (Fig. 2(b) shows a segment of 10 fibers) served as the building unit of the fiber bundle. The input end of the bundle was designed to comprise 90 x 100 tightly packed multi-core fibers (5.625 x 6.25mm estimated area, 540 x 600 cores). Figure 3(a), left, presents 4 x 6 fibers for simplicity, where each row of fibers is marked with a different color. At the output end, in order to keep a proper aspect ratio (closer to a square), every two vertically stacked ribbons from the input end were reformatted and positioned side by side horizontally as a single row. Each two adjacent rows were then separated by a 500µm gap (Fig. 3(a), right). As a result, there were 45 x 200 fibers (270 x 1200 cores) at the output end, with an estimated area of 25.3 x 12.5mm (diagonal length about 30mm).
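
The quoted end-face dimensions follow from the 62.5µm fiber cross-section and the 500µm row gaps; a quick arithmetic check (our variable names; we assume one gap per output row, which reproduces the quoted 25.3mm height):

    fiber = 0.0625   # mm, multicore fiber cross-section (62.5 um square)
    gap = 0.5        # mm, spacing following each row at the output end

    input_size = (90 * fiber, 100 * fiber)           # (5.625, 6.25) mm
    output_size = (45 * (fiber + gap), 200 * fiber)  # (25.3, 12.5) mm
    print(input_size, output_size)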

Fig. 3 (a) A simplified plot showing the designed input end with 4 x 6 fibers reformatted into 2 x 12 fibers at the output. Each row of fibers is marked with a different color. (b) A simplified plot showing the same input end as (a), and the coupling of each exit lenslet’s sub-pupil image into a 10µm diameter core, creating void space for dispersion without overlapping. (c) The reformatted output end of the same fibers in (b). (d) The dispersed cores in (c).

2.4. Core selection

Because the 6 x 6 cores in each fiber were rigidly fused in silica and inseparable, the incoming light was coupled to a subset of the cores at the input end to avoid overlap between the cores after dispersion. For this purpose, two alternative core-selection approaches were introduced, with the aid of (1) a lenslet array and (2) a photomask. Neither approach is necessary in the idealized system principle, and neither will be required in the future when we implement fiber bundles built of single-core fiber ribbons.

Figure 4(a) demonstrates approach (1). In this case, the relay lens (typically NA < 0.1 in image space) creates an image of the object at the surface of the lenslet array. The lenslet array samples the image and couples light only into selected cores. Each lenslet effectively images the exit pupil of the relay lens and forms a sub-pupil image, a spot incorporating all the light from the image portion that fell onto the spatial extent of the lenslet’s diameter [8]. The input end of the fiber bundle was positioned at the lenslets’ focal plane, coupling each sub-pupil image into a 10µm diameter core (Fig. 3(b) demonstrates the coupling using a rectangular 33µm pitch lenslet array as an example); the void space of un-coupled cores was used for dispersion (Figs. 3(c)−3(d)). In this approach, the aperture stop of the fore-optics needed to be minimized to provide low-NA incident light: as the NA increases, the focused spot diameter grows and may cause coupling into neighboring cores. An off-the-shelf lenslet array with 37µm pitch (rectangularly aligned lenslets) and 188µm focal length was purchased from Flexible Optical B.V. (OKO Tech) and used in the system; custom lenslet arrays are prohibitively expensive, and the manual ribbon-stacking fabrication does not hold the micron-level precision that a custom-matched array would require. Inevitably, some sub-pupil images fell between 2 or 4 cores, leading to some obscured and lower-intensity samples.

Fig. 4 Schematic system layout with two alternative approaches to selectively sample the object with a subset of cores, using a lenslet array (a) and a photomask (b).

Approach (2) used a photomask with pinholes positioned in front of the fiber bundle’s input end, as shown in Fig. 4(b). Similar to approach (1), the regularly distributed pinholes on the mask selected only a subset of cores and made room for their dispersion in the void space of the unselected cores. A series of photomasks with pinhole diameters ranging from 5µm to 10µm and pinhole pitches from 20µm to 37µm was fabricated by Digidat, Inc. Figure 5 displays an example of the custom-designed pinhole patterns, which maximizes the number of selected cores (6 cores per fiber). Typically, higher pinhole density and larger pinhole diameter lead to higher system spatial sampling and throughput, at the expense of higher crosstalk (see Section 5 for quantitative results). Similar to the lenslet array approach, the irregularity in the alignment of the fibers in the manually fabricated bundle introduces a certain degree of mismatch between the pinholes and the cores, leading to some loss in spatial sampling.

Fig. 5 (Left) The design of a photomask that maximizes the core density. (Center) The ideal alignment of the pinholes with the fiber cores. (Right) A sketch of the dispersed cores in one row of fibers from the center panel.

Because the lenslet array required low-NA (below 0.1) fore-optics, the aperture stop in the image relay objective needed to be closed down, which introduced light loss. Compared with the lenslet array, the photomask could work with higher-NA (above 0.1) fore-optics but introduced light loss by cutting off the light that fell outside the selected cores. Note that both approaches are only necessary for the multi-core implementation and will be eliminated in the future using single-core fiber ribbons custom fabricated in house by winding and tapering single fibers.

2.5. Optical re-imaging system

Even with minimized core size and NA, the fiber bundle still placed tight restrictions on the subsequent optical re-imaging system, especially on the choice of collector and condenser lenses. As illustrated in the geometrical model (Fig. 2(a)), on the one hand, the object-space NA needed to be high enough to couple the 0.28 NA output beam from the fiber bundle; on the other hand, a 30mm diameter field of view was required to image the fiber bundle’s output end (diagonal length L = 30mm), with a point spread function (PSF) within 10µm to resolve the cores. Figure 6 plots the required f/# of the collector lens as a function of the lens diameter for the 30mm diagonal fiber bundle output area and 0.28 fiber NA. Additionally, the magnification was required to be close to 1x to make full use of the camera pixels and avoid increasing the system volume. Choices of satisfactory commercially available optics were limited.

Fig. 6 The required f/# of the collector lens as a function of the lens diameter for the 30mm diagonal fiber bundle output area and 0.28 fiber NA.

To search for suitable collector and condenser lenses, three lens combinations were tested in the system: (1) two photographic objectives with the lowest available f/# (Mitakon Zhongyi Speedmaster 85mm f/1.2 Lens, Sigma 85mm F1.4 EX DG HSM Lens), (2) a pair of identical microscope objectives (Olympus MVPLAPO 1X, NA = 0.25, WD = 65mm, FOV = 34.5mm), and (3) a pair of identical achromatic doublet lenses (Edmund #33-918, 75mm Dia. x 100mm FL, MgF2 Coated). The first combination introduced longitudinal chromatic aberration, and the second suffered from field curvature (2mm in depth). Both aberrations were beyond the system’s tolerance range.

For combination (3), multiple configurations were simulated in Zemax OpticStudio with different lens orientations (surface with the higher/lower curvature facing the object), distances (twice the focal length as a telecentric system, or as close as possible) and apertures (from 0.15 NA to 0.05 NA). For each configuration, the image plane was optimized for four fields with different object heights: 0mm, 5mm, 10mm, and 15mm. A single wavelength of 588nm was used in the simulation, since both lenses are achromatic doublets and 588nm is roughly in the middle of the designed spectral range. The disperser (prism) was not included in the simulation. Figure 7 compares four representative configurations with simulation results, including the optical schematics and spot diagrams. In Figs. 7(a)−7(c) the system NA was set to 0.05 by closing the aperture stop.

Fig. 7 Zemax OpticStudio simulation of four representative configurations for a pair of identical achromatic doublet lenses. (a) Telecentric setup (lens distance twice the focal length), 0.05 system NA, unconventional lens orientation (higher curvature surface facing the object). (b) Compact setup (lens distance as close as possible; set to 40mm to leave space for the disperser and aperture stop), 0.05 system NA, conventional lens orientation (lower curvature surface facing the object). (c) Compact setup, 0.05 system NA, unconventional lens orientation; chosen as the final solution. (d) Same setup as (c), with 0.15 system NA.

Comparing their performance, the configuration in Fig. 7(c) was the only one providing satisfactory results and was chosen as the final solution, although the lens orientation was against convention (lower curvature surface facing the object; compare Figs. 7(b) and 7(c)). Unfortunately, the aperture stop in the system needed to be closed down (0.05 NA at both object and image space) to meet the performance requirement (compare Figs. 7(c) and 7(d)), leading to significant throughput loss. As described in detail in Section 5.4, the optical re-imaging system provided 3% throughput, which was the major source of the system’s light loss. In the future, we plan to develop a custom optical system to allow operation at higher NA and improve throughput while maintaining satisfactory resolution.

3. Fiber bundle fabrication

As a critical element of the imaging spectrometer, the fiber bundle was custom fabricated by assembling single fiber ribbons manufactured by Schott. The process included gluing, diamond-saw cutting, and end-surface polishing. As stated in Section 2.3, the complete fiber bundle consisted of 90 x 100 multi-core fibers (5.6 x 6.3mm) at the input end and 45 x 200 multi-core fibers (25.3 x 12.5mm) at the output end.

Figure 8 shows sketches and photos of the bundle fabrication steps. Figure 8(a) presents a sketch of the overall assembly process.

Fig. 8 Fiber bundle fabrication process. (a) Schematic plot of the whole assembly and cutting procedure. (b) Photo of a fiber ribbon segment cut to 8 inches, containing two complete fixed sections and one complete freeform section. (c) Photo showing the fixed section at the input side split into two halves by a razor blade. (d) Photo of the two halves of the input end stacked above each other. (e) Photo of the whole segment placed into the gluing mold made of laser-cut acrylic board, showing the room for the gluing epoxy. (f) Photo of the assembled bundle in the mold, cut by the diamond saw.

To facilitate the fiber bundle fabrication, the fibers in the ribbon were glued with diluted epoxy in a custom pattern (Fig. 8(b)) by Schott: along the light-propagation direction, the fibers were glued as a solid uniform row for 2 inches (the “fixed section”, to organize the bundle’s input and output alignment), leaving the next 2 inches as unglued loose fibers (the “freeform section”, to provide flexibility to re-arrange the fibers between the ends). Figure 8(b) presents a fiber ribbon segment cut into an 8-inch piece using a standard paper cutter, containing two complete fixed sections and one complete freeform section. This element served as the building block of the fiber bundle fabrication. For each 8-inch segment, the fixed section at the input side was first split into two halves by a razor blade along the light-propagation direction (Fig. 8(c)). The two halves were then stacked above each other (Fig. 8(d)) before the whole segment was placed into the gluing mold (made of laser-cut acrylic board, Fig. 8(e)). To provide separation at the output side, a 500µm thick acrylic sheet was placed above the ribbon at the output end. After performing the same operation on 45 layers, a fiber bundle was assembled with 90 x 100 fibers at the input and 45 x 200 at the output. The stacked ribbons were glued with epoxy (Hardman Double/Bubble Machineable Epoxy) in the mold while compressed by a mechanical clamp. The mold, together with the glued ribbons, was then mounted on a diamond saw (EXTEC Labcut 1010 Low-Speed Diamond Saw). Both the input and output ends were cut by the saw in the middle of their fixed sections (Fig. 8(f)).

The red dashed squares in Figs. 9(a) and 9(b) indicate the fiber bundle’s input and output sides after diamond-saw cutting, respectively. Figures 9(c) and 9(d) present micrographs of the cross-section of the fibers at the input and output sides, respectively (back illuminated and imaged by a Zeiss Primo Star Binocular Microscope). The inhomogeneity of brightness on both end surfaces resulted from the debris of epoxy, acrylic and fiber material produced during the cutting process. To remove the residue and improve the throughput, the fiber bundle was mounted on an automatic polisher (Nanopol fiber polishing system, Ultra-Tec, Fig. 9(e)). Both ends were polished by a sequence of pads (15-µm silicon carbide for 10 min, 1-µm silicon carbide for 10 min, and finally 0.1-µm diamond pads for 10 min). Figure 9(f) presents a micrograph of the cross-section of the fibers after polishing, showing the improvement in the uniformity of brightness on the fibers’ end surface.

Fig. 9 (a) Photo of the fiber bundle’s input side after diamond-saw cutting, with a red dashed square indicating the fibers. (b) Photo of the fiber bundle’s output side after diamond-saw cutting, with a red dashed square indicating the fibers. (c) Micrograph of the fibers at the input end before polishing, showing the tightly packed 36-core fibers. (d) Micrograph of one row of fibers at the output end before polishing. (e) Photo of the fiber bundle mounted on an automatic polisher (Nanopol fiber polishing system, Ultra-Tec). (f) Micrograph of one row of fibers at the output end after polishing.

A photograph of the complete system setup is presented in Fig. 10. The presented system has physical dimensions of 600mm (L) x 130mm (W) x 150mm (H). The system can be mated to a variety of fore-optics, both telescopic and microscopic. In the photo, the object image was relayed by a photographic lens (Olympus M. Zuiko Digital ED 75mm F1.8). A wedge prism made of SF4 with 6° beam deviation was used as the disperser. The detector was an Imperx Bobcat B6620 CCD camera.

Fig. 10 Photo of the system setup.

As described in approach (1) in Section 2.4, a lenslet array (37µm pitch rectangular alignment, 188µm focal length) was used to selectively couple the incident light into the cores. Figure 11(a) presents a portion of the single-wavelength image captured by the detector using a 1-nm wide band-pass filter (632.8nm). Illuminated individual cores can be distinguished. The corresponding broadband raw image using the same lenslet array is displayed in Fig. 11(b). The light from individual cores was dispersed into lines, due to the broadband nature of the flat-field object (a portion of the uniform bright daytime sky). Alternatively, a photomask (37µm pitch rectangular alignment, 10µm pinhole diameter) was positioned in front of the fiber bundle input end as described in approach (2), Section 2.4. Figures 11(c) and 11(d) show the single-wavelength (632.8nm) and broadband images respectively.

Fig. 11 (a) A portion of the single-wavelength image captured by the detector using a 1-nm wide bandpass filter (632.8nm), with the fiber bundle coupled with a lenslet array (37µm pitch rectangular alignment, 188µm focal length). (b) The corresponding broadband image of (a). (c) A portion of the single-wavelength image captured by the detector using a 1-nm wide bandpass filter (632.8nm), with the fiber bundle coupled with a photomask (37µm pitch rectangular alignment, 10µm pinhole diameter). (d) The corresponding broadband image of (c).

4. Calibration and image reconstruction

Typically, when an object is imaged by a snapshot hyperspectral imaging (HSI) system, the voxels of the 3D datacube are re-organized and encoded as pixels on the 2D detector. This process can be described as a mapping from a 3D datacube to a 2D matrix. In order to decode the information in the 2D “raw image” and reconstruct the object datacube, a reverse mapping is necessary. In our system, this mapping, with the corresponding lookup table, was determined experimentally in the calibration procedure. Note that this process compensates for any spatial irregularities or defects caused by the bundle fabrication procedure.

4.1. Spatial calibration based on phase-shifting algorithms

The spatial calibration determines the positions on the detector whose intensity corresponds to a given spatial sample (a core, in a fiber-based system) of the datacube. A conventional spatial calibration procedure typically comprises scanning with a pinhole [28] or a slit [29,30]. It is noteworthy that in scanning calibration, the number of images required is proportional to the number of the system’s spatial samples. In our system, with the spatial sampling grown to >30000, scanning calibration would be both time-consuming and storage-limited. Moreover, the irregularity of the fiber distribution introduced by the manual fabrication process increases the difficulty of a scanning calibration. Therefore, we developed a fast spatial calibration method by incorporating phase-shifting interferometry (PSI) algorithms [31,32]. The number of images required in the phase-shifting calibration was significantly reduced, to 48, and was independent of the number of spatial samples in the HSI system. The calibration was completed within 5 minutes. This calibration method is also applicable to any other imaging device with spatial displacement and distortion, such as incoherent optical fiber bundle imaging [33].

Phase Shifting Interferometry (PSI) is essentially an optical interference approach that recovers the phase information by analyzing the phase-shifted interferograms [31,32]. The calibration procedure starts with imaging a set of phase-shifted sinusoidal patterns. The patterns in the set had the same spatial frequency and modulation, but their phases were shifted by a constant step. The patterns can be represented by the equation below:

Iᵢ(x,y) = I′ + I″·cos[ψ(x,y) + φᵢ]  (3)

where i is the order number of the pattern in the set, (x,y) is the spatial coordinate in the object plane, ψ(x,y) is the phase representing the spatial position, φᵢ is the phase shift of each pattern, I′ is the constant intensity offset, and I″ is the intensity modulation. The left column of Fig. 12(a) lists a set of shifted phases, where i ranges from 0 to 5 and φᵢ = (π/3)·i. The middle column in Fig. 12(a) shows the six corresponding phase-shifted patterns in the horizontal (x) direction. Since the phase ψ(x,y) is linear in the spatial position (x,y) in the object plane, it serves as the position encoder in the calibration: (x,y) was encoded as the phase ψ(x,y) in the patterns and then captured on the detector at (u,v). According to the principle of PSI, the phase term ψ(x,y) can be calculated from at least three phase-shifted images Iᵢ(x,y) [31,32]. Various families of PSI algorithms have been developed, such as the Carré [34], Hariharan [35], averaging [36,37], N-bucket [38] and N + 1 [39] algorithms. The algorithms with improved robustness to errors usually require collecting more phase-shifted images [31]. To balance accuracy and simplicity, we selected the six-step algorithm [38] for our calibration method, which calculates the phase as follows:

Fig. 12 (a) (Left column) A set of shifted phases between 0 and 2π with a constant step of π/3; (middle column) the corresponding six phase-shifted sinusoidal patterns generated for the x-direction only, with the same spatial frequency but phases shifted by the constant step shown at left; (right column) a portion of the corresponding raw images recorded on the detector. (b) The calculated phase in the x-direction plotted for all the pixels in the raw image. (c) The calculated phase in the y-direction plotted for all the pixels in the raw image.

tan ψ(x,y) = √3·[I₂(x,y) + I₃(x,y) − I₄(x,y) − I₅(x,y)] / [2I₁(x,y) + I₂(x,y) − I₃(x,y) − 2I₄(x,y) − I₅(x,y) + I₆(x,y)]  (4)

Experimentally, the phase-shifted patterns (Fig. 12(a), middle column, showing only the x-direction patterns) were generated in Matlab 2011a (Mathworks) and consecutively projected onto the system’s object plane by an image projector (Texas Instruments LightCrafter 4500 DMD kit). The non-linearity of the projector was gamma-corrected to avoid intensity distortion [40]. While each phase-shifted pattern was projected, a corresponding image was collected on the detector (right column of images in Fig. 12(a)). With all six images recorded for the six-step PSI algorithm (Eq. (4)), the phase was calculated for each pixel. To reduce noise, this process was repeated five times and the results were averaged. By performing the same procedure in the two perpendicular directions (x and y), the complete 2D coordinates (x,y) in the object plane were obtained for every pixel (u,v) in the detector plane. Figures 12(b) and 12(c) show the calculated phase in the x and y directions, respectively, for all pixels. In addition to the phase, the modulation I″(x,y) was also computed for every pixel; the modulation serves as an indicator of reliability in the later image processing.
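
For clarity, a minimal sketch of the per-pixel phase and modulation computation (the authors used Matlab; this Python/NumPy version is our illustration and assumes six detector frames I1-I6 acquired with π/3 phase steps, per Eq. (4)):

    import numpy as np

    def six_step_psi(frames):
        """Recover phase and modulation from six pi/3-shifted frames.
        frames: array of shape (6, H, W) holding I1..I6.
        Returns the wrapped phase in [-pi, pi] and a modulation map."""
        I1, I2, I3, I4, I5, I6 = frames
        num = np.sqrt(3) * (I2 + I3 - I4 - I5)     # numerator of Eq. (4)
        den = 2*I1 + I2 - I3 - 2*I4 - I5 + I6      # denominator of Eq. (4)
        phase = np.arctan2(num, den)               # four-quadrant arctangent
        modulation = np.hypot(num, den)            # proportional to I''
        return phase, modulation

Running this once with the x-direction patterns and once with the y-direction patterns yields the (x, y) encoding and a reliability weight for every detector pixel (u, v).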

In order to increase the signal-to-noise ratio and improve the calibration precision, multi-period sinusoidal patterns were also applied, and a phase-unwrapping algorithm [41] was used to recover the phases. Figure 13 compares a reconstructed 1951 USAF resolution target image using one, two, four and eight periods of sinusoidal patterns (shown below the corresponding sinusoidal patterns). Improved precision can be seen with the increased number of periods.
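
The multi-period phases are wrapped and must be unwrapped before they encode absolute position. A sketch of one common temporal strategy, using the single-period phase as the coarse reference (our assumption of how the unwrapping of [41] can be applied here):

    import numpy as np

    def unwrap_with_reference(phase_n, phase_1, n):
        """Unwrap an n-period wrapped phase against the single-period phase.
        phase_n: wrapped phase from the n-period patterns, in [-pi, pi]
        phase_1: phase from the single-period patterns (unambiguous)
        n:       number of sinusoid periods across the field."""
        k = np.round((n * phase_1 - phase_n) / (2 * np.pi))  # fringe order
        return (phase_n + 2 * np.pi * k) / n                 # refined phase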

Fig. 13 Comparison of the reconstructed image using multi-period phase-shifting calibration.

4.2. Spectral calibration

Spectral calibration determines the correspondence between raw image pixels (u,v) and the datacube’s spectral coordinate (λ), i.e., it determines the positions on the detector at which a certain wavelength is found. For each fiber core in our system, different wavelengths are separated by the non-linear dispersive element (prism) and aligned along a dispersed line. To locate the position of each wavelength along the line, three single-channel flat-field images were first acquired with 1-nm narrow band filters (488.0nm, 514.5nm, and 632.8nm). The image region property functions in Matlab 2011a (Mathworks) were used to identify individual fiber cores and compute their centroids (Figs. 14(a) and 14(b)). For each fiber core, the three wavelengths’ centroids were grouped according to their relative positions (aligned along a single line). Figure 14(c) presents a portion of the flat-field image superimposed with the centroid positions of 488.0nm, 514.5nm, and 632.8nm, marked with blue, green and red dots, respectively. Then, for each group, the complete line with all desired wavelength positions was located using polynomial interpolation [42] (Fig. 14(d)). In this way, a lookup table was created in the form shown in Fig. 15(a).
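
A sketch of the per-core wavelength-to-position interpolation (illustrative only; the centroid coordinates below are made-up examples, and the quadratic fit and 61-channel grid are our assumptions based on the text):

    import numpy as np

    wl_cal = np.array([488.0, 514.5, 632.8])    # nm, calibration filters
    u_cal = np.array([1203.4, 1214.9, 1256.2])  # example centroid columns
    v_cal = np.array([855.1, 855.6, 857.0])     # example centroid rows

    wl = np.linspace(450, 750, 61)              # the 61 output channels
    u_fit = np.polyval(np.polyfit(wl_cal, u_cal, 2), wl)  # quadratic in lambda
    v_fit = np.polyval(np.polyfit(wl_cal, v_cal, 2), wl)
    # (u_fit, v_fit) forms one row of the lookup table in Fig. 15(a).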

Fig. 14 (a) A portion of the thresholded single-wavelength (632.8nm) image. (b) Identified individual fiber cores with computed centroids shown in red circles. (c) A portion of the flat field image superimposed by the centroid positions (488.0nm, 514.5nm, 632.8nm) marked with blue, green and red circles respectively. (d) The complete lines with all desired wavelength positions interpolated (red stars) according to the three single-wavelength locations in (c).

Fig. 15 (a) The look-up table after the spectral calibration, designating the coordinate on the detector to find each voxel in the datacube. (b) The fiber core’s final phase averaged from all wavelengths. (c) The final lookup table, with two appended columns representing the fiber core’s x and y coordinate on the object plane.

4.3. Final lookup table and image reconstruction

The phase-shifting calibration provided a phase and modulation value for each pixel on the detector. However, one fiber core’s dispersed line on the detector occupies approximately 2 x 80 pixels. The final phase for each fiber core was therefore determined in two steps: (1) a phase and a modulation were interpolated at every wavelength position of the core (a single row in Fig. 15(a)); (2) as shown in Fig. 15(b), the fiber core’s final phase was computed as the average of all the phase values, weighted by their modulations. As a result, two columns representing the fiber core’s x and y coordinates on the object plane were appended to the lookup table (Fig. 15(c)).
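
The modulation-weighted averaging in step (2) is straightforward (a sketch with our own function name):

    import numpy as np

    def core_phase(phases, modulations):
        """Collapse a core's per-wavelength phases into one final phase,
        weighting each sample by its modulation (reliability)."""
        w = np.asarray(modulations, dtype=float)
        return np.sum(w * np.asarray(phases)) / np.sum(w)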

Furthermore, the modulations for all wavelengths in Fig. 15(b) were summed to determine the core’s final modulation. Due to imperfections in the fabrication procedure and the mismatch between the cores and the lenslets/pinholes, a few fiber cores had lower transmission than usual. These low-transmission fibers caused dark-spot artifacts in the reconstructed image (Fig. 16(a)). Therefore, the cores with modulations below a certain threshold were removed from the lookup table. As shown in Figs. 16(b)−16(d), with a higher threshold the reconstructed image is more homogeneous, at the expense of lower spatial sampling.

Fig. 16 Reconstructed images using different thresholds to filter out the cores with low modulations.

Before reconstructing the datacube, a flat-field image (f) was captured by the detector while imaging a uniform light source (for example, a uniform white daytime sky, or a collimated beam from a halogen lamp, depending on the illumination source of the object). A dark-field image (d) was then captured with the aperture of the fore-optics covered. The flat-field corrected object image (C) is obtained from the raw image (R, see Fig. 17(a)) by the following equation:

Fig. 17 (a) Raw image captured on the detector when imaging buildings on the Rice University campus. (b) The composite image reconstructed from the raw image in (a) without flat-field correction. (c) A flat-field image of a uniform white daytime sky. (d) Reconstructed image with flat-field correction.

C = (R − d) / (f − d)  (5)

Given the final lookup table and the flat-field corrected object image (C), each single-channel image was reconstructed in two steps: (1) the coordinates on the detector of all fibers at that wavelength were read from a single column in Fig. 15(c), and the fibers’ intensities were obtained by interpolating the raw image at these coordinates; (2) the interpolated intensities were remapped onto a mesh grid according to their phase values (last two columns in Fig. 15(c)), and a linear interpolation algorithm was used to estimate the values at the grid points.
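
A compressed sketch of these two steps for one spectral channel (SciPy’s interpolation routines stand in for the Matlab functions used by the authors; all names are ours, and the object-plane coordinates are assumed normalized to [0, 1]):

    import numpy as np
    from scipy.interpolate import griddata
    from scipy.ndimage import map_coordinates

    def reconstruct_channel(C, uv, xy, shape):
        """C: flat-field-corrected raw image, C = (R - d) / (f - d)
        uv: (N, 2) detector coordinates of all cores at this wavelength
            (one column of the lookup table in Fig. 15(c))
        xy: (N, 2) calibrated object-plane positions of the cores
        shape: (rows, cols) of the output single-channel image"""
        # Step 1: sample the raw image at each core's dispersed position.
        vals = map_coordinates(C, [uv[:, 1], uv[:, 0]], order=1)
        # Step 2: remap the sparse core intensities onto a regular grid.
        gy, gx = np.mgrid[0:1:shape[0]*1j, 0:1:shape[1]*1j]
        return griddata(xy, vals, (gx, gy), method='linear')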

As an example, Fig. 17(a) shows a raw image captured on the detector when imaging buildings at Rice University, and Fig. 17(b) shows the composite image directly reconstructed from Fig. 17(a) without flat-field correction (the composite image combines all spectral channels and is pseudo-colored using a wavelength-to-RGB conversion algorithm [43]). The intensity heterogeneity in the reconstructed image came from the variation in fiber transmission. Figure 17(c) presents a flat-field image of a uniform white daytime sky (the dark arc on the right side is a polishing scratch). Figure 17(d) is the reconstructed image after flat-field correction, in which the non-uniformity is eliminated (Fig. 21 presents a comparison of a set of reconstructed composite images with reference images captured by a DSLR camera).

5. System characterization

5.1. Spatial sampling

As mentioned in Section 2.4, the irregularity in the manually fabricated fiber bundle inevitably results in a number of mismatched lenslets/pinholes, leading to some loss in spatial sampling. The system coupled with the lenslet array yielded 15580 cores in the final lookup table, which corresponds to 130 x 120 spatial samples at a 10:9 aspect ratio. The photomask provided higher spatial sampling through the freedom of custom-designed patterns. The maximum spatial sampling was obtained with the photomask design shown in Fig. 5, for which the final lookup table contained 31853 fiber cores (~188 x 170 spatial samples). Higher pinhole density and larger pinhole diameter provide higher spatial sampling by selecting more cores (Tables 1 and 2), at the cost of higher crosstalk between the selected cores (discussed in Section 5.2).

Table 1. Spatial sampling provided by photomasks with 10µm pinholes and different pitches

Table 2. Spatial sampling provided by photomasks with a 31.5µm pitch and different pinhole diameters

5.2. Spectral crosstalk

To quantify the system crosstalk, a randomly selected core in the fiber bundle was illuminated through a 10µm pinhole. The intensities recorded at all wavelengths of the illuminated core were summed and denoted Si. The intensities recorded for the neighboring cores within a 100-pixel radius of the illuminated core (excluding the illuminated core itself) were summed and denoted Sn. The system crosstalk was defined as Sn/(Sn+Si) × 100%; a one-line code statement of this metric follows Tables 3 and 4 below. The measurement was repeated for five random cores, and the results were averaged. The crosstalk of the system coupled with the lenslet array was found to be 22.0% with a standard deviation of 13.9%. Tables 3 and 4 list the crosstalk and standard deviation for photomasks with three different pitches (10µm pinhole diameter) and three different pinhole diameters (31.5µm pitch). Lower pinhole density and smaller pinhole size reduced crosstalk, at the cost of lower spatial sampling.

Table 3. Crosstalk using photomasks with 10µm pinholes and different pitches

Table 4. Crosstalk using photomasks with a 31.5µm pitch and different pinhole diameters
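
The crosstalk figure of merit defined above is simple to state in code (a sketch; Si and Sn are the summed intensities described in the text):

    def crosstalk_percent(S_i, S_n):
        """Crosstalk = Sn / (Sn + Si) x 100%.
        S_i: summed intensity (all wavelengths) of the illuminated core
        S_n: summed intensity of neighboring cores within a 100-pixel radius"""
        return 100.0 * S_n / (S_n + S_i)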

5.3. Spectral resolution

As described in detail in our previous publication [9], a series of images was acquired using five 1-nm filters centered at 488, 514, 532, 589 and 633nm to evaluate the system’s spectral resolution. The averaged intensity profiles are shown in Fig. 18, with the average full width at half maximum (FWHM) values listed together with the standard deviation (STD) across the field.

Fig. 18 The spectral response of filters at 488, 514, 532, 589 and 633nm.

5.4. System throughput

As described in Section 2.5, to meet the optical performance requirement, the aperture stop in the optical re-imaging system needed to be closed down, leading to 0.05 NA at both the object and image space. When coupled with the 0.28 NA fiber bundle, the light throughput can be estimated using the following equation, where r1 is the radius of the re-imaging system’s entrance pupil, r2 is the radius of the cross-section of the fibers’ light cone at the entrance pupil plane, and F is the effective focal length of the optical re-imaging system:

πr₁² / πr₂² = π[F·tan(arcsin 0.05)]² / π[F·tan(arcsin 0.28)]² = 2.95%  (6)
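
Evaluating Eq. (6) numerically confirms the quoted figure (the focal length F cancels):

    import numpy as np

    na_stop, na_fiber = 0.05, 0.28
    T = (np.tan(np.arcsin(na_stop)) / np.tan(np.arcsin(na_fiber)))**2
    print(f"geometric throughput = {T:.2%}")   # -> 2.95%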

Using the same method described in our previous publication [9], the overall system throughput was measured to be 2.4%. The optical re-imaging system therefore made the major contribution to the system’s light loss. Note that this limitation comes from the commercially available optical elements rather than from the system principle (Section 2.1). In the future, we plan to develop a custom optical system that allows operation at a higher NA and improves throughput while maintaining satisfactory resolution.

6. Imaging results

The Rice University campus landscape outside our lab window was chosen as the imaging object to examine the system’s spectral imaging capability for distant scenes. As shown in Fig. 19(a), the scene outside the window was redirected 90 degrees by a 48 x 48 inch mirror and relayed to the system’s object plane by a photographic objective (Mitakon Zhongyi Speedmaster 85mm f/1.2 Lens). The angle of the mirror was adjustable in order to quickly change the field of view across the landscape.

Fig. 19 (a) The schematic system layout showing that the campus landscape scene outside the window was redirected 90 degrees by a mirror and relayed to the system’s object plane by a photographic objective. (b) The image of the fresh leaf relayed to the system’s object plane by two photographic objectives.

Both the lenslet array and the photomask provide comparable system throughput. Depending on the weather and sunlight conditions, the integration time to take a single snapshot hyperspectral image varied between 100 and 400 milliseconds. Figure 20 (left) presents a series of reconstructed single-channel images for one field of view; 12 of the 61 images were selected for display. The channels were pseudo-colored using a wavelength-to-RGB conversion algorithm [43]. Figure 20 (right) shows a pseudo-colored composite image formed by combining all spectral channels, together with four spectral features of interest: a spectrum from trees, a spectrum from a red roof, a spectrum from the brick wall, and a spectrum from blue window glass. The top row of Fig. 21 presents a set of composite images taken by our system for six different fields of view, obtained by adjusting the mirror’s angle. The bottom row lists the corresponding RGB images of the same fields of view taken by a Digital Single Lens Reflex (DSLR) camera (Canon EOS 5D Mark IV body, Mitakon Zhongyi Speedmaster 85mm f/1.2 Lens). Except for small differences caused by slightly different sunlight conditions, the features of the objects (buildings, windows, roofs, trees, etc.) resolved by the imaging spectrometer show good consistency with the RGB camera images.

Fig. 20 (Left half) A series of reconstructed single-channel images for one field of view. 12 images out of 61 were selected for display. (Right half) The pseudo-colored composite image formed by combining all spectral channels, together with four spectral features of interest: a spectrum from trees, a spectrum from a red roof, a spectrum from the brick wall, and a spectrum from blue glass.

Fig. 21 (Top row) A set of composite images taken by our system for six different fields of view, obtained by adjusting the mirror’s angle. (Bottom row) The corresponding RGB images of the same fields of view (sunlight conditions slightly different) taken by a Digital Single Lens Reflex camera (Canon EOS 5D Mark IV DSLR Camera body, Mitakon Zhongyi Speedmaster 85mm f/1.2 Lens).

Three different species of tree were imaged by our system within one field of view: (1) Live Oak, (2) Laurel Oak and (3) Bald Cypress. Figure 22(a) presents the reconstructed composite image of the scene, in which the positions of the trees are marked by white rectangles. The mean reflectance spectra of the three trees (Fig. 22(b), top row) were determined by averaging the spectra of every pixel over the marked areas in Fig. 22(a). For comparison, fresh leaves from the three imaged trees were collected (Fig. 22(c)) and imaged by our system in the lab. The leaves were illuminated by a halogen lamp (Thorlabs QTH10 Quartz Tungsten-Halogen Lamp) in reflection mode (Fig. 19(b)), and the image was relayed by two photographic objectives (Mitakon Zhongyi Speedmaster 85mm f/1.2 Lens, Sigma 85mm F1.4 EX DG HSM Lens). The same halogen lamp was collimated and placed in front of the fore-optics to provide the flat-field correction image (see Section 4.3). The imaged area of each leaf is marked by a white rectangle in Fig. 22(c). The averaged reflectance spectra over the whole area are shown in the middle row of Fig. 22(b). To provide reference data, the same area of each leaf was also measured with an OceanOptics modular spectrometer (Fig. 22(b), bottom row). Good consistency is seen between each tree’s spectrum in the distant scene, the corresponding leaf’s spectrum, and the OceanOptics measurement.

Fig. 22 (a) The reconstructed composite image of the scene in which the positions of the trees were marked in white rectangles: (1) Live Oak, (2) Laurel Oak and (3) Bald Cypress. (b) Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. Top row: the mean reflectance spectrum for the three trees determined by averaging the spectrum for every pixel over the marked area in (a). Middle row: the mean reflectance spectrum for the three leaves from the three trees determined by averaging the spectrum for every pixel over the marked area in (c). Bottom row: the measured spectrum of the same area of each leaf provided by an OceanOptics modular spectrometer as the reference data. (c) Photo of the fresh leaves from the three imaged trees collected from the campus, with the imaged area marked as white rectangles.

We also collected leaves from a Bald Cypress tree at four different stages of color change (Fig. 23, left photo). This provides a precursor to a possible application of studying plant health from an airborne or orbiting platform. The leaves were imaged under the same conditions as described for Fig. 19(b), and the reconstructed composite images are shown in the top row of Fig. 23 (right). The averaged reflectance spectra over the whole imaged area (middle row, Fig. 23 right) were in good agreement with the OceanOptics measurements (bottom row, Fig. 23 right).

Fig. 23 Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. Left photo: 4 collected leaves from a Bald Cypress tree at four different stages of color change, with the imaged area marked as white rectangles. First row on the right: the reconstructed composite images of the four leaves. Second row on the right: averaged reflectance spectrum over the whole imaged area of the leaves. Third row on the right: the measured spectrum of the same area of each leaf provided by an OceanOptics modular spectrometer as the reference data.

To demonstrate the system’s snapshot imaging capability for dynamic scenes, we imaged city traffic on Main Street in Houston using the continuous capture mode of the detector. Note that the maximum readout rate of the Imperx Bobcat B6620 used in the system was approximately 3.6 frames per second, limited by the readout electronics (in the future we plan to replace this sensor with a faster-readout, low-noise image sensor). Using an exposure time of 282ms (matching the frame period), a series of 45 raw images was obtained within 13 seconds, recording the continuous motion of cars moving at around 30mph. A hyperspectral video was made by reconstructing the datacube for each frame and combining the pseudo-colored composite images (Visualization 1). Figures 24(a)−24(c) present three frames of the video as an example. The objects (trees, cars, traffic light, etc.) are marked in Fig. 24(a). The spectra shown in Figs. 24(d)−24(f) were averaged over all the spatial samples in the area marked ①, corresponding to the frames shown in Figs. 24(a)−24(c), respectively. When the red car in frame (c) passed the marked area ①, the change in the spectrum (Fig. 24(d)), as compared to the white car in (e) and the empty road in (f), was evident. The spectra shown in Figs. 24(g)−24(i) were averaged over the area marked ② in frames (a)−(c), respectively, an area containing mostly trees. As expected, no noticeable change was found in the spectrum across the three frames. The area marked ③ in Fig. 24(a) covers a traffic signal light. Similar to the trees, since the light was green during the whole video, no noticeable change was found in the spectrum across the three frames (Figs. 24(j)−24(l), corresponding to Figs. 24(a)−24(c)), except for noise resulting from the low number of samples (only 5 fibers in area ③). Figures 24(m) and 24(n) present two single-channel images of the frame shown in Fig. 24(a). The red car appears as a dark spot in the green channel (520nm) and is much brighter in the red channel (650nm).

Fig. 24 Sample frames of the hyperspectral “video” recording (see Visualization 1) of the urban traffic on Main Street, Houston. Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. The spectra shown in (d)-(i) were averaged over the respective areas marked in frames (a)-(c).

7. Conclusions and discussions

In conclusion, we have presented a fiber-based snapshot imaging spectrometer that provides a datacube of up to 188 x 170 x 61 (the sample counts along x, y, and λ, respectively) in the 450nm-750nm visible range. The custom-designed fiber bundle was fabricated by assembling single ribbons of multi-core fibers. Two methods, using a lenslet array and photomasks, were used to sample the object with a subset of selected cores. A rapid non-scanning spatial calibration method was developed to calibrate the >30000 spatial samples of the system within 5 minutes by acquiring only tens of images. The imaging results of the landscape and vegetation on the Rice University campus demonstrated the system’s spectral imaging capability and showed significant promise for a fiber-based snapshot imaging spectrometer in remote sensing applications.

While still facing a few technological challenges, we have demonstrated a pathway to a compact imaging spectrometer based on fiber bundles with a large datacube. The innovations of our work include: (1) The spatial sampling has been improved by approximately an order of magnitude compared to the spatial sampling reported for fiber-based systems in the literature [9,14]. (2) We adopted a commercial fiber ribbon with 10µm core size to miniaturize the fiber bundle and optimized the fiber NA to achieve system compactness. The presented system has a volume of 600mm x 130mm x 150mm, which is not far from the size required for an Unmanned Aerial Vehicle (UAV). (3) Our fiber bundle design with gaps between rows at the output is not only a method to increase the spatial sampling (compared to the traditional square-to-line geometry), but also potentially allows tuning between spatial and spectral sampling (by adjusting the gap spacing) to meet specific application requirements. (4) The spatial calibration based on phase-shifting technology provides a novel and rapid calibration method for imaging spectrometers with high spatial sampling. This method can be further extended to the general calibration of both coherent and incoherent fiber bundles, as well as any other imaging device with spatial displacement and distortion.

Because of the spectral response of the camera, the current experiments were limited to the visible region; the short-wave infrared (SWIR) region could be captured with a different camera, enabling the system’s use in a wider range of remote sensing applications. The current prototype has a volume of 600mm (L) x 130mm (W) x 150mm (H), which is not far from the size required for deployment on an Unmanned Aerial Vehicle (UAV). The system will be further miniaturized in the future with a smaller fiber bundle and custom-designed imaging optics, to fit a broader range of remote sensing platforms. Beyond remote sensing, the system can also be applied to biomedical imaging, such as fluorescence imaging and disease diagnostics (e.g., retinal imaging), when coupled with a microscope or other medical imaging devices. The datacube of our system (188 x 170 x 61) is already comparable to those of other snapshot imaging spectrometers used in cell signaling, cancer diagnostics, retinal imaging, etc., such as the IMS (350 × 350 × 46) [44,45] and CTIS (203 × 203 × 55) [46]. In the future, with custom-fabricated imaging optics and a further miniaturized fiber bundle, an even larger datacube can be achieved.

As a future step, a more compact fiber bundle will be made from custom, in-house fabricated fiber ribbons produced by winding and tapering single fibers, which will further reduce the volume of the system. The lenslet array/photomask is needed purely because of the off-the-shelf multi-core fibers purchased from Schott; we do not consider it part of the system principle, and single-core fibers will be used in the custom ribbon fabrication to eliminate the core-selection step entirely. A new dispersive re-imaging optical system will also be custom designed and optimized to open the aperture stop and achieve higher light throughput. Moreover, the fiber-based light-guide design potentially allows tuning between spatial and spectral sampling to meet specific application requirements; switching the photomask is one potential approach for trading the system’s spatial sampling against the space available for spectral sampling, without changing the field of view. Additionally, future developments will add a reference camera to aid our system: similar to [47,48], we will work on a hybrid system that combines a high-spatial-resolution color reference camera with a low-resolution spectral imager to increase data content.
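The throughput motivation for the new re-imaging optics can be checked against the last relation listed under Equations: with the present 0.05 system NA versus the fibers’ 0.28 NA, the collected fraction is the ratio of the two pupil areas at a common distance F. A quick check (the function name is illustrative):

```python
import math

def pupil_area_ratio(na_used, na_fiber):
    """Fraction of the fiber's output cone collected by a relay of lower NA:
    (pi * [F tan(arcsin NA1)]^2) / (pi * [F tan(arcsin NA2)]^2); F cancels."""
    t1 = math.tan(math.asin(na_used))
    t2 = math.tan(math.asin(na_fiber))
    return (t1 / t2) ** 2

print(f"{pupil_area_ratio(0.05, 0.28):.2%}")  # ~2.95%
```

Opening the aperture stop toward the fiber NA therefore offers more than an order of magnitude of additional light throughput.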

Funding

NASA Earth Science Technology Office (ESTO) program (NNX17AD30G).

Acknowledgments

We would like to thank all the members of the Tkaczyk lab for their helpful discussions and assistance.

Disclosures

Dr. Tomasz Tkaczyk has financial interests in Attoris LLC focusing on applications and commercialization of hyperspectral imaging technologies.

References

1. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep. 616, 1–37 (2016).

2. N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013).

3. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19(1), 010901 (2014).

4. L. Gao and R. T. Smith, “Optical hyperspectral imaging in microscopy and spectroscopy - a review of data acquisition,” J. Biophotonics 8(6), 441–456 (2015).

5. P. Mouroulis, R. O. Green, and T. G. Chrien, “Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information,” Appl. Opt. 39(13), 2210–2220 (2000).

6. P. J. Cutler, M. D. Malik, S. Liu, J. M. Byars, D. S. Lidke, and K. A. Lidke, “Multi-color quantum dot tracking using a high-speed hyperspectral line-scanning microscope,” PLoS One 8(5), e64320 (2013).

7. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot image mapping spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18(14), 14330–14344 (2010).

8. J. G. Dwight and T. S. Tkaczyk, “Lenslet array tunable snapshot imaging spectrometer (LATIS) for hyperspectral fluorescence microscopy,” Biomed. Opt. Express 8(3), 1950–1964 (2017).

9. Y. Wang, M. E. Pawlowski, and T. S. Tkaczyk, “High spatial sampling light-guide snapshot spectrometer,” Opt. Eng. 56(8), 081803 (2017).

10. M. Hubold, R. Berlich, C. Gassner, R. Brüning, and R. Brunner, “Ultra-compact micro-optical system for multispectral imaging,” Proc. SPIE 10545, 105450V (2018).

11. T. Mu, F. Han, D. Bao, C. Zhang, and R. Liang, “Compact snapshot optically replicating and remapping imaging spectrometer (ORRIS) using a focal plane continuous variable filter,” Opt. Lett. 44(5), 1281–1284 (2019).

12. D. W. Fletcher-Holmes and A. R. Harvey, “Real-time imaging with a hyperspectral fovea,” J. Opt. A, Pure Appl. Opt. 7(6), S298–S302 (2005).

13. N. Gat, G. Scriven, J. Garman, M. De Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4D-IS),” Proc. SPIE 6302, 63020M (2006).

14. J. Kriesel, G. Scriven, N. Gat, S. Nagaraj, P. Willson, and V. Swaminathan, “Snapshot hyperspectral fovea vision system (HyperVideo),” Proc. SPIE 8390, 83900T (2012).

15. P. S. Hsu, D. Lauriola, N. Jiang, J. D. Miller, J. R. Gord, and S. Roy, “Fiber-coupled, UV-SWIR hyperspectral imaging sensor for combustion diagnostics,” Appl. Opt. 56(21), 6029–6034 (2017).

16. B. Khoobehi, A. Khoobehi, and P. Fournier, “Snapshot hyperspectral imaging to measure oxygen saturation in the retina using fiber bundle and multi-slit spectrometer,” Proc. SPIE 8229, 82291E (2012).

17. B. Khoobehi, K. Firn, E. Rodebeck, and S. Hay, “A new snapshot hyperspectral imaging system to image optic nerve head tissue,” Acta Ophthalmol. 92(3), e241 (2014).

18. N. Bedard and T. S. Tkaczyk, “Snapshot spectrally encoded fluorescence imaging through a fiber bundle,” J. Biomed. Opt. 17(8), 080508 (2012).

19. H. T. Lim and V. M. Murukeshan, “A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications,” Sci. Rep. 6(1), 24044 (2016).

20. A. Jung, R. Michels, and G. Rainer, “Portable snapshot spectral imaging for agriculture,” Acta Agraria Debreceniensis 150, 221–225 (2018).

21. C. Carrizo, A. Gilerson, R. Foster, A. Golovin, and A. El-Habashi, “Characterization of radiance from the ocean surface by hyperspectral imaging,” Opt. Express 27(2), 1750–1768 (2019).

22. R. M. Suggs, W. J. Cooke, R. J. Suggs, W. R. Swift, and N. Hollon, “The NASA lunar impact monitoring program,” in Advances in Meteoroid and Meteor Science (Springer, 2007), pp. 293–298.

23. S. Ackelson, T. Bell, H. Dierssen, J. Goodman, R. Green, L. Guild, E. Hochberg, V. V. Klemas, S. Lavender, C. Lee, P. Minnett, F. Muller-Karger, J. Ortiz, S. Palacios, D. R. Thompson, K. Turpie, and R. Zimmerman, Global Observations of Coastal and Inland Aquatic Habitats (NASA, 2016), pp. 1−18.

24. G. Dobler, M. Ghandehari, S. E. Koonin, and M. S. Sharma, “A hyperspectral survey of New York City lighting technology,” Sensors (Basel) 16(12), 2047 (2016).

25. C. Weidman, A. Boye, and L. Crowell, “Lightning spectra in the 850- to 1400-nm near-infrared region,” J. Geophys. Res. D Atmospheres 94, 13249–13257 (1989).

26. R. Y. Tsien, “Fluorescent probes of cell signaling,” Annu. Rev. Neurosci. 12(1), 227–253 (1989).

27. D. T. Dicker, J. Lerner, P. Van Belle, S. F. Barth, D. Guerry 4th, M. Herlyn, D. E. Elder, and W. S. El-Deiry, “Differentiation of normal skin and melanoma using high resolution hyperspectral imaging,” Cancer Biol. Ther. 5(8), 1033–1038 (2006).

28. M. Descour and E. Dereniak, “Computed-tomography imaging spectrometer: experimental calibration and reconstruction results,” Appl. Opt. 34(22), 4817–4826 (1995).

29. N. Bedard, N. Hagen, L. Gao, and T. S. Tkaczyk, “Image mapping spectrometry: calibration and characterization,” Opt. Eng. 51(11), 111711 (2012).

30. D. Jackson, T. Bartindale, and P. Olivier, “FiberBoard: compact multi-touch display using channeled light,” in Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ACM, 2009), pp. 25–28.

31. D. Malacara, Optical Shop Testing (John Wiley & Sons, 2007), Chap. 14.

32. R. Leach, Optical Measurement of Surface Topography (Springer, 2011).

33. P. R. Fernández, J. L. Lázaro Galilea, A. Gardel Vicente, I. Bravo Muñoz, A. E. Cano García, and C. Luna Vázquez, “Improving the calibration of image sensors based on IOFBs, using Differential Gray-Code Space Encoding,” Sensors (Basel) 12(7), 9006–9023 (2012).

34. P. Carré, “Installation et utilisation du comparateur photoélectrique et interférentiel du Bureau International des Poids et Mesures,” Metrologia 2(1), 13–23 (1966).

35. P. Hariharan, B. F. Oreb, and T. Eiju, “Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm,” Appl. Opt. 26(13), 2504–2506 (1987).

36. J. Schwider, R. Burow, K. E. Elssner, J. Grzanna, R. Spolaczyk, and K. Merkel, “Digital wave-front measuring interferometry: some systematic error sources,” Appl. Opt. 22(21), 3421–3432 (1983).

37. J. C. Wyant and K. N. Prettyjohns, U.S. Patent 4,639,139 (1987).

38. J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses,” Appl. Opt. 13(11), 2693–2703 (1974).

39. K. G. Larkin and B. F. Oreb, “Design and assessment of symmetrical phase-shifting algorithms,” J. Opt. Soc. Am. A 9(10), 1740–1748 (1992).

40. H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. 43(14), 2906–2914 (2004).

41. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (Wiley, 1998).

42. R. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Trans. Acoust. Speech Signal Process. 29(6), 1153–1160 (1981).

43. D. Bruton, “Color Science,” http://www.midnightkite.com/color.html (accessed March 27, 2019).

44. A. D. Elliott, L. Gao, A. Ustione, N. Bedard, R. Kester, D. W. Piston, and T. S. Tkaczyk, “Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer,” J. Cell Sci. 125, 4833–4840 (2012).

45. J. G. Dwight, C. Y. Weng, R. E. Coffee, M. E. Pawlowski, and T. S. Tkaczyk, “Hyperspectral image mapping spectrometry for retinal oximetry measurements in four diseased eyes,” Int. Ophthalmol. Clin. 56(4), 25–38 (2016).

46. B. Ford, M. Descour, and R. Lynch, “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express 9(9), 444–453 (2001).

47. Y. Murakami, K. Nakazaki, and M. Yamaguchi, “Hybrid-resolution spectral video system using low-resolution spectral sensor,” Opt. Express 22(17), 20311–20325 (2014).

48. T. Mu, S. Pacheco, Z. Chen, C. Zhang, and R. Liang, “Snapshot linear-Stokes imaging spectropolarimeter using division-of-focal-plane polarimetry and integral field spectroscopy,” Sci. Rep. 7(1), 42115 (2017).

Supplementary Material (1)

Visualization 1: Hyperspectral “video” recording of the urban traffic on Main Street, Houston, TX. Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. The spectra shown are averaged over the respective areas marked in the composite frames.



Figures (24)

Fig. 1 Schematic illustration of the principle of the fiber-based imaging spectrometer system.
Fig. 2 (a) Schematic illustration of the minimum collector lens diameter (D) determined by the fiber bundle output area (diagonal length L) and the fibers’ numerical aperture (NA). (b) A micrograph of the Schott multicore fiber ribbon’s cross-section, with annotations indicating dimensions. (c) A heat map of the required collector lens f/# calculated using different values of L and D, with fiber NA 0.65. (d) A heat map of the required collector lens f/# calculated using different values of L and D, with fiber NA 0.28.
Fig. 3 (a) A simplified plot showing the designed input end with 4 x 6 fibers reformatted into 2 x 12 fibers at the output. Each row of fibers is marked with a different color. (b) A simplified plot showing the same input end as (a), and the coupling of each exit lenslet’s sub-pupil image into a 10µm diameter core, creating void space for dispersion without overlapping. (c) The reformatted output end of the same fibers in (b). (d) The dispersed cores in (c).
Fig. 4 Schematic system layout with two alternative approaches to selectively sample the object with a subset of cores, using a lenslet array (a) and a photomask (b).
Fig. 5 (Left) The design of a photomask to maximize the core density. (Center) The perfect alignment of the pinholes with the fiber cores. (Right) Sketch of the dispersed cores in one row of fibers in the center figure.
Fig. 6 The required f/# of the collector lens as a function of the lens diameter for the 30mm diagonal fiber bundle output area and 0.28 fiber NA.
Fig. 7 Zemax OpticStudio simulation of four representative configurations for a pair of identical achromatic doublet lenses. (a) Telecentric setup (lens distance twice the focal length), 0.05 system NA, unconventional lens orientation (higher-curvature surface facing the object). (b) Compact setup (lens distance as close as possible, set to 40mm to leave space for the disperser and aperture stop), 0.05 system NA, conventional lens orientation (lower-curvature surface facing the object). (c) Compact setup, 0.05 system NA, unconventional lens orientation; chosen as the final solution. (d) Same setup as (c), with a 0.15 system NA.
Fig. 8 Fiber bundle fabrication process. (a) Schematic of the whole assembling and cutting procedure. (b) Photo of a fiber ribbon segment cut to 8 inches, containing two complete fixed sections and one complete freeform section. (c) Photo showing the fixed section at the input side split into two halves by a razor blade. (d) Photo of the two halves of the input end stacked above each other. (e) Photo of the whole segment placed into the gluing mold made of laser-cut acrylic board, showing the room for the gluing epoxy. (f) Photo of the assembled bundle in the mold cut by the diamond saw.
Fig. 9 (a) Photo of the fiber bundle’s input side after diamond saw cutting, with the red dashed square indicating the fibers. (b) Photo of the fiber bundle’s output side after diamond saw cutting, with the red dashed square indicating the fibers. (c) Micrograph of the fibers at the input end before polishing, showing the tightly packed 36-core fibers. (d) Micrograph of one row of fibers at the output end before polishing. (e) Photo of the fiber bundle mounted on an automatic polisher (Nanopol fiber polishing system, Ultra-Tec). (f) Micrograph of one row of fibers at the output end after polishing.
Fig. 10 Photo of the system setup.
Fig. 11 (a) A portion of the single-wavelength image captured by the detector using a 1-nm wide bandpass filter (632.8nm), with the fiber bundle coupled with a lenslet array (37µm pitch rectangular alignment, 188µm focal length). (b) The corresponding broadband image of (a). (c) A portion of the single-wavelength image captured by the detector using a 1-nm wide bandpass filter (632.8nm), with the fiber bundle coupled with a photomask (37µm pitch rectangular alignment, 10µm pinhole diameter). (d) The corresponding broadband image of (c).
Fig. 12 (a) (Left numbers) A set of shifted phases between 0 and 2π with a constant step π/3; (first column of images) the corresponding six phase-shifted sinusoidal patterns generated for the x-direction only, with the same spatial frequency but phases shifted by the constant step shown in the left column; (second column of images) a portion of the corresponding raw images recorded on the detector. (b) The calculated phase in the x-direction plotted for all pixels in the raw image. (c) The calculated phase in the y-direction plotted for all pixels in the raw image.
Fig. 13 Comparison of the reconstructed image using multi-period phase-shifting calibration.
Fig. 14 (a) A portion of the thresholded single-wavelength (632.8nm) image. (b) Identified individual fiber cores with computed centroids shown as red circles. (c) A portion of the flat-field image superimposed with the centroid positions (488.0nm, 514.5nm, 632.8nm) marked with blue, green and red circles, respectively. (d) The complete lines with all desired wavelength positions interpolated (red stars) according to the three single-wavelength locations in (c).
Fig. 15 (a) The look-up table after the spectral calibration, designating the coordinate on the detector at which to find each voxel in the datacube. (b) The fiber core’s final phase averaged over all wavelengths. (c) The final look-up table, with two appended columns representing the fiber core’s x and y coordinates on the object plane.
Fig. 16 The reconstructed image using different thresholds to filter out the cores with low modulations.
Fig. 17 (a) Raw image captured on the detector when imaging buildings on the Rice University campus. (b) The composite image reconstructed from the raw image in (a) without flat-field correction. (c) A flat-field image of a uniform white daytime sky. (d) Reconstructed image with flat-field correction.
Fig. 18 The spectral response of filters at 488, 514, 532, 589 and 633nm.
Fig. 19 (a) The schematic system layout showing that the campus landscape scene outside the window was redirected 90 degrees by a mirror and relayed to the system’s object plane by a photographic objective. (b) The image of the fresh leaf relayed to the system’s object plane by two photographic objectives.
Fig. 20 (Left half) A series of reconstructed single-channel images for one field of view; 12 of the 61 images were selected for display. (Right half) The pseudo-colored composite image formed by combining all spectral channels, together with four spectral features of interest: a spectrum from trees, a spectrum from a red roof, a spectrum from the brick wall, and a spectrum from blue glass.
Fig. 21 (Top row) A set of composite images taken by our system for six different fields of view obtained by adjusting the mirror’s angle. (Bottom row) The corresponding RGB images of the same fields of view (sunlight conditions slightly different) taken by a digital single-lens reflex camera (Canon EOS 5D Mark IV DSLR body, Mitakon Zhongyi Speedmaster 85mm f/1.2 lens).
Fig. 22 (a) The reconstructed composite image of the scene, in which the positions of the trees are marked with white rectangles: (1) Live Oak, (2) Laurel Oak and (3) Bald Cypress. (b) Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. Top row: the mean reflectance spectrum for the three trees determined by averaging the spectrum of every pixel over the marked area in (a). Middle row: the mean reflectance spectrum for the three leaves from the three trees determined by averaging the spectrum of every pixel over the marked area in (c). Bottom row: the measured spectrum of the same area of each leaf provided by an OceanOptics modular spectrometer as reference data. (c) Photo of the fresh leaves from the three imaged trees collected on campus, with the imaged areas marked as white rectangles.
Fig. 23 Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. Left photo: four collected leaves from a Bald Cypress tree at four different stages of color change, with the imaged areas marked as white rectangles. First row on the right: the reconstructed composite images of the four leaves. Second row on the right: averaged reflectance spectrum over the whole imaged area of each leaf. Third row on the right: the measured spectrum of the same area of each leaf provided by an OceanOptics modular spectrometer as reference data.
Fig. 24 Sample frames of the hyperspectral “video” recording (see Visualization 1) of the urban traffic on Main Street, Houston. Horizontal axis for all plots: wavelength/nm; vertical axis for all plots: normalized intensity. The spectra shown in (d)-(l) were averaged over the respective areas marked in frames (a)-(c).

Tables (4)

Table 1 Spatial samplings provided by photomasks with 10µm pinholes and different pitches

Table 2 Spatial samplings provided by photomasks with a 31.5µm pitch and different pinhole diameters

Table 3 Crosstalk using photomasks with 10µm pinholes and different pitches

Table 4 Crosstalk using photomasks with a 31.5µm pitch and different pinhole diameters

Equations (6)


$$\text{output area} = \text{core area} \times \text{spatial sampling} \times \text{spectral sampling}$$

$$f/\# = \frac{D - L}{2D\tan(\arcsin\alpha)}$$

$$I_i(x,y) = I' + I''\cos\left[\psi(x,y) + \varphi_i\right]$$

$$\tan\psi(x,y) = \frac{\sqrt{3}\left[I_2(x,y) + I_3(x,y) - I_4(x,y) - I_5(x,y)\right]}{2I_1(x,y) + I_2(x,y) - I_3(x,y) - 2I_4(x,y) - I_5(x,y) + I_6(x,y)}$$

$$C = \frac{R_d}{f_d}$$

$$\frac{\pi r_1^2}{\pi r_2^2} = \frac{\pi\left[F\tan(\arcsin 0.05)\right]^2}{\pi\left[F\tan(\arcsin 0.28)\right]^2} = 2.95\%$$