Optica Publishing Group

Ruggedized, field-ready snapshot light-guide-based imaging spectrometer for environmental and remote sensing applications

Open Access

Abstract

A field-ready, fiber-based, high-spatial-sampling snapshot imaging spectrometer was developed for applications such as environmental monitoring and smart farming. The system achieves video-rate frame transfer and exposure times down to a few hundred microseconds in typical daylight conditions, with ∼63,000 spatial points and 32 spectral channels across the 470nm to 700nm wavelength range. We designed portable, ruggedized opto-mechanics to allow for imaging from an airborne platform. To ensure successful data collection prior to flight, imaging speed and signal-to-noise ratio were characterized for imaging a variety of land covers from the air. The system was validated by performing a series of observations including: Liriope muscari plants under a range of water-stress conditions in a controlled laboratory experiment and field observations of sorghum plants in a variety of soil conditions. Finally, we collected data from a series of engineering flights and present reassembled images and spectral sampling of rural and urban landscapes.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging spectrometers allow for direct observations of a scene while recording spatial and spectral wavelength information beyond the basic tricolor RGB of common commercial cameras. Each image comprises a three-dimensional datacube (x, y, λ), where x and y denote the spatial information and λ the spectral content [1,2]. Imaging spectrometers are used in a number of applications, such as astronomy [2], smart farming [3,4], remote sensing [2,5,6], biomedical imaging [7,8], and food safety [9].

The 3D datacube can be generated via a number of approaches: point scanning, where each spatial point with all its associated spectral channels is scanned until a complete image is formed; line scanning (also called push-broom), where a line of spatial points and their associated spectral channels is scanned; wavelength scanning, where the entire spatial image is captured one spectral channel at a time; or snapshot acquisition, where the entire datacube is acquired at once [2]. Scanning techniques require either moving parts, such as a galvo mirror, or the movement of the observing platform, as is the case with some line-scanning remote sensing spectrometers [2].

Snapshot imaging spectrometers capture the datacube in a single exposure event, either directly with full sampling or through computational methods such as compressive sensing [10] to undersample and then reconstruct the full datacube. The simultaneous acquisition of the datacube enabled by snapshot imaging has significant advantages over scanning systems, such as the ability to monitor fast, dynamic processes and to ameliorate the effects of system jitter. This method also allows for an increase of the signal-to-noise ratio (SNR), as all data are acquired in parallel. A moving snapshot platform allows for progressive sampling of the same scene, quickly creating a large aggregate datacube with high spatial resolution and many spectral samples [2].

A subset of snapshot imaging techniques, called integral field spectroscopy [2], utilizes mirror arrays [11], lenslet arrays [12], or fiber bundles [6] to physically separate datacube voxels on the sensor. Optical fibers can be used to reformat the image while also providing a high degree of design flexibility and compactness. This enables integration with a simple layout and, in comparison to a lenslet array, allows an arbitrary number of spectral channels while more efficiently utilizing the sensor area. Moreover, the flexibility of optical fibers offers exciting potential for tuning the balance between spatial and spectral information through selective reorganization of the fiber bundles. Fiber-based spectrometers have a long history of use in astronomy [2] and have been utilized in many other disciplines, such as biomedical imaging [8,13] and remote sensing [14,15].

In our previous work, we presented a benchtop prototype of a Tunable Lightguide Imaging Snapshot Spectrometer (TuLIPSS) [6]. This prototype system was limited by low light throughput, a camera capped at a restrictive frame transfer rate of 3 frames per second (fps), and the inability to easily move the system from the laboratory. These limitations inhibited the ability to image under variable light conditions and to record at video rates, therefore not fully leveraging the innate advantages of snapshot spectroscopy discussed above. To address these issues, we have developed a ruggedized, laptop-operated snapshot imaging system for remote sensing from an airborne platform.

The system presented here improves spatial sampling, light throughput, and imaging speed. With the current implementation, we are able to double the number of spatial samples, from ∼32,000 to ∼63,000, which, to the best of our knowledge, exceeds the performance of all existing and previously demonstrated fiber-based spectrometers. Our aims were to evolve the TuLIPSS prototype into a field-ready system and to validate and assess its performance in both lab and field conditions. Validation of the system in the field was achieved through a set of observations of a variety of sorghum grown in two soil types. Finally, we performed engineering flights on a Cessna 172 to demonstrate the field-ready and ruggedized status of the current device.

In subsequent sections below, we discuss the system principle and implementation, present analysis of the system performance and detail the post-processing methods unique to the snapshot fiber-based design, including calibration, image reconstruction, and image mosaicking. Finally, we provide results of a series of observational experiments and a summary and discussion of the TuLIPSS system and its future prospects.

2. System principle and implementation

In this section we lay out the system principle for the TuLIPSS system and discuss the key factors in the selection of the optical components that make up the instrument. The principle of this fiber-based imaging spectrometer is displayed in Fig. 1. Here we focus on general characteristics and current implementation details; more details on previous iterations are discussed in [6].


Fig. 1. Schematic illustrating the principle of the snapshot imaging system.


Fiber bundles can be leveraged to reformat an incoming image and to create physical space on the detector to allow each spatial point (a single fiber core) to be dispersed as a line. Light from a distant object is imaged by the fore-optics onto the input end of the custom fabricated fiber bundle which splits the image into separated rows at the output. A relay system, functioning much like an infinity corrected microscope, collimates the image of the fiber bundle output, selects the target bandwidth using a bandpass filter, separates wavelengths with a wedge prism, and finally reimages the dispersed signal onto the image sensor.

In this work, a Nikon AF Nikkor 50mm f/1.4D photographic lens was utilized to image the scene onto the input of the fiber bundle (Table 1). Note that the front objective can be easily changed to accommodate specific imaging conditions or application requirements (object distance, field). The custom fiber bundle was manufactured in-house from ribbons of Schott multicore fiber strands with a 10µm diameter core (see details below). The Olympus MVX-TLU from an MVX microscope acts as the relay objective and an Olympus MVPLAPO 0.63X acts as the relay tube lens. The two components must be used in combination, as together they correct for field curvature. The filter is chosen to select a spectral bandwidth from 450nm to 705nm (Semrock FF01-715/SP-38 filter) and the Ross Optical wedge prism (P-WRC055) introduces 4° of dispersion. The image sensor is an sCMOS PCO Edge 5.5 camera selected for its low read noise (1.0e- median), 28fps transfer rate in global shutter mode, and USB 3.0 connectivity, which allows the sensor to be easily controlled by a laptop.

Critical to the function of the system, the fiber bundle is a dense matrix of fibers at the input, which are separated at the output into vertical columns, providing space on the detector for dispersion (while here we focus on system specific bundle and mask, more details on the fabrication process can be found in [6]). The spatial resolution of a reconstructed image is determined by the size and spacing of the fiber cores at the input of the fiber bundle, whereas the spectral resolution is determined by the width of the gap at the fiber bundle output and the degree of dispersion introduced by the prism. To minimize optical system size and maximize the number of samples, it is important to construct the fiber bundle with small diameter fibers (including core and cladding). The smallest suitable commercially available fibers are manufactured by Schott with a 10µm diameter arranged in a multifiber strand with a 6 × 6 block arrangement (Fig. 2). While single 10µm fiber ribbons would be optimal, the block format is necessary for mechanical strength and stability. To avoid overlapping fiber cores after dispersion, fiber cores are selected from each row by utilizing a patterned photomask with 7µm wide diagonal slits placed at the input of the fiber bundle (Fig. 2). The utilization of slit photomasks and subsequently adapted calibration software doubled the available spatial samples compared to the previously reported modality.


Fig. 2. Image of fiber bundle (a) Fiber bundle input without a photomask. (b) Fiber bundle output without photomask. (c) Fiber bundle input with slit photomask applied so that 6 fibers are selected from 36 within a multifiber strand. (d) Fiber bundle output with slit photomask.


The Schott fibers were custom manufactured for this project as ribbons of 200 fiber blocks. A condensed, square input of fiber strands is fabricated by cutting one end of each ribbon in half, stacking the halves, and then stacking similarly processed ribbons on top. At the other end of the ribbon section, the output, spacers are inserted between each row to create room for dispersion. The bundle is then immersed and fixed in optical epoxy, followed by a polishing procedure.

To allow for in-field and airborne imaging while maintaining optical alignment and calibration, the TuLIPSS system was housed in a custom machined-aluminum enclosure (Fig. 3). An inset under the camera allows for the incorporation of a CPU fan for air intake, and vents alongside the camera allow for air outflow. A 90° bracket was designed to enable the placement of the system for engineering flights on a Cessna 172 aircraft. Additionally, a bar can be attached to the side of the enclosure, allowing a high spatial sampling RGB reference camera to be mounted in the same plane as the fore-objective (FL3-U3-88S2C-C equipped with an Edmund Optics 50mm f/2 LF 85-203 objective).


Fig. 3. (a) Solidworks schematic of the snapshot imaging system. (b)-(e) Images of the system: (b) from above with the lid on, (c) from underneath, (d) from above with the lid removed, and (e) from the front. Note an RGB reference camera has been attached to the side.


The system’s housing has dimensions 22cm (W) × 61cm (L) × 14.5cm (H); however, the volume could be reduced for a low-weight optimized implementation by, for example, folding the optical path to obtain a square aspect ratio. An additional goal of the current housing design was to allow for the easy interchange of components, e.g. prisms and objectives/tube lenses of the relay system. Note that different dispersion requirements may necessitate prisms with different dispersion angles, which in turn requires a change in the angle of the platform.

Operation in the field (Fig. 4(a)) utilizes the raw image taken by the system, a reconstructed pseudo-color image, and a high spatial sampling reference camera with an overlapping field of view. As illustrated in Fig. 4, each raw image can be reassembled into 32 spectral channel images (Fig. 4(c)) and a pseudo-color composite image (Fig. 4(d)), and spectra can be extracted from within the image (Fig. 4(e)).


Fig. 4. Rice campus observed through a laboratory window. (a) TuLIPSS system and the control laptop displaying, from left to right: a raw image (b), a pseudo-colored reassembled image (d), and an image from the RGB camera. (c) A selection of 11 of the 32 spectral channels from 470nm to 700nm. (e) An example spectrum from within the imaged scene.


3. Analysis of the system design

The goal of this work was to design an imaging system capable of video rate imaging in a portable form factor that enables data collection in the field with minimal setup. While snapshot acquisition of the datacube is simultaneous, it is not inherently fast. An imaging spectrometer that requires a long exposure time for a dynamic scene will result in motion blur, while a fast scanning spectrometer may produce artifacts [2]. To fully leverage the inherent advantage of simultaneous datacube acquisition in snapshot imaging, the overall light-throughput of the system is critical. Additional considerations, such as the match between fiber core image size to pixel size and the degree of dispersion, affect photon flux at the sensor and therefore impact final imaging speed as well.

As explored in our radiometric design model for the TuLIPSS system [16], the constraints governing the imaging of the fiber bundle are considerable. The relay optics that reimage the fiber bundle output should, optimally, match the fiber core’s numerical aperture (NA) to avoid coupling losses while also achieving a large field-of-view (FOV) to image the entire bundle (note that this implementation utilizes 0.28 NA fibers and the output area of the bundle is 12.5 × 15.2mm²). In addition, as individual fibers sample the image collected by the fore-optics at the input, the relay system does not need to be diffraction limited at the fibers’ NA but should perform such that the point spread function is smaller than the pixels of the image sensor across the field. Utilizing the radiometric design model [16], we identified a high-performance stereomicroscope objective and tube lens from the Olympus MVX microscope as having a suitable FOV while operating at 0.0995 collection NA. Other available lens options worked at magnifications resulting in lower overall spatial sampling (see [16] for more details). Thus, the selected lens combination was a reasonable compromise between light throughput (permitting microsecond-level integration times) and high image sampling. The relay system’s NA and magnification, the sensor’s quantum efficiency and low noise, and the selected dispersion allowed for high photon flux per pixel and achieved an estimated 13.0-18.5x improvement in imaging speed in comparison to the previous implementation.

The overall system performance was assessed under normal daylight conditions by taking the instrument outdoors, where illumination can vary from 120,000 Lux on a bright day to a few thousand Lux on an overcast day. The ambient brightness during our assessment was measured to be 86,300 Lux, consistent with a sunny day, and images were taken of a variety of objects – a tree, pavement, a brick walkway, and a hedge – with a 1ms exposure for all objects followed by an exposure below pixel saturation or at 100ms (the limit for the sensor’s global shutter), whichever was lower (Fig. 5). The two signal values were used to calculate a linear fit for the signal across exposure time until saturation. Dark frames were taken by covering the fore objective for each exposure to determine stray light and then used to calculate the expected background for the range of exposure times from 0ms to 215ms. As the sensor’s read noise has a median of 1e-, the dark current is less than 0.8e-, and the quantization noise is 0.13e-, the sensor is effectively shot-noise limited above 2e-. Therefore, the signal-to-noise ratio was calculated by converting the signal to photoelectrons using the A/D conversion factor (0.46) and dividing by the square root of the sum of signal and background, also converted by the A/D factor (Eq. (1)).

$$SNR = \frac{Signal \times 0.46}{\sqrt{(Signal + Background) \times 0.46}}$$
With a well depth of 30,000e-, the maximum achievable shot-noise-limited SNR for the sensor is √30,000 ≈ 173.2. If a higher limit is required by a specific application, such as measurements of atmospheric radiance where the majority of the signal is from atmospheric scattering [17], a sensor with a higher well capacity can readily be incorporated. Under the illumination conditions reported, the exposure time needed to achieve the maximum possible SNR ranged from 208ms for the dimmest object, a tree, to 10ms for the brightest object, the brick walkway. The measured SNR at 1ms, a video-rate exposure, ranged from 13.8 to 54.2.
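Eq. (1) and the well-depth-limited SNR ceiling are straightforward to check numerically. The paper's processing pipeline is MATLAB-based; the short Python sketch below is an illustrative stand-in that uses only the quantities quoted in the text (A/D conversion factor 0.46 and a 30,000e- well depth).

```python
import math

ADC_GAIN = 0.46       # e- per camera count (A/D factor from the text)
WELL_DEPTH = 30000.0  # e-, sensor full-well capacity

def snr(signal_counts, background_counts):
    """Shot-noise-limited SNR per Eq. (1): signal in photoelectrons
    divided by the square root of total collected photoelectrons."""
    signal_e = signal_counts * ADC_GAIN
    total_e = (signal_counts + background_counts) * ADC_GAIN
    return signal_e / math.sqrt(total_e)

# At full well with negligible background, the SNR ceiling is
# sqrt(well depth), the ~173.2 figure quoted in the text.
max_snr = math.sqrt(WELL_DEPTH)
```

A full-well signal with no background reproduces the √30,000 ≈ 173.2 ceiling; any nonzero background lowers the achievable SNR at a given signal level.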


Fig. 5. SNR of a variety of objects in remote scenes.


The full width at half maximum (FWHM) was measured using three 1 nm bandpass filters and ranges from 13.6nm at 488nm to 28.3nm at 632.8nm (Fig. 6). The increasing FWHM is a result of the nonlinear dispersion of the wedge prism. Future versions of the TuLIPSS system may need a linearized prism to optimally sample a large spectral bandwidth. A summary of operating system parameters is presented in Table 2.
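The FWHM values above come from narrowband-filter measurements of the spectral line profile. A common way to extract FWHM from such a profile is to interpolate the half-maximum crossings; the Python sketch below (a stand-in for whatever routine the authors used, exercised here on a synthetic Gaussian line rather than measured data) illustrates the estimate.

```python
import numpy as np

def fwhm(wavelengths, intensity):
    """Estimate the FWHM of a single spectral peak by linear
    interpolation of the half-maximum crossings."""
    intensity = np.asarray(intensity, dtype=float)
    half = intensity.max() / 2.0
    idx = np.flatnonzero(intensity >= half)
    i0, i1 = idx[0], idx[-1]

    def cross(a, b):
        # interpolate the wavelength where intensity == half,
        # given samples a (below half) and b (above half)
        return np.interp(half, [intensity[a], intensity[b]],
                         [wavelengths[a], wavelengths[b]])

    left = cross(i0 - 1, i0) if i0 > 0 else wavelengths[i0]
    right = cross(i1 + 1, i1) if i1 < len(intensity) - 1 else wavelengths[i1]
    return right - left

# Synthetic Gaussian line at 488nm with a 13.6nm FWHM (illustrative):
wl = np.linspace(450, 530, 801)
sigma = 13.6 / (2 * np.sqrt(2 * np.log(2)))
line = np.exp(-0.5 * ((wl - 488) / sigma) ** 2)
```

On the synthetic line, the estimator recovers the 13.6nm width to within the 0.1nm sampling grid.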


Fig. 6. FWHM at three wavelengths across TuLIPSS spectral range.



Table 2. System specifications

4. Methods

4.1 Calibration

As described in [6], a calibration procedure (creating a coordinate look-up table) is required to reassemble the raw image into a spatial-spectral datacube. For spatial calibration, a six-step phase-shifting interferometry algorithm is used to project images in the x and y directions on the sensor and then to generate coordinates for image reassembly for each pixel. For spectral calibration, at least three narrowband filters are used to locate the position of the fiber cores for each wavelength on the sensor. From the identified positions, the expected dispersion from a BK7 prism with the specified dispersion angle is calculated and checked against the dispersion for each of the other recorded positions. Previously, the position of the fibers on the sensor was detected by using MATLAB’s image region property functions to compute the centroid; however, this centroid computation often confused neighboring cores, had trouble adapting to different relay system magnifications, and could not be used with a slit-patterned photomask. Instead, MATLAB’s image region property functions are used to find the bounding box of the core or slit image, and this result is then compared to a fiber modulation intensity map determined during the spatial calibration process to remove erroneously identified pixels.
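Once the coordinate look-up table exists, datacube reassembly reduces to a gather operation over the raw frame. The Python sketch below assumes a hypothetical LUT layout (one integer (row, col) sensor coordinate per datacube voxel); the paper's actual MATLAB data structures may differ.

```python
import numpy as np

def reassemble(raw, lut):
    """Gather a raw sensor frame into an (ny, nx, nlambda) datacube.

    raw: 2D sensor frame.
    lut: integer array of shape (ny, nx, nlambda, 2) holding the
         (row, col) sensor pixel for each datacube voxel, as produced
         by the calibration step (hypothetical layout)."""
    rows = lut[..., 0]
    cols = lut[..., 1]
    # advanced indexing gathers every voxel's pixel in one shot
    return raw[rows, cols]
```

Because the mapping is precomputed, reassembly is a single vectorized indexing operation per frame, which is what makes video-rate reconstruction tractable.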

4.2 Datacube flatfield correction

Prior to assembling the datacube, the raw image must be flatfield-corrected to account for variations across a uniform image from fiber quality variability and source illumination (Fig. 7). The flatfield-corrected image (C) is determined from the ratio of the dark-corrected raw image, (R - dR), to the dark-corrected flatfield, (f - df), where dR and df are the dark images corresponding to dark images at the exposure time of the raw image and the flatfield, respectively (Eq. (2)). Reassembled images, either multichannel composites or singular spectral channels, were pseudo-colored using a wavelength-to-RGB conversion algorithm [18].

$$C = \frac{{R - {d_R}}}{{f - {d_f}}}$$
A flatfield with uniform brightness and spectra is essential to datacube quality, as system variations, especially fiber core quality variation, can affect the quality of the image and spectra. The flatfield for a lab experiment, which uses a halogen lamp as the illumination source, is determined using the same illumination source viewed through a diffuser. In the field, a white photography screen is used to determine the flatfield. In the absence of a photography screen, a cloud that encompasses the entire image can be utilized as a flatfield, or a flatfield can be derived from a composite observation of a cloud scanned across the image. Flatfields for the engineering flights were taken using the photography screen on the ground prior to the flight.
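Eq. (2) is a per-pixel ratio of dark-corrected frames. A minimal Python sketch is given below; the guard against near-zero flatfield pixels (e.g. dead fibers) is our addition for numerical safety and is not described in the paper.

```python
import numpy as np

def flatfield_correct(raw, dark_raw, flat, dark_flat, eps=1e-6):
    """Apply Eq. (2): C = (R - dR) / (f - df).

    Pixels where the dark-corrected flatfield is ~0 (dead fibers)
    are set to 0 instead of dividing by zero (our addition)."""
    num = raw.astype(float) - dark_raw
    den = flat.astype(float) - dark_flat
    corrected = np.zeros_like(num)
    good = np.abs(den) > eps
    corrected[good] = num[good] / den[good]
    return corrected
```

With matched exposure times for the raw and flatfield darks, the correction removes both fiber-to-fiber throughput variation and the illumination source's spectral shape.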


Fig. 7. a) Reassembled image without flatfield correction. Dark spots are due to fiber quality variation and color change is due to influence of illumination source’s spectra (sunlight). b) Reassembled image with flatfield correction. Raw images shown of scene (c) and flatfield (d) were normalized for display purposes.


During the engineering flights, additional stresses due to aircraft vibration and temperature gradients pushed the limits of the ruggedization, and we observed a systematic translation of the image by one pixel (6.5µm). The image of the fiber bundle itself is static, so positional adjustment of the flatfield to match that of the scene image restores image quality. Additionally, a shift of the lookup table generated during calibration is required if the scene image has shifted. This can result from stresses experienced while in flight or, as in the case of the sorghum data taken at Texas A&M (see Section 5.2), from rotation of the circular wedge prism, which shifts the image position (while leaving the image itself unchanged). This can be corrected by manually checking the positions recorded with the narrowband calibration filters against the scene image, or the scene image against the flatfield.
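The positional adjustment described above is a whole-pixel translation of the flatfield (or the lookup table coordinates) to re-register it with the shifted scene image. A minimal sketch of such a shift, with vacated edges zero-padded rather than wrapped, might look like:

```python
import numpy as np

def shift_image(img, dy, dx):
    """Translate a 2D image by whole pixels (e.g. the one-pixel
    in-flight shift), padding vacated edges with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    src_y = slice(max(0, -dy), h - max(0, dy))
    src_x = slice(max(0, -dx), w - max(0, dx))
    dst_y = slice(max(0, dy), h - max(0, -dy))
    dst_x = slice(max(0, dx), w - max(0, -dx))
    out[dst_y, dst_x] = img[src_y, src_x]
    return out
```

The same (dy, dx) offset applied to the lookup table coordinates re-registers the calibration with the shifted scene.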

4.3 Image mosaicking

To best display the image scene, we create a mosaic of the reassembled spectral images using a custom MATLAB script which identifies and matches image features between images to generate x and y translations (Fig. 8).


Fig. 8. Process for mosaicking images.


The process to create the mosaic first involves converting the false-colored composite images that utilize all wavelengths into grayscale images within the customized MATLAB script. Features are identified in the first image of the sequence with the MATLAB function detectSURFFeatures, which utilizes the Speeded-Up Robust Features (SURF) algorithm to find what are commonly called blob features. The 50 strongest features are shown in green in Fig. 8, step 2. The MATLAB function extractFeatures returns the location of the identified features in the image. This process is repeated for a second image, and translations are generated from the distances between features shared between images, identified using the MATLAB function matchFeatures. Figure 8, step 4 shows a color image where the first image is blue, the second red, and overlapping areas are white. The translation is shown by the yellow arrow pointing from the green cross to the red circle. The median of the translations is taken to remove erroneous outliers, and the second image is averaged on top of the first image. Reassembled images are oversampled such that each fiber core spans more than one pixel, allowing blurring to be mitigated. The second image is then utilized as the comparison to the third image, and so on. The process is repeated as necessary to stitch together a mosaic of the desired scene. The list of translations used to assemble the mosaic is saved and can be applied to any individual spectral channel, or to an image with spectral features highlighted, as the snapshot nature of TuLIPSS creates identical spatial information for each image.
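The paper's mosaicking uses MATLAB's SURF feature matching and takes the median of the matched-feature translations. As a dependency-free illustration of recovering the same rigid whole-pixel translation between two overlapping frames, the Python sketch below uses phase correlation instead (a named stand-in technique, not the paper's method):

```python
import numpy as np

def translation(img1, img2):
    """Estimate the whole-pixel (dy, dx) shift of img2 relative to
    img1 by phase correlation: normalize the cross-power spectrum to
    pure phase, so its inverse FFT peaks at the translation."""
    f1 = np.fft.fft2(img1)
    f2 = np.fft.fft2(img2)
    cross = np.conj(f1) * f2
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross).real      # delta at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peaks to negative shifts
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return int(dy), int(dx)
```

In a full pipeline, the per-pair translations would be accumulated into the saved list described above and reapplied identically to every spectral channel.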

4.4 Spectral feature extraction

Objects in a scene can be selected based on their spectral signature and highlighted in an image frame, enabling spectrally information-rich images. The process used in this paper is outlined in Fig. 9, using trees seen from the Rice Bioscience Collaborative research building as an example. A selected region containing only the spectrum of interest is sampled to empirically determine selection criteria (Fig. 9(a) and 9(b)). For trees, these are the ratio between the blue wavelengths and the chlorophyll peak at 550nm, the ratio between the 550nm peak and the 660nm absorption dip, and the maximum intensity of the spectrum, as trees reflect less light than many objects in an urban setting. The use of ratios instead of unmixing or correlation algorithms was primarily motivated by this method’s fast processing while working with data containing a range of signals within object classes. For example, vegetation can span a range of values from bright green to brown, as shown in Fig. 9(b); a range of ratios easily encompasses signals from the entire spectral signature category. Selection criteria are tested on a separate image to ensure high capture (over 90%) of the desired object and low capture (less than 1%) of diverse spectra across the scene (Fig. 9(c)). Once spectral selection criteria have been determined, they can be applied across the whole image or a series of images to highlight spectral information in the scene (Fig. 9(d) and 9(e)). Table 3 provides selection criteria for all objects in Fig. 9(e).
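The ratio-based selection above amounts to thresholding a few band ratios plus an intensity cap. The Python sketch below illustrates the idea; the band choices and threshold values are hypothetical placeholders for illustration only — the paper determines its criteria empirically per scene (Table 3).

```python
import numpy as np

def select_vegetation(cube, wl, blue=480, green=550, red=660,
                      ratio_bg=(0.2, 0.8), ratio_rg=(0.2, 0.9),
                      max_int=0.6):
    """Boolean mask of pixels whose band ratios fall inside
    vegetation-like criteria: blue/green-peak ratio, red-dip/green
    ratio, and an overall intensity cap. All thresholds here are
    illustrative, not the paper's Table 3 values."""
    def band(w):
        # nearest spectral channel to the requested wavelength
        return cube[:, :, np.argmin(np.abs(wl - w))]
    b, g, r = band(blue), band(green), band(red)
    g_safe = np.where(g > 0, g, np.inf)  # avoid divide-by-zero
    mask = (b / g_safe >= ratio_bg[0]) & (b / g_safe <= ratio_bg[1])
    mask &= (r / g_safe >= ratio_rg[0]) & (r / g_safe <= ratio_rg[1])
    mask &= cube.max(axis=2) <= max_int
    return mask
```

Because the test is a handful of vectorized comparisons per pixel, it runs far faster than unmixing or correlation matching, which motivated the authors' choice.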


Fig. 9. Process for feature extraction in an image.



Table 3. Feature selection criteria for Fig. 9

5. Laboratory and field measurements

We performed a series of water-stress observations in a controlled laboratory experiment, field observations of sorghum in two soil types, and spectral sampling of rural and urban landscapes. In preparation for the airborne tests, where the aircraft motion can result in smearing of the images, we focused on rapidly changing scenes of moving objects (e.g. passing cars). This allowed us to assess the achievable imaging speeds in uncontrolled, outdoor conditions.

5.1 In lab water stress monitoring

Plant stress is a major factor in agricultural management, and abiotic stressors such as water deficit, temperature extremes, high salinity, ultraviolet radiation, and nitrogen stress [19,20] can all affect the spectrum of the stressed plants, particularly across the visible wavelength region. The spectrum is mainly influenced by the pigments chlorophyll a and b, which strongly absorb light around 670nm [20]. Abiotic stress causes a shift around this “red edge” and increases the reflectance of light [21]. Plant stress monitoring is a natural application for smart farming, is easily accomplished from an airborne platform, and will only grow in importance as climate change brings increased drought and unpredictable changes in other abiotic stressors [19,21].

To test our instrument on a specific application, we designed and executed a plant stress experiment adapted from Marchin et al. [22]. Potted Liriope muscari plants were placed on top of floral foam columns, which were then placed in tubs of water. Persistent soil moisture levels were induced by varying the distance between the top of the water and the bottom of the plant. All plants were maintained at a well-watered state (1 cm difference) for one month and then split into three groups (Day 1). One group was kept at a well-watered state; one group had its water level reduced by 2 cm each day until a difference of 15 cm was reached (medium drought: Day 8); and a third group had its water level reduced by 2 cm each day until a 15 cm difference was reached, at which point the water level was reduced by 1 cm each day until a 22 cm difference was reached (severe drought: Day 15). These states were then maintained over a one-month period (Day 41). The plants were imaged on Days 1, 15, and 41.

As a reference measurement, an Ocean Optics USB4000 spectrometer was used to validate the TuLIPSS measurements. Measurements between the Ocean Optics spectrometer and the TuLIPSS system can be difficult to compare directly as, unlike TuLIPSS, the Ocean Optics does not reconstruct an image and only provides a single point source reading from an optical fiber. While leaf reflectance spectra are sensitive to stress, factors such as plant size, age, and measurement angle [21] can confound measurements between plants and even within a single leaf. Consequently, the TuLIPSS system may measure variations even within a single leaf that are not apparent from the single-point measurements of the Ocean Optics spectrometer.

To precisely compare measurements between spectrometers while mitigating the confounding factors of inter- and intra-leaf variation, a small, select area of a single leaf was illuminated using a halogen lamp and used to match spectra between TuLIPSS and the reference spectrometer (Fig. 10(a)). With this relationship established, whole-plant averages provided by the TuLIPSS system imaging at Day 41 were compared to spectral components of green and brown leaves as measured by the reference spectrometer. The well-watered plant 1 closely matches a reference measurement of a green leaf, indicating green leaves dominated the spectrum, while the severe-drought plant 7 closely matched a reference dead leaf spectrum. The severe-drought plant 6 matched a weighted average of the two previous references, with a green-to-dead-leaf ratio of 1 to 1.4. These comparisons show the TuLIPSS measurements are in good agreement with the Ocean Optics reference spectrometer, both for a select area and for tracking spectra across a whole plant. The TuLIPSS system does appear to have trouble at the edge of its range, as the decreasing resolution and the filter cutoff make fast spectral changes, such as the plant’s red edge, difficult to capture.
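The 1:1.4 green-to-dead ratio for plant 6 corresponds to fitting a mixing fraction between the two reference spectra. The Python sketch below shows one simple way to do this by a least-squares grid search; the function name and grid are our illustration, not the paper's procedure (a 1:1.4 ratio corresponds to a green fraction of 1/2.4 ≈ 0.42).

```python
import numpy as np

def best_mix(target, green_ref, dead_ref, weights=np.linspace(0, 1, 101)):
    """Return the green fraction w whose mixture
    w*green_ref + (1-w)*dead_ref best matches the target spectrum
    in the least-squares sense (simple grid search)."""
    best_w, best_err = 0.0, np.inf
    for w in weights:
        mix = w * green_ref + (1 - w) * dead_ref
        err = np.sum((mix - target) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

For a two-component mixture, this grid search and a closed-form least-squares projection give the same answer; the grid form is shown because it extends trivially to constrained or multi-component mixtures.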


Fig. 10. (a) Illumination of a small area of a leaf for comparison of TuLIPSS and Ocean Optics spectra. (b), (c), (d) Comparison of the TuLIPSS whole-plant signal at day 41 to specific spectral measurements by the Ocean Optics spectrometer. (b) Plant 1 (well-watered) in comparison to a reference measurement of a green leaf. (c) Plant 7 (severe drought) compared to a dead leaf. (d) Plant 6 (severe drought) compared to a weighted average of Ocean Optics green and dead leaf spectra.


Imaging the individual plants’ spectra over time using TuLIPSS (Fig. 11) generated two main conclusions: 1) a relative increase in green from day 1 to day 15 for some medium- and severe-drought plants, and 2) an increase in signal in the red region (600 – 700nm) as leaves begin to brown, seen in all severe-drought plants and one medium-drought plant. The TuLIPSS system was thus able to differentiate and track plant stress over time.


Fig. 11. Change in plant spectra over time, grouped by applied water stress. A reconstructed image of each plant at the Day 1 time point is shown at the bottom, with the fiber cores used for spectral tracking marked in red.


5.2 Field-grown bioenergy sorghum differentiation by leaf senescence

To further explore TuLIPSS’ potential for smart farming applications, field experiments were conducted at the Texas A&M AgriLife Research Field Laboratory in Burleson County, Texas, USA. Energy sorghum hybrid TX08001, Sorghum bicolor (L.) Moench, seed was planted in freshly tilled soil on 6/5/20 and 7/10/20 to a depth of 5 cm and a row spacing of 76 cm in two soil types: 1) Weswood silt loam, a soil containing 25% clay and mixed mineralogy, and 2) Roetex clay, a soil containing 49% clay and mixed mineralogy. The plot sizes were 32 rows by 30 m. At the time of planting, a solution of liquid ammonium polyphosphate (11-37-0), UAN 32%, and zinc sulfate was applied at seed depth and 5 cm to the side of the seed to yield 45-63-0 + 5 Zn (kg/ha). The plots were sown with enough seed to allow for thinning at 21 days after emergence (DAE) to 15 cm in-row plant spacing. Seeds were treated with Concep III, an herbicide protectant; Nugro, a systemic insecticide; and Apron X, a fungicide. The plots were grown with natural rainfall totaling 221 mm over the growing season.

Images were taken with the TuLIPSS and Ocean Optics systems at 115 days (herein “younger”) and 150 days (herein “older”) after planting. Additional Ocean Optics readings were taken at locations with noticeable plant leaf states, e.g. a particularly green leaf and a noticeably senescent leaf. Our aim was to showcase TuLIPSS’ ability to separate areas of sorghum based on leaf senescence and achieve spectra that were supported by reference measurements.

In Fig. 12(a), spectral averages taken with the TuLIPSS system across a selected area of sorghum are shown. These generated distinct signals that exhibited the expected spectral characteristics seen in the Ocean Optics reference spectrometer. The sorghum plots were differentiable as group averages; for both silt loam and clay soil, the older plants exhibited a stronger senescent (brown) signal than their younger counterparts (Fig. 12(a)). Each plant group measured by TuLIPSS contained a range of spectra, with younger sorghum grown in clay soil shown as an example (Fig. 12(b)). The range of spectra was similar to the respective measurements taken with the reference spectrometer (Fig. 12(c)), where spectra taken from the same group as Fig. 12(b) are shown in blue.
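The normalization to the 550nm value used for display in Fig. 12 can be sketched as below. The function name and the random placeholder spectrum are our own, and we assume a uniform 32-channel grid over 470 – 700nm:

```python
import numpy as np

def normalize_at(wavelengths, spectrum, ref_nm=550.0):
    """Divide a spectrum by its value at the channel nearest ref_nm."""
    wl = np.asarray(wavelengths, dtype=float)
    s = np.asarray(spectrum, dtype=float)
    idx = int(np.argmin(np.abs(wl - ref_nm)))
    return s / s[idx]

wl = np.linspace(470, 700, 32)                         # assumed TuLIPSS channel grid
spec = np.random.default_rng(0).uniform(0.2, 1.0, 32)  # placeholder spectrum
norm = normalize_at(wl, spec)                          # value at ~550 nm becomes 1.0
```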


Fig. 12. Recorded TuLIPSS spectra fit within expected values provided by the Ocean Optics reference spectrometer. Signals were normalized to the value at 550nm for display purposes. (a) Average of all signals within a selected area containing only sorghum. (b) Example of range of signals from younger sorghum grown in clay soil. The green line indicates the greenest leaf and the red line the most senescent leaf. (c) Examples of sorghum leaf spectra from the Ocean Optics reference spectrometer. Blue lines are those from the younger sorghum grown in clay soil.


Sorghum leaves are large and contain many gradations of color and occasional brown spots. As seen in Fig. 12(c), the spectra of sorghum leaves varied between leaves, with most exhibiting a sharp peak at ∼550nm and a distinctive absorption feature around 660nm; the strength of the signal in the 650nm to 700nm range is variable. In Fig. 12(c), a clear distinction in the spectral shape of severely stressed leaves compared to greener counterparts is shown. TuLIPSS signals within a certain wavelength range can be directly matched to the reference measurements for specific features, qualitatively denoted green leaf, partially senesced leaf, progressively senesced leaf, and dead leaf (Fig. 13).


Fig. 13. (a) Discrete Ocean Optics spectra for different health states which ranges of TuLIPSS can be sorted into. (b) Spectrum of each fiber core can be separated based on health state both as image overlay and spectral graph. (c) Ocean Optics spectra combined into weighted average based on TuLIPSS population numbers.


This feature extraction follows the procedure described in the feature extraction section; boundary conditions were set by reference to Ocean Optics values and to intensity values for sorghum in sunlight. Specifying boundary conditions to remove unrelated spectra was not necessary because sorghum filled the majority of the image and the spectral sampling was spatially restricted so that non-sorghum objects (grass, sky, or ground) were not included. The ratio of the normalized maximum value at 550nm to the median value from 572nm to 665nm, for both TuLIPSS data and Ocean Optics, was used to discern the changing behavior of the red edge. The ratio boundary criteria for TuLIPSS were set around the Ocean Optics values: below 0.9 for green leaf, 0.9 to 1.1 for partially senesced leaf, 1.1 to 1.2 for progressively senesced leaf, and above 1.2 for dead leaf. These criteria allowed for the creation of aggregate TuLIPSS groups, which are in good agreement with the Ocean Optics measurements.
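The ratio-based sorting can be sketched as follows. The threshold values are those quoted above; the function names are our own, the channel grid is an assumed uniform 32-channel sampling, and the direction of the ratio (band median relative to the normalized 550nm value) is inferred from the quoted limits (green < 0.9, dead > 1.2):

```python
import numpy as np

# Boundaries quoted in the text, as (upper limit, label) pairs
BOUNDS = [(0.9, "green leaf"),
          (1.1, "partially senesced leaf"),
          (1.2, "progressively senesced leaf"),
          (float("inf"), "dead leaf")]

def senescence_ratio(wavelengths, spectrum):
    """Median of the 572-665 nm band divided by the value at 550 nm."""
    wl = np.asarray(wavelengths, dtype=float)
    s = np.asarray(spectrum, dtype=float)
    at_550 = s[np.argmin(np.abs(wl - 550.0))]
    band = s[(wl >= 572.0) & (wl <= 665.0)]
    return float(np.median(band) / at_550)

def classify(ratio):
    """Map a ratio to the first category whose upper bound exceeds it."""
    for upper, label in BOUNDS:
        if ratio < upper:
            return label

wl = np.linspace(470, 700, 32)                 # assumed TuLIPSS channel grid
flat = np.ones(32)                             # featureless spectrum, ratio 1.0
label = classify(senescence_ratio(wl, flat))   # falls in the 0.9-1.1 band
```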

Sorting sorghum by health state using the spectral resolution of TuLIPSS improved differentiation between sorghum groups compared to the simple group averages discussed above. While younger sorghum grown in clay soil and older sorghum grown in silt loam soil look quite similar as averages across the image, grouping spectra by the reference categories showed that older sorghum grown in silt loam soil had a much higher proportion of dead leaves, while younger sorghum grown in clay had a higher proportion of partially senesced leaves (Table 4).


Table 4. Ratio of sorghum spectral populations

Applying spectral feature extraction based on the four qualitative categories matched to the Ocean Optics references to the reconstructed images highlights different sorghum states (Fig. 13(b)), including severely stressed leaves and green leaves at the tops of plants. As an additional qualitative validation, weighting the Ocean Optics observations of the spectral groups by the number of samples in each corresponding TuLIPSS group showed good agreement between TuLIPSS and the reference data in three of the four selected groups; the older sorghum grown in silt loam differed above 650nm, probably due to TuLIPSS’s weak detection at the red edge of its range, although a similar overall trend was still observed.
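The population-weighted comparison in Fig. 13(c) can be sketched as follows; the two-channel spectra and the 3:1 counts are illustrative placeholders, not the paper's data:

```python
import numpy as np

def population_weighted_average(ref_spectra, counts):
    """Average reference spectra (rows) weighted by TuLIPSS population counts."""
    R = np.asarray(ref_spectra, dtype=float)   # shape (n_groups, n_channels)
    w = np.asarray(counts, dtype=float)        # one count per group
    return (w[:, None] * R).sum(axis=0) / w.sum()

# Two hypothetical reference spectra, with group populations 3 and 1
avg = population_weighted_average([[1.0, 0.0], [0.0, 1.0]], [3, 1])
# avg -> [0.75, 0.25]
```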

Drawing conclusions about the underlying health state of the sorghum based upon the spectra is beyond the scope of this paper, but we have shown that we can confidently separate distinct plant stress groups within a single population of plants.

5.3 On-ground preparation for flight

A key advantage of snapshot imaging spectrometers is the ability to image dynamic objects or scenes. This is especially relevant when imaging from an aircraft such as a Cessna 172, whose minimum stall speed requires exposure times short enough to capture the scene without blurring the image. As shown in Table 5, the minimum altitudes allowed for Cessna 172 flights – 500ft for uninhabited areas and 1000ft for congested areas – require exposure times of no more than 1ms using the 50mm focal length fore-objective.


Table 5. Estimated ground resolution and exposure time for no image blur
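Estimates like those in Table 5 follow from simple imaging geometry: the ground sample distance is GSD = H·d/f, and the blur-free exposure bound is the time in which the aircraft crosses one GSD. The 125µm effective element size and 50 m/s ground speed below are illustrative placeholders, not the paper's exact parameters; a stricter blur criterion (a fraction of a GSD) shortens the allowable exposure proportionally:

```python
def ground_sample_m(altitude_m, element_m, focal_m):
    """Ground footprint of one sampling element: GSD = H * d / f."""
    return altitude_m * element_m / focal_m

def max_exposure_s(gsd_m, ground_speed_m_s):
    """Exposure at which the platform moves one full GSD."""
    return gsd_m / ground_speed_m_s

alt_m = 500 * 0.3048                          # 500 ft in meters
gsd = ground_sample_m(alt_m, 125e-6, 50e-3)   # assumed element, 50 mm lens
t_max = max_exposure_s(gsd, 50.0)             # assumed 50 m/s ground speed
```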

To characterize the range of exposures the system requires under outdoor conditions, we tested our system by observing a car driven at a constant 20 mph perpendicular to the line of sight of the system, which rested 17 meters away on an elevated sidewalk (Fig. 14). Videos were taken at 50µs, 100µs, 500µs, 1ms, 10ms, and 100ms exposure times. The day was overcast, and ambient light levels were recorded as 10,780 Lux using a Leaton L830 Lux meter; brighter conditions would allow even lower exposure times. Imaging at exposures in the hundreds of microseconds produced quality reconstructions, providing reasonable assurance that direct imaging at typical Cessna 172 flight speeds was possible. Additionally, the previous system’s reported exposure limit of 100ms [6] was far exceeded, significantly enhancing our ability to provide spectral imaging of a wider range of dynamic phenomena.


Fig. 14. (a) A single frame from a video of a car passing in front of the spectrometer with exposure time above each frame (see Visualization 1). The red line denotes the area used to sample image blur and the black line is a scale bar denoting 200mm on the body of the car. (b) Intensity profile taken in the area between car’s black door seal and gray body.


The system reported here has sufficient light throughput that the temporal resolution is limited primarily by the image transfer rate rather than the exposure time. For example, at 500µs the details of the passing car are easily resolvable, although in video format, where the image transfer rate is capped at 28fps, the car appears to skip slightly across the image. While this may degrade the perceived video quality, individual image quality is maintained. By a 50µs exposure, the noise present in the image begins to overcome any advantage gained from increased speed.

5.4 Airborne observations

As a culminating demonstration of the system’s capabilities and field ready status, we performed an engineering flight, capturing observations from a Cessna 172 aircraft. This also allowed demonstration of the ability to image at video rates under real field conditions. In addition to the snapshot imaging spectrometer, an RGB reference camera was added to provide high resolution ground imaging.

The aircraft was flown at an elevation of 500ft above fields to the south of Houston, Texas. Both the TuLIPSS system and the reference camera were set for a 1ms exposure, though the reference camera’s slower image capture rate (21 fps) resulted in fewer images. We show in Fig. 15(a) a mosaic of a field generated using 180 images, together with a spectrally sorted scene highlighting the trees surrounding the field. The trees are distinguished from the surrounding fields by their intensity difference, and we sort them into three groups based on the previously discussed ratio from Table 3, where group 1 has a value between 1.2 and 5, group 2 between 0.8 and 1.2, and group 3 below 0.8. As explored earlier, these spectral characteristics can correspond to plant stress, and the mosaicked spectral sorting allows for fast interpretation of a large volume of spectral data.


Fig. 15. (a) Color mosaic from TuLIPSS system taken at 500ft altitude and 1ms exposure (left), mosaic with spectral sorting trees into three spectral groups (middle), and associated average spectra from each group (right). (b) Color video from reference video taken at same time as the TuLIPSS video. A zoomed-in area is shown to showcase the ability to resolve individual leaves against background. (c) Mosaic from TuLIPSS system of the city of Alvin at 1000ft.


The reference camera can achieve centimeter scale resolution at the altitudes flown, and this allows for resolution of individual leaves against the background. The combination of TuLIPSS and a high resolution camera can be useful for smart farming applications that require fine morphological data as well as detailed spectral analyses.

Finally, a mosaic of the city of Alvin, Texas using 874 images from 1000ft is presented in Fig. 15(c) and shows a wider variety of land uses, showcasing the breadth of spectral data that is readily accessible. With 874 images, each image featuring ∼63,000 spatial points, the resulting pseudo-aggregate datacube contains 55,062,000 spatial points, each with 32 spectral channels.
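The quoted aggregate size is simple to verify:

```python
n_images = 874
points_per_image = 63_000       # spatial points per snapshot
n_channels = 32                 # spectral channels per spatial point

total_points = n_images * points_per_image    # 55,062,000 spatial points
total_samples = total_points * n_channels     # 1,761,984,000 spectral samples
```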

6. Conclusion and discussion

We present a flight-capable, field-ready snapshot imaging spectrometer able to produce images with ∼63,000 spatial samples, each dispersed into 32 spectral channels spanning the wavelength range 470nm to 700nm. The system can image with exposure times from hundreds of microseconds to 1ms, suitable for video rate imaging from an airborne platform, with an SNR of 13.8 to 54 at 1ms. We present examples of laboratory-based mosaicking and field-tested dynamic imaging to showcase the capabilities required for airborne observations. We explore plant stress monitoring as a proof of concept for smart farming applications, both in the lab with a water stress experiment and in the field with measurements of sorghum in different soil conditions. Finally, we performed a test flight to validate the future potential for remote sensing, utilizing image mosaicking and spectral feature extraction to quickly present relevant data from a large volume of spectral observations. Design changes to optimize the system parameters for sufficient integration times for flight-based remote sensing led to at least a 13.0–18.5× increase in photon flux at the sensor over the previously presented system [6].

While we have achieved a functionality that allows for field use during this instrument development phase, several improvements and future directions remain. Future relay systems will feature custom designed lenses to further increase light throughput and image quality. The system housing will be tailored to the specific configuration and application in use, resulting in a smaller, lighter system, potentially allowing for autonomous imaging via drones. Additionally, a TuLIPSS system tailored to plant stress could extend the spectrum further into the red wavelengths, allowing fine-grained analysis of the red edge and better comparison to standard plant health indexes. Use of the RGB camera for high spatial resolution images could improve data content and provide a hybrid system for morphological and spectral analysis in plant studies [23]. More advanced methods of selection and analysis of spectra, both at the direction of collaborators and from internal metrics, are planned. A version of the TuLIPSS system that operates in the short-wave infrared, extending coverage up to 1700nm and allowing for the study of evapotranspiration, is also currently being tested.

Finally, the fiber bundle can be further improved. More advanced manufacturing techniques, e.g. using a fiber strand in which only the desired imaging fibers transmit light, would obviate the need for a lithographic mask, sidestepping any issues of mask-fiber mismatch. Another direction being considered for improving the fiber bundles lies in implementing a tunable bundle in which the distance between fiber rows at the output can be adjusted either manually or autonomously, creating the ability to trade spatial resolution for spectral resolution within a single system.

Funding

National Aeronautics and Space Administration (IIP ESTO NNX17AD30G).

Acknowledgments

We would like to acknowledge all the members in Tkaczyk lab for the helpful discussions and assistance, David Forster for piloting a Cessna 172, and Adon Shapiro for driving the car.

Disclosures

Dr. Tomasz Tkaczyk has financial interests in Attoris LLC focusing on applications and commercialization of hyperspectral imaging technologies.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel,” Phys. Rep. 616, 1–37 (2016). [CrossRef]  

2. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]  

3. S. Migdall, P. Klug, A. Denis, and H. Bach, “The additional value of hyperspectral data for smart farming,” in 2012 IEEE International Geoscience and Remote Sensing Symposium (IEEE, 2012), pp. 7329–7332.

4. B. Lu, P. Dao, J. Liu, Y. He, and J. Shang, “Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture,” Remote Sens. 12(16), 2659 (2020). [CrossRef]  

5. B.-C. Gao, C. Davis, and A. Goetz, “A Review of Atmospheric Correction Techniques for Hyperspectral Remote Sensing of Land Surfaces and Ocean Color,” in 2006 IEEE International Symposium on Geoscience and Remote Sensing (IEEE, 2006), pp. 1979–1981.

6. Y. Wang, M. E. Pawlowski, S. Cheng, J. G. Dwight, R. I. Stoian, J. Lu, D. Alexander, and T. S. Tkaczyk, “Light-guide snapshot imaging spectrometer for remote sensing applications,” Opt. Express 27(11), 15701 (2019). [CrossRef]  

7. M. E. Pawlowski, J. G. Dwight, T.-U. Nguyen, and T. S. Tkaczyk, “High performance image mapping spectrometer (IMS) for snapshot hyperspectral imaging applications,” Opt. Express 27(2), 1597 (2019). [CrossRef]  

8. B. Khoobehi, K. Firn, E. Rodebeck, and S. Hay, “A new snapshot hyperspectral imaging system to image optic nerve head tissue,” Acta Ophthalmol. 92(3), e241 (2014). [CrossRef]  

9. Y. Liu, H. Pu, and D.-W. Sun, “Hyperspectral imaging technique for evaluating food quality and safety during various processes: A review of recent applications,” Trends Food Sci. Technol. 69, 25–35 (2017). [CrossRef]  

10. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational Snapshot Multispectral Cameras: Toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33(5), 95–108 (2016). [CrossRef]  

11. L. Gao, R. T. Kester, and T. S. Tkaczyk, “Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express 17(15), 12293 (2009). [CrossRef]  

12. J. G. Dwight and T. S. Tkaczyk, “Lenslet array tunable snapshot imaging spectrometer (LATIS) for hyperspectral fluorescence microscopy,” Biomed. Opt. Express 8(3), 1950–1964 (2017). [CrossRef]  

13. Y. Wang, M. E. Pawlowski, and T. S. Tkaczyk, “High spatial sampling light-guide snapshot spectrometer,” Opt. Eng. 56(8), 081803 (2017). [CrossRef]  

14. N. Gat, G. Scriven, J. Garman, M. De Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4D-IS),” Proc. SPIE 6302, 63020M (2006). [CrossRef]  

15. J. Kriesel, G. Scriven, N. Gat, S. Nagaraj, P. Willson, and V. Swaminathan, “Snapshot hyperspectral fovea vision system (HyperVideo),” Proc. SPIE 8390, 83900T (2012). [CrossRef]  

16. D. Zheng, C. Flynn, R. I. Stoian, J. Lu, H. Cao, D. Alexander, and T. S. Tkaczyk, “Radiometric and design model for the tunable light-guide image processing snapshot spectrometer (TuLIPSS),” Opt. Express 29(19), 30174 (2021). [CrossRef]  

17. W. J. Moses, J. H. Bowles, R. L. Lucke, and M. R. Corson, “Impact of signal-to-noise ratio in a hyperspectral sensor on the accuracy of biophysical parameter estimation in case II waters,” Opt. Express 20(4), 4309 (2012). [CrossRef]  

18. D. Bruton, “Color Science,” http://www.midnightkite.com/color.html.

19. M. He, C.-Q. He, and N.-Z. Ding, “Abiotic Stresses: General Defenses of Land Plants and Chances for Engineering Multistress Tolerance,” Front. Plant Sci. 9, 1–18 (2018). [CrossRef]  

20. A. K. Tilling, G. J. O’Leary, J. G. Ferwerda, S. D. Jones, G. J. Fitzgerald, D. Rodriguez, and R. Belford, “Remote sensing of nitrogen and water stress in wheat,” Field Crop. Res. 104(1-3), 77–85 (2007). [CrossRef]  

21. K. Loggenberg, A. Strever, B. Greyling, and N. Poona, “Modelling Water Stress in a Shiraz Vineyard Using Hyperspectral Imaging and Machine Learning,” Remote Sens. 10(2), 202 (2018). [CrossRef]  

22. R. M. Marchin, A. Ossola, M. R. Leishman, and D. S. Ellsworth, “A Simple Method for Simulating Drought Effects on Plants,” Front. Plant Sci. 10, 1–14 (2020). [CrossRef]  

23. Y. Murakami, K. Nakazaki, and M. Yamaguchi, “Hybrid-resolution spectral video system using low-resolution spectral sensor,” Opt. Express 22(17), 20311 (2014). [CrossRef]  

Supplementary Material (1)

Visualization 1: Video of a car passing in front of the spectrometer. The exposure time for each video is shown in the top left corner.



Equations (2)

SNR = (Signal × 0.46) / √((Signal + Background) × 0.46)

C = (R − d) / (R_f − d_f)
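A numeric sketch of the SNR expression, under our reading that the 0.46 factor converts detector counts to electrons before applying shot-noise statistics; the count values are arbitrary examples, not measured data:

```python
from math import sqrt

def snr(signal_counts, background_counts, conversion=0.46):
    """Shot-noise-limited SNR after converting counts to electrons."""
    s = signal_counts * conversion
    b = background_counts * conversion
    return s / sqrt(s + b)

value = snr(5000, 500)   # example: 5000 signal counts over 500 background
```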