Optica Publishing Group

Hyperspectral push-broom imager using a volume Bragg grating as an angular filter

Open Access

Abstract

A hyperspectral push-broom imager has been designed, constructed, and tested. The narrow angular selectivity of a weakly index-modulated volume Bragg grating is utilized to replace the objective lens, slit, and collimating lens of a conventional slit-based hyperspectral push-broom imager. The imager comprises a dispersion grating, an angular filter grating, a focusing lens, and an image sensor. It has a field of view (FOV) of 17 degrees in the spatial direction, a spectral range from 400 nm to 900 nm, and a spectral resolution of 2.1 nm. Acquired hyperspectral data cubes are presented, and the influence of wavelength-dependent incident angle errors is analyzed.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Hyperspectral imaging sensors, which obtain an optical spectrum for each pixel in the image of a scene, can be used in many fields, including agriculture, food quality, medical diagnosis, and water resource monitoring [1-4]. Various optical designs and techniques, mainly categorized into spectral scanning, spatial scanning, and snapshot imaging methods, have been used to create hyperspectral imaging sensors [3,5]. Among them, a push-broom scanner [6-9], also known as a line-scan hyperspectral camera, collects a spectrogram consisting of a spectrum for each spatial position (x) along a line. A three-dimensional hyperspectral data cube (x, y, λ) can be created by stacking the spectrograms (x, λ) obtained by sweeping in the direction (y) perpendicular to the line to cover the target region. The main components of a typical push-broom scanner are an imaging spectrograph, an electronic image sensor, and an objective lens. The imaging spectrograph consists of an entrance slit, a collimating lens, a dispersive grating, and a focusing lens. Light incident on the objective lens at an angle (θ(x), θ(y)) forms a focused spot at the point (x, y) in the image plane. The entrance slit, located in the image plane of the objective lens, limits the light to a single, narrow line (x, 0). The collimating lens directs the light from the slit to the dispersive grating at the angle (θ(x), 0). The dispersive grating changes the propagation angle according to the wavelength of the light. After passing through the focusing lens, the light converges to a focused spot at the point (x, λ) on the electronic image sensor, determined by the converted angle (θ(x), θ(λ)). The hyperspectral push-broom imager thus has three focal systems: the objective lens, the collimating lens, and the focusing lens. The finite focal lengths of these systems pose a challenge in reducing the size and weight of the hyperspectral imager. They also require three focus adjustments and two rotational alignments: the entrance slit rotation and the dispersive unit rotation. As a result, the optomechanical design becomes complicated, and assembly costs increase.

This work presents a hyperspectral push-broom imager with only one focal system. The proposed imager consists of a dispersive grating, an angular filter grating, a focusing lens, and an electronic image sensor. Light incident on the dispersive grating at an angle (θ(x), θ(y)) changes its propagation angle to (θ(x), θ(y, λ)) after passing through the grating. The angular filter grating, which has a narrow angular selectivity, diffracts only the incoming light at specific angles (θ(x), θ(0, λ)). After passing through the focusing lens, the diffracted light (θ(x), θ(λ)) forms a focused spot at the point (x, λ) on the electronic image sensor. The following sections elaborate on the new design of the hyperspectral imager. Additionally, the construction of the prototype hyperspectral imager and the results of demonstration tests are presented.

2. Volume Bragg grating

A volume Bragg grating (VBG), also called a volume phase holographic (VPH) grating [10-12], works by modulations of the refractive index in the form of fringe planes running parallel to each other through the depth of the grating material. Figure 1(A) depicts the structure of a transmission grating with fringes perpendicular to its surface. Light incident on the grating is diffracted according to the grating equation.

$$\mathrm{m\lambda } = \mathrm{\Lambda }({sin(\alpha )+ sin(\beta )} )$$
where m is the order of diffraction, Λ is the spacing of the fringe planes, λ is the wavelength of light in free space, α is the angle of incidence, and β is the angle of diffraction in air. The maximum diffraction efficiency occurs when the Bragg condition is satisfied (α = β). For first-order diffraction, the Bragg condition is represented by
$$\mathrm{\lambda } = 2\mathrm{\Lambda sin}(\beta )$$

The peak diffraction efficiency at the Bragg condition depends on the product of the amplitude of the refractive index modulation (Δn) and the thickness of the grating volume (d). For a volume Bragg grating made of dichromated gelatin, typical values of Δn range from 0.02 to 0.1, and d is 4 to 20 μm [10]. In contrast, Δn can be as low as 0.0001, and d must be several millimeters to achieve high diffraction efficiencies for a volume Bragg grating made of photo-thermal-refractive (PTR) glass [12]. Figure 1(B) shows the diffraction efficiencies of volume Bragg gratings with a strong index modulation (Δn = 0.07) and a weak index modulation (Δn = 0.00029), calculated with the coupled-wave theory developed by Kogelnik [13]. For the strongly index-modulated grating, diffraction occurs over a broad range of incident angles, similar to a typical surface relief grating. For the weakly index-modulated grating, however, significant diffraction occurs only when the Bragg condition is satisfied. The angular bandwidth of the volume Bragg grating is inversely proportional to the grating thickness (d).
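The angular selectivity contrast in Fig. 1(B) can be reproduced numerically. The sketch below evaluates Kogelnik's closed-form first-order efficiency for an unslanted transmission grating versus the (internal) angular detuning from the Bragg angle; the function name, the assumed mean index n0 = 1.5, and the dephasing convention are our illustrative assumptions, not the paper's code.

```python
import numpy as np

def kogelnik_efficiency(delta_theta, wavelength, dn, d, period, n0=1.5):
    """First-order diffraction efficiency of an unslanted transmission VBG
    versus angular detuning from the Bragg angle (Kogelnik's coupled-wave
    theory, angles inside the medium). All lengths in meters, angles in rad."""
    theta_b = np.arcsin(wavelength / (2 * n0 * period))   # Bragg angle in the medium
    nu = np.pi * dn * d / (wavelength * np.cos(theta_b))  # coupling strength
    xi = np.pi * d * delta_theta / period                 # angular dephasing
    s = np.sqrt(nu**2 + xi**2)
    return np.sin(s)**2 / (1.0 + (xi / nu)**2)
```

With the weak-grating parameters, the first null falls near Δθ ≈ Λ/d ≈ 0.8 mrad, of the same order as the ~1 mrad angular selectivity quoted below for the purchased grating, while the thin, strongly modulated grating stays near its peak efficiency over milliradian-scale detunings.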


Fig. 1. (A) Propagation of optical rays through a volume Bragg grating: (1) incident ray, (2) transmitted ray, (3) diffracted ray. (B) Calculated diffraction efficiencies of volume Bragg gratings at different wavelengths: strong index modulation (Δn = 0.07, d = 4.3 μm, Λ = 1.667 μm); weak index modulation (Δn = 0.00029, d = 1.06 mm, Λ = 0.8734 μm).


A transmission volume Bragg grating with a weak index modulation, purchased from OptiGrate, was used as the angular filter. The refractive index modulation is 0.00029, the thickness is 1 mm, the grating period (Λ1) is 0.8734 μm, and the angular selectivity (FWHM) is 1.1 mrad. A standard VPH transmission grating with a strong index modulation (600 l/mm at 600 nm, Λ2 = 1.6667 μm), manufactured by Wasatch Photonics, was used as the dispersive grating.

3. Hyperspectral push-broom imager

3.1 Optical design

Figure 2 shows the optical design of a hyperspectral push-broom imager. Light passing through the window is reflected by the scanning mirror and diffracted by the dispersion grating. The angular filter grating diffracts only light that enters with an incident angle that satisfies the Bragg condition. Light diffracted from the angular filter grating is focused by a focusing lens at different locations on the electronic image sensor depending on its wavelength, as shown in Fig. 2(A). At the particular scanning mirror angle shown in Fig. 2, light with an incident angle close to normal to the window satisfies the Bragg condition on the angular filter grating. Light with an incident angle far from the normal to the window is transmitted through the angular filter grating without diffraction. Such light cannot reach the focusing lens, as shown in Fig. 2(B), or is blocked by the aperture, as shown in Fig. 2(C).


Fig. 2. Optical layout of the hyperspectral push-broom imager (A) normal incidence (B) off-angle incidence and filtering by the angular filter grating (C) off-angle incidence and blocked by the apertures: (1) window (2) scanning mirror (3) dispersive grating (4) angular filter grating (5) focusing lens (6) electronic image sensor (7) scanning mirror aperture (8) dispersive grating aperture (9) angular filter grating aperture.


An S-Mount f/2.5 lens (Edmund Optics Ltd. part #58-203) with a focal length of 8 mm is used for the focusing optics.

The primary purpose of the scanning mirror is to simplify prototype testing, but it can also be locked at a fixed angle for use with rapidly moving platforms where scanning is accomplished by platform motion.

3.2 Layout of the gratings

Figure 3 shows the layout of the angular filter grating and the dispersive grating. To maximize the overall diffraction efficiency, the angle (φ) between the two gratings is chosen so that the Bragg conditions are satisfied at both gratings at the center wavelength (λc), as shown in Fig. 3(A).

$${\lambda _c} = 2{\mathrm{\Lambda }_1}sin({{\theta_c}} ),\; {\lambda _c} = 2{\mathrm{\Lambda }_2}sin({{\beta_c}} )$$
$$\mathrm{\varphi } = {\theta _c} - {\beta _c}$$

At a wavelength (λa) different from the center wavelength, the Bragg condition is satisfied only at the angular filter grating. Diffraction occurs according to the grating Eq. (1) at the dispersive grating, as shown in Fig. 3(B).

$${\lambda _a} = 2{\mathrm{\Lambda }_1}sin({{\theta_a}} )$$
$${\lambda _a} = {\mathrm{\Lambda }_2}({sin({{\alpha_a}} )+ sin({{\beta_a}} )} )$$


Fig. 3. The layout of the angular filter grating (1) and the dispersive grating (2): the optical ray propagation at the center wavelength (A) and at an off-center wavelength (B).


For ideal operation of the hyperspectral push-broom imager, the incidence angle (αa) at every wavelength should be the same as the incidence angle (βc) at the center wavelength. The angular discrepancy (αa − βc) can be called the wavelength-dependent incidence angle error.

From Eq. (6), the incidence angle (αa) can be represented by

$$sin({{\alpha_a}} )= \frac{{{\lambda _a}}}{{{\mathrm{\Lambda }_2}}} - sin({{\beta_a}} )= \frac{{{\lambda _a}}}{{{\mathrm{\Lambda }_2}}} - sin({{\theta_a} - \varphi } )= \frac{{{\lambda _a}}}{{{\mathrm{\Lambda }_2}}} - sin({{\theta_a}} )cos(\varphi )+ cos({{\theta_a}} )sin(\varphi )$$

Using Eq. (5),

$$sin({{\alpha_a}} )= \frac{{{\lambda _a}}}{{{\mathrm{\Lambda }_2}}} - \frac{{{\lambda _a}}}{{2{\mathrm{\Lambda }_1}}}cos(\varphi )+ \sqrt {1 - {{\left( {\frac{{{\lambda_a}}}{{2{\mathrm{\Lambda }_1}}}} \right)}^2}} sin(\varphi )$$

When φ = 0 and Λ2 = 2Λ1, Eq. (8) equals zero, so the incidence angle (αa) is zero at every wavelength. There is then no wavelength-dependent incidence angle error, but there is a significant loss in the diffraction efficiency at the dispersive grating with normal incidence.

Expressing the off-center wavelength (λa) as λc(1 + δ) and setting x1 = λc/(2Λ1) and x2 = λc/(2Λ2),

$$sin({{\alpha_a}} )= 2{x_2}({1 + \delta } )- {x_1}({1 + \delta } )cos(\varphi )+ \sqrt {1 - {x_1}^2{{({1 + \delta } )}^2}} sin(\varphi )$$

To minimize the wavelength-dependent incidence angle error, taking the derivative of Eq. (9) with respect to δ and setting it to zero leads to

$$2{x_2} - {x_1}cos(\varphi )- \frac{{{x_1}^2}}{{\sqrt {1 - {x_1}^2} }}sin(\varphi )= 0$$

Using Eq. (4) and Eq. (3),

$$cos(\varphi )= \sqrt {1 - {x_1}^2} \sqrt {1 - {x_2}^2} + {x_1}{x_2}$$
$$sin(\varphi )= {x_1}\sqrt {1 - {x_2}^2} - \sqrt {1 - {x_1}^2} {x_2}$$

Equation (10) reduces to the following relation between the two grating periods.

$${\mathrm{\Lambda }_2} = {\mathrm{\Lambda }_1}\sqrt {4 - 3{{\left( {\frac{{{\lambda_c}}}{{2{\mathrm{\Lambda }_1}}}} \right)}^2}} $$

The period (Λ1) of the angular filter grating is 0.8734 μm, and the period (Λ2) of the dispersive grating is 1.6667 μm. These values almost satisfy Eq. (13) at the center wavelength (λc) of 620 nm. From Eq. (3), βc is 10.7° and θc is 20.79°. From Eq. (4), φ is 10.07°. The wavelength-dependent incidence angle error calculated by Eq. (8) is −0.073°, −0.017°, 0.001°, −0.021°, and −0.087° at wavelengths of 400 nm, 500 nm, 600 nm, 700 nm, and 800 nm, respectively. This wavelength-dependent error, which can result in inaccurate optical spectra for objects whose spectra change sharply in the scan direction, can be further reduced by adding a pair of prisms [14].
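These numbers can be checked directly from Eqs. (3), (4), and (8). A minimal sketch, with our own variable names:

```python
import numpy as np

# Grating periods and center wavelength from the text (meters)
Lam1, Lam2 = 0.8734e-6, 1.6667e-6
lam_c = 620e-9

theta_c = np.arcsin(lam_c / (2 * Lam1))  # Bragg angle at the angular filter grating, Eq. (3)
beta_c = np.arcsin(lam_c / (2 * Lam2))   # Bragg angle at the dispersive grating, Eq. (3)
phi = theta_c - beta_c                   # angle between the gratings, Eq. (4)

def incidence_angle_error_deg(lam):
    """Wavelength-dependent incidence angle error (alpha_a - beta_c) in degrees, Eq. (8)."""
    sin_alpha = (lam / Lam2 - (lam / (2 * Lam1)) * np.cos(phi)
                 + np.sqrt(1 - (lam / (2 * Lam1))**2) * np.sin(phi))
    return np.degrees(np.arcsin(sin_alpha) - beta_c)
```

Evaluating at 400 nm, 600 nm, and 800 nm reproduces the quoted errors to within a few thousandths of a degree, and Λ1√(4 − 3x1²) comes out within 0.005 μm of Λ2, confirming that the chosen periods nearly satisfy Eq. (13).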

3.3 Construction of prototype hyperspectral push-broom imager

The dispersive grating (ϕ25.4 mm, t = 3 mm) and the angular filter grating (25 mm × 25 mm, t = 1 mm) were placed on both sides of an angled spacer to minimize the gap between the two gratings, as shown in Fig. 4. The angle (φ) between the two gratings is set by the angle of the angled spacer. After aligning the orientation of the dispersive grating, the angled spacer was fixed to the optical bench. The image sensor module was an Arducam OV9281 Global Shutter 1MP Mono NoIR MIPI camera. An S-mount lens from Edmund Optics replaced the factory-installed lens. A lens adapter was inserted between the lens mount and the S-mount lens to accommodate its longer back focal length. After adjusting the lens to focus at infinity, the camera module was mounted to the optical bench with a bracket.


Fig. 4. (A) A top view of the hyperspectral push-broom scanner optical bench modeling. (B) A cross-sectional view with an overlay of optical rays. (1) optical bench, (2) scanner plate, (3) scanning mirror holder, (4) angled spacer, (5) dispersive grating, (6) angular filter grating, (7) focusing lens, (8) image sensor module, (9) mounting holes for a close-up lens holder.


The OV9281 image sensor has a resolution of 1280 × 800 with a pixel size of 3 μm × 3 μm. The image sensor was oriented with 1280 pixels along the spectral axis and 800 pixels along the spatial axis. The 800 pixels were binned in groups of three, reducing the number of pixels in the spatial direction to 266 and thereby the size of the hyperspectral data. The pixel size, the number of pixels, and the 8 mm focal length of the focusing optics result in a field of view of 17 degrees in the spatial direction, which is less than half that of Ref. [8]. In the spectral direction, 823 of the 1280 pixels, specifically pixel numbers 970 to 148, correspond to a measurable signal in the range of 400 nm to 900 nm.
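The geometry behind these numbers is a one-line calculation. The sketch below, with our own variable names, reproduces the 17° field of view and the 0.064° instantaneous field of view per combined pixel quoted later in Section 4.3.

```python
import math

pitch = 3e-6        # pixel pitch (m)
n_spatial = 800     # spatial pixels before binning
f = 8e-3            # focal length of the focusing lens (m)

# Full field of view across the spatial axis of the sensor
fov_deg = 2 * math.degrees(math.atan(n_spatial * pitch / 2 / f))
binned = n_spatial // 3        # 3-pixel binning -> 266 combined pixels
ifov_deg = fov_deg / binned    # instantaneous FOV per combined pixel
```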

The scanning mirror was mounted on a holder connected to a scanner shaft. A pair of bearings supported the scanner shaft in a bearing housing. The bearing housing is fixed to the scanner plate. On the opposite side of the scanner plate, a lever is attached to the scanner shaft, which protrudes through the scanner plate. New Scale Technologies' M3-L linear actuator pushes the lever to rotate the scanning mirror against the restoring force of a coil spring connected to the lever.

The optical bench can have a close-up lens for measuring objects at close distances. The close-up lens is an air-spaced doublet with a focal length of 400 mm. Figure 5(A) displays the constructed optical bench for the hyperspectral push-broom scanner. Figure 5(B) shows a 3D model of the prototype hyperspectral push-broom imager.


Fig. 5. (A) Constructed hyperspectral push-broom scanner optical bench. (B) An isometric view of the hyperspectral push-broom imager modeling. (1) optical bench, (2) scanner plate, (3) optical window, (4) LED flashlight circuit board, (5) preview camera module, (6) single-board computer. (C) The constructed hyperspectral push-broom imager.


In addition to the hyperspectral push-broom scanner optical bench, the imager comprises an LED flashlight circuit board, a preview camera module, and a single-board computer. The preview camera module is an additional Arducam OV9281 camera module. The LED flashlight circuit board is MikroE's LED Flash 2 Click board.

A single-board computer, Nvidia's Jetson Nano Developer Kit (B01 version), was used to control the hyperspectral imager and to process the images. The single-board computer has a display port, USB ports for connecting a keyboard and a mouse, two MIPI CSI-2 D-PHY lanes for connecting two camera modules, and a 40-pin expansion header including I2C and GPIO signal pins. The linear actuator and the LED flashlight circuit board were controlled via the I2C bus. Triggers to the two camera modules and the LED flashlight circuit board were sent through the GPIO signal pins. Details on the electronic hardware and the programming of the hyperspectral push-broom imager are given in Supplement 1.

Figure 5(C) shows the constructed hyperspectral push-broom imager. The dimensions are 230 × 120 × 50 mm.

4. Results of measurements

4.1 Wavelength calibration and smile-keystone correction

A spectrogram was measured using a 20 cm diameter integrating sphere with an OSRAM NE/10 neon gas lamp; it is shown in Fig. 6(A). The pixel numbers of the peaks observed in the center horizontal line of the spectrogram were located, as marked with red circles in Fig. 7(A). The wavelength calibration was performed by fitting a second-order polynomial to the peak pixel numbers and the known reference emission wavelengths of the neon lamp [15]. Figure 7(B) illustrates the polynomial curve that fits the relationship between pixel numbers and wavelengths.
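The fitting step can be sketched as follows. The quadratic coefficients and peak positions below are hypothetical stand-ins for the measured Ne peak list (which is not tabulated here); they only show the mechanics of the second-order calibration fit.

```python
import numpy as np

# Hypothetical pixel-to-wavelength map (nm) standing in for the measured Ne
# peaks; the coefficients are illustrative, not the paper's calibration.
true_coeffs = np.array([2e-5, -0.62, 1000.0])
peak_pixels = np.linspace(148, 970, 10)          # spectral-axis pixel numbers
peak_wavelengths = np.polyval(true_coeffs, peak_pixels)

# Second-order polynomial fit of wavelength versus pixel number
fit = np.polyfit(peak_pixels, peak_wavelengths, 2)
wavelength_of = np.poly1d(fit)
```

The resulting `wavelength_of` maps any spectral pixel number to a wavelength; in the prototype's orientation, pixel 970 lands near the 400 nm end of the range and pixel 148 near the 900 nm end.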


Fig. 6. (A) The measured spectrogram with the neon gas lamp (image size: 800 × 266). (B) The combined image of 12 spectrograms with different field angles. (C) The smile-keystone corrected spectrogram with the neon gas lamp.



Fig. 7. (A) The peaks used for the wavelength calibration in the center horizontal line of the spectrogram. (B) the polynomial fit for pixel number versus wavelength. (C) The neon gas lamp spectra were measured with the hyperspectral push-broom imager and the spectrometer.


Figure 7(C) shows the neon lamp spectra captured using the hyperspectral push-broom imager and a Thorlabs CCS200/M spectrometer. The spectral resolution of the hyperspectral imager, measured as the FWHM, was 2.1 nm at the 640.2 nm peak, close to that of Ref. [8].

To measure the keystone effect, we imaged collimated light from a broadband tungsten-halogen lamp (Thorlabs SLS201L/M) and rotated the hyperspectral imager to change the field angle. Figure 6(B) shows the combined image of 12 spectrograms taken at different field angles. The curvature of the spectral lines is evident in Fig. 6(A), indicating a pronounced smile effect. The keystone effect shown in Fig. 6(B), however, is relatively small: the maximum spatial pixel shift caused by the keystone effect was less than 0.8 combined pixels.


Fig. 8. The process for a smile and keystone correction. (A) original image (B) smile-corrected image (C) smile-keystone-corrected image.


The procedure for correcting the smile and keystone distortions is illustrated in Fig. 8. The original image, the smile-corrected image, and the smile-keystone-corrected image are denoted U, V, and W, respectively. Green dots indicate the coordinates of the smile-corrected image, while red dots correspond to the coordinates of the smile-keystone-corrected image. The coordinate shifted by the smile effect is represented by an integer part Y and a fractional part δy. Similarly, X + δx is the coordinate shifted by the keystone effect. The pixel intensity W(i,j) in the smile-keystone-corrected image is interpolated from the pixel intensities V(X(i,j), j) and V(X(i,j)+1, j) in the smile-corrected image.

$${W_{i,j}} = ({1 - {\delta_x}({i,j} )} ){V_{X({i,j} )+ 1,j}} + {\delta _x}({i,j} ){V_{X({i,j} ),j}}$$

The pixel intensities V(X(i,j), j) and V(X(i,j)+1, j) in the smile-corrected image are in turn interpolated from the pixel intensities U(X(i,j), Y(i,j)), U(X(i,j), Y(i,j)+1), U(X(i,j)+1, Y(i+1,j)), and U(X(i,j)+1, Y(i+1,j)+1) in the original image.

$${V_{X({i,j} ),j}} = ({1 - {\delta_y}({i,j} )} ){U_{X({i,j} ),Y({i,j} )+ 1}} + {\delta _y}({i,j} ){U_{X({i,j} ),Y({i,j} )}}$$
$${V_{X({i,j} )+ 1,j}} = ({1 - {\delta_y}({i + 1,j} )} ){U_{X({i,j} )+ 1,Y({i + 1,j} )+ 1}} + {\delta _y}({i + 1,j} ){U_{X({i,j} )+ 1,Y({i + 1,j} )}}$$

The smile distortion in the original image can be described by a polynomial distortion model [16], which is given as

$$y = {b_{00}} + {b_{10}}{x_{ref}} + {b_{01}}{y_{ref}} + {b_{11}}{x_{ref}}{y_{ref}} + {b_{20}}{x_{ref}}^2 + {b_{02}}{y_{ref}}^2$$
where y is the shifted coordinate (y = Y(i,j) + δy(i,j)), xref and yref are the reference coordinates (xref = i, yref = j), and b are the model coefficients.

Similarly, the keystone distortion in the smile-corrected image is given as

$$x = {a_{00}} + {a_{10}}{x_{ref}} + {a_{01}}{y_{ref}} + {a_{11}}{x_{ref}}{y_{ref}} + {a_{20}}{x_{ref}}^2 + {a_{02}}{y_{ref}}^2$$
where x is the shifted coordinate (x = X(i,j) + δx(i,j)), and a are the model coefficients.

The spectral pixel numbers (y) of ten spectral lines at seven equally spaced spatial pixels were measured in the Ne gas spectrogram (Fig. 6(A)), and the model coefficients b were calculated. The coefficients b were used to generate the shifted coordinate (Y(i,j) + δy(i,j)) for every pixel, and the original image was converted into a smile-corrected image by applying Eq. (15) and Eq. (16). The model coefficients a were likewise calculated from the spatial pixel numbers (x) measured in the smile-corrected combined image with 12 field angles. Using the coefficients a and Eq. (14), the smile-corrected image was converted into the smile-keystone-corrected image. The Ne lamp spectrogram after the smile-keystone correction is shown in Fig. 6(C).
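The resampling underlying Eqs. (14)-(18) can be sketched as a single generic routine. The helper below uses standard linear-interpolation weights and our own axis convention, so it illustrates the mechanics of the correction rather than reproducing the paper's exact indexing.

```python
import numpy as np

def poly2(c, x_ref, y_ref):
    """Second-order distortion model of Eqs. (17)/(18);
    c = (c00, c10, c01, c11, c20, c02)."""
    return (c[0] + c[1] * x_ref + c[2] * y_ref + c[3] * x_ref * y_ref
            + c[4] * x_ref**2 + c[5] * y_ref**2)

def resample_axis1(img, c):
    """Linearly interpolate img along axis 1 at the shifted coordinates
    poly2(c, i, j); out-of-range samples are left at zero."""
    out = np.zeros_like(img, dtype=float)
    n0, n1 = img.shape
    for i in range(n0):
        for j in range(n1):
            y = poly2(c, i, j)              # shifted coordinate Y + dy
            Y = int(np.floor(y))
            dy = y - Y
            if 0 <= Y < n1 - 1:
                out[i, j] = (1 - dy) * img[i, Y] + dy * img[i, Y + 1]
    return out
```

Smile correction then amounts to resampling the spectral axis with the coefficients b (V = `resample_axis1(U, b)`), and keystone correction to resampling the spatial axis of V with the coefficients a, e.g. by applying the same helper to the transposed image.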

4.2 Measurement without the close-up lens

A hyperspectral data cube of the parking garage at the Electronics and Telecommunications Research Institute (ETRI) and the surrounding area was acquired through an open window. Figure 9(a) shows a mobile phone image of the imaging environment, including the hyperspectral imager. For the vertical scan, the linear actuator pushed the lever in 5 μm steps; with an effective lever length of 8 mm, the mirror rotates by 0.036 degrees per step, giving a scan angle of 0.072 degrees. The data cube created by scanning 400 times consists of 400 spectrograms. As shown in Fig. 9(b), an RGB image was created from the hyperspectral data using the wavelengths of 650 nm, 530 nm, and 470 nm for the red, green, and blue channels, respectively. To increase the contrast, the intensity range 0-0.25, rather than 0-1, was linearly mapped to 0-255. As the data are hyperspectral, each pixel in the spatial image in Fig. 9(b) holds spectral information. The point spectra of the sky, the building, and the tree, corresponding to the pixels indicated by the arrows in Fig. 9(b), are shown in Fig. 9(c). The spectra clearly show the 761 nm feature produced by oxygen molecules in the air.
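The band selection and contrast clipping described above can be sketched as follows; the function name and the (y, x, λ) cube layout are our assumptions.

```python
import numpy as np

def cube_to_rgb(cube, wavelengths, clip=0.25):
    """Build an 8-bit RGB preview from a (y, x, lambda) data cube by picking
    the bands nearest 650/530/470 nm and linearly mapping [0, clip] to
    [0, 255] to increase contrast."""
    idx = [int(np.argmin(np.abs(wavelengths - w))) for w in (650.0, 530.0, 470.0)]
    rgb = cube[:, :, idx] / clip
    return (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
```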


Fig. 9. (a) Mobile phone image of the imaging environment. (b) RGB image generated from the hyperspectral data cube (image size: 400 × 266). (c) Point spectra of the sky, the building, and the tree.


4.3 Measurement with the close-up lens

For the short-range measurement, we installed the close-up lens and captured a hyperspectral data cube indoors using the LED flash. Figure 10(a) shows a picture of the measurement environment taken with a mobile phone. The measurement target consists of color patches of five different sizes in red, yellow, green, and blue printed on white paper, as shown in Fig. 10(b). The five sizes are denoted S1 (25 mm × 25 mm), S2 (12.5 mm × 25 mm), S3 (6 mm × 25 mm), S4 (3 mm × 25 mm), and S5 (1.5 mm × 25 mm). A double-exposure method, subtracting an image taken with the LED flash off from an image taken with the LED flash on, was used to remove the ambient light from fluorescent lights and monitors. To examine the impact of scan direction on the point spectra, we obtained the first hyperspectral data cube by aligning the target pattern so that the narrow side of the color patch bars (S2, S3, S4, S5) was perpendicular to the scan direction, allowing many color bars to be captured in a single scan exposure. For the second hyperspectral data cube, we rotated the target pattern by 90 degrees, causing the color of the bars to change rapidly along the scan direction. An RGB image created from the first hyperspectral data cube is shown in Fig. 10(c). The resulting RGB image appears predominantly green. To adjust the color balance, a pixel that should appear white, marked by a black circle, was used as a reference point. The corrected image is shown in Fig. 10(d). Figure 10(e) shows the RGB image generated from the second hyperspectral data cube with white balance correction.
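The double-exposure subtraction and the white-balance step can be sketched as follows; the function names and the clipping choice are our assumptions, not the paper's processing code.

```python
import numpy as np

def flash_minus_ambient(flash_on, flash_off):
    """Double-exposure ambient rejection: subtract the flash-off frame from
    the flash-on frame, clipping negative noise residuals at zero."""
    diff = flash_on.astype(np.int32) - flash_off.astype(np.int32)
    return np.clip(diff, 0, None).astype(np.uint16)

def white_balance(rgb, ref_yx):
    """Scale the channels so that the chosen reference pixel (one that
    should appear white) becomes neutral gray."""
    ref = rgb[ref_yx].astype(float)   # (3,) channel values at the reference
    return rgb * (ref.mean() / ref)
```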


Fig. 10. (a) Mobile phone image of the imaging environment. (b) The imaging target pattern. (c) RGB image generated from the hyperspectral data cube. (d)(e) white-balanced RGB images (image sizes: 400 × 266).


In Fig. 10(c), the two locations marked with white arrows correspond to scan angles of 0° and 15°, where the measured widths of the target pattern are 222 pixels and 215 pixels, respectively. Although these two widths should be equal, a distortion of 3% occurs because the distance from the close-up lens to the object points in the target plane increases as the scan angle deviates from the normal to the target plane [17].

The point spectra of specific points in the target pattern were analyzed, while the upper right portion was excluded to avoid specular reflection effects. The normalized point spectra for the green-dotted points represented in Fig. 10(d) and (e) are shown in Fig. 11. As shown in Fig. 11(a-e), the point spectra were not significantly affected by the size of the color bar when the longer side of the color bars was oriented in the scan direction. However, notable changes in the point spectra were observed, particularly at the smallest size (S5) of the color bars, when the shorter side was oriented in the scan direction, as shown in Fig. 11(f-j). The point spectra of the smallest red bar (S5) in Fig. 11(g) show greater spectral intensity at shorter wavelengths than the larger red bars, possibly due to the neighboring blue bar in the scan direction. Similarly, in Fig. 11(j), the point spectra of the smallest blue bar exhibit greater spectral intensity at longer wavelengths than the larger blue bars.


Fig. 11. (a-e) The normalized point spectra of the color bars with the long side pointing in the direction of the scan. (a) white area, (b) red bars with different sizes, (c) yellow bars with different sizes, (d) green bars with different sizes, and (e) blue bars with different sizes. (f-j) The normalized point spectra of the color bars with the short side pointing in the direction of the scan. (f) white area, (g) red bars with different sizes, (h) yellow bars with different sizes, (i) green bars with different sizes, and (j) blue bars with different sizes. (k-n) The point spectra along the boundary between the red and the blue bars with a size of 1.5 mm x 25 mm(S5). (k) The longer side of the color bars points in the direction of the scan. (l) The fitting lines are linear combinations of the red (77,295) and blue (80,295) spectra. (m) The shorter side of the color bars points in the direction of the scan. (n) The fitting lines are linear combinations of the red (215,229) and blue (215,226) spectra. The coordinates are the spatial pixel number and the scan number.


The point spectra at the boundary between the red and blue bars of the smallest size were examined to investigate the influence of an adjacent color bar in both the scan direction and the orthogonal spatial direction. Figure 11(k) shows the point spectra of the midpoints within the red and blue bars, as well as those at the boundary where the red and blue bars are adjacent in the spatial direction. The spatial pixel number and the scan number are used as the coordinates of the points. Figure 11(l) shows linear-combination fits of the red point spectrum (77,295) and the blue point spectrum (80,295) to the two boundary point spectra ((78,295) and (79,295)). Similarly, Fig. 11(m) and (n) show the point spectra and fitting results when the red and blue bars are adjacent in the scan direction. These fits perform poorly compared to the linear-combination fits in the spatial direction. The boundary point spectra of color bars adjacent in the spatial direction may be explained by light spreading. The perturbation of the spectrum by a neighboring color bar in the scan direction, however, possibly indicates the presence of the wavelength-dependent incident angle error mentioned previously. The combined pixels in the spatial axis have an instantaneous field of view of 0.064°. The volume Bragg grating has an angular selectivity (FWHM) of 0.063°. The scan step angle of 0.07° is slightly larger than the angular selectivity. These values yield a nominal resolution of about 0.5 mm in both the spatial and scan directions when using the close-up lens with a 400 mm focal length. The smallest pattern (S5), with a width of 1.5 mm, spans 2-3 pixels and is clearly visible in the RGB images of Fig. 10. However, to reduce the spectral and spatial co-registration errors caused by the incident angle error, it is necessary to characterize the incident angle error experimentally and process data between adjacent spectrograms. The characterization can be achieved using a collimated optical source that emits multiple monochromatic wavelengths.
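The linear-combination fit used for Fig. 11(l) and (n) amounts to a two-column least-squares problem. A minimal sketch, with our own function name:

```python
import numpy as np

def two_endmember_fit(spectrum, end_a, end_b):
    """Least-squares coefficients (a, b) minimizing
    ||a*end_a + b*end_b - spectrum|| over the spectral bands."""
    A = np.stack([end_a, end_b], axis=1)   # (n_bands, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    return coef
```

A boundary spectrum that is well explained by its two neighbors yields a small fit residual; a large residual, as observed in the scan direction, signals a spectral perturbation that a simple mixture of the two adjacent bars cannot account for.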

5. Conclusions

In this paper, a new design of a hyperspectral push-broom imager has been presented whose main optical components are only a strongly index-modulated volume Bragg grating, a weakly index-modulated volume Bragg grating, a focusing optic, and an image sensor. To prove the concept, a hyperspectral push-broom imager, including test peripherals such as a scanning mirror, a close-up lens, an LED flash, and a single-board computer, was built from commercially available standard parts. Only the weakly index-modulated Bragg grating was custom-fabricated to the desired specifications. The constructed prototype shows a field of view (FOV) of 17 degrees in the spatial direction, an effective spectral range from 400 nm to 900 nm, and a spectral resolution of 2.1 nm at the Ne gas lamp line wavelength of 640.2 nm. Hyperspectral cubes were acquired both with and without the close-up lens. The point spectra of the imaging target pattern were examined to investigate the effect of the wavelength-dependent incidence angle error, and it was found that the optical spectra of the color bars can become somewhat inaccurate when other color bars are very close in the scan direction. The hyperspectral imager could therefore be utilized for applications where the spectral variation along the scan direction is not critical. Alternatively, the wavelength-dependent incident angle error could be reduced with additional optical components. The presented data were not radiometrically calibrated; radiometric correction and uniformity should be investigated before the hyperspectral imager is used in such applications. The radiometric uniformity is affected by the quantum efficiency of the image sensor, the diffraction efficiency of the dispersion grating, and the relative illumination of the optical system, as in a conventional slit-based push-broom scanner. In addition, the diffraction efficiency of the volume Bragg grating used as an angular filter (Fig. 1(B)) affects the radiometric uniformity of the proposed hyperspectral imager. The presented design could lead to a notably short total track length (TTL) because it has only one focusing optic. This reduction in TTL may be crucial for miniaturizing hyperspectral imaging systems for small and lightweight applications such as mobile devices.

Funding

Institute for Information and Communications Technology Promotion (No.2018-0-00219, No.2021-0-00019).

Acknowledgments

This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korean government (MSIT) (No.2018-0-00219: Spacetime-time complex artificial intelligence blue-green algae prediction technology based on direct-readable water quality complex sensor and hyperspectral image, No.2021-0-00019: Research on Optical Learning Technology for AI). We would like to thank Yong-Seok Hong, Chan-Won Ko, Sung-Mo Seo, and Kaghyeon Lee of Pan Opto Mecha Tronix Corp. for fabricating the close-up lens and the fixtures and providing valuable advice on the design of the scanner bearing housing.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. M. J. Khan, H. A. Khan, A. Yousaf, et al., “Modern trends in hyperspectral image analysis: A review,” IEEE Access 6, 14118–14129 (2018). [CrossRef]  

2. P. Mangalraj and B.-K. Cho, “Recent trends and advances in hyperspectral imaging techniques to estimate solar induced fluorescence for plant phenotyping,” Ecol. Indic. 137, 108721 (2022). [CrossRef]  

3. J. Yoon, “Hyperspectral imaging for clinical applications,” BioChip J. 16(1), 1–12 (2022). [CrossRef]  

4. Y. S. Kwon, J. C. Pyo, Y.-H. Kwon, et al., “Drone-based hyperspectral remote sensing of cyanobacteria using vertical cumulative pigment concentration in a deep reservoir,” Remote Sens. Environ. 236, 111517 (2020). [CrossRef]  

5. M. H. Tran and B. Fei, “Compact and ultracompact spectral imagers: technology and applications in biomedical imaging,” J. Biomed. Opt. 28(04), 040901 (2023). [CrossRef]  

6. F. Sigernes, M. Syrjasuo, R. Storvold, et al., “Do it yourself hyperspectral imager for handheld to airborne operations,” Opt. Express 26(5), 6021–6035 (2018). [CrossRef]  

7. H. Wu, F. Haibach, E. Bergles, et al., “Miniaturized handheld hyperspectral imager,” Proc. SPIE 9101, 91010W (2014). [CrossRef]  

8. Q. Xue, B. Yang, F. Wang, et al., “Compact, UAV-mounted hyperspectral imaging system with automatic geometric distortion rectification,” Opt. Express 29(4), 6092–6112 (2021). [CrossRef]  

9. M. B. Henriksen, E. F. Prentice, C. M. van Hazendonk, et al., “Do-it-yourself VIS/NIR pushbroom hyperspectral imager with C-mount optics,” Opt. Continuum 1(2), 427–441 (2022). [CrossRef]  

10. S. C. Barden, J. A. Arns, and W. S. Colburn, “Volume-phase holographic gratings and their potential for astronomical applications,” Proc. SPIE 3355, 866–876 (1998). [CrossRef]  

11. S. C. Barden, J. A. Arns, W. S. Colburn, et al., “Volume-phase holographic gratings and the efficiency of three simple volume-phase holographic gratings,” Publ. Astron. Soc. Pac. 112(772), 809–820 (2000). [CrossRef]  

12. I. Ciapurin, L. B. Glebov, and V. I. Smirnov, “Modeling of phase volume diffractive gratings, part 1: transmitting sinusoidal uniform gratings,” Opt. Eng. 45(1), 015802 (2006). [CrossRef]  

13. H. Kogelnik, “Coupled wave theory for thick hologram gratings,” Bell Syst. Tech. J. 48(9), 2909–2947 (1969). [CrossRef]  

14. J.-H. Song, “Hyperspectral sensor”, US Patent 11,313,723, April 2022.

15. Ocean Optics, “Wavelength calibration light source, installation and operation manual,” (2018), https://www.oceaninsight.com/globalassets/catalog-blocks-and-images/manuals–instruction-ocean-optics/wavelength-calibration-products-v1.0_updated.pdf

16. K. C. Lawrence, B. Park, W. R. Windham, et al., “Calibration of a pushbroom hyperspectral imaging system for agricultural inspection,” Trans. ASAE 46(2), 513–521 (2003). [CrossRef]  

17. J. A. Gutierrez-Gutierrez, A. Pardo, E. Real, et al., “Custom scanning hyperspectral imaging system for biomedical applications: modeling, benchmarking, and specifications,” Sensors 19(7), 1692 (2019). [CrossRef]  

Supplementary Material (1)

Supplement 1: Details on the electronic hardware and the programming of the hyperspectral push-broom imager.


Figures (11)

Fig. 1. Panel (A) Propagation of optical rays through a volume Bragg grating: (1) incident ray, (2) transmitted ray, (3) diffracted ray. Panel (B) Calculated diffraction efficiencies of a volume Bragg grating at different wavelengths: strong index modulation (Δn = 0.07, d = 4.3 μm, Λ = 1.667 μm); weak index modulation (Δn = 0.00029, d = 1.06 mm, Λ = 0.8734 μm).
Fig. 2. Optical layout of the hyperspectral push-broom imager. (A) normal incidence, (B) off-angle incidence and filtering by the angular filter grating, (C) off-angle incidence and blocking by the apertures: (1) window, (2) scanning mirror, (3) dispersive grating, (4) angular filter grating, (5) focusing lens, (6) electronic image sensor, (7) scanning mirror aperture, (8) dispersive grating aperture, (9) angular filter grating aperture.
Fig. 3. The layout of the angular filter grating (1) and the dispersive grating (2). The optical ray propagation at the center wavelength (A) and an off-center wavelength (B).
Fig. 4. (A) A top view of the hyperspectral push-broom scanner optical bench modeling. (B) A cross-sectional view with an overlay of optical rays. (1) optical bench, (2) scanner plate, (3) scanning mirror holder, (4) angled spacer, (5) dispersive grating, (6) angular filter grating, (7) focusing lens, (8) image sensor module, (9) mounting holes for a close-up lens holder.
Fig. 5. (A) Constructed hyperspectral push-broom scanner optical bench. (B) An isometric view of the hyperspectral push-broom imager modeling. (1) optical bench, (2) scanner plate, (3) optical window, (4) LED flashlight circuit board, (5) preview camera module, (6) single-board computer. (C) The constructed hyperspectral push-broom imager.
Fig. 6. (A) The measured spectrogram with the neon gas lamp (image size: 800 × 266). (B) The combined image of 12 spectrograms with different field angles. (C) The smile-keystone corrected spectrogram with the neon gas lamp.
Fig. 7. (A) The peaks used for the wavelength calibration in the center horizontal line of the spectrogram. (B) The polynomial fit for pixel number versus wavelength. (C) The neon gas lamp spectra measured with the hyperspectral push-broom imager and the spectrometer.
Fig. 8. The process for smile and keystone correction. (A) original image, (B) smile-corrected image, (C) smile-keystone-corrected image.
Fig. 9. (a) Mobile phone image of the imaging environment. (b) RGB image generated from the hyperspectral data cube (image size: 400 × 266). (c) Point spectra of the sky, the building, and the tree.
Fig. 10. (a) Mobile phone image of the imaging environment. (b) The imaging target pattern. (c) RGB image generated from the hyperspectral data cube. (d)(e) White-balanced RGB images (image sizes: 400 × 266).
Fig. 11. (a-e) The normalized point spectra of the color bars with the long side pointing in the direction of the scan: (a) white area, (b) red bars with different sizes, (c) yellow bars with different sizes, (d) green bars with different sizes, and (e) blue bars with different sizes. (f-j) The normalized point spectra of the color bars with the short side pointing in the direction of the scan: (f) white area, (g) red bars with different sizes, (h) yellow bars with different sizes, (i) green bars with different sizes, and (j) blue bars with different sizes. (k-n) The point spectra along the boundary between the red and the blue bars with a size of 1.5 mm × 25 mm (S5). (k) The longer side of the color bars points in the direction of the scan. (l) The fitting lines are linear combinations of the red (77, 295) and blue (80, 295) spectra. (m) The shorter side of the color bars points in the direction of the scan. (n) The fitting lines are linear combinations of the red (215, 229) and blue (215, 226) spectra. The coordinates are the spatial pixel number and the scan number.

Equations (18)


$m\lambda = \Lambda(\sin\alpha + \sin\beta)$

$\lambda = 2\Lambda\sin\beta$

$\lambda_c = 2\Lambda_1\sin\theta_c, \quad \lambda_c = 2\Lambda_2\sin\beta_c$

$\varphi = \theta_c - \beta_c$

$\lambda_a = 2\Lambda_1\sin\theta_a$

$\lambda_a = \Lambda_2(\sin\alpha_a + \sin\beta_a)$

$\sin\alpha_a = \frac{\lambda_a}{\Lambda_2} - \sin\beta_a = \frac{\lambda_a}{\Lambda_2} - \sin(\theta_a - \varphi) = \frac{\lambda_a}{\Lambda_2} - \sin\theta_a\cos\varphi + \cos\theta_a\sin\varphi$

$\sin\alpha_a = \frac{\lambda_a}{\Lambda_2} - \frac{\lambda_a}{2\Lambda_1}\cos\varphi + \sqrt{1 - \left(\frac{\lambda_a}{2\Lambda_1}\right)^2}\,\sin\varphi$

$\sin\alpha_a = 2x_2(1+\delta) - x_1(1+\delta)\cos\varphi + \sqrt{1 - x_1^2(1+\delta)^2}\,\sin\varphi$

$2x_2 - x_1\cos\varphi - \frac{x_1^2}{\sqrt{1-x_1^2}}\,\sin\varphi = 0$

$\cos\varphi = \sqrt{1-x_1^2}\sqrt{1-x_2^2} + x_1 x_2$

$\sin\varphi = x_1\sqrt{1-x_2^2} - x_2\sqrt{1-x_1^2}$

$\Lambda_2 = \Lambda_1\sqrt{4 - 3\left(\frac{\lambda_c}{2\Lambda_1}\right)^2}$

$W_{i,j} = (1-\delta_x(i,j))\,V_{X(i,j)+1,\,j} + \delta_x(i,j)\,V_{X(i,j),\,j}$

$V_{X(i,j),\,j} = (1-\delta_y(i,j))\,U_{X(i,j),\,Y(i,j)+1} + \delta_y(i,j)\,U_{X(i,j),\,Y(i,j)}$

$V_{X(i,j)+1,\,j} = (1-\delta_y(i+1,j))\,U_{X(i,j)+1,\,Y(i+1,j)+1} + \delta_y(i+1,j)\,U_{X(i,j)+1,\,Y(i+1,j)}$

$y = b_{00} + b_{10}x_{\mathrm{ref}} + b_{01}y_{\mathrm{ref}} + b_{11}x_{\mathrm{ref}}y_{\mathrm{ref}} + b_{20}x_{\mathrm{ref}}^2 + b_{02}y_{\mathrm{ref}}^2$

$x = a_{00} + a_{10}x_{\mathrm{ref}} + a_{01}y_{\mathrm{ref}} + a_{11}x_{\mathrm{ref}}y_{\mathrm{ref}} + a_{20}x_{\mathrm{ref}}^2 + a_{02}y_{\mathrm{ref}}^2$
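The design relation between the two grating periods, Λ₂ = Λ₁·√(4 − 3(λ_c/2Λ₁)²), can be sanity-checked numerically against the periods quoted in Fig. 1(B). A small sketch, assuming grating 1 is the angular filter grating (Λ₁ = 0.8734 μm) and a center wavelength near 650 nm (an assumed value):

```python
import math

def period_2(period_1_um, lambda_c_um):
    """Lambda_2 = Lambda_1 * sqrt(4 - 3 * (lambda_c / (2 * Lambda_1))**2)."""
    x1 = lambda_c_um / (2.0 * period_1_um)  # x1 = lambda_c / (2 * Lambda_1)
    return period_1_um * math.sqrt(4.0 - 3.0 * x1**2)

# Angular filter grating period from Fig. 1(B), assumed lambda_c = 650 nm
lam2 = period_2(0.8734, 0.65)
print(f"Lambda_2 = {lam2:.3f} um")  # close to the 1.667 um dispersive grating period
```

With these assumed inputs the formula returns a period within about 1% of the 1.667 μm dispersive-grating value quoted in Fig. 1(B), which supports the reconstructed relation.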