A hyperspectral projector for simultaneous 3D spatial and hyperspectral imaging via structured illumination

Open Access

Abstract

Both 3D imaging and hyperspectral imaging provide important information about the scene, and combining them helps us perceive and understand real-world structures. Previous hyperspectral 3D imaging systems typically require a hyperspectral imaging system as the detector and therefore suffer from complicated hardware design, high cost, and long acquisition and reconstruction times. Here, we report a low-cost, high-frame-rate, simple-design, and compact hyperspectral stripe projector (HSP) system based on a single digital micro-mirror device, capable of producing hyperspectral patterns where each row of pixels has an independently programmable spectrum. We demonstrate two example applications using the HSP via hyperspectral structured illumination: hyperspectral 3D surface imaging and spectrum-dependent hyperspectral compressive imaging of the volume density of a participating medium. The hyperspectral patterns simultaneously encode the 3D spatial and spectral information of the target, requiring only a grayscale sensor as the detector. The reported HSP and its applications provide a solution for combining structured illumination techniques with hyperspectral imaging in a simple, efficient, and low-cost manner. The work presented here represents a novel structured illumination technique that provides the basis and inspiration for future variations of hardware systems and software encoding schemes.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Structured illumination refers to techniques that employ active illumination of the scene with specially designed, spatially varying intensity patterns. Structured illumination techniques play an important role in many fields of optical imaging, including three-dimensional (3D) surface imaging [1–4], volume density reconstruction [5], and super-resolution microscopy [6,7]. 3D surface imaging is one of the most widely applied applications of structured illumination. Conventional methods use a projector to display coded light patterns onto the surface of an opaque object. Then, images of the coded object are captured with a camera. The 3D surface depth information is obtained through active triangulation, where correspondences are established between the camera image and the projected pattern. There have been tremendous advances in the research and development of 3D surface imaging technologies in the past decade for medical, industrial, and entertainment applications. Consumer-level real-time 3D imaging technologies have been successfully applied in commercial products such as the Apple iPhone X (which has a built-in 3D sensor for Face ID), Microsoft Kinect, and Intel RealSense, continuously driving the need for better 3D surface imaging technologies. Structured illumination can also be used for recovering the 3D volume density of inhomogeneous participating media, which include phenomena like smoke, steam, and mixing fluids [5,8].

In addition to 3D spatial information, real-world objects also have a spectral dimension rich with information about the material composition of the object. Spectral information is a consequence of the complex underlying interactions between light and matter. Traditional cameras and imaging sensors are able to acquire only RGB images, losing most of the spectral information to the coarse spectral binning. This fundamental restriction greatly limits our ability to perceive and understand the complexity of real-world objects. Hyperspectral imaging is a powerful technology that aims to obtain the spectrum associated with each pixel in the image of a scene in many narrow wavelength ranges [9]. Hyperspectral techniques have been applied in a wide range of applications, from remote sensing [10] to microscopy [11,12], from medical imaging [13] to food inspection [14] and art analysis [15].

As a further step, combining hyperspectral imaging with 3D imaging could be very beneficial and would enable a variety of applications. Such hyperspectral 3D imaging systems have been realized in different fields [16–23]. Most of these works employ a straightforward approach to acquire the hyperspectral and 3D information by simply replacing the standard RGB camera used in current 3D scanning systems with a hyperspectral imaging device. Specifically, motivated by investigation of animal vision systems and appearance, especially avian vision, Kim et al. [16] developed a hyperspectral 3D imaging system composed of a 3D laser projector and detector system and a hyperspectral camera. Sitnik et al. [17] employed a digital projector and a filter wheel hyperspectral camera for archiving the 3D shape and precise color of cultural heritage objects. Behmann et al. [19] used a 3D laser scanner and two hyperspectral pushbroom cameras for reconstruction of hyperspectral 3D plant models. Heist et al. [23] developed a system for fast measurement of surface shape and spectral characteristics using a pattern projector and two snapshot hyperspectral cameras with spectral filters. For all previous systems, hyperspectral data needs to be acquired through hyperspectral imaging systems, e.g., pushbroom, snapshot, or filter wheel hyperspectral cameras; these systems therefore suffer from the typical drawbacks of hyperspectral cameras, such as complicated hardware design, high cost, and long acquisition and reconstruction times. In addition, for many systems such as those in [16,19–21], the 3D surface model and the hyperspectral image are reconstructed separately, and the two types of data need to be registered and fused in post-processing, which is a challenging topic in itself and leads to increased computational complexity and reduced robustness.

The use of hyperspectral cameras instead of grayscale consumer cameras in hyperspectral 3D imaging is largely due to difficulties in generating spectrally coded light patterns on the illumination side. The structured illumination generated by a commercial digital projector, though it may appear to be any color to the human eye, only contains light from three spectral channels: red, green, and blue. The limited spectral content of the digital projector arises because it uses either red, green, and blue LEDs, or else a broadband lamp with an RGB color filter wheel as its light source. The resulting limited spectral resolution is not suitable for spectral encoding.

In this paper, we report a hyperspectral stripe projector (HSP) system and demonstrate two example applications of the HSP that address the challenges of hyperspectral 3D imaging using only a grayscale camera as the detector. The HSP is based on a single digital micro-mirror device (DMD) and is capable of projecting two-dimensional images in the visible regime, whereby each row of pixels can have an independently programmable spectrum. The HSP features simple design, high pattern frame rate, low cost, and a compact form factor. The novel aspect of the HSP hardware is that, through careful design, the same diffraction grating and the same lens are used to first disperse and focus the incoming light, and then to collimate and recombine it, instead of requiring a second grating and lens. This leads to a simple hardware design and the possibility of compressing the system into a very compact form factor. The HSP has distinct advantages over previous work, such as the Hyperspectral Image Projector developed by NIST [24]. Although the NIST system can produce images with an arbitrary spectrum at each pixel, it is built with two DMDs and has a complicated system design, making it high cost, large in size, and challenging to replicate in other labs. In addition, it requires a very intense light source or a very sensitive detector to compensate for the large optical loss in the system caused by temporally integrating the basis spectra during each pattern period. In comparison, the single DMD in the reported HSP modulates light in one spatial and one spectral dimension, and all the spectral channels are on during each pattern period, giving it a much higher optical efficiency. Meanwhile, the hyperspectral stripe patterns are sufficient for most structured illumination applications. In addition, the HSP has a higher hyperspectral pattern generation frame rate than the NIST system, up to the frame rate of the DMD, which is typically on the order of 10 kHz. The high frame rate of the HSP is possible because one projected hyperspectral pattern corresponds to one DMD pattern in the HSP system, whereas it corresponds to multiple DMD patterns in the NIST system due to temporal integration of the spectral channels. The high frame rate facilitates video or real-time encoding and reconstruction. The HSP opens up the possibility of numerous applications combining structured illumination techniques with hyperspectral imaging in a simple, efficient, and low-cost manner. In addition to hyperspectral 3D imaging applications, it could inspire novel encoding schemes for structured illumination techniques. The capabilities of the HSP may also enable high-resolution, low-cost infrared hyperspectral 3D imaging using a single-pixel infrared detector if combined with single-pixel imaging methods such as those in [25,26]. The HSP can also serve as an active hyperspectral light source. It can produce as many spectral channels as there are columns of micromirrors on the DMD and generate any linear combination of single channels at high speed.

We demonstrate two example applications to reveal the potential power of the HSP. The first application is hyperspectral 3D surface imaging through hyperspectral structured illumination, using the HSP for encoding and a grayscale camera as the detector. The HSP projects hyperspectral patterns on the surface of the object for simultaneous 3D spatial and spectral encoding. The grayscale camera captures are used for recovering both the 3D surface point cloud and the reflectance spectrum associated with every 3D point, without the need to perform data fusion of spatial and spectral data in post-processing. The second application is hyperspectral compressive structured illumination for spectrum-dependent recovery of the volume density of a participating medium using the HSP and a grayscale camera. By designing the spectrum of the hyperspectral stripe patterns, the desired part of the target with a certain spectral response is spatially and spectrally encoded, while the signal from unwanted parts is decreased or suppressed.

2. HSP prototype

2.1 Optical design

Figure 1 shows the optical design of the HSP. Light from a halogen lamp is collected and guided by a gooseneck fiber optic light guide and is focused by a cylindrical lens onto a vertical slit 200 µm wide along the $\mathbf {x}$ direction. Light from each point source along the slit is collimated into a light beam by lens L1. The light beams travel through a transmission diffraction grating (Thorlabs, Visible Transmission Grating, 300 grooves/mm) with the grooves along the $\mathbf {x}$ direction. The grating is designed such that most of the incoming light power is concentrated in one of the two symmetric directions of its first-order diffracted light, minimizing the light loss in the zeroth-order and higher-order diffraction. The spectrally dispersed output of the grating is focused by an achromatic lens L2 onto the surface of a DMD (Texas Instruments, DLP4500 0.45 WXGA DMD, 912 × 1140). The spectrum of light is dispersed across the $\mathbf {y}$ direction, and each DMD column (along the $\mathbf {x}$ direction) receives light of a different wavelength. The distance between the grating and L2 and the distance between L2 and the DMD are both equal to the focal length $f$ of L2. The DMD then performs spatial and spectral modulation, keeping the desired part of the spectrum and discarding the rest. The modulation process is detailed in the next section. The light to be kept is reflected by the DMD, collected and collimated by L2, recombined by the diffraction grating, and imaged into a line by L3. Due to the symmetric configuration, the image formed by L3 is in fact the image of the slit light source, though containing only a portion of the original spectrum of the slit. A cylindrical lens then focuses each point in the slit image to a spectrally modulated line at the target. When the application requires rotation of the stripes, either a Dove prism can be added between L3 and the cylindrical lens, or, equivalently, we can keep the stripe pattern fixed and rotate the stage and camera simultaneously.
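The near-linear mapping between wavelength and DMD column follows from the grating equation. As a sketch (the paper does not spell this out; symbols other than $f$ and the 300 grooves/mm groove density are our notation), consider first-order diffraction ($m=1$) with incidence angle $\theta_i$, diffraction angle $\theta_m$, and groove spacing $d = 1/300$ mm:
$$d(\sin\theta_i + \sin\theta_m) = m\lambda$$
A small wavelength offset $\Delta\lambda$ tilts the diffracted beam by $\Delta\theta_m \approx m\,\Delta\lambda/(d\cos\theta_m)$, and lens L2 converts this tilt into a displacement on the DMD,
$$\Delta y \approx f\,\Delta\theta_m \approx \frac{mf}{d\cos\theta_m}\,\Delta\lambda,$$
so the column position is linear in wavelength to first order, consistent with the calibration result in Section 2.3.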

Fig. 1. (a) Schematic of the top view of the HSP system. (b) The HSP prototype.

The field of projection of the HSP can be adjusted in multiple ways. The simplest way to increase or decrease the field of projection is by moving the cylindrical lens closer to or farther away from lens L3. Moving the cylindrical lens closer to L3 will also increase the distance between the final projected pattern and the cylindrical lens, which can be addressed by using a cylindrical lens with a smaller focal length, or by folding the light path of the projected pattern multiple times with mirrors if the compactness of the setup is to be maintained.

Figure 2 illustrates how different points on the slit are dispersed and focused on the DMD. Because a slit light source in the $\mathbf {x}$ direction is used, every spectral component forms a line on the DMD in the $\mathbf {x}$ direction. Consider two point light sources $\mathbf {a}$ and $\mathbf {b}$ along the slit. The point $\mathbf {a}$ forms a dispersed spectral line $\mathbf {a'}$ spanning the $\mathbf {y}$ direction on the DMD and, similarly, $\mathbf {b}$ forms a spectral line $\mathbf {b'}$. Yet $\mathbf {a'}$ and $\mathbf {b'}$ are focused at different $\mathbf {x}$ positions, and likewise for all points along the slit.

Fig. 2. Side view of part of the light path in the HSP, showing how different points on the slit are dispersed and focused on the DMD.

2.2 Spatial and spectral modulation

A DMD chip is incorporated at the focal plane of L2, orthogonal to the optical axis, and the dispersed spectrum is focused on the surface of the DMD. The functional part of the DMD chip is an array of electrostatically controlled micromirrors, each of size 7.6 µm $\times$ 7.6 µm. Every micromirror can be independently actuated and rotated about a hinge into one of two states: +12° (tilted downward) or -12° (tilted upward) with respect to the DMD surface. ON micromirrors are tilted at +12°, reflecting the light towards L2. OFF micromirrors are tilted at -12°, sending light away from L2 into a beam stop. At any instant in time, a binary pattern sent to the DMD from a computer determines which DMD micromirrors are ON and OFF. As such, each row of micromirrors on the DMD modulates the spectrum of a corresponding stripe in the final projected 2D image. For a given row on the DMD, the corresponding stripe will contain the spectrum of light reflected from the ON micromirrors in that row. Therefore, the number of stripes and spectral channels in the projected 2D image can be up to the number of rows and columns on the DMD, respectively. In applications where fewer stripes or spectral channels are sufficient, neighboring rows or columns of micromirrors can be grouped. Figure 3 demonstrates an example DMD pattern for the spatial and spectral modulation process, where the rows of micromirrors are grouped into five bands to form five hyperspectral stripes in the projected 2D image. The white areas represent ON micromirrors. Each hyperspectral stripe contains the spectral content selected by the corresponding stripe in the DMD pattern. Figure 3(e) shows the spectra measured by a spectrometer for the top stripe and the bottom stripe. Note that the top stripe is white because the full spectrum is selected by the DMD pattern (Fig. 3(a)), and the bottom stripe is composed of eight spectral bands because eight bands are selected by the DMD pattern. The DMD is set to operate in binary mode for the experiments in this paper. If needed, the DMD can operate in up to 8-bit mode, providing 256 levels of intensity for every spectral channel. The HSP can also operate in the short-wave infrared (SWIR) with a SWIR light source and appropriate optical elements.
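As a minimal sketch of this modulation logic (illustrative only; the helper dmd_pattern, the grouping into equal bands, and the row/column assignment are our assumptions, not the authors' code), the following Python expands a stripes-by-channels selection into a full binary micromirror mask:

```python
import numpy as np

def dmd_pattern(select, n_rows=1140, n_cols=912):
    """Expand a (stripes x channels) binary selection into a full DMD mask.

    select[s, c] == 1 turns ON the micromirrors of stripe s that receive
    light of spectral channel c. Rows of the DMD map to projected stripes;
    columns map to wavelength (the dispersion direction).
    """
    n_stripes, n_channels = select.shape
    row_groups = np.array_split(np.arange(n_rows), n_stripes)
    col_groups = np.array_split(np.arange(n_cols), n_channels)
    mask = np.zeros((n_rows, n_cols), dtype=np.uint8)
    for s, rows in enumerate(row_groups):
        for c, cols in enumerate(col_groups):
            if select[s, c]:
                mask[np.ix_(rows, cols)] = 1
    return mask

# Example in the spirit of Fig. 3: five stripes, 32 channels; the top stripe
# keeps the full spectrum (white), the bottom stripe keeps 8 spaced bands.
sel = np.zeros((5, 32), dtype=np.uint8)
sel[0, :] = 1
sel[4, ::4] = 1
mask = dmd_pattern(sel)
```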

Fig. 3. Schematic of the spatial and spectral modulation process. (a) Illustration of an example DMD pattern. Mirrors in the white area are ON and those in the black area are OFF. (b) Dispersed spectrum on the DMD surface. (c) Spectrum on the ON area is selected. (d) Image of the projected hyperspectral pattern on the screen when the DMD displays the pattern in (a). (e) The spectra measured by the spectrometer for the top stripe and bottom stripe.

2.3 Spectral calibration

The result of the spectral calibration of the HSP system is shown in Fig. 4(a). For each marker, a vertical stripe composed of 11 contiguous columns on the DMD is turned on with all other DMD mirrors off, generating a single spectral band. The $\mathbf {x}$ and $\mathbf {y}$ values of the marker represent the position of the center column of the vertical stripe on the DMD and the center wavelength of the generated spectral band, respectively. A clear linear relationship is observed. It matches the prediction of the grating equation in its first-order approximation, verifying the accuracy of the optical design and alignment. In the experiments, a straight line is fitted to the markers, and spectral modulation is performed based on this fit. Figure 4(b) shows examples of spectra generated by the HSP in a 32-channel setting. As shown, each spectral band has the shape of a slightly asymmetric spike rather than the ideal rectangular profile with straight vertical edges. The spike shape results mainly from the nonzero width of the slit light source and the imperfect imaging quality of the achromatic lens L2.
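The calibration and its inverse can be captured in a few lines; this is a hedged sketch with placeholder numbers (the paper does not list the fitted coefficients, so the values below are illustrative, not the authors' data):

```python
import numpy as np

# (center column, center wavelength) pairs measured from single-band scans.
# The values below are placeholders consistent with a linear mapping.
cols = np.array([100, 200, 300, 400, 500, 600, 700, 800])
lams = np.array([720.0, 678.0, 636.0, 594.0, 552.0, 510.0, 468.0, 426.0])  # nm

slope, intercept = np.polyfit(cols, lams, 1)   # straight-line fit, as in Fig. 4(a)

def column_for_wavelength(lam_nm):
    """Invert the fit: DMD column whose center wavelength is lam_nm."""
    return int(round((lam_nm - intercept) / slope))

print(column_for_wavelength(550.0))
```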

Fig. 4. (a) Spectral calibration of the HSP, showing the relationship between the location of the column of ON micromirrors and the center wavelength of the generated light. (b) Example light spectra generated by the HSP for a single spectral channel and for combinations of different channels.

Multiple factors impact the spectral resolution of the HSP. For the current HSP prototype, the finest spectral resolution achievable by shrinking the slit width alone is around 5 nm. However, decreasing the slit width also decreases the intensity of the projected pattern. Besides the slit width, the imaging quality of the optics, and the number of columns of micromirrors turned on for each spectral channel, other factors impacting the spectral resolution include the groove density of the diffraction grating and the focal length of the lens L2.
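A first-order estimate of the slit-limited resolution follows by inverting the dispersion relation sketched in Section 2.1 (our back-of-envelope bound, neglecting aberrations and the finite micromirror pitch): a slit image of width $w$ on the DMD spans a wavelength interval
$$\Delta\lambda_{\min} \approx \frac{d\cos\theta_m}{mf}\,w,$$
so narrowing the slit, increasing the groove density ($1/d$), or lengthening the focal length $f$ of L2 all sharpen the resolution, in agreement with the factors listed above.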

3. Hyperspectral structured illumination for hyperspectral 3D surface reconstruction

This section demonstrates hyperspectral 3D surface imaging using the HSP and a grayscale USB camera (FLIR CM3-U3-31S4C-CS) as the detector. Simultaneous 3D spatial and spectral encoding through the hyperspectral patterns from the HSP is realized. Spatial encoding and reconstruction are based on the principle of the conventional structured illumination scheme. Spectral encoding makes use of spectral channel multiplexing, increasing the signal-to-noise ratio (SNR) compared to the spectral raster scan method. From the raw capture readouts of the grayscale camera, the 128$\times$128 3D surface point cloud of the target and the surface reflectance for each point in 32 visible spectral channels are reconstructed, without the need for data fusion. The center wavelengths of the spectral channels start at 425 nm and increment by 10 nm up to 735 nm. The full width at half maximum of each spectral channel is 10 nm. We perform the 128$\times$128$\times$32 3D spatial and spectral reconstruction as a proof of principle. We reiterate here that the number of spatial stripes and spectral channels generated by the HSP can be up to the number of rows and columns of micromirrors on the DMD, respectively.

The inherent frame rate of the DMD provides an upper limit on the frame rate of the imaging system based on the HSP. The maximum effective frame rate of the imaging system also depends on the intensity of the projected patterns, the sensitivity of the sensor, and the accuracy requirement of the final reconstruction. In the experiment, the integration period for acquiring a single pattern is 30 ms. Since we are imaging a static target as an example, we did not push the limit of the effective pattern rate.

3.1 Spatial encoding and reconstruction

We design the hyperspectral encoding patterns based on the principle of the conventional gray code spatial encoding scheme as in [27], which is a multiple-shot technique using a series of black and white stripe patterns for spatial encoding. Multiple-shot techniques for surface reconstruction typically provide more reliable and accurate results compared to single-shot techniques [27].

For designing hyperspectral encoding patterns, the black stripes in the gray code patterns remain black, whereas the white stripes are designed to be hyperspectral, consisting of light from different combinations of 16 out of the 32 spectral channels. This combination of spectral channels is the same within each hyperspectral pattern and differs between patterns. Also, each hyperspectral pattern has a spatial complement pattern used in encoding, where the black stripes and hyperspectral stripes are spatially switched. The grayscale camera captures of each pattern and its complement pattern are summed to form an equivalent global hyperspectral illumination having the same spectrum at every projected pixel. From such global illuminations, the reflectance of the target at every image pixel can be obtained. The spatial complement patterns serve to simultaneously perform spectral encoding and to improve the robustness of the spatial reconstruction. Even with the hyperspectral encoding patterns, spatial reconstruction follows the same principle as conventional structured illumination reconstruction. We use the spatial reconstruction software provided by [27] in the experiment; a minimal sketch of the stripe generation is given below.
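This sketch assumes standard reflected-binary gray code over stripe indices (our reading of the scheme in [27], not the authors' code); the pattern-complement identity used for forming global illuminations is checked at the end:

```python
import numpy as np

def gray_code_stripes(n_bits, n_stripes):
    """Generate n_bits binary gray-code stripe patterns and their complements.

    Each pattern is a length-n_stripes 0/1 vector; bit b of the gray code of
    a stripe's index determines whether that stripe is hyperspectral (1) or
    black (0) in pattern b.
    """
    idx = np.arange(n_stripes)
    gray = idx ^ (idx >> 1)                     # reflected-binary gray code
    patterns = [((gray >> b) & 1).astype(np.uint8)
                for b in range(n_bits - 1, -1, -1)]
    complements = [1 - p for p in patterns]
    return patterns, complements

# 7 bits encode 128 stripes, matching the 7 pattern-complement pairs per
# direction used in the experiment.
patterns, complements = gray_code_stripes(n_bits=7, n_stripes=128)

# A pattern plus its complement lights every stripe exactly once, so summing
# the two camera captures yields an equivalent global illumination.
assert all((p + q == 1).all() for p, q in zip(patterns, complements))
```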

Figure 5(a) shows examples of the hyperspectral encoding patterns on a target and the corresponding modulation patterns displayed on the DMD. A color camera is used here for demonstration purposes only. In the experiment, the raw pixel values of the sensor are acquired for each grayscale camera capture. To demonstrate the sharpness and contrast between dark and bright stripes, example line profiles of actual grayscale camera captures are shown in Fig. 5(b). Note that in the experiment, the grayscale camera capture is a superposition of the projected pattern and the ambient light. To eliminate the effect of the ambient light, the ambient light level is captured by the grayscale camera and subtracted from all captures used for reconstruction. When capturing the ambient light level, all aspects of the ambient environment and the HSP setup are kept the same, except that the projected pattern is blocked from reaching the target by black aluminum foil wrapped around the cylindrical lens. The contrast of dark and bright stripes after subtracting the ambient light is higher than shown in the line profile. The non-ideal contrast and sharpness of the stripes are mainly due to the non-optimal imaging quality of lenses L2 and L3 and the cylindrical lens.

Fig. 5. (a) Color camera captures of 4 example hyperspectral encoding patterns and their complement patterns used in the experiment, with the corresponding DMD patterns that generated them shown above. (b) Line profiles of the stripe patterns of the actual grayscale camera captures used for the reconstruction. The image intensity values along the red line marked in each figure are shown below it. Note that the ambient light level is not subtracted from the grayscale camera capture.

The gray code spatial encoding scheme requires both horizontal and vertical stripe patterns. We rotate the rigidly connected target stage and camera so that, equivalently, the stripes are rotated. It should be noted that many algorithms have been developed for 3D surface reconstruction where only stripes along one direction are required [28–31]. Using the HSP, the same spectral encoding scheme developed here can be combined with such algorithms in a similar way for simultaneous spatial and spectral encoding in a simple, efficient, and low-cost manner.

3.2 Spectral encoding and reconstruction

As a proof of concept of the novel hyperspectral 3D surface reconstruction scheme, we choose targets with surfaces that are sufficiently diffuse in every direction and assume that the reflectance at a point on the surface does not change with the angle of the illumination or reflected light rays, though in reality the reflectance can depend on these angles to varying degrees [32].

The image formation model for spectral reconstruction is described here. As shown in Fig. 6, the light coming from point $T$ on the target is focused by the camera lens and collected by pixel $P$ at the camera sensor. The intensity value measured by $P$ is denoted by $p$ and described by

$$p=\int_\lambda i(\lambda)\times r(\lambda)\times c(\lambda)\ d\lambda$$

Fig. 6. Schematic of the image formation model. Light reflected from point $T$ on the orange is focused at pixel $P$ in the camera capture. The pixel value of $P$ depends on the illumination energy density $i(\lambda )$, surface reflectance $r(\lambda )$, and camera spectral response $c(\lambda )$.

In Eq. (1), $i(\lambda )$ is the illumination energy density as a function of wavelength $\lambda$ at a point $T$ on the target, and $r(\lambda )$ is the reflectance of point $T$, defined as the ratio between the reflected light energy density and the illumination energy density at wavelength $\lambda$. $c(\lambda )$ is the camera spectral response at $\lambda$, defined as the readout value of a pixel in the raw camera capture when the light focused at this pixel is of wavelength $\lambda$ and has one unit of energy. The model assumes that $c(\lambda )$ does not depend on the intensity of the incoming light, and that the raw pixel value has a linear relationship with the intensity of the incoming light at all wavelengths. This is a reasonably accurate assumption for silicon detectors as long as the signal is neither saturated nor below the noise floor.

For 32-channel spectral reconstruction, the integral in Eq. (1) can be approximated by a discrete sum over the 32 spectral channels. The variables $i(\lambda )$, $r(\lambda )$, and $c(\lambda )$ can be approximated by a constant value within each spectral band, and we denote their discrete approximations by $I(\lambda )$, $R(\lambda )$, and $C(\lambda )$, respectively. Let $\lambda _1$, $\lambda _2$, …, $\lambda _{32}$ represent the central wavelengths of the spectral channels used in the experiment, which are 425 nm, 435 nm, …, 735 nm; then $I(\lambda )$, $R(\lambda )$, and $C(\lambda )$ are discrete functions taking values at $\lambda _i$, $i=1,2,\ldots ,32$. Then Eq. (1) is approximated as:

$$p=\sum_{i=1}^{32}I(\lambda_i)\times R(\lambda_i)\times C(\lambda_i)$$
By designing proper $I(\lambda _i)$, we can solve for $R(\lambda _i)$. For encoding, spatial complement patterns are used. The camera captures of the target under each pattern and its complement pattern are summed to form an equivalent global illumination. As such, $I(\lambda _i)$ represents a global illumination with energy $I(\lambda _i)$ at every pixel. A naïve method is to design the hyperspectral patterns to form 32 different global illuminations, each consisting of light from only one of the 32 spectral channels. This is called the spectral raster scan method. The spectral encoding scheme we designed here is based on spectral multiplexing, where the hyperspectral stripes in each pattern consist of 16 out of the 32 spectral channels. Multiplexing increases the SNR compared to the spectral raster scan because the detector measures the integrated signal from multiple channels rather than from a single channel. The spectral encoding scheme can be expressed in the general form
$$y=Ax$$
where $x$ is an $N\times 1$ signal vector to be reconstructed, $A$ is an $M\times N$ matrix called the sensing matrix, $M$ is the number of global hyperspectral illuminations, and $y$ is the measurement vector. For the spectral multiplexing method used in the experiment, $N = M = 32$. Let $y_j$ represent the $j$th entry of $y$; then $y_j$ is the pixel value $p$ described in Eq. (2) recorded under the $j$th global illumination. Let $L(\lambda _i),\ i=1,2,\ldots ,32$ represent the intensity of the $i$th spectral channel of the HSP. Then $I(\lambda _i)= a_{ji}\times L(\lambda _i)$, where $a_{ji} = 1$ if the $i$th channel in the $j$th global illumination is turned on and $a_{ji} = 0$ if it is turned off. In this notation, $a_{ji}$ is the entry in row $j$ and column $i$ of the matrix $A$. In the experiment, $A$ is a binary matrix of 0s and 1s, generated by replacing the value -1 with 0 in the $32\times 32$ Walsh-Hadamard matrix and then randomly permuting the matrix columns. Let $x_i$ represent the $i$th entry of $x$; then according to Eq. (2), $x_i = L(\lambda _i)\times R(\lambda _i)\times C(\lambda _i)$. $L(\lambda _i)$ and $C(\lambda _i)$ are parameters of the system that are acquired through experiments. Equation (3) becomes a complete set of 32 linear equations in the unknowns $R(\lambda _i),\ i=1,2,\ldots ,32$ for each image pixel, and $R(\lambda _i)$ can be directly solved.
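A compact numerical sketch of this per-pixel recovery follows (simulated values stand in for $y$, $L$, and $C$, which in the real experiment come from measurement and calibration):

```python
import numpy as np
from scipy.linalg import hadamard

N = 32
rng = np.random.default_rng(0)

# Sensing matrix as described: Walsh-Hadamard with -1 replaced by 0,
# followed by a random permutation of the columns.
A = ((hadamard(N) + 1) // 2)[:, rng.permutation(N)].astype(float)

# Simulated per-pixel quantities; L (channel intensities) and C (camera
# response) are calibration data in the real system.
L = np.ones(N)
C = np.ones(N)
R_true = rng.uniform(0.0, 1.0, N)      # unknown reflectance to recover
y = A @ (L * R_true * C)               # measurements under 32 illuminations

x = np.linalg.solve(A, y)              # x_i = L_i * R_i * C_i
R = x / (L * C)                        # recovered reflectance spectrum
assert np.allclose(R, R_true)
```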

For $128\times 128$ spatial resolution, 30 camera images composed of 15 pattern-complement pairs are needed for spatial reconstruction, including 1 pair of all-white and all-black patterns, 7 pairs of horizontal stripe patterns, and 7 pairs of vertical stripe patterns. By designing the spectrum according to the sensing matrix $A$, we embed 15 global hyperspectral illuminations in these 15 pairs. For the spectral multiplexing scheme, another 17 global illuminations are directly generated by the HSP. We point out here that it is straightforward to apply compressive sensing [25,26,33,34] in the spectral encoding using an appropriate sensing matrix, such as the randomly permuted matrix used here, so that spectral reconstruction can be performed using fewer than 32 global illuminations. The spectral multiplexing method is typically faster in the reconstruction step, whereas the compressive sensing method needs fewer encoding patterns and is beneficial when taking measurements is expensive, e.g., for video hyperspectral 3D imaging. The multiplexing method is used here because, as a proof of concept of the capabilities of the HSP, we use a static 3D target and are not concerned with stringent measurement time constraints.
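To illustrate the compressive option mentioned above (our sketch, not the paper's reconstruction code), a plain-numpy iterative shrinkage-thresholding (ISTA) solver can recover a sparse spectrum from $M < N$ illuminations; for simplicity the spectrum is assumed sparse in the channel basis itself, whereas a real reflectance would use an appropriate sparsifying basis:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (illustrative solver)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + step * A.T @ (y - A @ x)        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
M, N = 20, 32                                   # fewer measurements than channels
A = rng.integers(0, 2, (M, N)).astype(float)    # binary illumination codes
x_true = np.zeros(N)
x_true[[3, 11, 25]] = [0.9, 0.5, 0.7]           # toy sparse spectrum
x_hat = ista(A, A @ x_true, lam=0.005)
```

Recovery quality here depends on the regularization weight, the number of measurements, and the sensing matrix; the snippet is only meant to show where a compressive solver would slot into the pipeline.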

3.3 Results

The results of the 3D hyperspectral surface reconstruction with $128\times 128$ spatial points and 32 spectral channels using the imaging system based on the HSP are shown. Figure 7 and Fig. 9 demonstrate the reconstructed raw 3D surface point clouds viewed from different angles for the candy target and the ramp target, respectively. As can be seen, there are noisy points in the reconstructed point clouds. Some of the noisy points come from shadowed areas on the target, and some are a result of the particular reconstruction algorithm, whose robustness depends on multiple factors, including calibration accuracy, sharpness of the stripe patterns, image quality of the camera captures, etc. Figure 8 and Fig. 10 show the reconstructed reflectance spectra for a few example areas in each of the targets. For spectral encoding, we performed both the spectral multiplexing method and the spectral raster scan method. The spectra are averaged over the pixels in each area. As can be seen, for the 32 spectral channels in the experiment, both methods provide decent spectral reconstructions compared to the ground truth. When more spectral channels are used and the light intensity of each channel becomes very low compared to the sensitivity of the sensor, spectral multiplexing is expected to provide a much higher signal level and better SNR. Because a single grayscale camera is used as the sensor, the reconstructed 3D point cloud and the reconstructed reflectance spectra are naturally aligned, and no data fusion is needed.

Fig. 7. Left: picture of the target composed of a green candy, an orange candy, and a step with green and red papers. The other four figures show the reconstructed raw 3D surface point cloud viewed from different angles.

Fig. 8. Reconstructed reflectance spectra of areas 1, 2, 3, and 4 in the target photo in Fig. 7. Results for the spectral multiplexing method, the spectral raster scan method, and the ground-truth spectra are shown. Spectral values are averaged over the pixels within each area.

Fig. 9. Left: picture of the target of a ramp with red, green, and orange tapes attached. The other four figures show the reconstructed raw 3D surface point cloud viewed from different angles.

Fig. 10. Reconstructed reflectance spectra of areas 1, 2, and 3 in the target in Fig. 9. Results for the spectral multiplexing method, the spectral raster scan method, and the ground-truth spectra are shown. Spectral values are averaged over the pixels within each area.

4. Hyperspectral compressive structured illumination for spectrum-dependent recovery of volume density of participating medium

This section presents the second example application of the HSP. The experimental design adds the spectral dimension to the method developed in [5]. In our experiment, instead of using black and white stripe coding patterns to reconstruct all objects in the scene, we use hyperspectral patterns to selectively measure and reconstruct the volume density of only the object with a specific spectral response. This experiment provides another example of the unique capability of the HSP in conveniently combining hyperspectral imaging with a conventional structured illumination technique in a simple, efficient, and low-cost manner.

Conventional structured illumination approaches for recovering the 3D surface of opaque objects are based on a common assumption: each point in the camera image receives light reflected from a single surface point in the scene. The light transport model is vastly different in the case of a participating medium, such as translucent objects, smoke, clouds, and mixing fluids. In an image of a volume of a participating medium, each pixel receives scattered light from all points along the line of sight within the volume. Gu et al. [5] proposed a compressive structured light method for recovering the volume density of participating media using a projector and a camera. The projector projects a series of black and white coding patterns into the volume of the object, and the camera takes photos of the volume from a direction orthogonal to the projection, thus recording line integrals of the scattered light from the volume. In this way, the volume can be compressively encoded and computationally reconstructed.
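The per-pixel measurement model can be sketched in a few lines (a toy discretization in the spirit of [5]; the sizes and density profile below are made up for illustration):

```python
import numpy as np

# One camera line of sight, discretized into K voxels. Each of the M coded
# patterns assigns a binary weight to every voxel slab it crosses, and the
# camera pixel records the weighted line integral of the scattered light.
K, M = 32, 24
rng = np.random.default_rng(2)
codes = rng.integers(0, 2, (M, K)).astype(float)   # M binary stripe codes
rho = np.zeros(K)
rho[10:14] = 1.0                                    # toy density along the ray
y = codes @ rho                                     # M measurements per pixel
# Recovering rho from y with M < K is the compressive reconstruction step;
# a sparsity-promoting solver such as the ISTA sketch in Section 3.2 applies.
```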

In our experiment, we use two objects with different scattering spectra as the target. As shown in Fig. 11, the target has two objects placed close together: one object consists of two red translucent planes with the letter C carved on the front and back planes, and the other consists of two cyan translucent planes with the letter V carved on the front and back planes. The red object has a strong scattering response between 590 nm and 750 nm, while the cyan object has a strong scattering response between 390 nm and 590 nm.

Fig. 11. (a) Image of the target. (b) The target as seen by the camera used in the experiment under white illumination. (c) Scattering spectra of the red and cyan objects. (d) Image of the target under an example pattern of the first set of hyperspectral stripe patterns. (e) Spectrum of the first set of hyperspectral stripe patterns. (f) Image of the target under an example pattern of the second set of hyperspectral stripe patterns. (g) Spectrum of the second set of hyperspectral stripe patterns.

The experiment uses two sets of hyperspectral stripe patterns that have the same binary stripe coding scheme but different spectral content, as shown in Figs. 11(e) and 11(g). The first set contains wavelengths longer than 610 nm, under which the red object scatters strongly and the cyan object is almost invisible. The second set contains wavelengths shorter than 570 nm, under which the cyan object scatters strongly and the red object is almost invisible. The reconstruction from the first set contains the volume density of the red object only, and the reconstruction from the second set contains the cyan object only, as shown in Fig. 12. Twenty-four compressive measurements are used to reconstruct the target at a spatial resolution of $32\times 32\times 32$. In this experiment, we chose objects with relatively simple spectral responses to demonstrate how the HSP can be applied for spectrum-dependent encoding and recovery of the target. For objects having spectral responses consisting of multiple bands, the hyperspectral patterns can be designed accordingly to intentionally increase the signal from the desired object and decrease or totally suppress the signal from unwanted objects. This method can be very useful in applications such as searching for an object hidden behind or inside another object.

Fig. 12. 3D views of the reconstructed 3D volume density from (a) the first set of encoding patterns and (b) the second set of encoding patterns. The spatial resolution is $32\times 32\times 32$. Twenty-four compressive measurements are used.

5. Conclusion

This paper reported a hyperspectral stripe projector system capable of projecting two-dimensional patterns where each row of pixels can have an independently programmable, arbitrary spectrum. The HSP features simple design, high pattern frame rate, low cost, and a compact form factor. Two novel example applications of hyperspectral 3D imaging were demonstrated using the HSP through simultaneous 3D spatial and spectral encoding. The HSP opens up the possibility of numerous applications combining hyperspectral imaging with traditional structured illumination techniques in a simple, efficient, and low-cost manner. With appropriate optics and sensor, the HSP can operate in the SWIR regime and therefore enable SWIR hyperspectral 3D imaging using a SWIR camera, or even a single-pixel SWIR detector when combined with single-pixel imaging techniques [25,26]. Besides hyperspectral 3D imaging applications, the HSP may also inspire novel spectrum-dependent encoding schemes for structured illumination techniques. We encourage readers to combine hyperspectral imaging with structured illumination techniques in different fields using the HSP and to explore possible new applications.

Funding

National Science Foundation (CHE-1610453).

Disclosures

The authors declare no conflicts of interest.

References

1. J. Geng, “Structured-light 3d surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

2. K. L. Boyer and A. C. Kak, “Color-encoded structured light for rapid active ranging,” IEEE Trans. Pattern Anal. Mach. Intell. 9(1), 14–28 (1987).

3. J. Salvi, J. Pages, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recogn. 37(4), 827–849 (2004). [CrossRef]  

4. S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, vol. 1 (IEEE, 2005), pp. 420–427.

5. J. Gu, S. Nayar, E. Grinspun, P. Belhumeur, and R. Ramamoorthi, “Compressive structured light for recovering inhomogeneous participating media,” IEEE Trans. Pattern Anal. Mach. Intell. (2012).

6. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

7. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009). [CrossRef]  

8. C. Fuchs, T. Chen, M. Goesele, H. Theisel, and H.-P. Seidel, “Density estimation for dynamic volumes,” Comput. & Graph. 31(2), 205–211 (2007). [CrossRef]  

9. R. Willett, M. Duarte, and R. Baraniuk, “Sparsity and structure in hyperspectral imaging: Sensing, reconstruction, and target detection,” IEEE Signal Process. Mag. 31(1), 116–126 (2014). [CrossRef]  

10. A. F. Goetz, “Three decades of hyperspectral remote sensing of the earth: A personal view,” Remote. Sens. Environ. 113, S5–S16 (2009). [CrossRef]  

11. G. A. Roth, S. Tahiliani, N. M. Neu-Baker, and S. A. Brenner, “Hyperspectral microscopy as an analytical tool for nanomaterials,” Wiley Interdiscip. Rev.: Nanomed. Nanobiotechnol. 7(4), 565–579 (2015). [CrossRef]  

12. X. Dong, M. Jakobi, S. Wang, M. Köhler, X. Zhang, and A. Koch, “A review of hyperspectral imaging for nanoscale materials research,” Appl. Spectrosc. Rev. 54(4), 285–305 (2019). [CrossRef]  

13. G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19(1), 010901 (2014). [CrossRef]  

14. L. M. Dale, A. Thewis, C. Boudry, I. Rotar, P. Dardenne, V. Baeten, and J. A. F. Pierna, “Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review,” Appl. Spectrosc. Rev. 48(2), 142–159 (2013). [CrossRef]  

15. F. Daniel, A. Mounier, J. Pérez-Arantegui, C. Pardos, N. Prieto-Taboada, S. F.-O. de Vallejuelo, and K. Castro, “Hyperspectral imaging applied to the analysis of goya paintings in the museum of zaragoza (spain),” Microchem. J. 126, 113–120 (2016). [CrossRef]  

16. M. Kim, T. Harvey, D. Kittle, H. Rushmeier, J. Dorsey, R. Prum, and D. Brady, “3d imaging spectroscopy for measuring hyperspectral patterns on solid objects,” ACM Trans. Graph., 1–11 (2012).

17. R. Sitnik, J. F. Krzeslowski, and G. Maczkowski, “Archiving shape and appearance of cultural heritage objects using structured light projection and multispectral imaging,” Opt. Eng. 51(2), 021115 (2012). [CrossRef]  

18. A. Zia, J. Liang, J. Zhou, and Y. Gao, “3d reconstruction from hyperspectral images,” in 2015 IEEE Winter Conference on Applications of Computer Vision, (IEEE, 2015).

19. J. Behmann, A. Mahlein, S. Paulus, J. Dupuis, H. Kuhlmann, E. Oerke, and L. Plümer, “Generation and application of hyperspectral 3d plant models: methods and challenges,” Mach. Vis. Appl. 27(5), 611–624 (2016). [CrossRef]  

20. C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3d multispectral vision system based on filter wheel cameras,” in 2016 IEEE International Conference on Imaging Systems and Techniques (IST), (IEEE, 2016).

21. P. Liu, J. Huang, S. Zhang, and R. Xu, “Multiview hyperspectral topography of tissue structural and functional characteristics,” J. Biomed. Opt. 21(1), 016012 (2016). [CrossRef]  

22. J. Wu, B. Xiong, X. Lin, J. He, J. Suo, and Q. Dai, “Snapshot hyperspectral volumetric microscopy,” Sci. Rep. 6(1), 24624 (2016). [CrossRef]  

23. S. Heist, C. Zhang, K. Reichwald, P. Kühmstedt, G. Notni, and A. Tünnermann, “5d hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light,” Opt. Express 26(18), 23366–23379 (2018). [CrossRef]  

24. J. P. Rice, S. W. Brown, J. E. Neira, and R. R. Bousquet, “A hyperspectral image projector for hyperspectral imagers,” Proc. SPIE 6565, 65650C (2007). [CrossRef]  

25. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

26. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52(10), D46–D54 (2013). [CrossRef]  

27. D. Moreno and G. Taubin, “Simple, accurate, and robust projector-camera calibration,” in 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization, & Transmission, (IEEE, 2012).

28. R. Daley and L. Hassebrook, “Channel capacity model of binary encoded structured light-stripe illumination,” Appl. Opt. 37(17), 3689–3696 (1998). [CrossRef]  

29. C. Beumier and M. Acheroy, “3d facial surface acquisition by structured light,” in International Workshop on Synthetic-Natural Hybrid Coding and Three Dimensional Imaging, (Citeseer, 1999).

30. M. Rodrigues, M. Kormann, C. Schuhler, and P. Tomek, “Structured light techniques for 3d surface reconstruction in robotic tasks,” in Proceedings of the 8th International Conference on Computer Recognition Systems CORES 2013, (Springer, 2013).

31. M. Rodrigues and A. Robinson, “Real-time 3d face recognition using line projection and mesh sampling,” in Eurographics Workshop on 3D Object Retrieval 2011, (Eurographics Association, 2011).

32. M. Hutchins, A. Topping, C. Anderson, F. Olive, P. van Nijnatten, P. Polato, A. Roos, and M. Rubin, “Measurement and prediction of angle-dependent optical properties of coated glass products: results of an inter-laboratory comparison of spectral transmittance and reflectance,” Thin Solid Films 392(2), 269–275 (2001). [CrossRef]  

33. R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Process. Mag. 24(4), 118–121 (2007). [CrossRef]  

34. M. Fornasier and H. Rauhut, “Compressive sensing,” in Handbook of Mathematical Methods in Imaging (Springer, 2015), pp. 187–229.
