Optica Publishing Group

Low-cost, high-speed multispectral imager via spatiotemporal modulation based on a color camera

Open Access

Abstract

Spectral imaging is a powerful tool in industrial processes, medical imaging, and fundamental scientific research. However, for the commonly used spatial/spectral-scanning spectral imagers, slow response times pose a major challenge for deployment in dynamic scenes. In this paper, we propose a spatiotemporal modulation concept and build a simple, low-cost spectral imager by combining a liquid crystal (LC) cell with a commercial color camera. Through the synergy of temporal modulation by the LC material and spatial modulation by the Bayer filter in the color camera, high-quality multispectral imaging is demonstrated at a high rate of 8 Hz, far exceeding comparable scanning systems. Experimental results show that even with three tuning states of the LC material, optical signals with a 10-nm bandwidth can be resolved in the range between 410 and 700 nm, overcoming the tradeoff between spectral resolution and time resolution. As a proof of concept, we present its potential for metamerism recognition, showing superiority over traditional color cameras with more spectral details. Considering its low cost, miniaturization, and potential for monolithic integration on color sensors, this simple approach may bring spectral imaging technology closer to the consumer market and even to ubiquitous smartphones for health care, food inspection, and other applications.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Spectral cameras can capture more information about a scene than traditional color cameras, making them a powerful tool in fields such as food safety [1–3], healthcare [4–8], and machine vision [5,9–11]. Spatial-scanning, spectral-scanning, and snapshot spectral cameras are the three mainstream devices in scientific research and industrial applications. Spatial-scanning systems rely on mechanical moving components and gratings, resulting in large volume and slow imaging speed [12,13]. Spectral-scanning systems utilize tunable filters that switch from one narrowband to another [11,13–15]. However, tunable filters are typically expensive, and a tradeoff between luminous flux and spectral resolution is inevitable. Snapshot spectral imaging sacrifices spatial resolution for acquisition speed, making it unsuitable for detailed identification [16–20]. Although coded apertures designed under the compressed-sensing principle [21] alleviate this, the competition between spatial resolution and spectral resolution still exists [22–25].

Recently, computational spectral imaging utilizing quantum dots [26,27], nanowires [28,29], Fabry-Perot cavities [30–32], and metasurfaces [30,33–35] has been emerging. The optical response of such systems is varied in different ways, and the spectrum of the scene is reconstructed through post-processing algorithms. Because the spectral responses are usually broadband combinations, computational spectral imaging typically achieves high light flux and signal-to-noise ratio (SNR) [36]. However, due to high cost and large volume, or demanding optical paths and manufacturing processes, spectral cameras remain far less widespread than color cameras [31], which limits their development in many commercial and end-user scenarios. Although many computational algorithms have been proposed to reconstruct spectral information from color images, prior knowledge and qualified image datasets are a necessity. In previous research, the response of the camera's filters [37] and artificial intelligence algorithms [38,39] have been important in resolving the spectrum from a single color camera, but the physical mechanism is still unclear. Other methods, such as adding prisms to a color camera [40] or using multiple cameras with different responsivities [41], have also been reported, but these approaches are usually not portable enough. Cost-effective and portable spectral camera hardware based on color cameras could not only promote spectral imaging for consumer electronics but also make large amounts of high-quality spectral image datasets available to the computer science community, boosting the development of computational spectral imaging.

In this paper, we propose a simple and low-cost strategy to build multispectral hardware based on a commercial color camera. The core component of the system is a spectral modulator composed of a nematic liquid crystal (LC) cell and two orthogonal polarizers, set between a commercial color camera and the imaging lenses. Owing to the birefringence of the LC material and the polarization interference effect, the LC cell can modulate the spectral response of the input light under various applied voltages [14,42,43]. Combined with the spatio-spectral mapping of the Bayer filters in the color camera, the proposed spectral camera can obtain high-quality spectral images at a speed of around 8 Hz. Experimental results show that the proposed hardware achieves a spectral resolution of around 10 nm across the whole visible range. As a demonstration, we evaluate its application to object recognition and verification, showing superiority over traditional color cameras with more spectral details. Spectral video capture is also conducted with this device, showing high repeatability over a long period and validating that it is reliable for practical usage. This simple, cost-effective hardware also shows advantages over expensive hyperspectral cameras, such as high optical throughput and high SNR, enabling new applications such as portable point-of-care health management and food inspection.

2. Methods and system configuration

The basic structure of our system is shown in Fig. 1. It consists of an LC spectral modulator laid on a color camera. In the experiment, due to the short working distance and mechanical structure of commercial color cameras, we designed a relay structure to place the LC modulator between the color camera and an imaging lens. In this way, the image formed by the imaging lens first passes through the LC modulator and is then relayed to the color camera. The spectrum of a scene is thus temporally modulated by the LC, spatially modulated by the Bayer filters of the color camera, and finally converted into an electrical signal by the camera's photoelectric sensor. Aided by the Bayer filter array, high-speed spectral imaging is obtained with fewer acquisitions than the pure spectral-scanning method. The capture time and reconstruction accuracy can also be adapted by choosing different voltage combinations applied to the LC modulator, leading to a more flexible configuration for spectral imaging.


Fig. 1. System structure diagram. a, Schematic of the present spatiotemporal modulation system. From top to bottom are the polarizer, liquid crystal cell, analyzer, and color camera sensor. The yellow arrow represents the direction of incident light. b, The process of data collection and processing. R, G and B represent the pixel grayscale values obtained by camera sensors after three Bayer color filters respectively and their subscripts represent various filtering channels under different LC states. c, The experimental system utilizing a relay lens for imaging.


The output signal of a single pixel of the camera can be represented by Eq. (1):

$$I_{im} = \int_{\lambda_1}^{\lambda_2} R(\lambda)\, T_m(\lambda, V_m)\, D_i(\lambda)\, L(\lambda)\, d\lambda, \quad m = 1, 2, \ldots, M$$
where $I_{im}$ is the output electrical signal of a single pixel, $i = R, G, B$ denotes the red, green, and blue Bayer color filters, and $m$ denotes the $m$th driving voltage state on the LC cell. $\lambda_1$ and $\lambda_2$ are the lower and upper bounds of the working wavelength. $R(\lambda)$ is the spectrum of the scene. $T_m(\lambda, V_m)$ is the transmittance of the LC modulator at the applied voltage state $V_m$, and $M$ is the total number of driving voltages on the LC modulator. $D_i(\lambda)$ is the effective efficiency of the detector behind each of the three types of Bayer filters, i.e., the product of the Bayer-filter transmittance and the quantum efficiency of the camera sensor. $L(\lambda)$ is the fixed transmittance of the imaging system. $T_m(\lambda)$, $D_i(\lambda)$, and $L(\lambda)$ together form a sensing matrix $A$ and can be pre-calibrated. By discretizing the wavelength into $N$ parts, the electrical signal intensity of the camera can be expressed as Eq. (2):
$$\boldsymbol{I}_{im} \simeq \sum_{n=1}^{N} \boldsymbol{A}_{im}(\lambda_n)\, \boldsymbol{R}(\lambda_n), \quad m = 1, 2, \ldots, M$$

After discretizing the wavelength into $N$ parts, the sensing matrix $A$ is expressed as $\boldsymbol{A}_{im}(\lambda_n)$ and the spectrum of the scene as $\boldsymbol{R}(\lambda_n)$. The input spectral dimension $N$ is usually greater than the number of LC modulation states, so solving for $\boldsymbol{R}$ is an underdetermined problem. It can be addressed by minimizing the $l_2$ norm of the residual, with the cost function shown in Eq. (3):

$$\mathop{\arg\min}\limits_{\boldsymbol{R}_N} \|\boldsymbol{I}_M - \boldsymbol{A}_{MN}\, \boldsymbol{R}_N\|_2^2, \quad \mathrm{s.t.}\ 0 < \boldsymbol{R}_N < 1$$
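The discretized forward model and the constrained least-squares formulation above can be sketched in a few lines of NumPy. All array sizes and the random sensing matrix here are illustrative stand-ins; in the real system, $A$ comes from pre-calibrating $T_m$, $D_i$, and $L$.

```python
import numpy as np

# Illustrative discretization: N = 30 bands (410-700 nm in 10-nm steps),
# M = 3 LC voltage states, i = R, G, B Bayer channels -> 9 measurements.
rng = np.random.default_rng(0)
N, M = 30, 3
wavelengths = np.linspace(410, 700, N)

# Stand-in sensing matrix A (one row per channel/state pair, one column
# per band); the real A is pre-calibrated from T_m, D_i, and L.
A = rng.random((3 * M, N))

# Assumed ground-truth scene spectrum R, and the camera readouts of Eq. (2).
R_true = np.exp(-0.5 * ((wavelengths - 550) / 20) ** 2)
I = A @ R_true

# Eq. (3): least-squares estimate of R, clipped to the physical range [0, 1].
# With 9 equations and 30 unknowns the system is underdetermined, which is
# why the paper adds the sparsity prior of Eq. (4).
R_hat = np.clip(np.linalg.lstsq(A, I, rcond=None)[0], 0, 1)
print(R_hat.shape)
```

This recovers only the minimum-norm solution; the sparse dictionary formulation of Eq. (4) below is what makes the reconstruction well-posed.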

In this paper, a compressed-sensing algorithm is used to improve the reconstruction accuracy of the spectrum. The spectrum $\boldsymbol{R}_N$ can be sparsely represented as $\boldsymbol{\theta}_K$ on the basis $\boldsymbol{\Phi}_{NK}$, i.e., $\boldsymbol{R}_N = \boldsymbol{\Phi}_{NK}\, \boldsymbol{\theta}_K$ (K < N). Letting $\boldsymbol{\Psi}_{MK} = \boldsymbol{A}_{MN}\, \boldsymbol{\Phi}_{NK}$, the cost function can be expressed as Eq. (4):

$$\mathop{\arg\min}\limits_{\boldsymbol{\theta}_K} \|\boldsymbol{I}_M - \boldsymbol{\Psi}_{MK}\, \boldsymbol{\theta}_K\|_2^2 + w|\boldsymbol{\theta}_K|$$

In Eq. (4), the second term $w|\boldsymbol{\theta}_K|$ is an L1 regularization term, with $w$ the weight of the regularization. Solving Eq. (4) therefore yields a sparse solution. The key to recovering high-precision spectra is that the column rank of the sensing matrix $A$ is large enough, i.e., that the correlation between its channels is small enough. We use the Pearson correlation coefficient (Eq. (5)) to characterize the correlation of different voltage combinations in the LC modulator. The correlation coefficient between channels $i$ and $j$ can be expressed as [31]:

$$r_{ij} = \left| \frac{\sum_{m=1}^{M} (i_m - \bar{i})(j_m - \bar{j})}{\sqrt{\left(\sum_{m=1}^{M} (i_m - \bar{i})^2\right)\left(\sum_{m=1}^{M} (j_m - \bar{j})^2\right)}} \right|$$
where $i_m$ ($j_m$) is the transmittance at the $m$th applied voltage state and $\bar{i}$ ($\bar{j}$) is the average transmittance. A value of one indicates complete correlation, and a value of zero indicates no correlation between filtering channels under the combination of different voltage states.
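Equation (5) can be evaluated directly for a calibrated set of transmittance curves. The sketch below uses a random stand-in transmittance matrix (rows: voltage states, columns: filtering channels); only the shape conventions are taken from the text.

```python
import numpy as np

# Stand-in transmittance table: M voltage states x N filtering channels.
rng = np.random.default_rng(1)
M, N = 5, 8
T = rng.random((M, N))

def channel_correlation(T):
    """|Pearson r| between every pair of channels across the M states (Eq. 5)."""
    Tc = T - T.mean(axis=0)               # subtract each channel's mean
    cov = Tc.T @ Tc                       # unnormalized covariance, N x N
    norm = np.sqrt(np.diag(cov))          # per-channel standard-deviation scale
    return np.abs(cov / np.outer(norm, norm))

r = channel_correlation(T)
# Each diagonal entry is 1: a channel is fully correlated with itself.
print(np.allclose(np.diag(r), 1.0))
```

Averaging the lower triangle of `r` gives the scalar figure of merit the paper reports for configurations with and without Bayer filters (Fig. 2(b)).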

In this paper, the over-complete dictionary $\boldsymbol{\Phi}_{NK}$ is trained by the K-SVD algorithm [44] using spectral image datasets [45,46]. A simple gradient descent algorithm is then used to solve Eq. (4), and $\boldsymbol{R}_N$ is reconstructed from the solution $\boldsymbol{\theta}_K$ together with its basis $\boldsymbol{\Phi}_{NK}$.
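As a hedged illustration of this solving step, the sketch below minimizes Eq. (4) with the iterative soft-thresholding (ISTA) variant of gradient descent, which handles the non-smooth L1 term. The paper's exact solver and its K-SVD-trained dictionary are not reproduced here, so $\Psi$ and the sparse ground truth are random stand-ins.

```python
import numpy as np

# Stand-in problem: M = 9 measurements, K = 40 dictionary atoms,
# a 3-sparse coefficient vector theta as ground truth.
rng = np.random.default_rng(2)
M, K = 9, 40
Psi = rng.standard_normal((M, K))       # stands in for Psi = A @ Phi
theta_true = np.zeros(K)
theta_true[[3, 17, 28]] = [0.8, -0.5, 1.2]
I = Psi @ theta_true

def ista(Psi, I, w=0.01, n_iter=500):
    """Minimize ||I - Psi theta||_2^2 + w*|theta|_1 by proximal gradient."""
    L = np.linalg.norm(Psi, 2) ** 2      # Lipschitz constant of the data term
    theta = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        grad = Psi.T @ (Psi @ theta - I)                 # gradient step ...
        z = theta - grad / L
        theta = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # ... then shrink
    return theta

theta_hat = ista(Psi, I)
# The spectrum is then recovered as R_N = Phi @ theta_hat.
```

The soft-thresholding step is what produces exact zeros in $\theta_K$, matching the sparsity assumption behind Eq. (4).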

3. Simulation and system performance

The transmittance of a nematic LC cell between two orthogonal polarizers can be approximated as $\cos^2(\delta/2)$, where $\delta$ is the phase retardation introduced by the liquid crystal. Figure 2(a) shows the simulated transmissive response of a bare LC cell and the responses after the three Bayer filters. The simulation details can be found in Supplement 1 Section 3. In the simulation, the thickness $d$ of the LC cell is 4 µm, and the refractive indices of the LC are adopted from Ref. [47]. We define $\Delta n = n_e - n_o$ at each wavelength. Since $\delta = 2\pi n_{V,\lambda} d/\lambda$, all possible values of $\cos^2(\delta/2)$ can be obtained as long as $n_{V,\lambda}$ changes from $\Delta n$ to $\Delta n/2$ in the visible band, so that few voltage states suffice for excellent performance in practice. Compared with the bare LC cell [48], the introduction of Bayer filters yields a smaller correlation coefficient and better performance (Fig. 2(b)). To evaluate its performance for multispectral imaging, we simulate the reconstruction of spectral images using hyperspectral datasets [45], selecting different numbers of LC states in the range from $\Delta n$ to $\Delta n/2$. As shown in Fig. 2(c), a large peak signal-to-noise ratio (PSNR) of over 26 dB is obtained even with only three states, enabling a high imaging rate; more states yield a higher PSNR at the cost of speed. This flexible configuration allows the device to meet the requirements of either dynamic or high-accuracy scenes.
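The retardation model used in this simulation can be sketched as follows; the cell thickness matches the text, while the constant effective birefringence value is an assumption standing in for the measured, dispersive E7 indices of Ref. [47].

```python
import numpy as np

# LC spectral modulator model from the text:
# T(lambda) = cos^2(delta / 2), with delta = 2*pi * n_{V,lambda} * d / lambda.
d = 4.0e-6                                    # cell thickness: 4 um (from text)
wavelengths = np.linspace(410e-9, 700e-9, 30)  # visible band, in meters

def transmittance(n_eff, lam, d=d):
    """Transmittance for effective birefringence n_eff at wavelengths lam."""
    delta = 2 * np.pi * n_eff * d / lam       # phase retardation
    return np.cos(delta / 2) ** 2

# Sweep the effective birefringence from dn down to dn/2 (the tuning range
# described in the text); dn = 0.22 is an assumed magnitude, not the
# dispersive E7 data used in the paper.
dn = 0.22
for n_eff in np.linspace(dn, dn / 2, 3):
    T = transmittance(n_eff, wavelengths)
    print(f"n_eff = {n_eff:.3f}, mean transmittance = {T.mean():.2f}")
```

Multiplying each such curve by a Bayer-filter response $D_i(\lambda)$ yields the channel responses whose mutual correlation Fig. 2(b) quantifies.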


Fig. 2. Simulation performance for the proposed configuration. a, The transmittance of a bare LC spectral modulator and the response of the modulator combined with different Bayer filters, for different birefringence values $n_{V,\lambda}$ of the LC. $\Delta n$ is equal to the absolute value of $n_e - n_o$ at each wavelength. The black dashed line marks the threshold voltage of a 4 µm-thick LC cell. E7 liquid crystal is used in this paper. b, Correlation coefficient matrix of the system response without Bayer filters (left, average value of the lower-triangular correlation coefficients is 0.5835) and with Bayer filters (right, average value is 0.4302). c, Ground truth (GT) image [45] and images reconstructed with different total numbers of modulator states. The ground truth and reconstructed spectral images are rendered as synthetic color images. Even with three states, the reconstructed images have a large PSNR, showing high reconstruction accuracy.


4. Experimental results

In Fig. 3(a), we show the measured transmissive response of our system under different driving voltages; the measurement process is described in Supplement 1 Section 6. The measured transmittance is in good agreement with the simulations (Fig. 2(a)). Because the response changes most dramatically in the range from 2 V to 4 V, spectrum reconstruction in this paper is performed with filtering states selected in this range (Fig. 3(a)); three typical filtering states are shown in Fig. 3(b). The field of view (FOV) of the system is characterized by measuring the output intensity distribution of the color camera with different input signals. The intensity maps are quite uniform for different input wavelengths, as shown in Fig. 3(c), even in bands with low light flux, ensuring that the spectrum can be reconstructed using a common model at different pixel positions. With the 75-mm focal length of the relay lens, the FOV of the system is calculated to be 18.7 degrees.


Fig. 3. Performance of the system. a, Transmittance of the system with different color Bayer filters. The blue dashed box region represents the range of applied voltage. The thickness of the LC cell is 4 µm. b, System transmittance at three voltage states. Different colored lines represent different Bayer filters. c, Intensity maps with different monochromatic input lights. Three types of Bayer filters are displayed separately. X and Y mean the position of the camera sensor.


To demonstrate its ability for multispectral imaging, we first measure several narrowband inputs with our system; the experimental details are given in Supplement 1 Section 7. Three driving voltages of 2.5 V, 3.2 V, and 3.7 V are applied and used for spectrum reconstruction. The pristine images (ground truth) of monochromatic light emitted from an integrating sphere and the reconstructed images are shown in Figs. 4(a) and 4(b), respectively; there is no color difference between the reconstruction and the ground truth. Figure 4(c) shows the spectra measured by a commercial spectrometer together with the recovered spectra. Although the bandwidth of the reconstructed spectra is broadened at some wavelengths, the peak positions are quite accurate. The resolution of the device is also evaluated with a signal containing two separate peaks 20 nm apart. As seen in Fig. 4(d), the two peaks are distinguished clearly, demonstrating a resolution of around 20 nm. More experimental details can be found in Supplement 1 Section 7. Due to the slow transition of the LC cell, the system must wait 40 ms after applying a voltage before the camera can capture the scene. Since three different voltages are sufficient for high accuracy, the acquisition time for a spectral image is around 120 ms, corresponding to a capture rate of around 8 Hz, several times faster than traditional spectral cameras.


Fig. 4. Spectral image reconstruction of monochromatic light sources. a, Ground truth images of monochromatic light source emitted from an integrating sphere. b, Reconstructed images of monochromatic light source emitted from an integrating sphere. The bandwidth of monochromatic light is 10 nm. All spectral images are rendered as synthetic color images. c, Ground truth and reconstructed spectra of monochromatic light source emitted from an integrating sphere. The dashed line represents ground truth spectra and the circle solid line represents reconstructed spectra. The interval between each channel is 10 nm. Each monochromatic spectrum is represented separately and the wavelength range of each channel is 60 nm on the canvas for a clear demonstration. d, Reconstruction on a spectrum with two peaks. Left: in the green band, peaks are at 550 nm and 570 nm. Right: in the red band, peaks are at 660 nm and 680 nm. All peaks have a bandwidth of 10 nm. Ground truth was measured by a commercial spectrometer (Ocean Optics, QE PRO).


In Fig. 5, two colorful scenes are measured by our spectral camera at voltage states of 2.5 V, 3.2 V, and 3.7 V. The reconstructed images (Fig. 5(b) and Fig. 5(e)) show no apparent difference from the original images (Fig. 5(a) and Fig. 5(d)) and even look better. The reconstructed spectra also fit well with the ground truth, with an average root mean square error (RMSE) of less than 0.11. The spectral camera is also used to verify authenticity. In Fig. 5(d), we placed several real fruits (marked with circles) into a pile of fake fruits. From the color images alone, one can hardly distinguish real from fake fruits. However, our spectral camera, upgraded from a color camera, successfully distinguished their differences in the spectral domain: the real and fake apples, bananas, and mangosteens have quite different reflection features in the green, red, and blue bands (the raw images are shown in Fig. 5(f)), as confirmed by a commercial spectrometer.


Fig. 5. Spectral image reconstruction of scenes. a, Colorchecker captured by a color camera. b, The reconstructed image of the colorchecker rendered as a synthetic color image. c, Spectra of some color blocks in the colorchecker. d, Fruits captured by a color camera. e, The reconstructed spectral image of the fruits rendered as a synthetic color image. f, Raw images at the three voltage states used to calculate the spectrum. g, Spectra of real and fake apples, bananas, and mangosteens (left to right).


Finally, we confirmed the potential application of our spectral camera to video capture. The spectral camera continuously collected data for eight seconds; more details can be seen in Supplement 1 Section 7. After subsequent data processing, we reconstructed the spectral images of the scene over those eight seconds. Figure 6(a) shows eight frames at one-second intervals, with the spectra of all pixels reconstructed at all times. The spectra of four points in these images are displayed to show the spectral accuracy and stability of our spectral camera. The spectra marked with hexagonal and circular points show almost no change, demonstrating that spectra can be stably reconstructed over a long period (Fig. 6(b)(i)-(ii)). At the star and square points, the scene was interrupted during the eight seconds; the spectrum changed only during the interruption and then returned to its initial state, as shown in Fig. 6(b)(iii)-(iv). To capture the video in Fig. 6, the applied voltage varied from 2 V to 4 V over five voltage states for better reconstruction. In this way, the acquisition time for a spectral image is around 250 ms, still several times faster than traditional spatial-scanning cameras.


Fig. 6. High-rate spectral image reconstruction of dynamic scenes. a, Reconstructed images at different times. b, Reconstructed spectra of four points at different times. (i)-(ii) Spectra for stationary objects and (iii)-(iv) for dynamic objects shown in b.


5. Conclusion and discussion

In summary, we demonstrate a low-cost multispectral imager by upgrading a color camera with a simple off-the-shelf liquid crystal cell. In this way, the correlation of the response matrix for computational spectral reconstruction is greatly reduced compared with a bare liquid crystal cell. Moreover, by combining the LC cell with the existing Bayer filter of color cameras, the sampling time for spectral imaging is reduced to one-third in a hybrid spatiotemporal modulation fashion. This spatiotemporal modulation greatly relaxes the tradeoff between spectral and time resolution in a spectrum-scanning system. With a pre-trained spatial-spectral dictionary, spectral images can be reconstructed with a spectral accuracy of 10 nm in the range of 410-700 nm from only three measurements based on compressive sensing. As a proof of concept, the proposed spectral camera identifies metamerism well and obtains high-quality spectral images at a high rate of 8 spectral frames per second, several times faster than traditional spectral cameras.

It is noted that, due to the mechanical structure of the commercial color camera, we did not integrate the LC cell on the sensor in this paper. This enlarges the volume of our spectral imaging system, which requires an extra relay lens for re-imaging. In future work, the LC cell could be monolithically integrated into a color sensor during fabrication, yielding a dynamic color sensor with reconfigurable responses for various functionalities. Additionally, because a basic iterative algorithm is used, the spectral reconstruction time is relatively long: it takes 5 minutes to reconstruct a million-pixel image, far from keeping up with the speed of data collection. More advanced algorithms or artificial intelligence may compensate for this deficiency. The spatial periodicity of the Bayer filter could also be exploited to achieve lossless reconstruction of spatial resolution in the spatial modulation, rather than sacrificing spatial resolution to a quarter of the full size of the raw data. Furthermore, pixelating or speeding up the liquid crystal is a promising route to a spectral camera with higher precision and speed. This low-priced dynamic color sensor can be employed widely, not only for machine vision but also for wearable and portable devices like smartwatches and smartphones, enabling opportunities for advanced applications like point-of-care testing and food quality analysis.

Funding

National Key Research and Development Program of China (2022YFC2010003, 2022YFC3601000); “Pioneer” and “Leading Goose” R&D Program of Zhejiang (2022C03051, 2023C03083, 2023C03135); Key Research and Development Program of Zhejiang Province (2019C03089); National Natural Science Foundation of China (12004332, 62105284).

Acknowledgments

T. B. G. and Z. J. L. proposed the original idea. T. B. G. and S. L. H. supervised the project. X. C, Z. Z. and J. H. T. helped fabricate the sample and measure the performance. We thank the core facilities and cleanroom provided by the Center for Optical and Electromagnetic Research in the College of Optical Science and Engineering, Zhejiang University. The authors are grateful to Dr. Julian Evans, and Mr. Xiao Wu of Zhejiang University for valuable discussions.

Disclosures

A patent related to this work has been submitted by Z. J. L., T. B. G., Z. Z. and S. L. H. The authors declare no competing financial interests.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. Lu, W. Saeys, M. Kim, et al., “Hyperspectral imaging technology for quality and safety evaluation of horticultural products: A review and celebration of the past 20-year progress,” Postharvest Biol. Technol. 170, 111318 (2020). [CrossRef]  

2. P. M. Mehl, Y.-R. Chen, M. S. Kim, et al., “Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations,” J. Food Eng. 61(1), 67–81 (2004). [CrossRef]  

3. J. Qin, K. Chao, M. S. Kim, et al., “Hyperspectral and multispectral imaging for evaluating food safety and quality,” J. Food Eng. 118(2), 157–171 (2013). [CrossRef]  

4. C. P. Bacon, T. Mattley, and R. DeFrece, “Miniature spectroscopic instrumentation: Applications to biology and chemistry,” Rev. Sci. Instrum. 75(1), 1–16 (2004). [CrossRef]  

5. F. Cutrale, V. Trivedi, L. A. Trinh, et al., “Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging,” Nat. Methods 14(2), 149–152 (2017). [CrossRef]  

6. X. Hadoux, F. Hui, J. K. H. Lim, et al., “Non-invasive in vivo hyperspectral imaging of the retina for potential biomarker use in Alzheimer's disease,” Nat. Commun. 10(1), 4227 (2019). [CrossRef]  

7. C. Jiao, Z. Lin, Y. Xu, et al., “Noninvasive Raman Imaging for Monitoring Mitochondrial Redox State in Septic Rats,” Prog. Electromagn. Res. 175, 149–157 (2022). [CrossRef]  

8. Y. X. Xing, G. X. Wang, T. Zhang, et al., “VOC Detections with Optical Spectroscopy,” Prog. Electromagn. Res. 173, 71–92 (2022). [CrossRef]  

9. W. Jahr, B. Schmid, C. Schmied, et al., “Hyperspectral light sheet microscopy,” Nat. Commun. 6(1), 7990 (2015). [CrossRef]  

10. Q. Pian, R. Yao, N. Sinsuebphon, et al., “Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging,” Nat. Photonics 11(7), 411–414 (2017). [CrossRef]  

11. J. Luo, Z. J. Lin, Y. X. Xing, et al., “Portable 4D Snapshot Hyperspectral Imager for Fastspectral and Surface Morphology Measurements,” Prog. Electromagn. Res. 173, 25–36 (2022). [CrossRef]  

12. Z. Xu, Y. Jiang, J. Ji, et al., “Airborne hyperspectral sensor systems,” IEEE Aerosp. Electron. Syst. Mag. 9(10), 26–33 (1994). [CrossRef]  

13. Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: principles and applications,” Cytometry, Part A 69A, 735–747 (2006). [CrossRef]  

14. L. Duempelmann, B. Gallinet, and L. Novotny, “Multispectral Imaging with Tunable Plasmonic Filters,” ACS Photonics 4(2), 236–241 (2017). [CrossRef]  

15. W. Gunning, J. Pasko, and J. Tracy, “A liquid-crystal tunable spectral filter - visible and infrared operation,” Proc. SPIE 268, 190–194 (1981). [CrossRef]  

16. C. V. Correa, H. Arguello, and G. R. Arce, “Snapshot colored compressive spectral imager,” J. Opt. Soc. Am. A 32(10), 1754 (2015). [CrossRef]  

17. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng 52(9), 090901 (2013). [CrossRef]  

18. X. Yuan, T. H. Tsai, R. Y. Zhu, et al., “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9(6), 964–976 (2015). [CrossRef]  

19. T. H. Tsai, X. Yuan, and D. J. Brady, “Spatial light modulator based color polarization imaging,” Opt. Express 23(9), 11912–11926 (2015). [CrossRef]  

20. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017). [CrossRef]  

21. E. J. Candes and M. B. Wakin, “An Introduction To Compressive Sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

22. X. Cao, T. Yue, X. Lin, et al., “Computational Snapshot Multispectral Cameras: Toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33(5), 95–108 (2016). [CrossRef]  

23. N. Tack, A. Lambrechts, P. Soussan, et al., “A Compact, High-speed and Low-cost Hyperspectral Imager,” Silicon Photonics Vii 8266, 82660Q (2012). [CrossRef]  

24. C. Tao, H. Zhu, P. Sun, et al., “Hyperspectral image recovery based on fusion of coded aperture snapshot spectral imaging and RGB images by guided filtering,” Opt. Commun. 458, 124804 (2020). [CrossRef]  

25. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, et al., “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt Express 17(8), 6368 (2009). [CrossRef]  

26. J. Bao and M. G. Bawendi, “A colloidal quantum dot spectrometer,” Nature 523(7558), 67–70 (2015). [CrossRef]  

27. X. Zhu, L. Bian, H. Fu, et al., “Broadband perovskite quantum dot spectrometer beyond human visual resolution,” Light: Sci. Appl. 9(1), 73 (2020). [CrossRef]  

28. J. Meng, J. J. Cadusch, and K. B. Crozier, “Detector-Only Spectrometer Based on Structurally Colored Silicon Nanowires and a Reconstruction Algorithm,” Nano Lett. 20(1), 320–328 (2020). [CrossRef]  

29. Z. Yang, T. Albrow-Owen, H. Cui, et al., “Single-nanowire spectrometers,” Science 365(6457), 1017–1020 (2019). [CrossRef]  

30. T. B. Guo, Z. J. Lin, X. A. Xu, et al., “Broad-Tuning, Dichroic Metagrating Fabry-Perot Filter Based on Liquid Crystal for Spectral Imaging,” Prog. Electromagn. Res. 177, 43–51 (2023). [CrossRef]  

31. M. Yako, Y. Yamaoka, T. Kiyohara, et al., “Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry–Pérot filters,” Nat. Photon. 17(3), 218–223 (2023). [CrossRef]  

32. Z. Zhang, T. Guo, Z. Lin, et al., “Customized structural color filters by pixel-level electrothermal regulation,” Laser Photonics Rev. 17 (2023).

33. C. Chen, X. Li, G. Yang, et al., “Computational hyperspectral devices based on quasi-random metasurface supercells,” Nanoscale 15(19), 8854–8862 (2023). [CrossRef]  

34. B. Craig, V. R. Shrestha, J. Meng, et al., “Experimental demonstration of infrared spectral reconstruction using plasmonic metasurfaces,” Opt. Lett. 43(18), 4481–4484 (2018). [CrossRef]  

35. J. Yang, K. Cui, X. Cai, et al., “Ultraspectral Imaging Based on Metasurfaces with Freeform Shaped Meta-Atoms,” Laser Photonics Rev 16 (2022).

36. L. Huang, R. Luo, X. Liu, et al., “Spectral imaging with deep learning,” Light: Sci. Appl. 11(1), 61 (2022). [CrossRef]  

37. B. Arad and O. Ben-Shahar, “Filter selection for hyperspectral estimation,” in IEEE International Conference on Computer Vision (2017), pp. 3172–3180.

38. H. Q. Li, Z. W. Xiong, Z. Shi, et al., “Hsvcnn: Cnn-based hyperspectral reconstruction from RGB videos,” In IEEE International Conference on Image Processing (2018), pp. 3323–3327.

39. R. M. H. Nguyen, D. K. Prasad, and M. S. Brown, “Training-based spectral reconstruction from a single RGB image,” Lect Notes Comput Sc 8695, 186–201 (2014). [CrossRef]  

40. S.-H. Baek, I. Kim, D. Gutierrez, et al., “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graph. 36(6), 1–12 (2017). [CrossRef]  

41. S. W. Oh, M. S. Brown, M. Pollefeys, et al., “Do it yourself hyperspectral imaging with everyday digital cameras,”In IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2461–2469.

42. M. J. AbuLeil, D. Pasha, I. August, et al., “Helical nanostructures of ferroelectric liquid crystals as fast phase retarders for spectral information extraction devices: a comparison with the nematic liquid crystal phase retarders,” Materials 14(19), 5540 (2021). [CrossRef]  

43. L. Driencourt, F. Federspiel, D. Kazazis, et al., “Electrically tunable multicolored filter using birefringent plasmonic resonators and liquid crystals,” ACS Photonics 7(2), 444–453 (2020). [CrossRef]  

44. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006). [CrossRef]  

45. D. H. Foster and A. Reeves, “Colour constancy failures expected in colourful environments,” Proc Biol Sci 289, 20212483 (2022). [CrossRef]  

46. F. Yasuma, T. Mitsunaga, D. Iso, et al., “Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum,” IEEE Trans. on Image Process. 19(9), 2241–2253 (2010). [CrossRef]  

47. J. Li, C. H. Wen, S. Gauza, et al., “Refractive indices of liquid crystals for display applications,” J. Display Technol. 1(1), 51–61 (2005). [CrossRef]  

48. I. August, Y. Oiknine, M. AbuLeil, et al., “Miniature compressive ultra-spectral imaging system utilizing a single liquid crystal phase retarder,” Sci. Rep. 6(1), 23524 (2016). [CrossRef]  

Supplementary Material (1)

Supplement 1: Supplemental Document

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.



Figures (6)

Fig. 1. System structure diagram. a, Schematic of the present spatiotemporal modulation system. From top to bottom are the polarizer, liquid crystal cell, analyzer, and color camera sensor. The yellow arrow represents the direction of incident light. b, The process of data collection and processing. R, G and B represent the pixel grayscale values obtained by camera sensors after three Bayer color filters respectively and their subscripts represent various filtering channels under different LC states. c, The experimental system utilizing a relay lens for imaging.
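The per-pixel data collection described in panel b (one R, G, and B reading under each liquid crystal state) can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the random frames stand in for demosaicked camera captures, and three LC states are assumed:

```python
import numpy as np

# Random placeholder frames standing in for demosaicked RGB captures,
# one frame per LC voltage state (here M = 3 states).
rng = np.random.default_rng(2)
frames = rng.random((3, 64, 64, 3))  # (state, height, width, RGB)

# Stack the R, G, B readings across states into a 9-element measurement
# vector per pixel: [R1, G1, B1, R2, G2, B2, R3, G3, B3].
measurements = frames.transpose(1, 2, 0, 3).reshape(64, 64, 9)
print(measurements.shape)  # (64, 64, 9)
```

Each pixel's 9-element vector is then the input to the spectral reconstruction step, giving nine effective filtering channels from only three temporal states.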
Fig. 2. Simulation performance for the proposed configuration. a, The transmittance of a bare LC spectral modulator and the response of the modulator combined with different Bayer filters, for different birefringence values $\Delta n_{V,\lambda}$ of the LC. $\Delta n$ equals $|n_e - n_o|$ at each wavelength. The position of the black dashed line represents the threshold voltage of a 4 µm-thick LC cell. E7 liquid crystal is used in this paper. b, Correlation coefficient matrix of the system response without Bayer filters (left, average value of the lower triangular correlation coefficients is 0.5835) and with Bayer filters (right, average value of the lower triangular correlation coefficients is 0.4302). c, Ground truth (GT) image [45] and images reconstructed with different total numbers of modulator states. The ground truth and reconstructed spectral images are rendered as synthetic color images. Even with three states, the reconstructed images reach a large PSNR, showing high reconstruction accuracy.
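The scalar quoted for Fig. 2b is the mean absolute Pearson correlation between every pair of system response curves, taken over the strictly lower triangle of the correlation matrix. A minimal numpy sketch (with random placeholder data standing in for the measured responses):

```python
import numpy as np

# Placeholder data standing in for measured system response curves:
# rows are filtering channels (e.g., 3 LC states x 3 Bayer filters),
# columns are sampled wavelengths.
rng = np.random.default_rng(0)
responses = rng.random((9, 30))

# Absolute Pearson correlation between every pair of response curves.
corr = np.abs(np.corrcoef(responses))

# Average of the strictly lower triangle, the scalar quoted in Fig. 2b.
avg_lower = corr[np.tril_indices_from(corr, k=-1)].mean()
print(round(avg_lower, 4))
```

A lower average correlation means the channels are more mutually distinct, which is why adding the Bayer filters (0.4302 vs. 0.5835) improves the conditioning of the reconstruction.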
Fig. 3. Performance of the system. a, Transmittance of the system with different color Bayer filters. The blue dashed box region represents the range of applied voltage. The thickness of the LC cell is 4 µm. b, System transmittance at three voltage states. Different colored lines represent different Bayer filters. c, Intensity maps with different monochromatic input lights. Three types of Bayer filters are displayed separately. X and Y mean the position of the camera sensor.
Fig. 4. Spectral image reconstruction of monochromatic light sources. a, Ground truth images of monochromatic light source emitted from an integrating sphere. b, Reconstructed images of monochromatic light source emitted from an integrating sphere. The bandwidth of monochromatic light is 10 nm. All spectral images are rendered as synthetic color images. c, Ground truth and reconstructed spectra of monochromatic light source emitted from an integrating sphere. The dashed line represents ground truth spectra and the circle solid line represents reconstructed spectra. The interval between each channel is 10 nm. Each monochromatic spectrum is represented separately and the wavelength range of each channel is 60 nm on the canvas for a clear demonstration. d, Reconstruction on a spectrum with two peaks. Left: in the green band, peaks are at 550 nm and 570 nm. Right: in the red band, peaks are at 660 nm and 680 nm. All peaks have a bandwidth of 10 nm. Ground truth was measured by a commercial spectrometer (Ocean Optics, QE PRO).
Fig. 5. Spectral image reconstruction of scenes. a, Colorchecker captured by a color camera. b, The reconstructed image of the colorchecker rendered as a synthetic color image. c, Spectra of some color blocks in the colorchecker. d, Fruits captured by a color camera. e, The reconstructed spectral image of fruits rendered as synthetic color images. f, Raw images of the three voltage states used to calculate the spectrum. g, Spectra of real and fake apples, bananas, and mangosteens (left to right).
Fig. 6. High-rate spectral image reconstruction of dynamic scenes. a, Reconstructed images at different times. b, Reconstructed spectra of four points at different times: (i)–(ii) spectra of stationary objects and (iii)–(iv) spectra of dynamic objects.

Equations (5)


$$I_i^m = \int_{\lambda_1}^{\lambda_2} R(\lambda)\,T_m(\lambda, V_m)\,D_i(\lambda)\,L(\lambda)\,\mathrm{d}\lambda,\quad m = 1, 2, \ldots, M \tag{1}$$

$$I_i^m \approx \sum_{n=1}^{N} A_i^m(\lambda_n)\,R(\lambda_n),\quad m = 1, 2, \ldots, M \tag{2}$$

$$\arg\min_{R_N} \left\| I_M - A_{M\times N}\,R_N \right\|_2^2,\quad \mathrm{s.t.}\ 0 < R_N < 1 \tag{3}$$

$$\arg\min_{\theta_K} \left\| I_M - \Psi_{M\times K}\,\theta_K \right\|_2^2 + w\,|\theta_K| \tag{4}$$

$$r_{ij} = \left| \frac{\sum_{m=1}^{M} (i_m - \bar{i})(j_m - \bar{j})}{\sqrt{\left(\sum_{m=1}^{M}(i_m - \bar{i})^2\right)\left(\sum_{m=1}^{M}(j_m - \bar{j})^2\right)}} \right| \tag{5}$$
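The L1-regularized least-squares problem in the sparse-recovery step above can be solved with a plain iterative shrinkage-thresholding (ISTA) loop. The following is a minimal numpy sketch, not the authors' solver; the dictionary `Psi` and the sparse coefficient vector are random placeholder data:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Psi, I, w, n_iter=2000):
    """Minimize ||I - Psi @ theta||_2^2 + w * ||theta||_1 by ISTA."""
    step = 1.0 / (2.0 * np.linalg.norm(Psi, 2) ** 2)  # 1 / Lipschitz constant
    theta = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Psi.T @ (Psi @ theta - I)        # gradient of quadratic term
        theta = soft_threshold(theta - step * grad, step * w)
    return theta

# Synthetic demo: measurements I generated from a sparse coefficient vector.
rng = np.random.default_rng(1)
Psi = rng.standard_normal((9, 40))        # e.g., 9 channels, 40 dictionary atoms
theta_true = np.zeros(40)
theta_true[[3, 17, 28]] = [0.8, 0.5, 0.3]
I = Psi @ theta_true
theta_hat = ista(Psi, I, w=1e-3)
# The spectrum would then be recovered as R = D @ theta_hat for a learned
# (e.g., K-SVD) dictionary D whose product with the system matrix gives Psi.
```

The step size is set from the spectral norm of `Psi` so that the gradient step is guaranteed non-expansive; in practice an accelerated variant (FISTA) or an off-the-shelf LASSO solver would converge faster.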