
Computational spectrometer based on local feature-weighted spectral reconstruction

Open Access

Abstract

The computational spectrometer enables the reconstruction of spectra from measurements encoded by precalibrated elements. In the last decade, it has emerged as an integrated, low-cost paradigm with vast application potential, especially in portable or handheld spectral analysis devices. Conventional methods adopt a local-weighted strategy in feature spaces. They overlook the fact that the coefficients of important features can be too large to reflect differences in more detailed feature spaces during the calculations. In this work, we report a local feature-weighted spectral reconstruction (LFWSR) method and construct a high-accuracy computational spectrometer. Different from existing methods, the reported method learns a spectral dictionary via $L_4$-norm maximization to represent spectral curve features and considers the statistical ranking of the features. According to the ranking, the features are weighted and the coefficients are updated before the similarity is calculated. Moreover, inverse distance weighting is utilized to pick samples and weight a local training set. Finally, the spectrum is reconstructed from the local training set and the measurements. Experiments indicate that the reported method's two weighting processes produce state-of-the-art accuracy.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The spectrum reflects a scene's reflection and absorption qualities and reveals many intrinsic properties. This uniqueness of characterization is not affected by the environment, which mitigates the phenomenon of metamerism. Spectral analysis has been used extensively in remote sensing [1], agriculture [2,3], and mineral prospecting [4]. Conventional spectrometers measure narrowband information at the expense of large optical elements and high cost. In recent years, computational spectrometers have advanced rapidly due to the increased availability of computing power and reductions in microprocessor size and cost [5–12]. They are composed of a set of filters or detectors that encode spectra, and the spectra are then reconstructed from the measurements [11,12]. Due to their integrated setup, high light throughput, and low cost, they hold wide application potential.

Computational reconstruction recovers spectra from information encoded by precalibrated detectors. Conventional methods include least squares methods [13], Wiener methods [14], regularization methods [15–18], and kernel methods [19–21]. In the last decade, methods based on selecting training samples have been reported that calculate the transformation matrix [22,23] or interpolate the spectra [24–26], exploiting useful information to improve reconstruction performance. These methods cluster or pick training samples by similarity in feature spaces such as measurements, color spaces [22,25], principal component spaces [27,28], and spectral spaces. In these methods, the selected local samples are treated as equally important in the calculation. Weighting the training samples according to the similarity overcomes this problem [13,22,27,29,30]. Some methods even consider a feature space in which the features have different representative abilities [27,31]. They usually focus on the coefficients of important features and calculate the similarity of those features for selection and weighting. However, they ignore that a more important feature is typically more fundamental and has larger coefficients. When calculating the similarity, the important features' coefficients may be too large to reflect differences in more detailed feature spaces. For high-accuracy reconstruction, it is crucial to consider the differences in the more detailed feature coefficients along with the similarity of the important features. This is valuable but has not received sufficient attention.

Apart from the coefficients, the feature space itself also affects how similarity is expressed. Some methods utilize principal component analysis (PCA) or dictionary learning to represent spectra; with these methods, fewer features than the number of spectral channels can represent the spectra. Recently, a complete dictionary learning method via $L_4$-norm maximization was proposed with high representative ability. It recovers the entire dictionary [32,33], whereas conventional $L_1$-norm-based methods, such as K-SVD, can only deal with one row of the dictionary at a time. This $L_4$-norm-based dictionary learning uses only dozens of SVDs to learn the entire set of features, with lower complexity and higher computational efficiency than existing methods. It has been applied in blind source separation, independent component analysis [34,35], data detection [36], and dictionary learning optimization [32,33]. For spectral reconstruction, it has a great deal of potential.

The potential of the feature space and the development of dictionary learning motivate us to explore new feature spaces and reconstruction methods. In this work, we report a local feature-weighted spectral reconstruction (LFWSR) method and construct a high-accuracy computational spectrometer. The method's framework is shown in Fig. 1. The method utilizes $L_4$-norm maximization to learn a spectral dictionary and coefficients. The statistical ranking of the features is then conducted to select the most important ones. The top-ranked features are weighted, and the corresponding coefficients are updated accordingly. A mapping matrix is calculated to transform the measurements into coefficients in this feature space, in which the Euclidean distance between the training samples' coefficients and the testing sample's coefficients is computed. Based on the distances, training samples are picked, and the inverse distance method is used to weight this local training set. Finally, the local training set is utilized to calculate the transformation matrix and reconstruct the final spectrum. Experiments indicate that the reported method's two weighting processes produce state-of-the-art accuracy.


Fig. 1. The framework of the reported local feature-weighted spectral reconstruction (LFWSR) method.


2. Theories and methods

In the computational spectrometer system, the $i^{th}$ channel measurement $g_{i}$ could be approximated by

$$g_{i}=\int_{\lambda_{\min }}^{\lambda_{\max }} I(\lambda) r(\lambda) f_{i}(\lambda) o(\lambda) s(\lambda) \mathrm{d} \lambda+b_{i}+n_{i}, \quad i=1, 2,\ldots, m$$
where $\lambda _{\max }$ and $\lambda _{\min }$ denote the maximum and minimum of the wavelength $\lambda$, $I(\lambda )$ is the spectral radiance of the illuminant, $r(\lambda )$ is the spectral reflectance, $f_{i}(\lambda )$ is the transmittance of the $i$th channel, $o(\lambda )$ is the spectral transmittance of the optical system, $s(\lambda )$ is the spectral sensitivity function of the camera, $b_{i}$ is the dark current response of the $i$th channel, and $n_{i}$ is the noise. The four quantities $I(\lambda ), f_{i}(\lambda ), o(\lambda )$, and $s(\lambda )$ can be combined into a response matrix $Q$, so the above formula simplifies to
$$g_{i}=Q_{i}r+b_{i}+n_{i}.$$
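As a sketch of this discrete forward model, Eq. (2) can be simulated as follows. The sizes and the random stand-in data are hypothetical placeholders, not the calibrated values of the prototype:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m = 7 broadband channels, n = 31 spectral bands (400-700 nm).
m, n = 7, 31

Q = rng.uniform(size=(m, n))           # combined response matrix (illuminant, filters, optics, sensor)
r = rng.uniform(size=n)                # spectral reflectance to be measured
b = np.full(m, 0.01)                   # dark-current responses b_i
noise = rng.normal(0.0, 1e-4, size=m)  # additive noise n_i

g = Q @ r + b + noise                  # channel measurements g_i (Eq. (2))
```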

Firstly, we assume the spectra $r$ can be represented by a complete dictionary $B$ and coefficients $a$:

$$r=Ba.$$

We use $L_4$-norm maximization to learn an orthogonal complete dictionary from the samples while promoting the sparseness of the coefficient matrix. The $L_4$-norm maximization objective is defined as follows:

$$\max_{B \in \mathrm{O}(n, \mathbb{R})} \left\|B^{*} R\right\|_{4}^{4},$$
where $^{*}$ denotes the matrix transpose, $\mathrm {O}(n, \mathbb {R})$ is the orthogonal group, and $R=\left [r_{1},\ldots,r_{p}\right ] \in \mathbb {R}^{n \times p}$ is a collection of $p$ spectra. The optimization process (Eqs. (5)–(7)) repeats the following formulas cyclically, about dozens of times:
$$\partial B_{t}^{*}=4\left(\boldsymbol{B}_{t}^{*} \boldsymbol{R}\right)^{{\circ} 3} \boldsymbol{R}^{*},$$
$$U \Sigma V^{*}=\operatorname{svd}\left(\partial B_{t}^{*}\right),$$
$$B_{t+1}^{*}=U V^{*},$$
where $\partial B_{t}^{*}$ is the gradient of the objective function, and $^{\circ 3}$ denotes the element-wise cube.
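The iteration of Eqs. (5)–(7) can be sketched compactly in numpy. This is a minimal illustration, assuming a random orthogonal initialization and a fixed iteration count, not the authors' exact implementation:

```python
import numpy as np

def msp_dictionary(R, n_iter=50, seed=0):
    """Learn an orthogonal dictionary B via L4-norm maximization (Eqs. (5)-(7)).

    R: (n, p) matrix whose columns are p training spectra.
    Returns an orthogonal B such that B.T @ R is sparse.
    """
    n = R.shape[0]
    rng = np.random.default_rng(seed)
    # Random orthogonal initialization via QR decomposition.
    B, _ = np.linalg.qr(rng.normal(size=(n, n)))
    for _ in range(n_iter):
        grad = 4.0 * (B.T @ R) ** 3 @ R.T  # Eq. (5): gradient of ||B^T R||_4^4
        U, _, Vt = np.linalg.svd(grad)     # Eq. (6): SVD of the gradient
        B = (U @ Vt).T                     # Eq. (7): project back onto O(n)
    return B
```

Because each update is a product of orthogonal factors, the learned dictionary stays exactly orthogonal at every step.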

Then, we conduct the statistical ranking of the features, which sorts them by the sum of the absolute values of their coefficients, from greatest to smallest. The larger the absolute value of a coefficient, the more basic the feature; detailed features usually correspond to smaller coefficients. According to the ranking, we pick the top $k$ bases (features) and weight them. We assign less weight $\alpha$ to the more basic features, which means assigning greater weight $\alpha$ to the coefficients of the detailed features:

$$A=[\alpha_{1}*a_{1},\ldots,\alpha_{k}*a_{k}].$$

The weights $\alpha _{i}$ are empirical values.
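The ranking and weighting step can be sketched as follows. The weight values and all data here are hypothetical placeholders, not the empirical values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = rng.normal(size=(10, 50))   # coefficients of 50 training spectra over 10 features

score = np.abs(A0).sum(axis=1)   # statistical ranking: summed absolute coefficients per feature
order = np.argsort(score)[::-1]  # basic (large-coefficient) features first

k = 6
top = order[:k]                  # keep the top-k features
alpha = np.linspace(1.0, 2.0, k) # hypothetical weights: larger alpha for the more detailed features
A = alpha[:, None] * A0[top]     # weighted coefficient matrix (Eq. (8))
```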

Assume the learned sparse coefficient matrix $A$ and the training samples' measurements $G=[g_{1},\ldots,g_{m}]$ satisfy the following relationship:

$$A=P G.$$

The transformation matrix $P$ can be obtained from the analytical solution instead of gradient descent algorithms [27]:

$$P=A G^{T}\left(G G^{T}\right)^{{-}1}.$$

Then $P$ is utilized to transform the testing sample's measurements ${g}^{\prime }$ into the coefficients ${a}^{\prime }$:

$${a}^{\prime}=A G^{T}\left(G G^{T}\right)^{{-}1}{g}^{\prime}.$$
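Eqs. (9)–(11) amount to a closed-form least-squares fit. A minimal sketch with random stand-in data (sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
k, c, p = 6, 7, 50                   # features, channels, training samples
A = rng.normal(size=(k, p))          # weighted training coefficients
G = rng.uniform(size=(c, p))         # training measurements

P = A @ G.T @ np.linalg.inv(G @ G.T) # Eq. (10): closed-form solution of A = P G

g_test = rng.uniform(size=c)         # a testing sample's measurements
a_test = P @ g_test                  # Eq. (11): its coefficients in the feature space
```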

Then, we calculate the similarity between the training samples and the testing sample in the learned feature space to select training samples and construct a diagonal weighting matrix $W$:

$$w_{j}=\frac{1}{D\left({a}^{\prime}, {a}_{j}\right)+\sigma}, j=1, 2,\ldots, N \left(N>n\right),$$
where $D(\cdot )$ denotes the Euclidean distance, ${a}_{j}$ is the $j$th sample's coefficients, $a^{\prime }$ is the testing sample's coefficients, and $\sigma$ is a very small parameter (0.001 in this work). The top $m$ samples with the largest weights are selected to construct a diagonal weighting matrix $W$, which weights the corresponding spectra $R^{\prime }$ and measurements $G^{\prime }$ to form a local training set:
$$G^{\prime}=G^{\prime}*W,$$
$$R^{\prime}=R^{\prime}*W.$$
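The inverse-distance selection and weighting of Eqs. (12)–(14) can be sketched as follows, again with hypothetical sizes and random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_bands, c, N, m_sel = 6, 31, 7, 50, 20
A = rng.normal(size=(k, N))        # training samples' coefficients (one column each)
R = rng.uniform(size=(n_bands, N)) # training spectra
G = rng.uniform(size=(c, N))       # training measurements
a_test = rng.normal(size=k)        # testing sample's coefficients

sigma = 1e-3
d = np.linalg.norm(A - a_test[:, None], axis=0)  # Euclidean distances in the feature space
w = 1.0 / (d + sigma)                            # Eq. (12): inverse-distance weights

idx = np.argsort(w)[::-1][:m_sel]                # keep the m most similar samples
W = np.diag(w[idx])                              # diagonal weighting matrix
G_loc = G[:, idx] @ W                            # Eq. (13): weighted local measurements
R_loc = R[:, idx] @ W                            # Eq. (14): weighted local spectra
```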

Finally, the local training set is used to calculate the transformation matrix $M$ with the least squares method. We assume the transformation matrix $M$ satisfies $R^{\prime }=M*G^{\prime }$, so it can be calculated by:

$$M=R^{\prime}*pinv(G^{\prime}).$$

The testing spectrum ${r}^{\prime }$ is reconstructed:

$${r}^{\prime}=M*{g}^{\prime}.$$
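The final reconstruction of Eqs. (15) and (16) is a pseudoinverse-based least-squares step. A self-contained sketch with random stand-in data for the weighted local training set:

```python
import numpy as np

rng = np.random.default_rng(4)
n_bands, c, m_sel = 31, 7, 20
R_loc = rng.uniform(size=(n_bands, m_sel))  # weighted local training spectra
G_loc = rng.uniform(size=(c, m_sel))        # weighted local training measurements

M = R_loc @ np.linalg.pinv(G_loc)           # Eq. (15): least-squares transformation matrix

g_test = rng.uniform(size=c)                # the testing sample's measurements
r_hat = M @ g_test                          # Eq. (16): reconstructed spectrum
```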

3. Numerical simulation and analysis

To evaluate its performance, we used four spectral datasets: Munsell colors matt, Munsell colors glossy, Natural colors, and Paper spectra [37] (spectral range: 400–700 nm, band spacing 10 nm, 31 bands). We also utilized the spectral transmittance functions of 7 kinds of broad-band materials, as in [9], to generate synthetic measurements. We randomly picked half of each dataset for training and the rest for testing. The hardware platform is an Intel Core i7-7700 processor with an NVIDIA GeForce GTX 1660, and the computational platform is MATLAB 2019b. Furthermore, polynomial expansion is a common strategy to achieve higher accuracy, at the risk of overfitting and instability. To make the method comparison more direct, we do not use polynomial expansion in the methods that follow.

The root mean square error (RMSE), structural similarity (SSIM), and color difference ($\Delta E$) [38] under CIE D65 standard illumination are used to evaluate performance. RMSE, SSIM, and $\Delta E$ estimate intensity differences, structural differences, and color differences, respectively. They are expressed, respectively, as

$$RMSE=\sqrt{\frac{\sum_{\lambda}[r(\lambda)-\hat{r}(\lambda)]^{2}}{M}},$$
$$\operatorname{SSIM}=\frac{\left(2 \mu_{r} \mu_{\hat{r}}+c_{1}\right)\left(2 \sigma_{r \hat{r}}+c_{2}\right)}{\left(\mu_{r}^{2}+\mu_{\hat{r}}^{2}+c_{1}\right)\left(\sigma_{r}^{2}+\sigma_{\hat{r}}^{2}+c_{2}\right)},$$
$$\begin{aligned} \Delta E_{r,\hat{r} }^{*} & =\sqrt{\left(L_{r}^{*}-L_{\hat{r}}^{*}\right)^{2}+\left(a_{r}^{*}-a_{\hat{r}}^{*}\right)^{2}+\left(b_{r}^{*}-b_{\hat{r}}^{*}\right)^{2}}\\ & =\sqrt{\left(\Delta L^{*}\right)^{2}+\left(\Delta a^{*}\right)^{2}+\left(\Delta b^{*}\right)^{2}}. \end{aligned}$$
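The first two metrics are straightforward to compute directly from a pair of spectra. A minimal sketch of Eqs. (17) and (18), where the stabilizer constants `c1` and `c2` are assumed values, not the ones used in the paper:

```python
import numpy as np

def rmse(r, r_hat):
    """Root mean square error over the spectral bands (Eq. (17))."""
    r, r_hat = np.asarray(r, float), np.asarray(r_hat, float)
    return float(np.sqrt(np.mean((r - r_hat) ** 2)))

def ssim(r, r_hat, c1=1e-4, c2=9e-4):
    """Global SSIM between two spectra (Eq. (18)); c1, c2 are small stabilizers."""
    r, r_hat = np.asarray(r, float), np.asarray(r_hat, float)
    mu_r, mu_h = r.mean(), r_hat.mean()
    var_r, var_h = r.var(), r_hat.var()
    cov = np.mean((r - mu_r) * (r_hat - mu_h))
    return ((2 * mu_r * mu_h + c1) * (2 * cov + c2)) / (
        (mu_r ** 2 + mu_h ** 2 + c1) * (var_r + var_h + c2))
```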

We first utilized an orthogonal experiment to estimate the optimal parameters of the different methods (the optimal number of bases and the optimal number of training samples). The results are shown in Table 1.


Table 1. The optimal parameter’s estimation by orthogonal experiments.

To show the representational ability and significance of $L_4$-norm maximization (MSP) [32], we first compared it with the PCA method and the non-negative K-SVD (NN-K-SVD) method [17] in spectral compression rather than reconstruction. We randomly used half of the Munsell colors matt dataset for component analysis or dictionary learning to obtain a set of basis vectors, and the rest for spectral compression and reconstruction. We then represented the test spectra with the Orthogonal Matching Pursuit algorithm using different numbers of bases; NN-K-SVD utilizes a variation of BP for non-negative decomposition. Table 2 shows the time and the average reconstruction accuracy. A smaller number of bases means a sparser representation. With few bases, the color difference of MSP is larger than that of PCA, but as the number of bases grows, the advantages of MSP become obvious: it has a smaller RMSE and a higher SSIM. That is because MSP exploits the sparsity of the spectra in the spectral domain and is more inclined to search for sparse features on the spectral curve. MSP has better representative ability for spectral curve features, so it attains the best RMSE and SSIM as the number of bases increases. When the number of bases is greater than 13, the accuracy of each method no longer increases significantly.


Table 2. Spectral compression results of different component analysis methods.

We then conducted an ablation study on the four datasets to show the reported method's performance. To show LFWSR's advantages, we replaced the dictionary learning in the reported method with principal component analysis (PCA-SSR) and NN-K-SVD (NN-K-SVD-SSR). We also compared LFWSR with the method based on spectral local-weighted reconstruction (SL), which first roughly reconstructs spectra and then uses the spectral similarity to locally weight training samples for reconstruction. Its coarse-to-fine process calculates spectral differences, which contain more information than the measurements, but its features are treated as equally important in the calculation. Beyond reconstruction accuracy, we also considered reconstruction efficiency. The average results of 10 simulations are shown in Table 3. In most cases, methods based on our local feature-weighted framework outperform SL in accuracy, indicating that the local feature-weighted strategy is effective; moreover, LFWSR has the highest accuracy. Regarding time consumption, LFWSR requires more time than SL and PCA-SSR because dictionary learning is needed. However, compared with NN-K-SVD, the time required by LFWSR is significantly reduced. This is because NN-K-SVD requires sparse encoding and column-by-column dictionary updates at high time cost, while LFWSR needs only dozens of SVDs to learn the features. In conclusion, LFWSR achieves high accuracy with acceptable time consumption and outperforms SL, PCA-SSR, and NN-K-SVD-SSR in comprehensive performance.


Table 3. The ablation study of the LFWSR using SL, PCA-SSR and NN-K-SVD-SSR.

We also explored the influence of the number of bases on spectral reconstruction, experimenting on the Munsell colors matt dataset. At the same time, we compared PCA-SSR and SL to demonstrate the superiority of the reported method. The average results in Fig. 2(a) show that a large number of bases is not always beneficial to accuracy. As the number increases from a small value, all methods improve markedly; above 8 bases, the accuracy improves little. Our method outperforms PCA-SSR across different numbers of bases.


Fig. 2. Simulations with different numbers of bases or selected samples. (a) Comparison of component-analysis-based methods with different numbers of bases. (b) Comparison of local-weighted methods with different numbers of selected samples.


To explore the influence of the number of selected samples, we compared other local-weighted methods with different numbers of selected samples in Fig. 2(b), again on the Munsell colors matt dataset. The conventional methods are usually based on RGB or CIE XYZ values and include the optimized adaptive estimation (O-AE) method [13], the local-weighted nonlinear regression method proposed by Liang and Wan (Liang) [29], and the adaptive local-weighted linear regression method (ALWLR) [30]. The above methods all perform well with 20–40 selected samples. When more samples are selected to form the training set, the color difference of some methods becomes worse, because too many of the training samples are not close to the target. From the figure, it can be concluded that our method has a better selection strategy and more stable performance.

We then conducted simulations on the four datasets and demonstrated numerically and visually the superiority of our method. In addition to the methods mentioned earlier, we added the least squares method (LS), the Wiener method (Wiener), the kernel method (kernel), the $L_1$-norm optimization method (L1), and the method based on principal component analysis (PCA-BASE). Table 4 shows the simulated comparison with existing methods in accuracy and efficiency. In accuracy, LFWSR outperforms the existing methods on all metrics. Due to the time consumption of dictionary learning, LFWSR takes longer than the other methods, but the time is still of the same order of magnitude. Figure 3 displays the spectral reconstruction results of five methods (O-AE, Liang, ALWLR, SL, LFWSR) on the four datasets, where GT is the standard spectral reflectance. The spectra reconstructed with LFWSR are very close to the true spectra in spectral structure and smoothness. Our method outperforms the others because it assigns the optimal weighted features as well as training samples for each testing sample.


Table 4. Comparison of simulated spectral reconstruction results.


Fig. 3. The reconstructed spectra in simulations.


In practice, reconstruction quality is affected by factors such as dark current and thermal noise during acquisition. To study the noise robustness of our method, we compared it comprehensively with state-of-the-art methods on noisy measurements. Gaussian white noise was added to the one-dimensional measurements, assumed to follow the probability distribution:

$$n(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{x^2}{2\sigma^2}\right),$$
where $x$ denotes the noise signal, and $\sigma$ is its standard deviation, set from $1\times 10^{-4}$ to $6\times 10^{-4}$ in the simulations. Table 5 shows spectral reconstruction using the synthetic data. Although our method is slightly inferior at some extreme values, it is generally stable and accurate. Under different levels of noise, our method achieves high-fidelity spectral reconstruction.
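Adding such noise to synthetic measurements can be sketched in a few lines; the clean measurements here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
g = rng.uniform(size=7)        # clean synthetic 7-channel measurements
sigma = 3e-4                   # noise std within the tested range 1e-4 to 6e-4
g_noisy = g + rng.normal(0.0, sigma, size=g.shape)  # additive Gaussian white noise
```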


Table 5. Spectral reconstruction using synthetic data at different noise levels.

Rather than relying on more measurements, some spectral reconstruction models reconstruct spectra from RGB values. In this work, we utilized seven-channel measurements, but our method can also be applied in other cases. To demonstrate the generalization of the proposed method with respect to the number of channels, we simulated spectral reconstruction from RGB values. Specifically, we used spectral and RGB values from the Munsell colors matt data, splitting it into two halves for training and testing. The numerical results in Table 6 show the mean and extreme values of the reconstructed results. Although some methods have better extreme values, our method shows good average performance. As shown in the boxplot in Fig. 4, the reported method achieves high average accuracy with few outliers and a compact performance distribution. We believe our method can be applied to computational spectrometers with different numbers of channels.


Table 6. Comparison of simulated spectral reconstruction from RGB measurements for Munsell colors matt spectral data.


Fig. 4. Boxplot of simulated spectral reconstruction from RGB measurements for Munsell colors matt spectral data.


4. Experiments

We also demonstrated the reported method experimentally. We constructed a prototype spectrometer system by integrating a series of broad-band filter arrays with a CMOS photodetector (JAI GO-5100M); its structure is shown in Fig. 5(a), and the spectral transmittance functions of the filter array are shown in Fig. 5(b). The key element of the spectrometer is a filter array of 7 kinds of broad-band materials, as in [9], which have high light throughput and compact sizes. The light passes through the 7 broad-band filters and illuminates the detector behind them to obtain the corresponding measured values; the rest of the detector is used to measure light intensity for calibration. The detector can obtain multiple measurements at one time. It should be noted that the order of the broad-band filters does not affect the experimental results. In the experiments, the light source for calibration was a halogen lamp (Thorlabs SL301, range: 360$\sim$3800 nm). We picked 179 spectra from the Munsell colors matt dataset and printed them on coated paper. Using the prototype spectrometer system, we obtained coded measurements to reconstruct the spectra. At the same time, we utilized a reference spectrometer (Thorlabs CCS200/M) to obtain the standard spectral measurements. The spectra were sampled at 10 nm intervals from 300 to 700 nm. Table 7 shows the mean, maximum, and minimum values of the reconstruction results using the prototype spectrometer system. Although our method is slightly inferior at some extreme values, its overall accuracy is good. The boxplot in Fig. 6 shows the distribution of the evaluated metrics compared with other methods. The distribution of LFWSR is compact with high reconstruction accuracy in some metrics, and most of the SSIM values are larger than those of the existing methods. Overall, our method achieves high-fidelity spectral reconstruction compared with the other methods.


Fig. 5. Scheme of the prototype spectrometer. (a) The prototype spectrometer. (b) The spectral transmittance functions of the prototype spectrometer.



Table 7. Spectral reconstruction experiment for Munsell colors matt spectral data.


Fig. 6. Boxplot of spectral reconstruction experiments for Munsell colors matt spectral data.


Furthermore, we used the X-rite color checker (24 patches) in the experiments to test the reported method. We utilized the real Munsell colors matt samples described above for training and reconstructed the X-rite color checker. The results are shown in Table 8 and Fig. 7. The RMSE values are smaller than those of the existing methods, the average metrics are the best, and the distribution of the results is compact. Although some methods have better extreme values, our method shows good comprehensive performance.


Table 8. Spectral reconstruction experiment for X-rite color checker.


Fig. 7. Boxplot of spectral reconstruction experiments for the X-rite color checker.


To visualize the results, the reconstructed spectra are shown in Fig. 8, where GT is the standard spectral reflectance. The spectra reconstructed by LFWSR are closer to the ground truth than those of the other methods. The experimental results prove the superiority of the proposed method.


Fig. 8. The reconstructed spectra in real experiments.


5. Conclusion

In summary, we report a local feature-weighted spectral reconstruction (LFWSR) method and construct a high-accuracy computational spectrometer. The advantages of the reported method over conventional methods lie in three aspects. First, the method uses dictionary learning to represent a novel spectral space. Second, it considers that different features are unequally important and conducts a statistical ranking of the features. Third, it pays more attention to the coefficients of the detailed features while maintaining a certain accuracy. Weighting the coefficients of the different features before calculating the similarity takes advantage of the detailed information, which is beneficial to high-accuracy spectral reconstruction. We also verified that our method can be applied to computational spectrometers with 3 channels. We believe it can improve the accuracy of different computational spectrometers in the future.

Funding

National Key Research and Development Program of China (2020YFB0505601); National Natural Science Foundation of China (61971045, 61991451, 62131003).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Landgrebe, “Hyperspectral image data analysis,” IEEE Signal Process. Mag. 19(1), 17–28 (2002). [CrossRef]  

2. M. Ebermann, N. Neumann, K. Hiller, M. Seifert, M. Meinig, and S. Kurth, “Tunable MEMS Fabry-Pérot filters for infrared microspectrometers: a review,” in MOEMS and Miniaturized Systems XV, vol. 9760 W. Piyawattanametha and Y.-H. Park, eds., International Society for Optics and Photonics (SPIE, 2016), p. 97600H.

3. R. A. Crocombe, “Portable spectroscopy,” Appl. Spectrosc. 72(12), 1701–1751 (2018). [CrossRef]  

4. F. Kruse, J. Boardman, and J. Huntington, “Comparison of airborne hyperspectral data and EO-1 Hyperion for mineral mapping,” IEEE Trans. Geosci. Remote 41(6), 1388–1400 (2003). [CrossRef]  

5. B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7(9), 746–751 (2013). [CrossRef]  

6. J. Bao and M. G. Bawendi, “A colloidal quantum dot spectrometer,” Nature 523(7558), 67–70 (2015). [CrossRef]  

7. Z. Wang, S. Yi, A. Chen, M. Zhou, T. S. Luk, A. James, J. Nogan, W. Ross, G. Joe, A. Shahsafi, K. X. Wang, M. A. Kats, and Z. Yu, “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun. 10(1), 1–6 (2019). [CrossRef]  

8. Z. Yang, T. Albrow-Owen, H. Cui, J. Alexander-Webber, F. Gu, X. Wang, T.-C. Wu, M. Zhuge, C. Williams, P. Wang, A. V. Zayats, W. Cai, L. Dai, S. Hofmann, M. Overend, L. Tong, Q. Yang, Z. Sun, and T. Hasan, “Single-nanowire spectrometers,” Science 365(6457), 1017–1020 (2019). [CrossRef]  

9. X. Zhu, L. Bian, H. Fu, L. Wang, B. Zou, Q. Dai, J. Zhang, and H. Zhong, “Broadband perovskite quantum dot spectrometer beyond human visual resolution,” Light: Sci. Appl. 9(1), 73 (2020). [CrossRef]  

10. H. Li, L. Bian, K. Gu, H. Fu, G. Yang, H. Zhong, and J. Zhang, “A near-infrared miniature quantum dot spectrometer,” Adv. Opt. Mater. 9(15), 2100376 (2021). [CrossRef]  

11. Z. Yang, T. Albrow-Owen, W. Cai, and T. Hasan, “Miniaturization of optical spectrometers,” Science 371(6528), eabe0722 (2021). [CrossRef]  

12. P. S. Dizaji, H. Habibiyan, and H. Arabalibeik, “A miniaturized computational spectrometer with optimum number of nanophotonic filters: Deep-learning autoencoding and inverse design-based implementation,” Photonics Nanostruct. 52, 101057 (2022). [CrossRef]  

13. H.-L. Shen and J. H. Xin, “Spectral characterization of a color scanner based on optimized adaptive estimation,” J. Opt. Soc. Am. A 23(7), 1566–1569 (2006). [CrossRef]  

14. H.-L. Shen, P.-Q. Cai, S.-J. Shao, and J. H. Xin, “Reflectance reconstruction for multispectral imaging by adaptive Wiener estimation,” Opt. Express 15(23), 15545–15554 (2007). [CrossRef]  

15. V. Heikkinen, T. Jetsu, J. Parkkinen, M. Hauta-Kasari, T. Jaaskelainen, and S. D. Lee, “Regularized learning framework in the estimation of reflectance spectra from camera responses,” J. Opt. Soc. Am. A 24(9), 2673–2683 (2007). [CrossRef]  

16. J. Oliver, W. Lee, S. Park, and H.-N. Lee, “Improving resolution of miniature spectrometers by exploiting sparse nature of signals,” Opt. Express 20(3), 2613–2625 (2012). [CrossRef]  

17. S. Zhang, Y. Dong, H. Fu, S.-L. Huang, and L. Zhang, “A spectral reconstruction algorithm of miniature spectrometer based on sparse optimization and dictionary learning,” Sensors 18(2), 644 (2018). [CrossRef]  

18. T. Sarwar, C. Yaras, X. Li, Q. Qu, and P.-C. Ku, “Miniaturizing a chip-scale spectrometer using local strain engineering and total-variation regularized reconstruction,” Nano Lett. 22(20), 8174–8180 (2022). [CrossRef]  

19. V. Heikkinen, R. Lenz, T. Jetsu, J. Parkkinen, M. Hauta-Kasari, and T. Jääskeläinen, “Evaluation and unification of some methods for estimating reflectance spectra from RGB images,” J. Opt. Soc. Am. A 25(10), 2444–2458 (2008). [CrossRef]  

20. V. Heikkinen, A. Mirhashemi, and J. Alho, “Link functions and Matérn kernel in the estimation of reflectance spectra from RGB responses,” J. Opt. Soc. Am. A 30(11), 2444–2454 (2013). [CrossRef]  

21. G. Xiao, X. Wan, L. Wang, and S. Liu, “Reflectance spectra reconstruction from trichromatic camera based on kernel partial least square method,” Opt. Express 27(24), 34921–34936 (2019). [CrossRef]  

22. X. Zhang, Q. Wang, J. Li, X. Zhou, Y. Yang, and H. Xu, “Estimating spectral reflectance from camera responses based on CIE XYZ tristimulus values under multi-illuminants,” Color Res. Appl. 42(1), 68–77 (2017). [CrossRef]  

23. Y. Xiong, G. Wu, X. Li, and X. Wang, “Optimized clustering method for spectral reflectance recovery,” Front. Psychol. 13, 1 (2022). [CrossRef]  

24. H. Li, Z. Wu, L. Zhang, and J. Parkkinen, “SR-LLA: A novel spectral reconstruction method based on locally linear approximation,” in International Conference on Image Processing (IEEE, 2013), pp. 2029–2033.

25. B. Cao, N. Liao, and H. Cheng, “Spectral reflectance reconstruction from RGB images based on weighting smaller color difference group,” Color Res. Appl. 42(3), 327–332 (2017). [CrossRef]  

26. Y.-C. Wen, S. Wen, L. Hsu, and S. Chi, “Irradiance independent spectrum reconstruction from camera signals using the interpolation method,” Sensors 22(21), 8498 (2022). [CrossRef]  

27. K. Xiao, Y. Zhu, C. Li, D. Connah, J. M. Yates, and S. Wuerger, “Improved method for skin reflectance reconstruction from camera images,” Opt. Express 24(13), 14934–14950 (2016). [CrossRef]  

28. Y.-C. Wen, S. Wen, L. Hsu, and S. Chi, “Spectral reflectance recovery from the quadcolor camera signals using the interpolation and weighted principal component analysis methods,” Sensors 22(16), 6288 (2022). [CrossRef]  

29. J. Liang and X. Wan, “Optimized method for spectral reflectance reconstruction from camera responses,” Opt. Express 25(23), 28273–28287 (2017). [CrossRef]  

30. J. Liang, K. Xiao, M. R. Pointer, X. Wan, and C. Li, “Spectra estimation from raw camera responses based on adaptive local-weighted linear regression,” Opt. Express 27(4), 5165–5180 (2019). [CrossRef]  

31. F. Agahian, S. A. Amirshahi, and S. H. Amirshahi, “Reconstruction of reflectance spectra using weighted principal component analysis,” Color Res. Appl. 33(5), 360–371 (2008). [CrossRef]  

32. Y. Zhai, Z. Yang, Z. Liao, J. Wright, and Y. Ma, “Complete dictionary learning via ℓ4-norm maximization over the orthogonal group,” J. Mach. Learn. Res. 21, 1–68 (2020).

33. Y. Shen, Y. Xue, J. Zhang, K. Letaief, and V. Lau, “Complete dictionary learning via ℓp-norm maximization,” in Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), vol. 124 of Proceedings of Machine Learning Research, J. Peters and D. Sontag, eds. (PMLR, 2020), pp. 280–289.

34. Y. Zhang, H.-W. Kuo, and J. Wright, “Structured local optima in sparse blind deconvolution,” IEEE Trans. Inf. Theory 66(1), 419–452 (2020). [CrossRef]  

35. Y. Li and Y. Bresler, “Global geometry of multichannel sparse blind deconvolution on the sphere,” in Advances in Neural Information Processing Systems, vol. 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, eds. (Curran Associates, Inc., 2018).

36. Y. Xue, Y. Shen, V. K. N. Lau, J. Zhang, and K. B. Letaief, “Blind data detection in massive MIMO via ℓ3-norm maximization over the stiefel manifold,” IEEE Trans. Wirel. Commun. 20(2), 1411–1424 (2021). [CrossRef]  

37. University of Eastern Finland, “Spectral: Spectral database,” https://sites.uef.fi/spectral/databases-software/spectral-database (2022).

38. G. Wyszecki and W. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd edition (John Wiley & Sons, 2000).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (8)

Fig. 1. The framework of the reported local feature-weighted spectral reconstruction (LFWSR) method.
Fig. 2. Simulations with different numbers of bases or selected samples. (a) Comparison of component-analysis-based methods for different numbers of bases. (b) Comparison of local-weighted methods for different numbers of selected samples.
Fig. 3. The reconstructed spectra in simulations.
Fig. 4. Boxplot of simulated spectral reconstruction from RGB measurements for Munsell colors matt spectral data.
Fig. 5. Scheme of the prototype spectrometer. (a) The prototype spectrometer. (b) The spectral transmittance functions of the prototype spectrometer.
Fig. 6. Boxplot of spectral reconstruction experiments for Munsell colors matt spectral data.
Fig. 7. Boxplot of spectral reconstruction experiments for the X-rite color checker.
Fig. 8. The reconstructed spectra in real experiments.

Tables (8)

Table 1. Optimal parameter estimation by orthogonal experiments.
Table 2. Spectral compression results of different component analysis methods.
Table 3. Ablation study of LFWSR using SL, PCA-SSR, and NN-K-SVD-SSR.
Table 4. Comparison of simulated spectral reconstruction results.
Table 5. Spectral reconstruction using synthetic data at different noise levels.
Table 6. Comparison of simulated spectral reconstruction from RGB measurements for Munsell colors matt spectral data.
Table 7. Spectral reconstruction experiment for Munsell colors matt spectral data.
Table 8. Spectral reconstruction experiment for the X-rite color checker.

Equations (20)


$g_i = \int_{\lambda_{\min}}^{\lambda_{\max}} I(\lambda)\, r(\lambda)\, f_i(\lambda)\, o(\lambda)\, s(\lambda)\, d\lambda + b_i + n_i, \quad i = 1, 2, \ldots, m$
$g_i = Q_i r + b_i + n_i.$
$r = Ba.$
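Taken together, the first three relations form the sensing model: each channel integrates the spectrum against its effective response, which discretizes to $g_i = Q_i r + b_i + n_i$, and the spectrum itself is expressed as $r = Ba$ in a learned basis. A minimal numpy sketch of this forward model (all shapes and values are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n_bands = 16, 31           # hypothetical: 16 filter channels, 31 spectral bands
Q = rng.random((m, n_bands))  # effective responsivity Q_i (illuminant x filter x optics x sensor)
r = rng.random(n_bands)       # ground-truth reflectance spectrum
b = 0.01 * np.ones(m)         # per-channel bias b_i
n = rng.normal(0.0, 1e-3, m)  # additive measurement noise n_i

g = Q @ r + b + n             # measurements g_i = Q_i r + b_i + n_i
```

Stacking channels gives the single matrix equation $g = Qr + b + n$ that the reconstruction inverts.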
$\max_{B \in O(n,\,\mathbb{R})} \|BR\|_4^4$
$\nabla_{B_t} = 4\,(B_t R)^{\circ 3} R^{\mathrm{T}},$
$U \Sigma V^{\mathrm{T}} = \operatorname{svd}(\nabla_{B_t}),$
$B_{t+1} = U V^{\mathrm{T}},$
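The dictionary-learning relations above amount to one matching-stretching-projection (MSP) iteration for ${L_{\textrm{4}}}$-norm maximization over the orthogonal group, as in [32]: compute the gradient $4(B_t R)^{\circ 3} R^{\mathrm{T}}$ of the objective, then project back onto the orthogonal group via the SVD. A sketch under assumed shapes ($R$ holds training spectra column-wise; all names and sizes are hypothetical):

```python
import numpy as np

def msp_step(B, R):
    """One matching-stretching-projection step for maximizing ||B R||_4^4
    over orthogonal B (l4-norm dictionary learning, cf. [32])."""
    grad = 4.0 * (B @ R) ** 3 @ R.T   # gradient of the l4 objective (elementwise cube)
    U, _, Vt = np.linalg.svd(grad)    # polar projection back onto the orthogonal group
    return U @ Vt

rng = np.random.default_rng(1)
n = 8
R = rng.normal(size=(n, 200))                 # hypothetical training data: n bands x 200 spectra
B = np.linalg.qr(rng.normal(size=(n, n)))[0]  # random orthogonal initialization

obj0 = np.sum((B @ R) ** 4)
for _ in range(50):
    B = msp_step(B, R)
obj1 = np.sum((B @ R) ** 4)  # MSP monotonically increases the objective
```

Because each step maximizes a linearization of a convex objective over the orthogonal group, the iterates stay orthogonal and the objective never decreases.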
$A = [\alpha_1 a_1, \ldots, \alpha_k a_k].$
$A = PG.$
$P = A G^{\mathrm{T}} (G G^{\mathrm{T}})^{-1}.$
$a = A G^{\mathrm{T}} (G G^{\mathrm{T}})^{-1} g.$
$w_j = \dfrac{1}{D(a, a_j) + \sigma}, \quad j = 1, 2, \ldots, N \;\; (N > n),$
$G' = GW,$
$R' = RW.$
$M = R'\,\operatorname{pinv}(G').$
$r = Mg.$
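The five relations above are the local-weighted step: inverse-distance weights $w_j$ (with a small $\sigma$ preventing division by zero) emphasize the $N$ training samples whose feature coefficients are closest to the query's, the weighted response and reflectance matrices form the mapping $M = R'\,\operatorname{pinv}(G')$, and the spectrum is recovered as $r = Mg$. A hedged numpy sketch (the sample-selection rule and the Euclidean distance metric are assumptions of this illustration):

```python
import numpy as np

def reconstruct_local(G, R, A_train, a_query, g, N=20, sigma=1e-6):
    """Local feature-weighted reconstruction sketch: select the N training
    samples whose feature coefficients a_j lie closest to the query's,
    weight them by inverse distance, and regress reflectances on responses."""
    d = np.linalg.norm(A_train - a_query[:, None], axis=0)  # D(a, a_j) for every sample
    idx = np.argsort(d)[:N]                                 # N nearest training samples
    W = np.diag(1.0 / (d[idx] + sigma))                     # w_j = 1 / (D + sigma)
    Gp, Rp = G[:, idx] @ W, R[:, idx] @ W                   # weighted responses / reflectances
    M = Rp @ np.linalg.pinv(Gp)                             # local mapping M = R' pinv(G')
    return M @ g                                            # reconstructed spectrum r = M g
```

When the query's measurement lies in the span of the selected weighted responses, the pseudoinverse regression fits it exactly, so samples near the query dominate the recovered spectrum.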
$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{\lambda} \left[ r(\lambda) - \hat{r}(\lambda) \right]^2}{M}},$
$\mathrm{SSIM} = \dfrac{(2 \mu_r \mu_{\hat{r}} + c_1)(2 \sigma_{r\hat{r}} + c_2)}{(\mu_r^2 + \mu_{\hat{r}}^2 + c_1)(\sigma_r^2 + \sigma_{\hat{r}}^2 + c_2)},$
$\Delta E_{r,\hat{r}} = \sqrt{(L_r - L_{\hat{r}})^2 + (a_r - a_{\hat{r}})^2 + (b_r - b_{\hat{r}})^2} = \sqrt{(\Delta L)^2 + (\Delta a)^2 + (\Delta b)^2}.$
$n(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\dfrac{x^2}{2\sigma^2} \right),$
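The final relations are the evaluation metrics (RMSE over the $M$ sampled wavelengths, SSIM, and the CIELAB color difference $\Delta E$) and the Gaussian noise model used in the synthetic experiments. A minimal sketch of the first two metrics (the stabilizing constants $c_1$ and $c_2$ below are illustrative placeholders, not the paper's values):

```python
import numpy as np

def rmse(r, r_hat):
    """Root-mean-square error between a reference and a reconstructed spectrum."""
    r, r_hat = np.asarray(r, dtype=float), np.asarray(r_hat, dtype=float)
    return np.sqrt(np.mean((r - r_hat) ** 2))

def ssim_1d(r, r_hat, c1=1e-4, c2=9e-4):
    """SSIM between two spectra; c1 and c2 are small stabilizing constants."""
    mu_r, mu_h = r.mean(), r_hat.mean()
    cov = ((r - mu_r) * (r_hat - mu_h)).mean()          # cross-covariance sigma_{r r_hat}
    num = (2 * mu_r * mu_h + c1) * (2 * cov + c2)
    den = (mu_r**2 + mu_h**2 + c1) * (r.var() + r_hat.var() + c2)
    return num / den
```

For identical spectra RMSE is 0 and SSIM is 1; both degrade smoothly as the reconstruction drifts from the reference.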