Optica Publishing Group

Superpixels meet essential spectra for fast Raman hyperspectral microimaging

Open Access

Abstract

In the context of spectral unmixing, essential information corresponds to the most linearly dissimilar rows and/or columns of a two-way data matrix, which are indispensable to reproduce the full data matrix in a convex linear way. Essential information has recently been shown to be accessible on the fly via a decomposition of the measured spectra in the Fourier domain and has opened new perspectives for fast Raman hyperspectral microimaging. In addition, when some spatial prior is available about the sample, such as the existence of homogeneous objects in the image, further acceleration of the data acquisition procedure can be achieved by using superpixels. The expected gain in acquisition time is shown to be around three orders of magnitude on simulated and real data, with very limited distortions of the estimated spectrum of each object composing the images.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Raman microscopy [1] is an imaging modality suited for numerous uses in life imaging [2–4], which require both chemical and spatial information. A current limitation of the technique is the acquisition time, which is not systematically compatible with real-time imaging of in vivo samples. This is intrinsically linked to the point-by-point raster scanning of a laser spot, which requires long exposures when pointing at weakly scattering samples. The resulting acquisition times can range from a few minutes to hours for extended samples. Consequently, an open research front is to accelerate the data acquisition in Raman microscopy via computational approaches [5–24]. These approaches differ by their method and by the informational tasks that they address.

Some accelerations can be achieved via the instrumentation itself, e.g., with the use of line scanning [5–7,12] or digital micro-mirrors for compressive sensing in the spectral domain [16–21]. Other authors exploit the spatial redundancy of the sample itself to reduce the acquisition time [13–15]. For a large family of samples, priors on the chemical content can be assumed, such as the knowledge that the acquired signals result from the linear mixture of only a few spectral components. In such cases, compression can come from the selection of the few pixels carrying the essential spectral information needed for the unmixing of the few chemical components constituting the sample [22–24]. The current article builds on this foundation and exploits this trend further.

In earlier works, essential spectra were identified based on a principal component analysis (PCA) [22–24]. Essential spectra form a convex hull of the data points in the reduced space. Measuring only the spectra corresponding to these essential pixels for a longer time should enable the reconstruction of all the pixels first acquired quickly. While rich from a methodological point of view, this approach has a practical limitation since, to be applied, the PCA needs all the pixels to be scanned first. Very recently, an extension of this work was proposed [25] to solve this issue while building upon the idea of essential spectra for spectral unmixing. The authors in [25] demonstrated the possibility of computing essential spectra on the fly by operating the dimension reduction in the Fourier domain of the spectral data. This enables dynamic (i.e., performed during data acquisition) and chemometric-driven (i.e., based on spectral relevance for unmixing) selective sampling. This approach was evaluated with the potential of a 50-fold acceleration of Raman acquisition [25]. Further speed improvement should be possible by considering the spatial redundancy of the components in samples.

To this aim, we consider samples constituted by objects with contrasted boundaries and homogeneous chemical content. In such configurations, a common approach to compress the spatial information is to use the concept of superpixels [26]. Images are segmented into areas (called superpixels) that share common content. Superpixels are very basic tools in image processing. For instance, they have been applied with a standard acquisition scheme in Raman microscopy in [15]. In this article, we do not introduce a novel superpixel algorithm per se. Instead, we propose to associate the standard concept of superpixels with that of essential spectra to constitute a smart scanning protocol in Raman microscopy. The article describes each step of this smart scanning process and then provides the details of the samples involved to evaluate this process together with the metrics used to assess its performance. We present the results and conclude about the gain in speed of the proposed scanning process.

2. Material and methods

2.1 Methods

We start with an overview of the proposed smart scanning protocol as schematically represented in Fig. 1. It is composed of a two-pass scan. First, a fast acquisition, i.e. at a low signal-to-noise ratio, of the whole sample is made with a Raman microscope at full spatial and spectral resolution to produce the hyperspectral image $\textrm {I}_{\textrm {low}}(x,y,\lambda )$, made of $X\times Y$ pixels at positions $(x,y)$ and $L$ wavelengths. Then, a spectral dimension reduction of $\textrm {I}_{\textrm {low}}$ is performed to produce a grayscale image $\textrm {I}_{\textrm {mono}}(x,y)$, which is further segmented into $P$ superpixels to produce $\textrm {I}_{\textrm {seg}}(x,y)$. Superpixels are small groups of homogeneous neighboring pixels gathered in connected components. Once all positioned, the superpixels entirely cover the image $\textrm {I}_{\textrm {seg}}(x,y)$ like the pieces of a jigsaw puzzle. The pixels of a superpixel in image $\textrm {I}_{\textrm {seg}}(x,y)$ are labeled with a unique ID ranging from $1$ to $P$. The locations of the superpixels and the low signal-to-noise ratio hyperspectral image $\textrm {I}_{\textrm {low}}(x,y,\lambda )$ are fed to the innovative part of our protocol, named unmixed superpixel centroid rescan (UnSCR). This UnSCR produces, after a longer second scan, i.e. at a higher signal-to-noise ratio on a small selection of pixels, a reconstructed hyperspectral image of the superpixels, $\textrm {I}_{\textrm {output}}(x,y,\lambda )$.


Fig. 1. Pipeline representing the different steps of the proposed smart scanning protocol.


We now detail the UnSCR step of our smart scanning protocol as depicted in Fig. 2. During this step, a second scan with a longer exposure is performed on only those specific pixels carrying the so-called essential spectra. This second scan corresponds to the upper path in Fig. 2 and to the protocol recently proposed in [25], which we briefly recall here.


Fig. 2. Detailed view of the unmixed superpixel centroid rescan (UnSCR).


The Raman spectrum $\boldsymbol {\textrm {I}_{\textrm {low}}}{(x,y)}$ captured at each pixel $(x,y)$ is assumed to be approximated by a linear mixture of $K$ individual spectra $\boldsymbol {s_k}=(s_{k1},\ldots,s_{kL})$

$$\boldsymbol{\textrm{I}_{\textrm{low}}}{(x,y)} = \sum_{k=1}^{K}c_{k}(x,y)\boldsymbol{s_k}+\boldsymbol{e}(x,y)\;,$$
where $c_{k}(x,y)$ stands for the $k$-th mixture component in the spectrum measured at location $(x,y)$ and $\boldsymbol {e}(x,y)$ is an error term accounting for measurement noise at location $(x,y)$. The discrete Fourier transform of Eq. (1) along the wavelength domain translates into
$$\boldsymbol{\tilde{\textrm{I}}_{\textrm{low}}}(x,y) =\sum_{k=1}^{K}c_{k}(x,y) \boldsymbol{\tilde{s}}_k+\boldsymbol{\tilde{e}}(x,y)\;,$$
with the $r$-th Fourier coefficient of $\boldsymbol {\tilde {s}}_k$ and $\boldsymbol {\tilde {e}}(x,y)$ given by:
$$\tilde{s}_{kr}= \sum_{l=1}^{L}s_{kl}\exp{\left({-}j\frac{2\pi}{L}(r-1)(l-1)\right)}\;,$$
and
$$\tilde{e}_{r}(x,y)= \sum_{l=1}^{L}e_{l}(x,y)\exp{\left({-}j\frac{2\pi}{L}(r-1)(l-1)\right)}\;,$$
where $r = 1,\ldots,L$ and $j$ is the imaginary unit for which $j^2 ={-}1$. The real and imaginary parts of $\boldsymbol {\tilde {\textrm {I}}_{\textrm {low}}}(x,y)$ can then be rewritten as
$$G_r(x,y)=\sum_{k=1}^{K}c_{k}(x,y)\Re(\tilde{s}_{kr})+\Re(\tilde{e}_{r}(x,y))\;,$$
$$Q_r(x,y)=\sum_{k=1}^{K}c_{k}(x,y)\Im(\tilde{s}_{kr})+\Im(\tilde{e}_{r}(x,y))\;.$$

By the inverse DFT, the real numbers $G_r(x,y)$ and $Q_r(x,y)$ can be seen as coordinates of the $r$-th phasor involved in the Fourier representation of $\textbf {I}_{\textrm {low}}{(x,y)}$

$$I_{\textrm{low}}(x,y)_l=\frac{1}{L}\sum_{r=1}^{L}(G_{r}(x,y)+jQ_{r}(x,y))\exp{\left(j\frac{2\pi}{L}(l-1)(r-1)\right)}\;.$$

Consequently, the phasor coordinates $G_r(x,y)$ and $Q_r(x,y)$ of $\textrm {I}_{\textrm {low}}(x,y)_l$ can be approximated as convex combinations of the Fourier coefficients $\tilde {s}_{kr}$ of the spectra, with $r = 1,\ldots,L$.
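As an illustration, the phasor coordinates of Eqs. (5) and (6) can be obtained directly from a fast Fourier transform along the spectral axis. The sketch below assumes the hyperspectral cube is stored as a NumPy array of shape $(X, Y, L)$; the function name and the zero-based harmonic index are illustrative, not part of the original implementation.

```python
import numpy as np

def phasor_coordinates(I_low, r):
    """Real and imaginary parts (G_r, Q_r) of the r-th DFT coefficient
    of each spectrum in a (X, Y, L) hyperspectral cube, as in
    Eqs. (5)-(6).  Note: r is zero-based here, so the paper's r-th
    coefficient corresponds to index r - 1."""
    F = np.fft.fft(I_low, axis=-1)  # DFT along the wavelength axis
    return F[..., r].real, F[..., r].imag
```

Each pixel then maps to a point $(G_r, Q_r)$ in the phasor plane, on which the convex hull selection of the essential spectra operates.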

During UnSCR, the convex hull of the point cloud gathering all $G_r(x,y)$ and $Q_r(x,y)$ for all pixels $(x,y)$ of $\textrm {I}_{\textrm {low}}$ is computed. The pixels located on this convex hull are associated with the most linearly dissimilar spectra. We designate them as essential spectra. It is to be noticed that the essential spectra may not be those of pure chemical components but always constitute the minimal set of profiles from which all the measured ones can be estimated linearly. To provide a better estimate of these essential spectra, they are rescanned with a longer exposure, which leads to less noisy spectra. This constitutes a first gain in time in the UnSCR method.
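A minimal sketch of this selection step, assuming the phasor coordinates are available as two 2-D NumPy arrays; SciPy's `ConvexHull` stands in for whichever hull routine the actual implementation uses.

```python
import numpy as np
from scipy.spatial import ConvexHull

def essential_pixels(G, Q):
    """Pixels whose phasor points (G_r, Q_r) lie on the convex hull of
    the point cloud; these carry the essential spectra to be rescanned."""
    pts = np.column_stack([G.ravel(), Q.ravel()])
    hull = ConvexHull(pts)
    rows, cols = np.unravel_index(hull.vertices, G.shape)
    return list(zip(rows.tolist(), cols.tolist()))
```

Only these pixels are revisited during the second, longer-exposure scan.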

Therefrom, one can in principle reconstruct the spectrum of each single pixel as a combination of the less noisy essential spectra. We propose here a second source of acceleration, not included in the original work of [25], and compute only a single representative spectrum for each superpixel. This corresponds to the lower path in the pipeline of Fig. 2. The spectrum of the representative pixel of each superpixel, $\boldsymbol {\textrm {I}_{\textrm {low}}}{(x_p,y_p)}$, can be modeled in the following way

$$\boldsymbol{\textrm{I}_{\textrm{low}}}{(x_p,y_p)} = \textbf{S}_{\textrm{low}}^T \boldsymbol{c}_{\textrm{low}}(x_p,y_p)$$
where $\textbf {S}_{\textrm {low}}$ is a matrix containing the $K$ essential spectra along its rows and the $L$ spectral bands along its columns, and $\boldsymbol {c}_{\textrm {low}}(x_p,y_p)$ is a vector containing the $K$ concentrations associated with the essential spectra for the pixel located at $(x_p,y_p)$.

Since $\boldsymbol {\textrm {I}_{\textrm {low}}}{(x_p,y_p)}$ contains noise, there is no exact solution to Eq. (8). However, we can find $\boldsymbol {c}_{\textrm {low}}(x_p,y_p)$ so as to minimize the distance between $\boldsymbol {\textrm {I}_{\textrm {low}}}{(x_p,y_p)}$ and $\textbf {S}_{\textrm {low}}^T \boldsymbol {c}_{\textrm {low}}(x_p,y_p)$. A possibility is to minimize the squared Euclidean norm

$$d = \left\lVert \boldsymbol{\textrm{I}_{\textrm{low}}}{(x_p,y_p)} - \textbf{S}_{\textrm{low}}^T \boldsymbol{c}_{\textrm{low}}(x_p,y_p) \right\rVert^2_2\;,$$
and differentiate it with respect to $\boldsymbol {c}_{\textrm {low}}(x_p,y_p)$:
$$\nabla _c (d) = 2\textbf{S}_{\textrm{low}}\textbf{S}_{\textrm{low}}^T\boldsymbol{c}_{\textrm{low}}(x_p,y_p)- 2\textbf{S}_{\textrm{low}}\boldsymbol{\textrm{I}_{\textrm{low}}}{(x_p,y_p)}\;.$$

The minimum of $d$ is obtained when $\nabla _c (d) = 0$, i.e. when

$$\boldsymbol{c}_{\textrm{low}}(x_p,y_p) = (\textbf{S}_{\textrm{low}}\textbf{S}_{\textrm{low}}^T)^{{-}1}\textbf{S}_{\textrm{low}}\boldsymbol{\textrm{I}_{\textrm{low}}}{(x_p,y_p)}\;$$
where $(\textbf {S}_{\textrm {low}}\textbf {S}_{\textrm {low}}^T)$ is assumed to be a full-rank matrix.

Then, only the pixels corresponding to the essential spectra are rescanned with a longer exposure, yielding $\textbf {S}_{\textrm {high}}$. The spectrum $\textbf {I}_{\textrm {output}}(x_p,y_p)$ of the centroid of each superpixel is estimated from the estimated mixture proportions $\boldsymbol {c}_{\textrm {low}}(x_p,y_p)$ and $\textbf {S}_{\textrm {high}}$:

$$\textbf{I}_{\textrm{output}}(x_p,y_p) = \textbf{S}_{\textrm{high}}^T\boldsymbol{c}_{\textrm{low}}(x_p,y_p)\;.$$

The produced output image is made of superpixels with homogeneous spectra as given in Eq. (12).
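Equations (11) and (12) amount to an ordinary least-squares unmixing followed by a projection onto the high SNR essential spectra. A sketch, assuming the essential spectra are stored as the rows of $(K, L)$ arrays; `np.linalg.lstsq` replaces the explicit normal-equation inverse for numerical stability.

```python
import numpy as np

def unmix_and_reconstruct(i_low_px, S_low, S_high):
    """Estimate the mixture coefficients c_low of Eq. (11) from the
    low SNR spectrum of a superpixel representative, then rebuild its
    spectrum from the high SNR essential spectra as in Eq. (12)."""
    # Solve min_c || i_low_px - S_low^T c ||_2^2, whose minimizer equals
    # (S_low S_low^T)^{-1} S_low i_low_px when S_low has full rank
    c_low, *_ = np.linalg.lstsq(S_low.T, i_low_px, rcond=None)
    return S_high.T @ c_low
```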

The proposed smart scanning method of Figs. 1 and 2 necessitated some choices that need to be further detailed. First, there are many ways of producing the spectral dimension reduction. We tested several common state-of-the-art methods and selected the one providing the best results on the datasets used in this article; this comparison is provided in the Appendix. Second, there are many ways of performing superpixel segmentation. We arbitrarily selected the version of simple linear iterative clustering (SLIC) provided in [27], but any alternative could be used without any loss of generality. Many criteria could be proposed to select a single representative pixel within each superpixel; here, we selected the spatial centroid. Also, UnSCR includes some hyperparameters. The first one is the acquisition time of the fast first scan. The second one is the number of elementary spectra $K$, which can depend on the number of harmonics $r$ used in the DFT representation: a larger number of essential spectra can be found using more harmonics $r$. The last hyperparameter is the number of superpixels $P$ in the image. We set this number $P$ of superpixels to an arbitrarily fixed value high enough to segment at least all the objects of interest in our images. Finally, we vary the remaining hyperparameter $r$ and the acquisition time, which is related to the signal-to-noise ratio. With these choices, the relevance of UnSCR has been evaluated on various datasets of practical interest as detailed in the following section.
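As an illustration of the centroid choice, given any superpixel label map (e.g. produced by an SLIC implementation), the representative pixel of each superpixel can be computed as follows; the function name is ours.

```python
import numpy as np

def superpixel_centroids(labels):
    """Spatial centroid (row, col) of each superpixel in a label map
    with IDs 1..P, i.e. the single representative pixel whose spectrum
    is unmixed and reconstructed in SCR/UnSCR."""
    centroids = {}
    for p in np.unique(labels):
        rows, cols = np.nonzero(labels == p)
        # truncate the mean position to an integer pixel coordinate
        centroids[int(p)] = (int(rows.mean()), int(cols.mean()))
    return centroids
```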

2.2 Datasets

We have tested our smart sampling method (UnSCR) on three types of real samples that we describe here.

  • 1. $D_1$ is a sample of biomedical interest. A piece of a femoral neck was collected during a hip replacement surgery and placed immediately in formalin. This type of sample was obtained with written informed consent from the donor. The bone sample was then dehydrated in a graded series of ethanol before polymethylmethacrylate (PMMA) embedding at 4$^{\circ }$C. The surface of the resin block was then ground with sandpaper and polished with diamond paste (Struers, Champ sur Marne, France). The bone sample was finally imaged with a Renishaw InVia Qontor confocal Raman microscope using a 20$\times$ Olympus objective (0.40 N.A.). The spectrometer was equipped with a 1200 g/mm grating. For excitation, a 785 nm diode laser operated at 3 mW power was used. Raman microscopy is a common imaging technique for the investigation of bone mineral composition [28]. One can identify three objects of interest in the $D_1$ sample when viewed with Raman microscopy: (i) the Haversian canal ($\phi\,\textrm {50-100 } \mu m$), which is surrounded by (ii) lamellae. These form elliptic shapes named osteons ($\phi\,\textrm {150-250 } \mu m$). The osteons are included in a global matrix which constitutes (iii) the third object, i.e. the background here.
  • 2. $D_2$ is a calibration Raman sample made using Raman material standards, namely Naphthalene (CAS 91-20-3) and Polystyrene (CAS 9003-70-7). Naphthalene was deposited on a coverslip and melted with a hotplate. Polystyrene was dissolved in chloroform and a drop of the mixture was left to air dry over the Naphthalene. The Naphthalene-Polystyrene sample was finally imaged with a Renishaw InVia Qontor confocal Raman microscope using a 20$\times$ Olympus objective (0.40 N.A.). The spectrometer was equipped with a 1800 g/mm grating. For excitation, a 532 nm diode laser operated at 1 mW power was used. Two objects thus compose the $D_2$ sample and yield the contrast observed in Raman microscopy: (i) the Naphthalene crystals ($\phi\,\textrm {100-150 } \mu m$), which serve as objects scattered in the sample, and (ii) the Polystyrene region, which serves as a background.
  • 3. $D_3$ was composed of three types of objects obtained by mixing powders of (i) calcium carbonate ($\textrm {CaCO}_3$), (ii) sodium nitrate ($\textrm {NaNO}_3$), and (iii) sodium sulfate ($\textrm {Na}_2\textrm {SO}_4$) and then pressing them into a tablet. The objects in this mixture of powders have no typical fixed size. Image acquisition was performed on a LabRAM HR micro spectrometer (Horiba France SAS, Palaiseau, France) using a $50\times$ Olympus objective (0.75 NA). The spectrometer was equipped with a 600 g/mm grating. For excitation, a 632.8 nm HeNe laser was used (15 mW laser power at the sample) [28,29].

Several replicated measurements at different acquisition times were performed for each dataset, leading to different noise levels. To quantify the noise contained in the spectra of our hyperspectral image (HSI), we used an estimate of the signal-to-noise ratio (SNR) similar to the one given in [30],

$$\textrm{SNR} = \frac{1}{XY} \sum_{x=1}^X \sum_{y=1}^Y 10\ \textrm{log}_{10}\Bigg(\frac{\sum_{\lambda=1}^{L} I_{\textrm{S}\lambda}(x,y)^2}{\sum_{\lambda=1}^{L} e_{\lambda}(x,y)^2}\Bigg)$$
with $\textrm {I}_{\textrm {S}}$ the signal extracted from $\textrm {I}_{\textrm {low}}$ by the Savitzky-Golay algorithm [31], and $e$ the noise estimated as the difference between $\textrm {I}_{\textrm {low}}$ and $\textrm {I}_{\textrm {S}}$. The SNR of an HSI is estimated by the average SNR over all the spectra comprised in the HSI.
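A sketch of this SNR estimate, assuming the cube is an $(X, Y, L)$ NumPy array; the Savitzky-Golay window length and polynomial order below are illustrative choices, not values from the article.

```python
import numpy as np
from scipy.signal import savgol_filter

def estimate_snr(I_low, window=11, polyorder=3):
    """Mean per-pixel SNR of Eq. (13): the Savitzky-Golay smoothed
    spectra act as the signal I_S, the residual as the noise e."""
    I_s = savgol_filter(I_low, window, polyorder, axis=-1)
    e = I_low - I_s
    ratio = np.sum(I_s**2, axis=-1) / np.sum(e**2, axis=-1)
    return float(np.mean(10.0 * np.log10(ratio)))
```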

To reduce acquisition time compared to a classical raster scanning approach where all pixels are scanned at high SNR, the UnSCR scanning method necessitates that few essential spectra be detected and that the segmentation of the image into superpixels be of good quality. This is expected to happen with the chosen samples, which can be viewed as composed of few chemical components spatially assembled in objects of uniform content, as illustrated in Fig. 3.


Fig. 3. Comparison between real and simulated data. The first two columns correspond to real data and the last two correspond to simulated data. Each row refers to a sample of each dataset. $D_1$ and $D_3$ are 20 dB HSI and $D_2$ is a 13 dB HSI. The crosses are located in the different objects of the samples. The Raman spectrum of the pixel pointed by the cross is provided with the corresponding color.


2.3 Simulator

For further assessment of the UnSCR sampling method, we generated a simulated version of the real samples $D_1$, $D_2$, and $D_3$ described in the previous section. The simulations of $D_1$, $D_2$, and $D_3$ were composed of images covering an SNR range from -20 dB to 25 dB with a step of one image per dB. To provide an estimation of the noise-induced variance, we replicated the simulated acquisition of the same image ten times for each SNR.

The generation of the simulated samples was done using the following steps to produce realistic synthetic hyperspectral Raman images:

  • 1. We first initialize an empty image of size $X\times Y$ where we randomly position objects that represent the different components of the sample. A label is associated with the objects of each of these components.
  • 2. We simulate spectra with Lorentzian peaks at characteristic wavenumbers. We then associate these spectra with each respective label, thus obtaining a hyperspectral image $\textrm {I}$ of size $X\times Y \times L$. Here, we add thermal noise related to the acquisition time and thus obtain a noisy image
    $$I_N = I + N,$$
    where $N$ is a $X\times Y \times L$ tensor containing random values following a normal distribution with a zero mean and a standard deviation
    $$\sigma_N = \frac{1}{\sqrt{L}} 10^{a},$$
    with
    $$a = \frac{\sum_{x=1}^X \sum_{y=1}^Y 10\,\log_{10}\left(\sum_{\lambda=1}^{L}I_{\textrm{S}\lambda}(x,y)^2\right)-\textrm{SNR}\times XY}{20XY}.$$
  • 3. We finally added random sensor gain to our synthetic hyperspectral image
    $$I_{N+G} = I_N \odot G$$
    by calculating the Hadamard (element-wise) product, $\odot$ [32], of $\textrm {I}_N$ and $G$, where $G$ is an $X \times Y \times L$ tensor filled with random values following a normal distribution of mean one and standard deviation empirically set to $5\times 10^{-2}$.
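The noise and gain stages above (Eqs. (14)–(17)) can be sketched as follows; here the clean cube $I$ plays the role of the signal $I_S$, and the function name is ours.

```python
import numpy as np

def add_noise_and_gain(I, target_snr_db, rng=None):
    """Corrupt a clean (X, Y, L) cube: additive Gaussian noise whose
    standard deviation follows Eqs. (15)-(16) for a target SNR, then
    a multiplicative sensor gain of mean 1 and std 5e-2 (Eq. (17))."""
    rng = np.random.default_rng() if rng is None else rng
    X, Y, L = I.shape
    log_energy = 10.0 * np.log10(np.sum(I**2, axis=-1))  # per pixel
    a = (log_energy.sum() - target_snr_db * X * Y) / (20.0 * X * Y)
    sigma = 10.0**a / np.sqrt(L)                         # Eq. (15)
    I_n = I + rng.normal(0.0, sigma, I.shape)            # Eq. (14)
    G = rng.normal(1.0, 5e-2, I.shape)                   # sensor gain
    return I_n * G                                       # Eq. (17)
```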

Our simulator enables the generation of synthetic data from a given sample at different SNR levels, allowing us to test the robustness of the compared algorithms in terms of sensitivity to the noise level and computational times. Figure 3 illustrates the spatial and spectral realism of the simulations via a comparison between real samples and their simulated counterparts. The images are provided at high SNR in Fig. 3.

2.4 Metrics

To quantitatively evaluate the performance of the proposed UnSCR sampling protocol, we consider both the spatial and the spectral reconstruction of the hyperspectral image. Among all possible metrics described in [33–36], we decided to use the Boundary Recall (Rec) from [34] and the Precision (Prec) as an extension to the previous metric. Both assess the quality of the segmentation according to a given spatial ground truth by respectively emphasizing its capability to avoid missing boundaries and to accurately identify positive boundaries while minimizing false alarms. To evaluate spectral reconstruction, we used the Root Mean Square Error (RMSE) from [33], which measures the spectral error between our reconstructed HSI and the spectral ground truth. Let $\textrm {I}_{\textrm {seg GT}}$ be the segmented image of the ground truth. In simulated samples, this ground truth was natively generated in silico. In real samples, the ground truth was manually established from a Reference image acquired at high SNR, $I_{\textrm {ref}}(x,y,\lambda )$.

We compute the numbers of true positive $\textrm {TP}(\textrm {I}_{\textrm {seg}}, \textrm {I}_{\textrm {seg GT}})$, false positive $\textrm {FP}(\textrm {I}_{\textrm {seg}}, \textrm {I}_{\textrm {seg GT}})$, and false negative $\textrm {FN}(\textrm {I}_{\textrm {seg}}, \textrm {I}_{\textrm {seg GT}})$ boundary pixels in $\textrm {I}_{\textrm {seg}}$ with respect to $\textrm {I}_{\textrm {seg GT}}$, and then define

$$\textrm{Rec}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}}) = \frac{\textrm{TP}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}})}{\textrm{TP}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}}) + \textrm{FN}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}})}\;,$$
and
$$\textrm{Prec}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}}) = \frac{\textrm{TP}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}})}{\textrm{TP}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}}) + \textrm{FP}(\textrm{I}_{\textrm{seg}}, \textrm{I}_{\textrm{seg GT}})}\;,$$
the fraction of predicted boundary pixels that match the ground truth.
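Given the boundary-pixel counts, the two metrics reduce to the usual recall and precision ratios; a minimal sketch (how boundary pixels are matched to the ground truth, e.g. with a tolerance band, is left to the counting step):

```python
def boundary_metrics(tp, fp, fn):
    """Boundary Recall (Eq. (18)) and Precision (Eq. (19)) computed
    from true positive, false positive and false negative counts."""
    return tp / (tp + fn), tp / (tp + fp)
```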

The Root Mean Square Error metric (RMSE) is defined as

$$\textrm{RMSE} = \frac{1}{X \times Y} \sum_{x=1}^{X} \sum_{y=1}^{Y} \sqrt{\frac{1}{L} \sum_{\lambda=1}^{L}\big[I_{\textrm{output}}(x,y,\lambda) - I_{\textrm{ref}}(x,y,\lambda)\big]^2} \;.$$
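This RMSE can be sketched as follows for two $(X, Y, L)$ cubes: a per-pixel root-mean-square spectral error averaged over all pixels.

```python
import numpy as np

def spectral_rmse(I_output, I_ref):
    """RMSE of Eq. (20): root-mean-square spectral error per pixel,
    averaged over the X*Y pixels of two (X, Y, L) cubes."""
    per_pixel = np.sqrt(np.mean((I_output - I_ref)**2, axis=-1))
    return float(per_pixel.mean())
```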

In addition to these metrics, we assess the processing time taken by the full acquisition scheme. This includes the acquisition time of the low SNR image, the computation time for the dimension reduction and for the segmentation in superpixels, and the sampling time for the high SNR rescan (see Figs. 1 and 2).

To provide a relative quantitative interpretation of the performance metrics, the proposed UnSCR sampling method will be compared to alternative scanning protocols. First, UnSCR is compared with the current conventional scanning method, which consists in acquiring all pixels at high SNR. We call this method the Reference. Then, we can compare UnSCR with alternative fast scanning approaches also operating with superpixels. A method that we call Baseline consists of a single scan at low SNR. This image is segmented into superpixels as in UnSCR. Then, no rescan is done and, instead, the superpixels are filled with the average spectrum of the pixels constituting each superpixel. With this Baseline, one can probe the added value of the UnSCR scheme of Fig. 2. We expect the Baseline to be faster than UnSCR since there is a single-pass scan, but with lower quality in spectral estimation. As a second alternative, we propose the superpixel centroid rescan (SCR). In this variant, we find the centroid of every superpixel on the segmented image. Based on these centroid positions, we come back to the sample to rescan the corresponding spectra with a longer acquisition time, which leads to a higher SNR. Each of the rescanned spectra is then assigned to every pixel of the corresponding superpixel. The compression in time is here limited by the number of superpixels, by comparison with the UnSCR where only the essential spectra are rescanned at higher SNR. The UnSCR is expected to be faster than the SCR in every situation where the number of essential spectra is lower than the number of superpixels in the image.

3. Results

We start with the assessment of the UnSCR sampling method on simulated data. The quality of the reconstruction in the spectral domain is illustrated in Fig. 4. The Baseline method has a much larger RMSE than SCR, which performs just slightly better than UnSCR. Also, the number of harmonics $r$ does not seem to impact the error significantly. This result is interesting since a lower number of harmonics also means a lower number of essential pixels to be rescanned and thus faster acquisition. The errors of SCR and UnSCR tend towards the same value above 0 dB.


Fig. 4. Evolution of the Root Mean Square Error of the reconstructed spectra as a function of SNR on a logarithmic scale. Dashed lines stand for the mean error and transparent areas for the standard deviation over the whole synthetic datasets $D_1$, $D_2$, and $D_3$, estimated on 10 repetitions of the noise simulation for each SNR. All curves should be compared to the Reference (Ref), which is a constant line at 0 error.


The assessment of UnSCR on the synthetic datasets in the spatial domain is depicted in Fig. 5, with the Boundary Recall plotted as a function of the noise level for the three different synthetic datasets. The curves exhibit a consistent overall trend and the segmentation achieves at least 90% precision for a noise level close to 0 dB, which is a decent accuracy for the reconstruction. Slight differences may be observed between $D_1$, $D_2$, and $D_3$ due to their different spatial and spectral content. The boundary recall and precision of $D_2$ tend to be lower than those of $D_1$ and $D_3$ due to the size of the objects (see Fig. 3). This may be due to the fact that the same number of superpixels was set for all the datasets while, obviously, more objects need to be extracted from $D_2$ than from the other datasets.


Fig. 5. Evolution of the boundary recall (Rec) and the precision (Prec) as a function of SNR for the synthetic datasets $D_1$, $D_2$, and $D_3$. Dashed lines stand for the mean and transparent areas for the standard deviation over the whole synthetic datasets, estimated on 10 repetitions of the noise simulation for each SNR. Curves should be compared to the Reference (Ref), which is a constant line at 100%.


Concerning processing time, Fig. 6 illustrates on synthetic data that UnSCR with one DFT harmonic ($r=1$) achieves acquisitions up to 1000 times faster than the current conventional acquisition where all pixels are acquired at high SNR. In more complex situations, a higher number of harmonics might be required; however, the gain in time still exists. The Baseline method is obviously the fastest since it involves no rescan step. The UnSCR method achieves a speed comparable to the Baseline around 0 dB while yielding a lower error (Fig. 4). From this study on synthetic datasets, one can consider, as a trade-off between the different metrics, scanning the first pass at 0 dB (indicated as a dashed line in Figs. 4–6) and using the UnSCR method to reconstruct the hyperspectral image. This is the SNR we used to test UnSCR on the real samples of our datasets.


Fig. 6. Average evolution of the processing time as a function of SNR on a synthetic version of all datasets $D_1$, $D_2$, and $D_3$. Dashed lines stand for the mean and transparent areas for the standard deviation over the whole synthetic datasets, estimated on 10 repetitions of the noise simulation for each SNR.


To confirm the validity of the results obtained at the identified trade-off of 0 dB, we provide the reconstructed images and spectra for the simulated and real datasets in Figs. 7 and 8, respectively. A visual comparison between $\textrm {I}_{\textrm {low}}$, $\textrm {I}_{\textrm {output}}$, the Baseline, and $\textrm {I}_{\textrm {ref}}$ shows that the quality of the Baseline and the UnSCR images appears much better than that of the low SNR image. Also, as expected from the results on synthetic data of Fig. 5, the images produced by the Baseline and UnSCR appear not far from the Reference. The main distortion is due to the limited number of superpixels here (which we purposely kept fixed in this article). The quality of the reconstructed spectra in simulated and real samples when operating at 0 dB is provided in Fig. 8 for the different objects present in the samples. The agreement of the UnSCR spectra with the Reference spectra is very good, with exactly superimposed curves except for one object of the sample $D_3$ where some errors appear. This lower performance with a larger error on $D_3$ can be interpreted in the following way. In the samples $D_1$ and $D_2$, the chemical content and the spatial content are tightly linked, with objects containing almost pure chemical components. By contrast, $D_3$ is a mixture of scattered powders with overlapping chemical content. Interestingly, the quantitative assessments of the error on the simulated and the real samples when operating at $0$ dB for the first scan are in good agreement, as illustrated in Table 1.


Fig. 7. Spatial qualitative assessment of our smart sampling method on the three datasets with the single spectral component $\textrm {I}_{\textrm {mono}}$. Panel (a) stands for the simulated dataset and (b) for the real dataset. The low SNR hyperspectral images (0 dB) on which the methods are based are presented in the first (left) column. The Baseline and our proposed smart sampling method are shown in the second and third columns, respectively. The reference high SNR hyperspectral image (25 dB) is depicted on the right in the fourth column. To allow visualization, we reduced the hyperspectral images to grayscale images by using the Fourier transform (see 2.1).



Fig. 8. Qualitative assessment of the reconstructed spectra of the different objects imaged in the three simulated (a) and real (b) samples. Left panels: spectra acquired at low SNR (0 dB). Middle panels: spectra computed by means of Baseline. Right panels: spectra reconstructed by means of UnSCR. Reference spectra are represented as solid gray lines. All spectra are retrieved at the pixel locations indicated in Fig. 3.



Table 1. Quantitative assessment of the images used in Figs. 7 and 8. Processing time is detailed as follows: First scan stands for the duration of the first scan, i.e. the low SNR hyperspectral image acquisition time; Computing stands for the computation time of the dimension reduction; Second scan stands for the segmentation and sampling time together with the rescanning time.

4. Discussion

The results section demonstrated the efficiency of the UnSCR sampling method both in simulation and with real data. The gain in time is remarkable, especially for samples for which the Raman signal is weak. This is for instance the case with the sample $D_1$ of biomedical interest. For such bone samples, the conventional Reference scanning where all pixels are scanned at high SNR takes 11 hours. With UnSCR the process takes less than a second.

This work opens perspectives in several directions to further improve the images reconstructed by our smart scanning protocol. On the spectral side, we used a basic least-squares method. Many alternatives for spectral unmixing could be tested to further improve the reconstruction [37]. Specifically, from a chemometric perspective [38], one could head toward a decomposition in terms of pure chemical components instead of a decomposition on the essential spectra. On the spatial side, we also used a basic superpixelization algorithm. Many variants of this algorithm exist and could be tested, such as multiscale superpixels [39], which would be specifically suited to complex samples made of objects of various sizes. Concerning the scanning time, as visible in Table 1, the current limiting factor of the method remains the first scan. One could select a lower SNR and apply an advanced denoising method before the superpixelization. An alternative would be to use another imaging modality, registered with our Raman microscope, that is accessible at low cost and at higher speed. One could think of RGB imaging, polarization imaging, or any alternative providing enough contrast to allow a good superpixelization. The interest of UnSCR was demonstrated on real and simulated samples made of homogeneous objects composed of a few chemical components. Rescanning only the essential spectra and segmenting into superpixels are found efficient under these assumptions. After this proof of feasibility, it would be important to investigate what happens when one departs from these assumptions. One could also investigate the impact of the number of harmonics for various types of spectra. This is accessible with the simulator that we developed and make accessible to the reader (supplementary material to be added after acceptance of the manuscript).
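To make the centroid-rescan idea concrete, here is a minimal plain-Python sketch of extracting one rescan location per superpixel; the function name and the toy label map are illustrative assumptions, not the implementation used in this work.

```python
def superpixel_centroids(labels):
    """Return one (row, col) centroid per superpixel label.

    `labels` is a 2-D integer label map, e.g. the output of a SLIC-like
    segmentation; in UnSCR only these few centroids are rescanned at high SNR.
    """
    sums = {}  # label -> [sum of rows, sum of cols, pixel count]
    for y, row in enumerate(labels):
        for x, lab in enumerate(row):
            s = sums.setdefault(lab, [0, 0, 0])
            s[0] += y
            s[1] += x
            s[2] += 1
    return {lab: (round(sy / n), round(sx / n)) for lab, (sy, sx, n) in sums.items()}

# Toy 3x4 label map with two "superpixels" (an illustrative assumption).
labels = [[0, 0, 0, 1],
          [0, 0, 0, 1],
          [0, 0, 0, 1]]
print(superpixel_centroids(labels))  # {0: (1, 1), 1: (1, 3)}
```

For a real image, the label map would come from a superpixelization algorithm such as SLIC, and each centroid would index the pixel to be rescanned at high SNR.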
Last, UnSCR could be performed jointly with other smart scanning strategies proposed in the literature [5–14,16–21] to further increase the acquisition speed. More specifically, the methods based on filters implemented with a digital micro-mirror device (DMD) accelerate the acquisition of the spectral data but still rely on a single scan of all the pixels of the sample at the same SNR. Our acceleration in the spectral domain is based, instead, on a selective two-pass scan. The two methods are therefore completely compatible, and UnSCR could be further accelerated by coupling it with a DMD-based setup. This would be particularly suited to the protocol of spectral estimation described in [21].

One may wonder how UnSCR relates to the artificial intelligence (AI) approaches increasingly used for denoising and computational imaging [40,41] as well as for fast Raman microscopy [42,43]. Modern AI is mostly based on supervised or self-supervised learning. This means that examples of data annotated by humans or by the machines themselves have to be provided for training purposes. Moreover, the training process can be very long, especially when considering tensors of data such as hyperspectral images. High-SNR images could then be generated in real time from low-SNR images, but only after such a training process on a given set of samples. Conversely, our scanning method can operate in real time without any training, at least at the speed indicated in Table 1, a speed that could be increased even further. Besides, another limitation of AI methods is that they are data-driven: if the samples change, the training has to be performed again. Conversely, our method only requires knowledge of the typical size of the objects in the scene and does not have to be further fine-tuned, as demonstrated with the three types of images used for illustration in this article. The latest advancements in AI head toward reducing the need for annotation with few-shot or zero-shot learning based on foundation models [44]. With few-shot learning approaches, image annotation is still necessary, as illustrated recently for Raman microscopy [43]. In zero-shot learning, the methods require only a few prompts (bounding boxes, pixels). An interesting approach combining the principles of UnSCR with those of zero-shot learning methods could be to use the spatial features (superpixels, centroids, $\ldots$) produced by UnSCR as prompts to generate an improved post-processed image.

5. Conclusion

We have introduced a smart-scanning method for Raman microscopy imaging which can speed up the acquisition by a factor of 1000. This method is suited to samples constituted of objects with homogeneous chemical content. The gain is larger than the one recently estimated in [25] because we combined compression in the spectral domain, via essential spectra, with compression in the spatial domain, via superpixels. The investigations have been carried out successfully on simulated and real samples of biological and pharmacological interest, with a good match between simulated and real data. We have discussed new directions toward further improvements in the speed and quality of image reconstruction for this new fast scanning protocol.

Last but not least, it is worth noting that the UnSCR method could apply to any spatial scene made of objects with homogeneous content in which components mix linearly along the non-spatial dimension. As such, it is not limited to Raman microscopy. This non-spatial dimension could carry other spectral information, or even be temporal. The method is of interest whenever scanning at low SNR is mandatory and slow. It could, for instance, be of specific value for fluorescence lifetime imaging microscopy (FLIM), for which a linear combination of signals (exponential relaxations) can be assumed and low SNR is mandatory to avoid photobleaching.

6. Appendix

In this appendix, we report the comparison of the various dimension reduction techniques we tested for the production of the spectral mono-component grayscale image $\textrm {I}_{\textrm {mono}}(x,y)$ from the original low-SNR first-scan image $\textrm {I}_{\textrm {low}}(x,y,\lambda )$ as depicted in Fig. 1. Since our algorithm is intended to operate on the fly, we cannot use a supervised method; we therefore included only unsupervised methods in our comparison. As a disclaimer, we also stress that this selection of methods was not intended to be an exhaustive benchmark since, in principle, any method would be appropriate provided that its computational cost is reasonable (i.e., not appearing as a limiting factor in the UnSCR process) and the superpixel segmentation is judged of sufficient quality. We included a simple mean, the standard PCA, t-SNE [45], UMAP [46], ISOMAP [47], and the discrete Fourier transform along the first spectral harmonic as provided in Eq. (1) with r=1. To compare all these methods, we tested them on a simulation of the data set $D_1$ (i.e., the bone sample) generated at 0 dB SNR (see Eq. (13)). The comparison with the superpixelized version of the ground truth was measured with the boundary recall and precision of Eqs. (18) and (19). The comparative visualization of $\textrm {I}_{\textrm {mono}}(x,y)$ is provided in Fig. 9. The quantitative comparison of the boundary recall is given in Table 2 together with the computation time. The fastest method is of course the simple computation of the mean. The DFT appears second in both boundary recall and precision, and third in computational time thanks to the fast Fourier transform algorithm. Consequently, the discrete Fourier transform appears as an appropriate choice since it is relatively fast and provides segmentation results comparable to the best-tested method (PCA). Moreover, the discrete Fourier transform must be computed anyway to select the essential spectra. Therefore, we selected this method to produce $\textrm {I}_{\textrm {mono}}(x,y)$.
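To make the retained reduction concrete, here is a minimal sketch of a per-pixel spectral harmonic and the resulting grayscale image; the function names and toy spectra are illustrative assumptions following a standard DFT convention, not the authors' code.

```python
import cmath

def spectral_harmonic(spectrum, r):
    """Harmonic r of a spectrum under a standard DFT convention:
    s~^r = sum_{l=1..L} s^l * exp(-j*2*pi*(r-1)*(l-1)/L)."""
    L = len(spectrum)
    return sum(s * cmath.exp(-2j * cmath.pi * (r - 1) * l / L)
               for l, s in enumerate(spectrum))

def mono_image(hsi, r=1):
    """Reduce hsi[y][x] (one spectrum per pixel) to a grayscale image
    I_mono via the magnitude of harmonic r."""
    return [[abs(spectral_harmonic(spec, r)) for spec in row] for row in hsi]

# Toy 1x2 hyperspectral image with a flat and an alternating spectrum.
hsi = [[[1.0, 1.0, 1.0, 1.0], [1.0, -1.0, 1.0, -1.0]]]
print(mono_image(hsi, r=1))  # [[4.0, 0.0]]
```

In practice, the same per-spectrum DFT output is reused to select the essential spectra, which is what makes this reduction essentially free in the UnSCR pipeline.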

 figure: Fig. 9.

Fig. 9. Visualization of $\textrm {I}_{\textrm {mono}}(x,y)$ reduced by different spectral dimension reduction methods. Original low SNR hyperspectral image $\textrm {I}_{\textrm {low}}(x,y,\lambda )$ was simulated from the $D_1$ data set.



Table 2. Precision of the segmentation obtained on the images depicted in Fig. 9 with the tested dimension reduction methods and associated computation times.

7. Supplementary material

The code for computing the essential spectra is available in our previous article [25]. We provide the simulator used in this article as an executable file to allow reproducibility of the simulations. In addition, a video tutorial on how to use this simulator is accessible at https://uabox.univ-angers.fr/index.php/s/06nv06docramS0V.

Funding

Agence Nationale de la Recherche (ANR-21-CE29-0007).

Acknowledgements

All authors gratefully acknowledge financial support from the "ANR-21-CE29-0007" project (Agence Nationale de la Recherche).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Toporski, T. Dieing, and O. Hollricher, Confocal Raman Microscopy, vol. 66 (Springer, 2018).

2. N. Gierlinger and M. Schwanninger, “The potential of raman microscopy and raman imaging in plant research,” Spectroscopy 21(2), 69–89 (2007). [CrossRef]  

3. D. W. Shipp, F. Sinjab, and I. Notingher, “Raman spectroscopy: techniques and applications in the life sciences,” Adv. Opt. Photonics 9(2), 315–428 (2017). [CrossRef]  

4. S. G. da Costa, A. Richter, U. Schmidt, et al., “Confocal raman microscopy in life sciences,” Morphologie 103(341), 11–16 (2019). [CrossRef]  

5. A. Z. Samuel, S. Yabumoto, K. Kawamura, et al., “Rapid microstructure characterization of polymer thin films with 2d-array multifocus raman microspectroscopy,” Analyst 140(6), 1847–1851 (2015). [CrossRef]  

6. L. Kong, M. Navas-Moreno, and J. W. Chan, “Fast confocal raman imaging using a 2-d multifocal array for parallel hyperspectral detection,” Anal. Chem. 88(2), 1281–1285 (2016). [CrossRef]  

7. Y. Kumamoto, K. Mochizuki, K. Hashimoto, et al., “High-throughput cell imaging and classification by narrowband and low-spectral-resolution raman microscopy,” J. Phys. Chem. B 123(12), 2654–2661 (2019). [CrossRef]  

8. C. Hu, X. Wang, L. Liu, et al., “Fast confocal raman imaging via context-aware compressive sensing,” Analyst 146(7), 2348–2357 (2021). [CrossRef]  

9. H. Yuan, P. Zhang, and F. Gao, “Compressive hyperspectral raman imaging via randomly interleaved scattering projection,” Optica 8(11), 1462–1470 (2021). [CrossRef]  

10. H. Yuan, P. Zhang, F. Gao, et al., “Combination of scattering-projection interleaving and random down-sampling for compressive confocal raman imaging,” Opt. Express 30(25), 44657–44664 (2022). [CrossRef]  

11. Y. Yu, Y. Dai, X. Wang, et al., “A critical evaluation of compressed line-scan raman imaging,” Analyst 148(12), 2809–2817 (2023). [CrossRef]  

12. S. Reitzig, F. Hempel, J. Ratzenberger, et al., “High-speed hyperspectral imaging of ferroelectric domain walls using broadband coherent anti-stokes raman scattering,” Appl. Phys. Lett. 120(16), 162901 (2022). [CrossRef]  

13. M. Ahmad, R. Vitale, C. S. Silva, et al., “Exploring local spatial features in hyperspectral images,” J. Chemom. 34(10), e3295 (2020). [CrossRef]  

14. J. Zhang, M. L. Perrin, L. Barba, et al., “High-speed identification of suspended carbon nanotubes using raman spectroscopy and deep learning,” Microsyst. Nanoeng. 8(1), 19 (2022). [CrossRef]  

15. X. Feng, M. C. Fox, J. S. Reichenberg, et al., “Superpixel raman spectroscopy for rapid skin cancer margin assessment,” J. Biophotonics 13(2), e201960109 (2020). [CrossRef]  

16. P. Réfrégier, C. Scotté, H. B. de Aguiar, et al., “Precision of proportion estimation with binary compressed raman spectrum,” J. Opt. Soc. Am. A 35(1), 125–134 (2018). [CrossRef]  

17. D. Cebeci, B. R. Mankani, and D. Ben-Amotz, “Recent trends in compressive raman spectroscopy using dmd-based binary detection,” J. Imaging 5(1), 1 (2018). [CrossRef]  

18. M.-A. Burcklen, F. Galland, and L. Le Goff, “Optimizing sampling for surface localization in 3d-scanning microscopy,” J. Opt. Soc. Am. A 39(8), 1479–1488 (2022). [CrossRef]  

19. F. Soldevila, J. Dong, E. Tajahuerce, et al., “Fast compressive raman bio-imaging via matrix completion,” Optica 6(3), 341–346 (2019). [CrossRef]  

20. C. Scotté, S. Sivankutty, P. Stockton, et al., “Compressive raman imaging with spatial frequency modulated illumination,” Opt. Lett. 44(8), 1936–1939 (2019). [CrossRef]  

21. T. Justel, F. Galland, and A. Roueff, “Compressed raman method combining classification and estimation of spectra with optimized binary filters,” Opt. Lett. 47(5), 1101–1104 (2022). [CrossRef]  

22. C. Ruckebusch, R. Vitale, M. Ghaffari, et al., “Perspective on essential information in multivariate curve resolution,” TrAC, Trends Anal. Chem. 132, 116044 (2020). [CrossRef]  

23. L. Coic, P.-Y. Sacré, A. Dispas, et al., “Pixel-based raman hyperspectral identification of complex pharmaceutical formulations,” Anal. Chim. Acta 1155, 338361 (2021). [CrossRef]  

24. L. Coic, P.-Y. Sacre, A. Dispas, et al., “Selection of essential spectra to improve the multivariate curve resolution of minor compounds in complex pharmaceutical formulations,” Anal. Chim. Acta 1198, 339532 (2022). [CrossRef]  

25. L. Coic, R. Vitale, M. Moreau, et al., “Assessment of essential information in the fourier domain to accelerate raman hyperspectral microimaging,” Anal. Chem. 95(42), 15497–15504 (2023). [CrossRef]  

26. M. Wang, X. Liu, Y. Gao, et al., “Superpixel segmentation: A benchmark,” Signal Process. Image Commun. 56, 28–39 (2017). [CrossRef]  

27. F. Pedregosa, G. Varoquaux, A. Gramfort, et al., “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research 12, 2825–2830 (2011).

28. G. S. Mandair and M. D. Morris, “Contributions of raman spectroscopy to the understanding of bone strength,” Bonekey Rep. 4, 620 (2015). [CrossRef]  

29. G. Fini, “Applications of raman spectroscopy to pharmacy,” J. Raman Spectrosc. 35(5), 335–337 (2004). [CrossRef]  

30. V. Mazet, “Développement de méthodes de traitement de signaux spectroscopiques: estimation de la ligne de base et du spectre de raies,” Ph.D. thesis, Université Henri Poincaré-Nancy 1 (2005).

31. A. Savitzky and M. J. E. Golay, “Smoothing and differentiation of data by simplified least squares procedures,” Anal. Chem. 36(8), 1627–1639 (1964). [CrossRef]  

32. E. Million, “The hadamard product,” Course Notes 3, 1–7 (2007).

33. R. Shrestha, R. Pillay, S. George, et al., “Quality evaluation in spectral imaging–quality factors and metrics,” Journal of the International Colour Association 10, 22–35 (2014).

34. D. Stutz, A. Hermans, and B. Leibe, “Superpixels: An evaluation of the state-of-the-art,” Comput. Vis. Image Underst. 166, 1–27 (2018). [CrossRef]  

35. M.-Y. Liu, O. Tuzel, S. Ramalingam, et al., “Entropy rate superpixel segmentation,” in CVPR 2011, (IEEE, 2011), pp. 2097–2104.

36. P. Neubert and P. Protzel, “Superpixel benchmark and comparison,” in Proc. Forum Bildverarbeitung, vol. 6 (2012), pp. 1–12.

37. R. M. Cavalli, “Spatial validation of spectral unmixing results: A systematic review,” Remote Sens. 15(11), 2822 (2023). [CrossRef]  

38. K. Varmuza and P. Filzmoser, Introduction to multivariate statistical analysis in chemometrics (CRC press, 2016).

39. S. Zhang, S. Li, W. Fu, et al., “Multiscale superpixel-based sparse representation for hyperspectral image classification,” Remote Sens. 9(2), 139 (2017). [CrossRef]  

40. K. de Haan, Y. Rivenson, Y. Wu, et al., “Deep-learning-based image reconstruction and enhancement in optical microscopy,” Proc. IEEE 108(1), 30–50 (2019). [CrossRef]  

41. H. Lin and J.-X. Cheng, “Computational coherent raman scattering imaging: breaking physical barriers by fusion of advanced instrumentation and data science,” eLight 3(1), 6–19 (2023). [CrossRef]  

42. B. Manifold, E. Thomas, A. T. Francis, et al., “Denoising of stimulated raman scattering microscopy images via deep learning,” Biomed. Opt. Express 10(8), 3860–3874 (2019). [CrossRef]  

43. P. Abdolghader, A. Ridsdale, T. Grammatikopoulos, et al., “Unsupervised hyperspectral stimulated raman microscopy image enhancement: denoising and segmentation via one-shot deep learning,” Opt. Express 29(21), 34205–34219 (2021). [CrossRef]  

44. A. Kirillov, E. Mintun, N. Ravi, et al., “Segment anything,” arXiv, arXiv:2304.02643 (2023). [CrossRef]  

45. L. Van der Maaten and G. Hinton, “Visualizing data using t-sne,” Journal of machine learning research 9, 1 (2008).

46. L. McInnes, J. Healy, and J. Melville, “Umap: Uniform manifold approximation and projection for dimension reduction,” arXiv, arXiv:1802.03426 (2018). [CrossRef]  

47. M. Balasubramanian and E. L. Schwartz, “The isomap algorithm and topological stability,” Science 295(5552), 7 (2002). [CrossRef]  




Figures (9)

Fig. 1.
Fig. 1. Pipeline representing the different steps of the proposed smart scanning protocol.
Fig. 2.
Fig. 2. Detailed view of the unmixed superpixel centroid rescan (UnSCR).
Fig. 3.
Fig. 3. Comparison between real and simulated data. The first two columns correspond to real data and the last two correspond to simulated data. Each row refers to a sample of each dataset. $D_1$ and $D_3$ are 20 dB HSI and $D_2$ is a 13 dB HSI. The crosses are located in the different objects of the samples. The Raman spectrum of the pixel pointed by the cross is provided with the corresponding color.
Fig. 4.
Fig. 4. Evolution of the Root Mean Square Error of the reconstructed spectra as a function of SNR on a logarithmic scale. The mean error stands for the dashed lines and standard deviation (transparent area) of the whole synthetic datasets $D_1$, $D_2$, and $D_3$ estimated on 10 repetitions of simulation of noise for each SNR. All curves should be compared to the Reference (Ref) which is a constant line at 0 error.
Fig. 5.
Fig. 5. Evolution of the boundary recall (Rec) and the precision (Prec) as a function of SNR for the synthetic data sets $D_1$, $D_2$, and $D_3$. The mean error stands for the dashed lines and standard deviation (transparent area) of the whole synthetic datasets $D_1$, $D_2$, and $D_3$ estimated on 10 repetitions of simulation of noise for each SNR. Curves should be compared to the Reference (Ref) which is a constant line at 100%.
Fig. 6.
Fig. 6. Average evolution of the processing time as a function of SNR on a synthetic version of all data sets $D_1$, $D_2$, and $D_3$. The mean error stands for the dashed lines and standard deviation (transparent area) of the whole synthetic datasets $D_1$, $D_2$, and $D_3$ estimated on 10 repetitions of simulation of noise for each SNR.


Equations (20)


(1) $\mathbf{I}_{\textrm{low}}(x,y) = \sum_{k=1}^{K} c_k(x,y)\,\mathbf{s}_k + \mathbf{e}(x,y)$
(2) $\tilde{\mathbf{I}}_{\textrm{low}}(x,y) = \sum_{k=1}^{K} c_k(x,y)\,\tilde{\mathbf{s}}_k + \tilde{\mathbf{e}}(x,y)$
(3) $\tilde{s}_k^{\,r} = \sum_{l=1}^{L} s_k^{\,l}\,\exp\!\left(-j\frac{2\pi}{L}(r-1)(l-1)\right)$
(4) $\tilde{e}^{\,r}(x,y) = \sum_{l=1}^{L} e^{\,l}(x,y)\,\exp\!\left(-j\frac{2\pi}{L}(r-1)(l-1)\right)$
(5) $G^{r}(x,y) = \sum_{k=1}^{K} c_k(x,y)\,\Re\big(\tilde{s}_k^{\,r}\big) + \Re\big(\tilde{e}^{\,r}(x,y)\big)$
(6) $Q^{r}(x,y) = \sum_{k=1}^{K} c_k(x,y)\,\Im\big(\tilde{s}_k^{\,r}\big) + \Im\big(\tilde{e}^{\,r}(x,y)\big)$
(7) $I_{\textrm{low}}^{\,l}(x,y) \approx \frac{1}{L}\sum_{r=1}^{L}\big(G^{r}(x,y) + j\,Q^{r}(x,y)\big)\exp\!\left(j\frac{2\pi}{L}(l-1)(r-1)\right), \quad l = 1,\ldots,L$
(8) $\mathbf{I}_{\textrm{low}}(x_p,y_p) = \mathbf{S}_{\textrm{low}}^{T}\,\mathbf{c}_{\textrm{low}}(x_p,y_p)$
(9) $d = \big\lVert \mathbf{I}_{\textrm{low}}(x_p,y_p) - \mathbf{S}_{\textrm{low}}^{T}\,\mathbf{c}_{\textrm{low}}(x_p,y_p) \big\rVert_2^2$
(10) $\nabla_{\mathbf{c}}(d) = 2\,\mathbf{S}_{\textrm{low}}\mathbf{S}_{\textrm{low}}^{T}\,\mathbf{c}_{\textrm{low}}(x_p,y_p) - 2\,\mathbf{S}_{\textrm{low}}\,\mathbf{I}_{\textrm{low}}(x_p,y_p)$
(11) $\mathbf{c}_{\textrm{low}}(x_p,y_p) = \big(\mathbf{S}_{\textrm{low}}\mathbf{S}_{\textrm{low}}^{T}\big)^{-1}\mathbf{S}_{\textrm{low}}\,\mathbf{I}_{\textrm{low}}(x_p,y_p)$
(12) $\mathbf{I}_{\textrm{output}}(x_p,y_p) = \mathbf{S}_{\textrm{high}}^{T}\,\mathbf{c}_{\textrm{low}}(x_p,y_p)$
(13) $\textrm{SNR} = \frac{1}{XY}\sum_{x=1}^{X}\sum_{y=1}^{Y} 10\,\log_{10}\!\left(\frac{\sum_{\lambda=1}^{L} I_S^{\lambda}(x,y)^2}{\sum_{\lambda=1}^{L} e^{\lambda}(x,y)^2}\right)$
(14) $\mathbf{I}_N = \mathbf{I} + \mathbf{N}$
(15) $\sigma_N = \frac{1}{\sqrt{L}}\,10^{a}$
(16) $a = \dfrac{\sum_{x=1}^{X}\sum_{y=1}^{Y} 10\,\log_{10}\!\left(\sum_{\lambda=1}^{L} I_S^{\lambda}(x,y)^2\right) - \textrm{SNR}\times XY}{20\,XY}$
(17) $\mathbf{I}_{N+G} = \mathbf{I}_N \odot \mathbf{G}$
(18) $\textrm{Rec}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big) = \dfrac{\textrm{TP}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big)}{\textrm{TP}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big) + \textrm{FN}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big)}$
(19) $\textrm{Prec}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big) = \dfrac{\textrm{TP}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big)}{\textrm{TP}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big) + \textrm{FP}\big(I_{\textrm{seg}}, I_{\textrm{seg}}^{\textrm{GT}}\big)}$
(20) $\textrm{RMSE} = \frac{1}{X \times Y}\sum_{x=1}^{X}\sum_{y=1}^{Y}\sqrt{\frac{1}{L}\sum_{\lambda=1}^{L}\big[I_{\textrm{output}}(x,y,\lambda) - I_{\textrm{ref}}(x,y,\lambda)\big]^2}$
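The closed-form least-squares unmixing and high-SNR reconstruction steps listed above can be sketched in a few lines of Python; the endmember spectra and the noiseless mixture below are toy values chosen for illustration, not data from the paper.

```python
import numpy as np

# Toy endmember spectra: K=2 components over L=3 spectral channels
# (illustrative assumption; rows of S_low are the low-SNR spectra).
S_low = np.array([[1.0, 0.2, 0.0],
                  [0.0, 0.5, 1.0]])
S_high = S_low  # stand-in for the rescanned high-SNR spectra

# A measured pixel spectrum: exact mixture 0.3*s1 + 0.7*s2 (no noise here).
c_true = np.array([0.3, 0.7])
I_pixel = S_low.T @ c_true

# Closed-form least-squares abundances: c = (S S^T)^{-1} S I
c_low = np.linalg.inv(S_low @ S_low.T) @ S_low @ I_pixel

# Reconstruct the output spectrum with the high-SNR spectra
I_out = S_high.T @ c_low
print(np.round(c_low, 3))  # [0.3 0.7]
```

With a noiseless mixture the normal equations recover the abundances exactly; with noisy pixels the same formula gives the least-squares estimate used to rebuild each superpixel's spectrum.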