
Residual image recovery method based on the dual-camera design of a compressive hyperspectral imaging system

Open Access

Abstract

Compressive hyperspectral imaging technology can quickly detect encoded two-dimensional measurements and reconstruct three-dimensional hyperspectral images offline, which is of great significance for object detection and analysis. To provide more information for reconstruction and improve the reconstruction quality, some of the latest compressive hyperspectral imaging systems adopt a dual-camera design. To utilize the information from the additional camera more efficiently, this paper proposes a residual image recovery method. The proposed method takes advantage of the structural similarity between the image captured by the additional camera and the hyperspectral image, combining the measurements from the additional camera and the coded aperture snapshot spectral imaging (CASSI) sensor to construct an estimated hyperspectral image. Then, the component of the estimated hyperspectral image is subtracted from the measurement of the CASSI sensor to obtain the residual data, which is used to reconstruct the residual hyperspectral image. Finally, the reconstructed hyperspectral image is the sum of the estimated and residual images. Compared with some state-of-the-art algorithms based on such systems, the proposed method significantly improves the reconstruction quality of the hyperspectral image.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Hyperspectral imaging technology is able to identify, analyze, and classify objects according to the spectral characteristics of different materials, and researchers have applied it in many fields, such as face recognition [1], food safety [2], agriculture surveillance [3], and military reconnaissance [4]. However, traditional hyperspectral imaging methods, such as whiskbroom or pushbroom scanning, require point-by-point or line-by-line scanning of objects [5], which is time-consuming. Compressive hyperspectral imaging is a computational hyperspectral imaging technology based on compressive sensing theory. It reconstructs the hyperspectral image of the object from the measurements of one or very few shots. This can significantly improve the measurement speed and overcome the tradeoff between spatial resolution, spectral resolution, light collection efficiency, and measurement time in traditional methods [6].

Compressive sensing theory states that if a signal x is compressible by transform coding with a known transform and the reconstruction procedure is nonlinear, the number of measurements n can be dramatically smaller than the signal size m [7]. Using compressive sensing theory, sparse signals can be recovered from very few measurements. It was first applied to the field of hyperspectral imaging in 2007 by D. Brady et al. [6], who proposed a dual-disperser coded aperture snapshot spectral system; they also proposed a simplified single-disperser coded aperture snapshot spectral imaging (CASSI) system in 2008 [8]. Since then, many hyperspectral imaging systems have been proposed [9–16]. Some of these systems add an additional camera (grayscale or RGB) to provide side information for imaging [17–20]. For example, the dual-camera compressive hyperspectral imaging system (DCCHI) [18,19], which utilizes a dual-camera design, has the following advantages [18,21]. First, the uncoded measurement of the additional camera greatly eases the reconstruction problem and improves the performance of CASSI; such a system is even competitive with multi-frame CASSI. Second, the dual-camera design enables higher light efficiency than CASSI with random binary coded apertures, which block half of the incident light. Third, the DCCHI system maintains the snapshot advantage, rather than requiring the multiple shots of multi-frame CASSI. Moreover, the side information provided by the additional camera helps develop more efficient reconstruction algorithms. However, the original paper only fed the combined CASSI and additional camera measurements into a general compressive sensing algorithm for reconstruction; the structural information of the object contained in the additional camera measurement is not fully utilized. Researchers have since proposed algorithms that make better use of the information provided by the additional camera in the DCCHI system [21–26]. There are also algorithms that use GPU parallel computing to speed up the recovery of DCCHI for near real-time imaging [27,28].

To effectively use the information provided by the additional camera and improve the reconstruction quality, this paper proposes a residual image recovery method based on the DCCHI system (DCCHI-RIR). First, it constructs a small dictionary D from the information provided by the additional camera and uses the least-squares method to combine the information of the two cameras to jointly estimate the coefficients $\boldsymbol{\mathrm{\theta}}$ of the hyperspectral image under the dictionary D, obtaining the estimated hyperspectral image festimated = D$\boldsymbol{\mathrm{\theta}}$. This process exploits the structural similarity between the additional camera measurement and the hyperspectral image. The resulting estimated hyperspectral image retains most of the structural and spatial information of the object with fairly high accuracy. Next, the component of the estimated image is subtracted from the CASSI measurement to obtain the residual data. Then, compressive sensing theory is used to reconstruct the residual image. Finally, the residual image is added to the estimated image to further reduce the error between the estimated and real images, yielding a more accurate final result. Furthermore, two DCCHI system structures are taken as examples to compare the simulation results of this method and existing methods. The results show that DCCHI-RIR obtains better reconstruction quality than existing algorithms, and the proposed method is also capable of improving other types of compressive hyperspectral systems suited to a dual-camera design, such as grating-based [14] or single-pixel spectral imaging systems [29,30].

2. System configuration and method

2.1 System model

The DCCHI system adds a grayscale or RGB camera for uncoded measurement, which works in conjunction with the coded CASSI measurement and can thus greatly ease the reconstruction problem [18]. Figure 1 shows the DCCHI system used in this paper. The light from the scene is divided equally into two directions by a beam splitter. The light in one direction passes through the CASSI arm, arranged in the sequence of objective lens, dispersive prism, coded aperture, relay lens, dispersive prism, and grayscale camera. The light in the other direction is captured directly by the additional camera, which can be a grayscale or an RGB camera, yielding two types of systems.

Fig. 1. Illustration showing the DCCHI system.

By taking the DCCHI system with the grayscale camera as an example, this subsection introduces the sensing process. Let the three-dimensional (3D) hyperspectral data cube be f0(x,y,λ), where x and y are the spatial coordinates and λ is the spectral coordinate. Assuming that the beam splitter divides the light equally, the spectral intensity distribution after the first dispersion, just before the coded aperture, is f1(x,y,λ) = 0.5f0(x − ϕ(λ),y,λ), where ϕ(λ) denotes the wavelength-dependent dispersion introduced by the dispersive prism. Assuming that the transmittance function of the coded aperture is T(x,y), the spectral density after the coded aperture is f2(x,y,λ) = f1(x,y,λ)T(x,y). After the light passes through the second dispersive prism, whose dispersion is opposite to the first, the final spectral intensity distribution on the sensor becomes f3(x,y,λ) = f2(x + ϕ(λ),y,λ). Denote Ω(λ) as the spectral response function of the CASSI detector. Combining the above formulas, the final measurement of the CASSI sensor is as follows:

$$g(x,y) = \int \Omega(\lambda) f_3(x,y,\lambda)\,\textrm{d}\lambda = 0.5\int \Omega(\lambda) f_0(x,y,\lambda)\, T(x + \phi(\lambda),y)\,\textrm{d}\lambda.$$

Then, the detector measurement for pixel (m, n) becomes:

$${g_{mn}} = \int_{m\Delta }^{(m + 1)\Delta } {\int_{n\Delta }^{(n + 1)\Delta } {g(x,y)} } \textrm{d}x\textrm{d}y,$$
where Δ is the pixel size of the CASSI sensor. The dispersion is assumed to be linear: ϕ(λ) = α(λ − λ0). This assumption is approximately valid over the limited wavelength range studied; it can also be applied in the nonlinear region by introducing corrections [6]. Generally, it can be assumed that adjacent wavelength bands are separated by a distance of one pixel on the coded aperture after dispersion. Treating x, y, and λ as discrete, with indices m, n, and k respectively, the discrete representation of the measurement at pixel (m, n) is:
$${g_{mn}} = 0.5\sum\limits_{k = 0}^{\textrm{B} - 1} \Omega_k\, f_{mnk}\, T_{(m + k)n},$$
where B is the total number of hyperspectral bands, and Ωk is the spectral response of the detector for the k th spectral band (k = 1, ……, B). Suppose that the image of one wavelength band has H rows and W columns. Arrange all gmn and fmnk into the detected 2D image GCASSI (GCASSI ∈ ℝ H × W) and the full 3D hyperspectral image F (F ∈ ℝ H × W × B), respectively. Denote gCASSI (gCASSI ∈ ℝ HW × 1) and f (f ∈ ℝ HWB × 1) as the vectorizations of GCASSI and F, respectively. Therefore, Eq. (3) can be expressed in the linear matrix form:
$${{\mathbf g}_{{\bf CASSI}}} = {{\mathbf H}_{{\bf CASSI}}}{\mathbf f},$$
where HCASSI (HCASSI ∈ ℝ HW × HWB) is the observation matrix of the CASSI system.
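For concreteness, the discrete sensing of Eq. (3) can be sketched without ever forming the HW × HWB matrix HCASSI explicitly. The following minimal NumPy sketch is our own illustration, not the authors' implementation; the function name and the periodic boundary handling via np.roll are assumptions (a real system crops the shifted region instead).

```python
import numpy as np

def cassi_measure(cube, mask, omega):
    """Sketch of Eq. (3): g_mn = 0.5 * sum_k Omega_k * f_mnk * T_(m+k)n.
    cube: H x W x B hyperspectral data; mask: H x W coded aperture T;
    omega: length-B detector spectral response."""
    H, W, B = cube.shape
    g = np.zeros((H, W))
    for k in range(B):
        # shifted[m, n] = mask[(m + k) % H, n] models the prism dispersion
        shifted = np.roll(mask, -k, axis=0)
        g += 0.5 * omega[k] * cube[:, :, k] * shifted
    return g
```

Vectorizing the input cube and the output image recovers the matrix form of Eq. (4).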

Similarly, the measurement of the pixel (m, n) of the grayscale camera is given by

$${g_{mn}} = 0.5\sum\limits_{k = 0}^{\textrm{B} - 1} {{\Omega _k}{f_{mnk}}} ,$$
and the corresponding linear matrix form is given by
$${{\mathbf g}_{{\bf gray}}} = {{\mathbf H}_{{\bf gray}}}{\mathbf f}.$$

The earliest DCCHI work [18] combines Eq. (4) and Eq. (6) for reconstruction. Let g = [ gCASSI; ggray] and H = [ HCASSI; Hgray]. The total detected signal g can be expressed as

$${\mathbf g} = {\mathbf H\mathbf f}.$$
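Under the same assumptions, the grayscale measurement of Eq. (5) and the stacked signal of Eq. (7) follow directly; this sketch reuses the hypothetical cassi_measure function from the sketch above.

```python
import numpy as np

def gray_measure(cube, omega):
    """Sketch of Eq. (5): the unshifted, uncoded sum over all bands."""
    return 0.5 * np.tensordot(cube, omega, axes=([2], [0]))

def dual_measure(cube, mask, omega):
    """Stacked dual-camera measurement g = [g_CASSI; g_gray] of Eq. (7)."""
    return np.concatenate([cassi_measure(cube, mask, omega).ravel(),
                           gray_measure(cube, omega).ravel()])
```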

Then, total variation (TV) regularization [31] and the two-step iterative shrinkage/thresholding (TwIST) algorithm [32] are adopted to reconstruct f:

$$\hat{\mathbf{f}} = \mathop{\arg\min}\limits_{\mathbf{f}}\left[ \|\mathbf{g} - \mathbf{H}\mathbf{f}\|^2 + \tau\,\textrm{TV}(\mathbf{f}) \right],$$
where τ is the regularization coefficient. The TV term represents the smoothness of the hyperspectral image:
$$\textrm{TV}(\mathbf{f}) = \sum\limits_k \sum\limits_{m,n} \sqrt{(f_{(m+1)nk} - f_{mnk})^2 + (f_{m(n+1)k} - f_{mnk})^2}.$$
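As a reference point, the TV term of Eq. (9) reduces to a short finite-difference computation; the following sketch keeps interior differences only and drops boundary terms, an implementation choice the paper does not specify.

```python
import numpy as np

def total_variation(cube):
    """Isotropic TV of Eq. (9) for an H x W x B cube, summed over bands."""
    dx = np.diff(cube, axis=0)[:, :-1, :]   # f_(m+1)nk - f_mnk
    dy = np.diff(cube, axis=1)[:-1, :, :]   # f_m(n+1)k - f_mnk
    return np.sqrt(dx ** 2 + dy ** 2).sum()
```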

However, this method does not take full advantage of the information provided by the additional camera. Due to the smoothing effect of total variation, some spatial details fail to be reconstructed. Therefore, this paper proposes the DCCHI-RIR method, which effectively utilizes the similarity between the additional camera measurement and the hyperspectral image. Thus, more spatial details can be preserved and the reconstruction quality can be improved.

2.2 Proposed method for DCCHI system with grayscale camera

By taking the DCCHI system with the grayscale camera as an example, this subsection introduces the proposed DCCHI-RIR method. First, the way to obtain the estimated hyperspectral image is introduced. Let fk be the vectorized data of the k th wavelength band (k = 1, 2, ……, B; fk ∈ ℝ HW × 1). Then, f can be expressed as a concatenation of the B vectors fk: f = [f1;f2;f3;……;fB]. Because the measurement of the grayscale camera is the sum of all spectral bands, the image captured by the grayscale camera contains the shape and structural information of the scene and demonstrates a certain structural similarity with the spectral image of each band. Therefore, ggray can be used to estimate fk. The simplest estimate is fk ≈ ggray/B, so that f ≈ [ggray; ggray; ggray;……; ggray]/B. However, this result is inaccurate. To estimate f more accurately, the measurement of the CASSI sensor gCASSI must also be used. First, B vectors atom1 to atomB are constructed. Let the k th vector be atomk (1 ≤ k ≤ B, atomk ∈ ℝ HWB × 1). These B vectors can be expressed as:

$$\left\{ \begin{array}{l} {\mathbf{atom}}_{\mathbf{1}} = \frac{1}{\textrm{B}}[{\mathbf{g}}_{\bf{gray}};{\mathbf{z}};{\mathbf{z}};{\mathbf{z}};\ldots;{\mathbf{z}}]\\ {\mathbf{atom}}_{\mathbf{2}} = \frac{1}{\textrm{B}}[{\mathbf{z}};{\mathbf{g}}_{\bf{gray}};{\mathbf{z}};{\mathbf{z}};\ldots;{\mathbf{z}}]\\ {\mathbf{atom}}_{\mathbf{3}} = \frac{1}{\textrm{B}}[{\mathbf{z}};{\mathbf{z}};{\mathbf{g}}_{\bf{gray}};{\mathbf{z}};\ldots;{\mathbf{z}}]\\ \qquad\vdots\\ {\mathbf{atom}}_{\mathbf{B}} = \frac{1}{\textrm{B}}[{\mathbf{z}};{\mathbf{z}};{\mathbf{z}};\ldots;{\mathbf{z}};{\mathbf{g}}_{\bf{gray}}] \end{array} \right.$$
where z ∈ ℝ HW × 1 is a column vector whose elements are all zero. In Eq. (10), each atomk consists of one ggray and (B − 1) copies of z. Then, a matrix D (D ∈ ℝ HWB × B) is constructed as the combination of all atomk:
$${\mathbf D} = [{{\mathbf {atom}_\textrm{1}},{\mathbf {atom}_\textrm{2}},\ldots \ldots .,{\mathbf {atom}_\textrm{B}}} ].$$
D can be seen as a small dictionary to express the estimated hyperspectral data. Suppose that:
$${{\mathbf f}_{{\mathbf {estimated}}}} = {\mathbf D \mathbf \theta },$$
where festimated is the estimated hyperspectral image, and $\boldsymbol{\mathrm{\theta}}$ is the coefficient vector under the small dictionary D. The detected signal g can be approximated as the product of H and festimated: g ≈ Hfestimated = HD$\boldsymbol{\mathrm{\theta}}$. Therefore, $\boldsymbol{\mathrm{\theta}}$ can be obtained by solving the overdetermined equation g = HD$\boldsymbol{\mathrm{\theta}}$. This overdetermined equation does not have an exact solution; however, a least-squares solution can be obtained by minimizing the squared error between HD$\boldsymbol{\mathrm{\theta}}$ and g. The least-squares solution is $\boldsymbol{\mathrm{\theta}}$ = [(HD)T (HD)]−1 (HD)T g. Therefore, a more accurate version of festimated is given by
$${{\mathbf f}_{{\mathbf {estimated}}}} = {\mathbf D}{[{({\mathbf H\mathbf D})^\textrm{T}}({\mathbf H\mathbf D})]^{ - 1}}{({\mathbf H\mathbf D})^\textrm{T}}{\mathbf g}.$$
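A compact sketch of the dictionary of Eqs. (10)–(11) and the estimate of Eqs. (12)–(13) follows. Here H_obs stands for a dense per-patch observation matrix; the names are illustrative, and np.linalg.lstsq is used in place of the explicit normal-equation inverse for numerical stability, which is our choice rather than the paper's.

```python
import numpy as np

def build_dictionary(g_gray, B):
    """Dictionary D of Eqs. (10)-(11): atom k places g_gray / B in band k."""
    HW = g_gray.size
    D = np.zeros((HW * B, B))
    for k in range(B):
        D[k * HW:(k + 1) * HW, k] = g_gray / B
    return D

def estimate_f(H_obs, D, g):
    """Eqs. (12)-(13): theta = argmin ||(H D) theta - g||^2, f_est = D theta."""
    theta, *_ = np.linalg.lstsq(H_obs @ D, g, rcond=None)
    return D @ theta
```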

In this paper, a patch-based method is adopted. The measurements are divided into overlapping small patches; each patch is recovered by parallel computing on the GPU, and the overlapping parts are averaged. This reconstruction strategy can significantly improve the accuracy and speed when calculating the estimated hyperspectral images. To calculate Eq. (13), HD must be computed first for each patch. H is a large sparse matrix with many zero elements, and D also contains many zero elements. To speed up the program, the formula is transformed to remove the unneeded zero-by-zero multiplications:

$$\left\{ \begin{array}{l} {\mathbf{HD}} = [{\mathbf{H}}_{\bf{CASSI}};{\mathbf{H}}_{\bf{gray}}]{\mathbf{D}}\\ {\mathbf{H}}_{\bf{CASSI}}{\mathbf{D}} = [\Omega_1 {\mathbf{C}}_{\mathbf{1}} \cdot \frac{{\mathbf{g}}_{\bf{gray}}}{\textrm{B}},\ldots,\Omega_{\textrm{B}} {\mathbf{C}}_{\mathbf{B}} \cdot \frac{{\mathbf{g}}_{\bf{gray}}}{\textrm{B}}]\\ {\mathbf{H}}_{\bf{gray}}{\mathbf{D}} = \Omega_1 \frac{{\mathbf{g}}_{\bf{gray}}}{\textrm{B}} + \ldots + \Omega_{\textrm{B}} \frac{{\mathbf{g}}_{\bf{gray}}}{\textrm{B}} = (\Omega_1 + \ldots + \Omega_{\textrm{B}})\frac{{\mathbf{g}}_{\bf{gray}}}{\textrm{B}} \end{array} \right.,$$
where Ck represents the coded aperture pattern that the k th spectral band passes through. Eq. (14) follows from the imaging process of the system, since each atom of the small dictionary D can be regarded as a hyperspectral image with B bands that is nonzero in only one band. As a result, the number of multiplications is significantly reduced by Eq. (14).
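In code, this simplification amounts to filling each column of HD directly from the corresponding atom rather than multiplying sparse matrices; a sketch under the assumed data layout, with masks[k] denoting the aperture pattern Ck as an H × W array:

```python
import numpy as np

def fast_HD_gray(g_gray_img, masks, omega):
    """Columns of HD per Eq. (14). Column k of H_CASSI D is
    Omega_k * C_k * g_gray / B (elementwise); applying H_gray to atom k,
    which is nonzero only in band k, gives Omega_k * g_gray / B."""
    B = len(omega)
    cols = [omega[k] * masks[k] * g_gray_img / B for k in range(B)]
    HD_cassi = np.stack([c.ravel() for c in cols], axis=1)    # HW x B
    HD_gray = np.outer(g_gray_img.ravel() / B, omega)         # HW x B
    return np.vstack([HD_cassi, HD_gray])
```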

Although the estimated hyperspectral image demonstrates high accuracy, a residual between it and the ground truth still exists. If this residual can be recovered, the imaging accuracy can be further improved. Let fresidual be the residual error between festimated and f: fresidual = f − festimated. To reconstruct fresidual, multiply both sides by H: Hfresidual = g − Hfestimated. Let r be the residual data obtained by subtracting the component of festimated from the detected signal: r = g − Hfestimated. The constraint that fresidual needs to satisfy is then given by

$${\mathbf H}{{\mathbf f}_{{\mathbf {residual}}}} = {\mathbf r}.$$

Since Eq. (15) is severely underdetermined, and fresidual is sparse under the TV prior, the TwIST algorithm with TV regularization can be used to reconstruct fresidual:

$${\hat{{\mathbf f}}_{{\mathbf {residual}}}} = \mathop {\arg \min }\limits_{{{\mathbf f}_{{\mathbf {residual}}}}} [{{{||{{\mathbf r} - {\mathbf H}{{\mathbf f}_{{\mathbf {{residual}}}}}} ||}^2} + \tau \textrm{TV(}{\mathbf f_\mathbf {residual}}\textrm{)}}].$$

Then, the final result is the sum of festimated and fresidual:

$$\hat{{\mathbf f}} = {\hat{{\mathbf f}}_{{\mathbf {residual}}}} + {{\mathbf f}_{{\mathbf {estimated}}}}.$$

In summary, DCCHI-RIR consists of two steps. The first step combines the measurements from the additional camera and the CASSI sensor to quickly obtain an estimated hyperspectral image with high accuracy using Eq. (13). The second step reconstructs the residual hyperspectral image using compressive sensing theory according to Eq. (16); the residual hyperspectral image is then added to the estimated hyperspectral image to obtain a more accurate final result. It is worth noting that the accuracy of the estimated hyperspectral image decreases when the spectral image of each band differs greatly from the grayscale image. However, thanks to the residual reconstruction, DCCHI-RIR can still guarantee reconstruction quality similar to a general compressive sensing algorithm in such cases.
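Putting the two steps together, the whole pipeline is only a few lines around a generic TV solver; a sketch reusing the hypothetical estimate_f from above, with tv_solve standing in for the TwIST/TV minimization of Eq. (16) (an assumed callable, not a library function):

```python
import numpy as np

def dcchi_rir(H_obs, D, g, tv_solve):
    """Two-step DCCHI-RIR sketch for one patch."""
    f_est = estimate_f(H_obs, D, g)       # step 1: Eqs. (12)-(13)
    r = g - H_obs @ f_est                 # residual data, Eq. (15)
    f_res = tv_solve(H_obs, r)            # step 2: solve Eq. (16)
    return f_est + f_res                  # final result, Eq. (17)
```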

2.3 Proposed method for DCCHI system with RGB camera

This subsection introduces DCCHI-RIR for the DCCHI system with an RGB camera, which is also shown in Fig. 1. Let the detected RGB image be Grgb (Grgb ∈ ℝ H × W × 3). Then, the sensing process of the RGB camera can be expressed as:

$${{\mathbf g}_{{\bf rgb}}} = {{\mathbf H}_{{\bf rgb}}}{\mathbf f},$$
where grgb is the vectorization of Grgb (grgb ∈ ℝ 3HW × 1), and Hrgb is the observation matrix of the RGB branch.

Let g = [ gCASSI; grgb ] and H = [ HCASSI; Hrgb ]. Eq. (7) is then obtained by combining Eq. (4) and Eq. (18). For the DCCHI system with an RGB camera, the construction of D is different. Let the images of the red, green, and blue channels of the RGB camera be Gr, Gg, and Gb, respectively (Gr, Gg, and Gb ∈ ℝ H × W), and let gr, gg, and gb be their vectorizations (gr, gg, and gb ∈ ℝ HW × 1). Obviously, grgb = [gr; gg; gb]. The relationship between the measurement of the RGB camera and the hyperspectral image can be expressed as:

$$[{{{\mathbf g}_{\bf r}},{{\mathbf g}_{\bf g}},{{\mathbf g}_{\bf b}}} ]= [{{\mathbf f}_{\bf 1}},{{\mathbf f}_{\bf 2}},\ldots \ldots ,{{\mathbf f}_{\bf B}}]{\mathbf A},$$
where A ∈ ℝ B × 3 is the spectral response function of the RGB camera. Denote the (i, j) th element of A as ai,j (i = 1, 2, ……, B; j = 1, 2, 3); each spectral band can then be estimated as
$$\hat{\mathbf{f}}_{\mathbf{k}} \approx \textrm{a}_{k,1}{\mathbf{g}}_{\bf{rn}} + \textrm{a}_{k,2}{\mathbf{g}}_{\bf{gn}} + \textrm{a}_{k,3}{\mathbf{g}}_{\bf{bn}},$$
where grn, ggn, and gbn are the normalized forms of gr, gg, and gb, respectively. The B atoms can be expressed as
$$\left\{ {\begin{array}{{c}} {{\mathbf{ato}}{{\mathbf{m}}_{\mathbf{1}}} = \left[ {{{{\mathbf{\hat{f}}}}_{\mathbf{1}}};{\mathbf{z}};{\mathbf{z}};{\mathbf{z}};.......;{\mathbf{z}}} \right]} \\ {{\mathbf{ato}}{{\mathbf{m}}_{\mathbf{2}}} = \left[ {{\mathbf{z}};{{{\mathbf{\hat{f}}}}_{\mathbf{2}}};{\mathbf{z}};{\mathbf{z}};.......;{\mathbf{z}}} \right]} \\ {{\mathbf{ato}}{{\mathbf{m}}_3} = \left[ {{\mathbf{z}};{\mathbf{z}};{{{\mathbf{\hat{f}}}}_{\mathbf{3}}};{\mathbf{z}};.......;{\mathbf{z}}} \right]} \\ {......} \\ {{\mathbf{ato}}{{\mathbf{m}}_B} = \left[ {{\mathbf{z}};{\mathbf{z}};{\mathbf{z}};.......;{\mathbf{z}};{{{\mathbf{\hat{f}}}}_{\mathbf{B}}}} \right]} \end{array}} \right..$$

It is worth noting that Eq. (21) is not the only way to construct atomk, but in our simulations this construction demonstrates the best results. Then, the dictionary D is constructed as in Eq. (11). When calculating the estimated hyperspectral image, the calculation of HD can still be simplified as follows:

$$\left\{ \begin{array}{l} {\mathbf{HD}} = [{\mathbf{H}}_{\bf{CASSI}};{\mathbf{H}}_{\bf{rgb}}]{\mathbf{D}}\\ {\mathbf{H}}_{\bf{CASSI}}{\mathbf{D}} = [\Omega_1 {\mathbf{C}}_{\mathbf{1}} \cdot \hat{\mathbf{f}}_{\mathbf{1}},\ldots,\Omega_{\textrm{B}} {\mathbf{C}}_{\mathbf{B}} \cdot \hat{\mathbf{f}}_{\mathbf{B}}]\\ {\mathbf{H}}_{\bf{rgb}}{\mathbf{D}} = [\textrm{a}_{1,1}\hat{\mathbf{f}}_{\mathbf{1}},\ldots,\textrm{a}_{\textrm{B},1}\hat{\mathbf{f}}_{\mathbf{B}};\ \textrm{a}_{1,2}\hat{\mathbf{f}}_{\mathbf{1}},\ldots,\textrm{a}_{\textrm{B},2}\hat{\mathbf{f}}_{\mathbf{B}};\ \textrm{a}_{1,3}\hat{\mathbf{f}}_{\mathbf{1}},\ldots,\textrm{a}_{\textrm{B},3}\hat{\mathbf{f}}_{\mathbf{B}}] \end{array} \right..$$

The remaining steps of DCCHI-RIR are the same as in Eqs. (15)–(17).
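For the RGB variant, only the dictionary changes; a sketch of Eqs. (20)–(21) follows. The paper does not specify the normalization of the channel vectors, so unit ℓ2 normalization is assumed here.

```python
import numpy as np

def build_dictionary_rgb(g_r, g_g, g_b, A):
    """Atoms of Eq. (21): the band-k estimate of Eq. (20) placed in band k.
    A is the B x 3 spectral response of the RGB camera."""
    unit = lambda v: v / (np.linalg.norm(v) + 1e-12)   # assumed normalization
    g_rn, g_gn, g_bn = unit(g_r), unit(g_g), unit(g_b)
    HW, B = g_r.size, A.shape[0]
    D = np.zeros((HW * B, B))
    for k in range(B):
        f_hat_k = A[k, 0] * g_rn + A[k, 1] * g_gn + A[k, 2] * g_bn  # Eq. (20)
        D[k * HW:(k + 1) * HW, k] = f_hat_k
    return D
```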

The DCCHI-RIR algorithm flow for the two systems is shown in Algorithms 1 and 2, and the framework schematic of DCCHI-RIR is shown in Fig. 2. It is worth mentioning that the algorithm flow and framework schematic apply to each overlapping small patch. After recovering each small patch, the patches are added up and the values in the overlapping parts are averaged.


Fig. 2. Framework schematic of DCCHI-RIR.

3. Results and discussions

3.1 Results comparison for DCCHI system with grayscale camera

To evaluate the effectiveness of DCCHI-RIR, the Natural Scenes 2015 dataset [33] is used. This dataset contains 30 hyperspectral images with a wavelength range of 400–720 nm sampled at 10 nm intervals, and each pixel value represents spectral radiance in W·m−2·sr−1·nm−1. In this paper, wavelengths from 400 to 700 nm are used, and six hyperspectral images are randomly selected as test images. The resolution of the selected test images is set to 520×520×31. All test hyperspectral images are scaled to the interval [0, 1] for easier comparison.

Additionally, three quantitative metrics are employed to evaluate the reconstructed hyperspectral images: the spectral angle mapper (SAM) [34], mean peak signal-to-noise ratio (M-PSNR), and mean structural similarity (M-SSIM). SAM measures spectral accuracy and ranges from 0 to 1; the smaller the SAM, the more accurate the spectrum. M-PSNR computes the PSNR of each spectral band and takes the average; PSNR evaluates the overall recovery quality, and a large PSNR value represents a good recovery effect. M-SSIM computes the SSIM of each spectral band and takes the mean; SSIM measures the similarity of image structure, ranges from 0 to 1, and a high SSIM value means less distortion of the restored image.
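For reproducibility, SAM and M-PSNR reduce to a few lines; a sketch follows. The paper reports SAM in [0, 1], so its exact normalization may differ from the plain angle in radians computed here, and M-SSIM can be obtained per band with, e.g., skimage.metrics.structural_similarity.

```python
import numpy as np

def sam(ref, rec, eps=1e-12):
    """Mean spectral angle between two H x W x B cubes, in radians."""
    num = (ref * rec).sum(axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(rec, axis=-1) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()

def m_psnr(ref, rec, peak=1.0):
    """PSNR per spectral band, averaged over bands (M-PSNR), in dB."""
    mse = ((ref - rec) ** 2).mean(axis=(0, 1))
    return (10.0 * np.log10(peak ** 2 / mse)).mean()
```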

Figure 3 shows the 15th band (540–550 nm) of the estimated and residual images of six 100×100 regions. The estimated images are calculated according to Eq. (13). To examine the properties of the residual images to be reconstructed, the residual images in Fig. 3 are not reconstructed according to Eq. (16) but are obtained by subtracting the estimated hyperspectral images from the original hyperspectral images. The estimated hyperspectral images obtained in the first step of DCCHI-RIR preserve most of the spatial details of the original hyperspectral images and demonstrate high accuracy. As seen in Fig. 3, the estimated hyperspectral images in Fig. 3(b1-b6) are visually very similar to the original hyperspectral images in Fig. 3(a1-a6), and the quality metrics in Fig. 3(b1-b6) verify this similarity. Comparing Fig. 3(a1-a6) with Fig. 3(c1-c6), the residual hyperspectral images exhibit smaller value ranges than the original hyperspectral images. The normalized residual images, obtained by subtracting each residual image's minimum value and normalizing, are shown in Fig. 3(c1-c6) and still have shapes similar to the original images. In addition, the TVs of the normalized residual images are close to those of the original images; it is therefore reasonable to use TV regularization for reconstructing the residual hyperspectral images.

Fig. 3. Visual results of the estimated and normalized residual hyperspectral images in one spectral band (500-510 nm) from six regions of the test hyperspectral images. The TVs and the intervals of the original hyperspectral images are listed in (a1-a6). The quality metrics (SAM, M-PSNR and M-SSIM) of the estimated images are listed in (b1-b6). The TVs of the normalized residual images and the intervals of the original residual images are listed in (c1-c6).

The proposed method is compared with other methods for the DCCHI system with the grayscale camera: TwIST with TV regularization (DCCHI-TwIST) [18], a dictionary-based method (DCCHI-DBR) [25], and an adaptive nonlocal sparse representation method (DCCHI-ASNR) [26]. The dictionary for DCCHI-DBR is trained on the remaining 24 datasets. First, the 24 datasets are divided into 5×5×31 small patches. Then, patches with all-zero elements or very little total variance are removed. The remaining patches make up the training set and are used to train the DBR dictionary with the K-SVD algorithm. The principal component analysis (PCA) sub-dictionaries for the DCCHI-ASNR method are trained on the same training set: the training set is divided into 70 clusters using K-means clustering, and each cluster is used to train a PCA sub-dictionary. Fig. 4 shows our strategy for dividing the overlapping patches; a sketch of the corresponding overlap-averaging step is given after the figure. The detected image is divided into four types of overlapping patches, shown in grey, green, blue, and red. There are three types of regions in Fig. 4, marked with horizontal, vertical, and diagonal lines. The first type of region is covered by only one type of rectangle. The second is covered by two types of rectangles, and the final result there is obtained by dividing the sum by 2. The third is covered by all four types of rectangles, and the final result is obtained by dividing by 4. It is worth mentioning that the patch size must satisfy h × w ≥ B for DCCHI-RIR, where h and w are the height and width of the patches, because the inverse of the matrix [(HD)T(HD)] must be computed in Eq. (13), and the matrix must be full rank. Balancing reconstruction quality and calculation speed, the patch size for DCCHI-TwIST and DCCHI-RIR is 10×10×31.

Fig. 4. Strategy for dividing the overlapping patches.
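The overlap-averaging step of Fig. 4 referenced above can be written generically by tracking how many patches cover each pixel (1, 2, or 4 in this layout); a minimal sketch with illustrative names:

```python
import numpy as np

def merge_patches(patches, coords, out_shape):
    """Sum reconstructed patches into place, then divide by coverage."""
    out = np.zeros(out_shape)
    count = np.zeros(out_shape)
    for patch, (r, c) in zip(patches, coords):   # coords: top-left corners
        h, w = patch.shape[:2]
        out[r:r + h, c:c + w, ...] += patch
        count[r:r + h, c:c + w, ...] += 1
    return out / np.maximum(count, 1)
```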

Table 1 presents the reconstruction results of the different methods. For all hyperspectral images, the estimated hyperspectral images obtained in the first step of DCCHI-RIR already demonstrate higher accuracy than the other methods. After adding the residual hyperspectral images obtained in the second step of DCCHI-RIR, the accuracy of the final results is further improved. Because the six test images exhibit high nonlocal similarity and many small patches are similar, DCCHI-ASNR, which uses the nonlocal similarity of hyperspectral images, outperforms the other two methods, second only to the proposed method. On average, the proposed method achieves a SAM reduction of 0.0184, an M-PSNR improvement of 3.62 dB, and an M-SSIM improvement of 0.0582.

Table 1. SAM, PSNR, and SSIM comparison of different methods for DCCHI system with the grayscale camera

Fig. 5 shows the visual comparison of reconstructed images for the different methods. The reconstructed images of DCCHI-TwIST, DCCHI-DBR, and DCCHI-ASNR show different degrees of color distortion in Fig. 5(b1-d1). The magnified areas marked with red boxes in Fig. 5(b1) and (b2) show that the reconstructed images of DCCHI-TwIST exhibit a blocking effect due to inaccurate reconstruction. Figure 5(b2-d2) exhibit some colored stripes in the magnified areas due to the color distortion. In contrast, in Fig. 5(e1), (f1) and Fig. 5(e2), (f2), the estimated and reconstructed images of DCCHI-RIR exhibit no blocking effect or color stripes, and the colors are almost the same as in the original images.

Fig. 5. Visual comparison of reconstruction results using different methods. For ease of display, the entire 3D hyperspectral data are synthesized into an RGB image.

Fig. 6 shows the grayscale images of the 15th spectral band (540–550 nm) of the reconstructed hyperspectral images. The reconstructed images of the other methods in the red boxes in Fig. 6(b1-d1) show shape deformation, and the results in Fig. 6(c2) and (d2) exhibit streaks and speckle noise. The estimated and reconstructed images of DCCHI-RIR have almost the same shapes as the original images in Fig. 6(e1) and (f1), and are smoother, without streaks or speckle noise, in Fig. 6(e2) and (f2).

Fig. 6. Visual comparison of reconstruction results for the 15th spectral band (540-550 nm) using different methods.

Figure 7 shows the recovered spectra at two randomly selected spatial points, A and B, marked in Fig. 7(a). Figures 7(b) and (c) show the relative spectra of points A and B, respectively. The spectra reconstructed by the proposed method (black line) are closest to the ground truth (red line).

Fig. 7. Spectra of two points reconstructed by different methods.

Furthermore, the GPU is used to evaluate the effect of different patch sizes on the accuracy and calculation speed of the estimated hyperspectral images. The calculation is performed on an NVIDIA RTX 2080 Ti GPU and an Intel i7-9700K CPU, and the program is run in MATLAB 2021a using its parallel computing toolbox. The reported running time is the average of 10 runs. The size of the hyperspectral images is set to 280×280×31 based on the available GPU memory. The quality metrics of DCCHI-RIR for different patch sizes are listed in Table 2; the second step of DCCHI-RIR is implemented on the CPU. When the patch size changes from 7×7×31 to 20×20×31, the running time required for the estimated images decreases gradually, and all times are less than 40 ms, so the frame rate for reconstructing the estimated hyperspectral image exceeds 25 frames per second. For the estimated image alone, the patch size of 8×8×31 yields the best reconstruction quality, whereas the final result of DCCHI-RIR is best when the patch size is 10×10×31. Fig. 8 shows the visual comparison of the estimated images for different patch sizes; all patch sizes yield good visual quality. Therefore, the method of obtaining estimated hyperspectral images can also be used alone as a real-time, high-accuracy reconstruction method.

Table 2. The effect of different patch sizes based on DCCHI system with grayscale camera

Fig. 8. Estimated hyperspectral image visual comparison at different patch sizes based on the DCCHI system with grayscale camera. The total image size is 280×280×31. (a1-f1) are the synthesized RGB images. (a2-f2) are the grayscale images of the 15th band (540-550 nm) of the hyperspectral image.

3.2 Results comparison for DCCHI system with RGB camera

In this paper, the RGB sensor adopts the Foveon structure, in which each pixel detects red, green, and blue in sequence [35]. If the RGB camera has a Bayer pattern sensor, the raw image should be demosaiced to obtain the full image [19]. The DCCHI-TwIST and DCCHI-DBR methods mentioned above can also be applied to this system. In addition, DCCHI-RIR is compared with the patch-based fusion algorithm (PFusion) [28]. This method takes advantage of the low-rank property of the hyperspectral image and can reconstruct the hyperspectral image with high accuracy using far less computational time. It is worth noting that the patch size influences the low-rank property: when the patch size is too small, the performance of PFusion decreases significantly. Therefore, the patch size of the proposed method is first set to 10×10×31 for comparison with DCCHI-TwIST and DCCHI-DBR, and then set to 20×20×31 for comparison with PFusion. The results are shown in Table 3: the proposed method outperforms DCCHI-TwIST and DCCHI-DBR when the patch size is 10×10×31, while DCCHI-RIR demonstrates performance similar to PFusion when the patch size is 20×20×31. The advantage of DCCHI-RIR is that it does not require the low-rank property of hyperspectral images and can achieve better results when the patch size is small.

Table 3. SAM, PSNR, and SSIM comparison of different methods for DCCHI design with RGB camera

Table 4 and Fig. 9 show the effect of different patch sizes on the accuracy and speed of the estimated hyperspectral images. The running times in Table 4 are longer than those in Table 2 because the RGB camera captures more data than the grayscale camera, requiring more computational time; the additional data also yield higher accuracy in Table 4 than in Table 2. The computation time for the estimated images is about 50 ms. The quality metrics of DCCHI-RIR for different patch sizes are also listed in Table 4. For the DCCHI system with the RGB camera, the estimated image has the best quality when the patch size is 8×8×31, while the final result of DCCHI-RIR is best when the patch size is 20×20×31.

Table 4. The effect of different patch sizes based on DCCHI system with RGB camera

Fig. 9. Estimated hyperspectral image visual effects at different patch sizes based on the DCCHI system with RGB camera. The total image size is 280×280×31. (a1-f1) are the synthesized RGB images. (a2-f2) are the grayscale images of the 15th band (540-550 nm) of the hyperspectral image.

3.3 Robustness and generalization ability of the proposed method

Noise interference is an unavoidable problem in the real world. To investigate the robustness of DCCHI-RIR, white Gaussian noise with noise level σ = 0.01, 0.04, and 0.1 is added to the measurements of the CASSI sensor and the grayscale camera, where σ denotes the standard deviation of the white Gaussian noise. The quality metrics of the reconstruction results for all six test datasets are obtained, and their mean values are reported in Table 5. As the noise level increases, the reconstruction quality of the estimated hyperspectral images and of the final DCCHI-RIR results degrades slowly. For all noise levels, the proposed method demonstrates the best performance. Therefore, DCCHI-RIR is robust in most cases. A minimal sketch of this noise simulation is given below.
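The simulation itself is straightforward; a sketch of how measurements at the levels of Table 5 can be generated (the seed and generator are illustrative choices, not the paper's):

```python
import numpy as np

def add_noise(g, sigma, seed=0):
    """Add white Gaussian noise of standard deviation sigma to measurement g."""
    rng = np.random.default_rng(seed)
    return g + sigma * rng.standard_normal(g.shape)

# e.g., g_noisy = add_noise(g, sigma=0.04)
```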

To verify the generalization ability of the proposed method, six groups of hyperspectral images are chosen randomly from another hyperspectral dataset, ICVL [36], and simulations are carried out to obtain the reconstruction results of DCCHI-RIR on the DCCHI system with the grayscale camera. The visual comparison of the reconstruction results and their quality metrics is shown in Fig. 10: the SAM values of all six reconstructed images are smaller than 0.05, the M-PSNR values are all larger than 31 dB, and the M-SSIM values are all larger than 0.95. These results indicate that the proposed method achieves high-quality reconstruction on other hyperspectral datasets.

Table 5. Quality metrics of different reconstruction methods at different noise levels based on DCCHI system with grayscale camera.

Fig. 10. Visual comparison of reconstruction results by DCCHI-RIR of the six test hyperspectral images in the ICVL dataset. The quality metrics of SAM, M-PSNR and M-SSIM are also listed.

4. Conclusion

In summary, this paper proposes a residual image recovery method based on the dual-camera compressive hyperspectral imaging system. The proposed method utilizes the structural similarity between the image of the additional camera and the original hyperspectral image, thereby making more effective use of the side information and improving the reconstruction quality of the hyperspectral image. The simulation results show that DCCHI-RIR demonstrates higher accuracy than some state-of-the-art methods based on the DCCHI system. Additionally, the proposed method can also be applied to other compressive hyperspectral systems with a dual-camera design.

Funding

Beijing Municipal Science and Technology Commission (Z201100004020012).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available but may be obtained from the authors upon reasonable request.

References

1. V. Sharma and L. Van Gool, "Image-level classification in hyperspectral images using feature descriptors, with application to face recognition," https://arxiv.org/abs/1605.03428.

2. J. Qin, M. S. Kim, K. Chao, D. E. Chan, S. R. Delwiche, and B.-K. Cho, "Line-scan hyperspectral imaging techniques for food safety and quality applications," Appl. Sci. 7(2), 125 (2017).

3. R. N. Sahoo, S. Ray, and K. Manjunath, "Hyperspectral remote sensing of agriculture," Curr. Sci. 108(5), 848–859 (2015).

4. I. Makki, R. Younes, C. Francis, T. Bianchi, and M. Zucchetti, "A survey of landmine detection using hyperspectral imaging," ISPRS Journal of Photogrammetry and Remote Sensing 124, 40–53 (2017).

5. R. G. Sellar and G. Boreman, "Classification of imaging spectrometers for remote sensing applications," Opt. Eng. 44(1), 013602 (2005).

6. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15(21), 14013–14027 (2007).

7. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

8. A. Wagadarikar, R. John, R. Willett, and D. Brady, "Single disperser design for coded aperture snapshot spectral imaging," Appl. Opt. 47(10), B44–B51 (2008).

9. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, "Video rate spectral imaging using a coded aperture snapshot spectral imager," Opt. Express 17(8), 6368–6388 (2009).

10. T. Sun and K. Kelly, "Compressive sensing hyperspectral imager," in Computational Optical Sensing and Imaging (Optical Society of America, 2009), paper CTuA5.

11. Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging system," Opt. Lett. 36(14), 2692–2694 (2011).

12. Y. August, C. Vachman, Y. Rivenson, and A. Stern, "Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains," Appl. Opt. 52(10), D46–D54 (2013).

13. X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, "Dual-coded compressive hyperspectral imaging," Opt. Lett. 39(7), 2044–2047 (2014).

14. X. Lin, Y. Liu, J. Wu, and Q. Dai, "Spatial-spectral encoded compressive hyperspectral imaging," ACM Trans. Graph. 33(6), 1 (2014).

15. H. Arguello and G. R. Arce, "Colored coded aperture design by concentration of measure in compressive spectral imaging," IEEE Trans. on Image Process. 23(4), 1896–1908 (2014).

16. M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, J. Hauser, S. Gurevitch, R. Malinsky, and A. Kagan, "Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser," Appl. Opt. 55(3), 432–443 (2016).

17. X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin, "Compressive hyperspectral imaging with side information," IEEE J. Sel. Top. Sig. Process. 9(6), 964–976 (2015).

18. L. Wang, Z. Xiong, D. Gao, G. Shi, and F. Wu, "Dual-camera design for coded aperture snapshot spectral imaging," Appl. Opt. 54(4), 848–858 (2015).

19. L. Wang, Z. Xiong, G. Shi, W. Zeng, and F. Wu, "Compressive hyperspectral imaging with complementary RGB measurements," in 2016 Visual Communications and Image Processing (IEEE, 2016), pp. 1–4.

20. J. Hauser, M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, and M. Kagan, "Dual-camera snapshot spectral imaging with a pupil-domain optical diffuser and compressed sensing algorithms," Appl. Opt. 59(4), 1058–1070 (2020).

21. C. Tao, H. Zhu, P. Sun, R. Wu, and Z. Zheng, "Hyperspectral image recovery based on fusion of coded aperture snapshot spectral imaging and RGB images by guided filtering," Opt. Commun. 458, 124804 (2020).

22. Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, "Hyperspectral computational imaging via collaborative Tucker3 tensor decomposition," IEEE Trans. Circuits Syst. Video Technol. 31(1), 98–111 (2021).

23. N. Cheng, H. Huang, L. Zhang, and L. Wang, "Snapshot hyperspectral imaging based on weighted high-order singular value regularization," in 2020 25th International Conference on Pattern Recognition (ICPR, 2021), pp. 1267–1274.

24. Z. Liang, Y. Xu, L. Xiao, and Z. Wei, "Spatial-spectral total variation constrained collaborative tensor regularization for dual-camera compressive hyperspectral imaging," in 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (IEEE, 2021), pp. 3873–3876.

25. L. Wang, Z. Xiong, D. Gao, G. Shi, W. Zeng, and F. Wu, "High-speed hyperspectral video acquisition with a dual-camera architecture," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 4942–4950.

26. L. Wang, Z. Xiong, G. Shi, F. Wu, and W. Zeng, "Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging," IEEE Trans. Pattern Anal. Mach. Intell. 39(10), 2104–2111 (2017).

27. S. Zhang, L. Wang, Y. Fu, and H. Huang, "GPU assisted towards real-time reconstruction for dual-camera compressive hyperspectral imaging," in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 711–720.

28. S. Zhang, H. Huang, and Y. Fu, "Fast parallel implementation of dual-camera compressive hyperspectral imaging system," IEEE Trans. Circuits Syst. Video Technol. 29(11), 3404–3414 (2019).

29. A. Jerez, H. Garcia, and H. Arguello, "Single pixel spectral image fusion with side information from a grayscale sensor," in 2018 IEEE 1st Colombian Conference on Applications in Computational Intelligence (IEEE, 2018), pp. 1–6.

30. H. Garcia, C. V. Correa, and H. Arguello, "Optimized sensing matrix for single pixel multi-resolution compressive spectral imaging," IEEE Trans. on Image Process. 29, 4243–4253 (2020).

31. A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision 20(1/2), 73–87 (2004).

32. J. M. Bioucas-Dias and M. A. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. on Image Process. 16(12), 2992–3004 (2007).

33. S. M. Nascimento, K. Amano, and D. H. Foster, "Spatial distributions of local illumination color in natural scenes," Vision Res. 120, 39–44 (2016).

34. F. A. Kruse, A. Lefkoff, J. Boardman, K. Heidebrecht, A. Shapiro, P. Barloon, and A. J. Goetz, "The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data," Remote Sensing of Environment 44(2-3), 145–163 (1993).

35. P. Hubel, J. Liu, and R. Guttosch, "Spatial frequency response of color image sensors: Bayer color filters and Foveon X3," Proc. SPIE 5301, 402–407 (2004).

36. B. Arad and O. Ben-Shahar, "Sparse recovery of hyperspectral signal from natural RGB images," in IEEE European Conference on Computer Vision (Springer, 2016), pp. 19–34.
