Optica Publishing Group

Color reproduction from low-SNR multispectral images using spatio-spectral Wiener estimation

Open Access

Abstract

It is possible to capture images with high-fidelity color by using a spectrum-based color image reproduction method, which estimates the spectral information of objects from a multispectral image together with information on the illumination. When the multispectral images do not have a sufficient signal-to-noise ratio (SNR), the accuracy of the spectral and color estimation is reduced. To improve color estimation accuracy, this paper proposes a spatio-spectral Wiener estimation, which uses spatial correlation as well as spectral correlation. To show the effectiveness of the proposed method, computer simulations and an experiment are carried out using a six-channel video camera. The results confirm that the proposed method improves color estimation accuracy and suppresses color noise.

©2008 Optical Society of America

1. Introduction

High-fidelity color reproduction is quite important in digital imaging systems for printing, telemedicine, on-line shopping, teleconference systems, and so on. However, it is known that ordinary digital cameras cannot obtain the accurate tristimulus values of given objects because their spectral sensitivities differ from the color-matching functions of human vision. Furthermore, considerable error arises when the color under a different illuminant is reproduced, even if we use a camera whose spectral sensitivity equals the color-matching functions of human vision. This error is caused by illumination metamerism. In order to accurately acquire and reproduce the color of an object under arbitrary illumination, the spectral reflectance function of the object becomes indispensable.

Against this background, spectrum-based color reproduction methods have been studied [1–6]. In these methods, the spectral reflectance or spectral power distribution of an image is first estimated from the observed data, and then the color is calculated using the estimated spectral information. In order to accurately estimate the spectral information as an image, a multispectral image with more than three color channels is utilized. Several prototypes of multispectral still and video cameras have been developed [7–9]. At the same time, spectral estimation methods from multispectral or three-band images have been widely investigated [10–14]. Based on these studies, it has been confirmed that a high-fidelity color image can be reproduced from a multispectral image with fewer than 10 color channels.

However, if there are bad recording conditions, the recorded multispectral image may not have a sufficient signal-to-noise ratio (SNR). This could happen not only when exposure time or illumination power is insufficient, but also when multispectral images are recorded under fluorescent illumination. Except for the strong line spectra, the spectral power of fluorescent lamps is relatively weak. Therefore, when narrow band color filters of a multispectral camera are used, some specific channel images become very dark; that is, their SNR becomes low. Similar problems can arise when the spectral bandwidths of specific color channels are designed to be narrower than those of other color channels. There is no need to consider such problems for conventional trichromatic cameras, because their color filters have wide and similar spectral bandwidths.

When even one of the channel images of a multispectral image suffers from a low SNR, it reduces the quality of the reproduced color image. In addition, if we use an estimation method that considers the SNR of the acquired data, such as the Wiener estimation method, the information of the low-SNR channel image cannot be effectively utilized in the spectral estimation. This decreases the color estimation accuracy.

It is also known that image restoration methods such as spatial averaging operators are effective for noise suppression, and two simple schemes can be applied to a multispectral image: spatial averaging as pre- or post-processing of the spectral estimation procedure. By applying spatial averaging prior to the spectral estimation, the spectral estimation accuracy can be improved because the SNR of the restored images is improved. When we apply spatial averaging after the spectral estimation, the color image is restored directly; the noise in the reproduced image can be suppressed, but the color estimation accuracy is not improved. In addition to these two methods, a combination of spatial restoration and spectral estimation, that is, a spatio-spectral operation, could be a promising candidate. Does this spatio-spectral approach work more effectively than the former two methods? This is the main research subject of this paper.

In this paper, widely used spectral estimation based on the Wiener estimation is carried out. In addition, spatial Wiener filtering is also used for image restoration. This is known to be effective in the restoration of degraded images, especially in the presence of noise. Since both Wiener estimation and Wiener filtering are derived on the basis of the same Wiener criteria, in this paper we simply call the combined method a spatio-spectral Wiener estimation. It should be noted that although both Wiener estimation and Wiener filtering are the methods used to reconstruct original data from observed data with noise, they are conventionally discriminated by the applicable problems. While the term Wiener estimation is mainly used when the dimension of the observed data is less than that of the original, the term Wiener filtering is mainly used when the dimensions of the observed and original data are almost the same.

Spatio-spectral operations based on the Wiener criteria have already been studied for the restoration of multispectral or color images with blur, missing data, and noise; this is called multichannel Wiener restoration [15–19]. These studies deal with the spatial degradation of multispectral images, where no spectral degradation is assumed. The spatial degradation is restored using the spectral correlation based on the Wiener criterion. It has been shown that multichannel Wiener restoration is more effective than channel-independent Wiener restoration.

In contrast to those previous works, this paper deals with a more complicated case. The original data is a spectral reflectance image, and some spectral information is lost in the multispectral image acquisition where additive noise and no spatial degradation are assumed. Then, the spectral reflectance image is estimated from the multispectral image using both spatial and spectral correlations. Although the data acquisition process and the purpose are different from the above-mentioned multichannel Wiener restoration, the spatio-spectral Wiener estimation described in this paper is mathematically equivalent to the multichannel Wiener restoration. However, there have been no previous studies in which the spatio-spectral Wiener estimation has been applied to the spectral estimation from multispectral images.

The purpose of this paper is to show the effectiveness of the spatio-spectral Wiener estimation for the color image reproduction from low-SNR multispectral images. For this purpose, four kinds of estimation methods are compared, as shown in Fig. 1. The first one does not use spatial restoration; spectral reflectance is obtained through pixel-by-pixel Wiener estimation, and the color is calculated from the estimated spectral reflectance. This one-dimensional operation along the spectral dimension is referred to as 1D. The next one is referred to as 1D+2D and consists of a color estimation followed by a two-dimensional spatial Wiener filtering. On the other hand, 2D+1D consists of spatial Wiener filtering followed by color image estimation. The last one is referred to as 3D and corresponds to a three-dimensional spatio-spectral Wiener estimation.

Wiener filters work as averaging operators when the degradation includes only noise without blur [20]. As a result, random noise is reduced, while the spatial resolution is degraded. Therefore, the resolution of the reconstructed images as well as color estimation accuracy and SNR are evaluated in this paper.

Generally, a Wiener filter requires the inversion of an n×n matrix, where n is the number of image pixels. Since this is not easy to implement for large images, implementations based on fast Fourier transform (FFT) algorithms are frequently employed. As an alternative, a finite impulse response (FIR) Wiener filter [20] applies a limited-size filter kernel by convolution. In this paper, an FIR Wiener filter is used for practical implementation, as it is known that optimal FIR filters can achieve the performance of the original Wiener filters with lower computational complexity.

Fig. 1. Block diagrams of four different approaches for multispectral-based color image reproduction with noise restoration.

This paper is organized as follows: Section 2 introduces four comparative approaches, including the proposed spatio-spectral Wiener estimation for spectrum-based color reproduction. Section 3 introduces the evaluation measures and compares the four methods through simulations. The experimental results are given in Section 4, and Section 5 concludes the paper.

2. Spatio-spectral Wiener estimation

2.1 Spectral Wiener estimation (1D)

Let us start by reviewing the Wiener estimation of a spectral reflectance from multispectral image data, focusing on the estimation of a single pixel. Let f be an L-dimensional column vector representing a spectral reflectance, and let g be an N-dimensional column vector representing the multispectral image data, where the number of color channels of the multispectral image is N. The relation between them can be written as

$$g = Hf + n, \tag{1}$$

where H is an N×L system matrix given by the spectral sensitivity of the camera and the illumination spectrum, and n is an N-dimensional column vector representing noise. The Wiener estimation of f is given by

$$\hat{f} = Ag, \tag{2}$$

$$A = \langle f g^T \rangle \langle g g^T \rangle^{-1} = \langle f f^T \rangle H^T \left( H \langle f f^T \rangle H^T + \langle n n^T \rangle \right)^{-1}, \tag{3}$$

where 〈·〉 is an ensemble-averaging operator. To derive the Wiener estimation matrix A, the correlation matrices 〈ff^T〉 and 〈nn^T〉 are required. An example of how to build 〈ff^T〉 is given in Section 3.2. As for the noise correlation matrix 〈nn^T〉, we usually use a diagonal matrix with the noise variance of each channel on its diagonal. In addition, it should be noted that the Wiener estimation matrix A is derived under the assumption that the noise is independent of the image signals. Although this is not always true in actual imaging systems, the Wiener estimation based on this assumption has been shown to be effective.
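
As a sketch, the estimation matrix A of Eq. (3) can be formed directly with NumPy; the system matrix, correlation matrices, and noise levels below are illustrative assumptions, not the paper's actual camera model:

```python
import numpy as np

# Pixel-wise spectral Wiener estimation, Eqs. (1)-(3).
# L spectral samples, N camera channels; all values are stand-ins.
rng = np.random.default_rng(0)
L, N = 65, 6                        # e.g. 380-700 nm at 5 nm, six channels

H = rng.random((N, L))              # hypothetical sensitivity x illumination
Rff = np.eye(L)                     # stand-in for <ff^T>; cf. Section 3.2
Rnn = np.diag(np.full(N, 1e-3))     # diagonal noise correlation <nn^T>

# A = <ff^T> H^T (H <ff^T> H^T + <nn^T>)^-1, Eq. (3)
A = Rff @ H.T @ np.linalg.inv(H @ Rff @ H.T + Rnn)

f = rng.random(L)                   # true reflectance of one pixel
g = H @ f + rng.normal(0, 0.03, N)  # noisy camera signal, Eq. (1)
f_hat = A @ g                       # estimated reflectance, Eq. (2)
```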

2.2 Restoration by finite impulse response Wiener filters (1D+2D/2D+1D)

A FIR Wiener filter has a limited-size filter kernel, and the restoration is implemented as a convolution of this kernel. Assuming that the filter size is M×M, the restoration of a pixel of a monochrome image is only related to the observations inside the M×M window region centered on the pixel of interest. Let x and y be the original and observed images inside the window, expressed in lexicographic order as one-dimensional vectors of size M²:

$$x = \begin{pmatrix} x_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ x_{\mathrm{cent}} \\ \vdots \\ x_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \qquad y = \begin{pmatrix} y_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ y_{\mathrm{cent}} \\ \vdots \\ y_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}. \tag{4}$$

Assuming that an original image is degraded only by additive noise η, the relationship between x and y can be written by

$$y = x + \eta. \tag{5}$$

The central pixel x̂_cent of the restored image given by the FIR Wiener filter is

$$\hat{x}_{\mathrm{cent}} = b^T y, \tag{6}$$

where b is the M²-dimensional FIR Wiener filter given by

$$b^T = \langle x_{\mathrm{cent}} y^T \rangle \langle y y^T \rangle^{-1} = \langle x_{\mathrm{cent}} x^T \rangle \left( \langle x x^T \rangle + \langle \eta \eta^T \rangle \right)^{-1}. \tag{7}$$

The whole image is restored through the convolution of this filter.

In the case of a spectral estimation followed by Wiener filtering, namely 1D+2D, a spectral reflectance function is estimated from the multispectral image data by the pixel-by-pixel Wiener estimation of Eqs. (2) and (3), and the spectral reflectance image is obtained. Then, the spectral reflectance image is transformed to a color image, which can be represented in an arbitrary color space. After that, each channel image is independently restored by the FIR Wiener filter. For the derivation of the FIR Wiener filter, the noise variance of the estimated color image is estimated and utilized.

In the case of Wiener filtering followed by spectral estimation, namely 2D+1D, each channel image of the recorded multispectral image is independently restored by the FIR Wiener filter. After that, the spectral reflectance is estimated from the restored multispectral image using Eqs. (2) and (3). In the derivation of the Wiener estimation matrix A, the noise variance of the restored multispectral image is used.

Note that the Wiener filter in Eq. (7) becomes a delta function, b^T = (0,…,0,1,0,…,0), when the noise is negligible, η = 0, as shown in Appendix A.
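
A minimal sketch of the FIR Wiener filter of Eq. (7), using a separable Markov spatial correlation of the kind introduced in Section 3.2; the filter size, correlation coefficient, and noise variance are assumptions. It also checks the delta-function limit noted above:

```python
import numpy as np

def markov_corr(M, rho):
    # 1st-order Markov correlation matrix along one axis, cf. Eq. (16)
    i = np.arange(M)
    return rho ** np.abs(i[:, None] - i[None, :])

M, rho, sigma2, noise_var = 5, 0.9, 1.0, 0.05
# Separable spatial correlation <xx^T> of the M^2-pixel window, cf. Eq. (15)
Rxx = sigma2 * np.kron(markov_corr(M, rho), markov_corr(M, rho))
Rnn = noise_var * np.eye(M * M)               # <eta eta^T>, white noise

cent = (M * M - 1) // 2                       # index of the central pixel
# b = (<xx^T> + <eta eta^T>)^-1 <x x_cent>, the transpose of Eq. (7)
b = np.linalg.inv(Rxx + Rnn) @ Rxx[:, cent]

# With negligible noise the filter collapses to a delta (Appendix A).
b0 = np.linalg.inv(Rxx) @ Rxx[:, cent]
```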

2.3 Spatio-spectral Wiener estimation (3D)

This section describes a spatio-spectral Wiener estimation based on the FIR Wiener filter. The following formulation is almost equivalent to the three-dimensional Wiener filter for multispectral images in [15–19], except that the observation model includes spectral degradation and that the FIR Wiener filter is used, while the previous works deal with the original Wiener filter with an infinite impulse response function.

Consider an M×M window region centered at the pixel of interest and define

$$\mathcal{F} = \begin{pmatrix} f_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ f_{\mathrm{cent}} \\ \vdots \\ f_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \quad \mathcal{G} = \begin{pmatrix} g_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ g_{\mathrm{cent}} \\ \vdots \\ g_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \quad \mathcal{N} = \begin{pmatrix} n_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ n_{\mathrm{cent}} \\ \vdots \\ n_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \tag{8}$$

where 𝓕, 𝓖, and 𝓝 are LM²-, NM²-, and NM²-dimensional column vectors, respectively, and the subscript indicates a pixel index. Using these notations, Eq. (1) can be replaced by

$$\mathcal{G} = \begin{pmatrix} H & & \\ & \ddots & \\ & & H \end{pmatrix} \mathcal{F} + \mathcal{N} = \left( I_{M^2} \otimes H \right) \mathcal{F} + \mathcal{N}, \tag{9}$$

where ⊗ denotes the Kronecker product and I_M² is the M²×M² identity matrix. The spectral reflectance of the central pixel is estimated by

$$\hat{f}_{\mathrm{cent}} = C\,\mathcal{G}, \tag{10}$$

where C is an L×NM² matrix that estimates f_cent from all multispectral image signals inside the window, 𝓖. Based on the Wiener criteria, C is given by

$$C = \langle f_{\mathrm{cent}} \mathcal{G}^T \rangle \langle \mathcal{G}\mathcal{G}^T \rangle^{-1} = \langle f_{\mathrm{cent}} \mathcal{F}^T \rangle (I_{M^2} \otimes H)^T \left[ (I_{M^2} \otimes H) \langle \mathcal{F}\mathcal{F}^T \rangle (I_{M^2} \otimes H)^T + \langle \mathcal{N}\mathcal{N}^T \rangle \right]^{-1}. \tag{11}$$

To calculate the matrix C, we need a matrix inversion of dimension NM²×NM².

For the formation of the spatio-spectral correlation matrix 〈𝓕𝓕^T〉, we introduce the assumption that the correlation between elements of 𝓕 is separable into the product of spatial and spectral correlations. Although this separability assumption does not always hold, it has been utilized effectively in multispectral image restoration methods [20]. Under this assumption, the correlation matrix can be expressed as

$$\langle \mathcal{F}\mathcal{F}^T \rangle = \langle \bar{x}\bar{x}^T \rangle \otimes \langle f f^T \rangle, \tag{12}$$

where 〈x̄x̄^T〉 is the normalized spatial correlation, i.e., 〈x̄x̄^T〉 = 〈xx^T〉/σ², where σ² is the image signal variance. This assumption is introduced in the simulations and experiments below. Note that the Wiener filter is not separable even if the correlation matrix is separable [20], except in the negligible-noise case, 𝓝 = 0. When the noise is negligible and the correlation is separable, the Wiener filter C works in the same way as the pixel-by-pixel spectral Wiener estimation, as shown in Appendix B.
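
Under the separability assumption of Eq. (12), the matrix C of Eq. (11) can be assembled with Kronecker products. The following sketch uses small, made-up dimensions, correlation models, and noise levels so that the NM²×NM² inverse stays cheap:

```python
import numpy as np

# Spatio-spectral Wiener estimation, Eqs. (9)-(12); all values are stand-ins.
rng = np.random.default_rng(1)
L, N, M = 8, 6, 3

def markov(n, rho):
    i = np.arange(n)
    return rho ** np.abs(i[:, None] - i[None, :])

H = rng.random((N, L))                   # per-pixel system matrix
Rff = np.eye(L) + 0.5                    # stand-in spectral correlation <ff^T>
Rxx_bar = np.kron(markov(M, 0.9), markov(M, 0.9))  # normalized spatial corr.
RFF = np.kron(Rxx_bar, Rff)              # <FF^T> under separability, Eq. (12)
Hbig = np.kron(np.eye(M * M), H)         # I_{M^2} (x) H, Eq. (9)
Rnn = 1e-3 * np.eye(N * M * M)           # noise correlation <NN^T>

cent = (M * M - 1) // 2
P = np.zeros((L, L * M * M))
P[:, cent * L:(cent + 1) * L] = np.eye(L)   # picks f_cent out of F

# C = <f_cent G^T><GG^T>^-1 expanded as in Eq. (11)
C = P @ RFF @ Hbig.T @ np.linalg.inv(Hbig @ RFF @ Hbig.T + Rnn)

g_window = rng.random(N * M * M)         # stacked window observations
f_cent_hat = C @ g_window                # estimate for the central pixel
```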

3. Simulations

3.1 Conditions

Through simulations, the four methods shown in Fig. 1 are compared: 1D, 1D+2D, 2D+1D, and 3D. Multispectral image acquisition is simulated based on the spectral sensitivity of a six-band HDTV video camera [6], shown in Fig. 2. For the recording illumination, the fluorescent illuminant F12 is assumed, and the color under the daylight illuminant D65 is estimated and reproduced. The spectral power distributions of the illuminants are shown in Fig. 3. Gaussian white noise is added to the multispectral image; the amount of noise on each channel is shown in Fig. 4, where the SNR [dB] is defined by

$$\mathrm{SNR} = 20 \log_{10} \left( \frac{\text{signal of the white object}}{\text{standard deviation of the noise}} \right) \, [\mathrm{dB}]. \tag{13}$$

It can be seen that the SNRs of #1, #2, and #6 are lower than the others.
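
The SNR definition of Eq. (13) can be applied per channel; a minimal sketch, where the white-object signal levels and noise standard deviations are made-up illustrations, not the values of Fig. 4:

```python
import numpy as np

# Per-channel SNR of Eq. (13); all values below are hypothetical.
white_signal = np.array([0.20, 0.15, 0.90, 0.80, 0.70, 0.25])
noise_std = np.full(6, 0.01)

snr_db = 20 * np.log10(white_signal / noise_std)   # SNR per channel [dB]
```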

The size of the Wiener filter is fixed at 5×5 for simplicity in the simulations. In general, the filter size should be decided according to the amount of blur and additive noise and the correlation of original signals.

Fig. 2. Spectral sensitivity of six-band video camera used in simulations.

Fig. 3. Spectral power distribution of D65 and F12 illuminants.

Fig. 4. SNR of each of the six-channel images under simulation conditions.

3.2 Correlation matrices for Wiener filter formation

This subsection describes the formations of the correlation matrices required for Wiener filters. The spectral correlation matrix (L×L) is made from 24 spectral reflectance functions of a Gretag Macbeth ColorChecker:

$$\langle f f^T \rangle = \frac{1}{24} \sum_{i=1}^{24} m_i m_i^T, \tag{14}$$

where m_i is the spectral reflectance of the i-th color patch of the Macbeth ColorChecker. This matrix is used to derive the spectral Wiener estimation matrix in 1D, 1D+2D, and 2D+1D. Since the spectral reflectance functions of a Macbeth ColorChecker are designed and collected so as to simulate the spectral reflectance functions of objects in natural scenes, Eq. (14) is a reasonable choice for the spectral correlation matrix.

The spatial correlation matrix (M²×M²) is formed under two assumptions: the first is that the sequence of image pixels can be modeled as a 1st-order Markov sequence [20]. The second is that the correlation between elements of x is separable into the product of horizontal and vertical correlations. The measured correlation for a typical image fits quite well with a Markov covariance, and the separable approximation has been found to be reasonably accurate [21]. Under these assumptions, the spatial correlation is given by

$$\langle x x^T \rangle = \sigma^2 \left[ R(\rho_1) \otimes R(\rho_2) \right] = \sigma^2 \begin{pmatrix} \rho_1^0 R(\rho_2) & \rho_1^1 R(\rho_2) & \cdots & \rho_1^{M-1} R(\rho_2) \\ \rho_1^1 R(\rho_2) & \rho_1^0 R(\rho_2) & \cdots & \rho_1^{M-2} R(\rho_2) \\ \vdots & \vdots & \ddots & \vdots \\ \rho_1^{M-1} R(\rho_2) & \rho_1^{M-2} R(\rho_2) & \cdots & \rho_1^0 R(\rho_2) \end{pmatrix}, \tag{15}$$

where

$$R(\rho) = \begin{pmatrix} \rho^0 & \rho^1 & \cdots & \rho^{M-1} \\ \rho^1 & \rho^0 & \cdots & \rho^{M-2} \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{M-1} & \rho^{M-2} & \cdots & \rho^0 \end{pmatrix}, \tag{16}$$

and where σ² is the image signal variance and ρ1 and ρ2 are the correlation coefficients in the vertical and horizontal directions. In the simulations, the correlation coefficients are set as ρ1 = ρ2 = ρ, where ρ is one of 0.6, 0.9, 0.95, 0.97, and 0.985. This spatial correlation matrix is used to derive the FIR Wiener filter in 1D+2D and 2D+1D. In the case of 1D+2D, σ² is the variance of each channel signal of the color images, while in the case of 2D+1D, σ² is the variance of the recorded multispectral image signals.
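
The construction of Eqs. (15) and (16) can be sketched as follows, with illustrative values for σ², ρ1, and ρ2:

```python
import numpy as np

def R(rho, M):
    # 1st-order Markov correlation matrix, Eq. (16)
    i = np.arange(M)
    return rho ** np.abs(i[:, None] - i[None, :])

M, sigma2 = 5, 1.0
rho1, rho2 = 0.95, 0.90                 # vertical / horizontal coefficients
Rxx = sigma2 * np.kron(R(rho1, M), R(rho2, M))   # Eq. (15), M^2 x M^2
```

The Kronecker structure means that the correlation between pixels (r1, c1) and (r2, c2) is σ²·ρ1^|r1-r2|·ρ2^|c1-c2|, i.e., the separable product of the vertical and horizontal Markov models.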

The spatio-spectral correlation matrix (LM²×LM²) for 3D is derived under the assumption that the correlation between elements of 𝓕 is separable into the product of spatial and spectral correlations, as shown in Eq. (12). In the simulations, the spatio-spectral correlation is calculated by substituting Eqs. (14) and (15) into Eq. (12).

3.3 Evaluation measures

This section describes four evaluation measures used in the simulations: color estimation accuracy, noise variance, spatial resolution, and color fidelity for natural scene images.

The first measurement is for color estimation accuracy. When we estimate the color of a uniform region from the observations with noise, the estimated color distributes with some variance. Color estimation accuracy measures the color difference between the original and the average of the estimations. For this purpose, a virtual object is considered that consists of 24 uniform regions with the spectral reflectance of a Gretag Macbeth ColorChecker. One color region consists of 100×100 pixels, and the whole image consists of 400×600 pixels. Using this spectral reflectance image, the color estimation process is carried out by computer, and the image in the CIE 1976 L*a*b* space is simulated. Then, the color difference ΔE*ab between the original and the average L*a*b* value of each color region is calculated. The average of 24 ΔE*ab is used as the measurement for color estimation accuracy, which is referred to as ΔE.

The second measurement is for noise behavior in the reproduced color images. This corresponds to the variance of the estimated color of a uniform region. Accordingly, the L*a*b* image is estimated for the same virtual object based on the Macbeth ColorChecker; then, the standard deviations of L*, a*, and b* in each color region are calculated. The averages of the 24 standard deviations are used as the noise measurements, referred to as σL, σa, and σb.
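
For a single uniform region, the measures ΔE and σL, σa, σb reduce to simple statistics over the estimated L*a*b* values; a sketch with made-up Lab data:

```python
import numpy as np

# Evaluation measures of Section 3.3 for one uniform color region.
# The reference Lab value and the noise level are illustrative.
rng = np.random.default_rng(2)
lab_true = np.array([52.0, 18.0, -24.0])
# 100x100 pixels of noisy per-pixel estimates of the same color
lab_est = lab_true + rng.normal(0, 1.0, size=(100 * 100, 3))

# Color estimation accuracy: Delta-E*ab between the original and the
# average of the estimates
delta_e = np.linalg.norm(lab_est.mean(axis=0) - lab_true)

# Noise measures: per-channel standard deviations sigma_L, sigma_a, sigma_b
sigma_L, sigma_a, sigma_b = lab_est.std(axis=0)
```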

The third measurement is for spatial resolution. Since the SNR, colorimetric accuracy, and spatial resolution involve trade-offs, ΔE, σL, σa, and σb are evaluated as functions of spatial resolution. This paper uses the peak value of a point-spread function (PSF) as the resolution measure, where a PSF is calculated as the reconstructed image of a one-pixel white object on a black background. Figure 5 shows some examples of the PSFs given by the 3D approach with several correlation coefficients ρ, where the PSFs are represented by 5×5 sRGB images. The corresponding peak L* values are also shown; in the original, L* = 100. We can see that a broad PSF has a lower peak L* and a sharp PSF has a peak L* nearer to 100. Because the total power of every PSF is unity, the peak value becomes lower as the PSF becomes wider. Therefore, the peak L* value is used as the spatial resolution measure; a lower value implies more resolution degradation.

Fig. 5. Reconstructed peak L* value of a one-pixel white object used as a spatial resolution measure. A lower resolution measure implies more resolution degradation.

The fourth measure is for the fidelity of natural scenes. The above-mentioned measurements ΔE, σL, σa, and σb are all calculated on uniform regions; however, real scenes include various spatial structures. Therefore, each method is also evaluated using images of real scenes with spatial structures. Two spectral reflectance images, referred to as Toy and Scarf (Fig. 6), were prepared; they were estimated from 16-band images captured by a 16-band multispectral camera [8]. Each image is 512×512 pixels, with 65 spectral samples ranging from 380 to 700 nm at 5-nm intervals. Using these spectral images, ΔE*ab is calculated pixel by pixel.

Fig. 6. Spectral reflectance images used in the simulations: Toy (left) and Scarf (right). Images displayed in sRGB.

3.4 Results

Figure 7 shows the simulation results, where the four panels correspond to ΔE, σL, σa, and σb, respectively. All horizontal axes show the spatial resolution measure; the lower the value, the more the spatial resolution is degraded. The white square symbol indicates 1D, and the three color symbols correspond to the other three methods. Each color symbol is plotted at five points, which correspond to the results with the different spatial correlation coefficients ρ. From the upper-left graph, ΔE, we can see that 1D+2D does not reduce the color estimation error from the 1D case. Although 2D+1D reduces the color estimation error, it involves resolution degradation. On the other hand, 3D reduces the color estimation error without resolution degradation. Let us move on to the noise results. Although we cannot see a difference among the three methods in the results of L*, in the results of a* and b* the plots of 3D are the nearest to the bottom-right corner. This means that 3D reduces the color noise the most at the same resolution. From these results, it is confirmed that, compared to 1D+2D and 2D+1D, 3D is the most effective in reducing the color estimation error as well as the noise.

Table 1 shows the average and the maximum ΔE*ab over all pixels for Toy and Scarf. The correlation coefficient ρ used in the formation of the Wiener filters is also shown. The parameter ρ is selected so as to give a spatial resolution measurement of around 85; the corresponding plots are shown by the dotted squares in Fig. 7. With a spatial resolution measurement of around 85, the spatial degradation is scarcely perceptible in these two images. The results show that 3D reduces the average error to about 2/3 of that of 1D. Although the effect on the maximum error is not as large, 1D+2D, 2D+1D, and 3D all reduce the maximum error compared to 1D. Figure 8 shows an example of the original image and the images reproduced by 1D, 1D+2D, 2D+1D, and 3D. A 65×65-pixel area of the B-channel image of the sRGB reproduction of Scarf is shown. While the random noise is distinct in 1D, the noise is reduced by the other methods. At the same time, the details of the structures in the angel are reconstructed most clearly by 3D. From these results, we can say that the spatio-spectral Wiener estimation is effective for natural scene images.

All of the above simulations are done under the assumption of F12 recording illumination and the SNR shown in Fig. 4. At the end of this subsection, the results under other conditions are discussed. When D65 is used as the recording illumination at approximately the same SNR, although the average ΔE*ab is slightly improved by applying 3D, the difference between 1D and 3D is almost invisible in the reproduced color images. In the case of noise-free images, the results of the four methods become completely the same for any recording illumination, as explained in Section 2.

Fig. 7. Simulation results of comparative evaluation among four methods, 1D, 1D+2D, 2D+1D, and 3D: (a) ΔE, (b) σL, (c) σa, and (d) σb as a function of spatial resolution measure. Filters corresponding to plots in dotted squares are applied for reproduction of natural scene images.

Table 1. Average and Maximum ΔE*ab for Natural Scene Images, Toy and Scarf

Fig. 8. 65×65-pixel area of original and reconstructed images of Scarf. Monochrome images are the B-channel of sRGB images. Random noise seen in the 1D reconstruction is reduced in the 1D+2D, 2D+1D, and 3D reconstructions. Details in the left angel are reconstructed most clearly by 3D.

4. Experiments

The four different approaches are compared based on experimental data. While simulation data is generated from a virtual camera model, experimental data contains errors and noise not captured by the model. Therefore, it is important to demonstrate the effectiveness of the proposed method on experimental data. A Macbeth ColorChecker is captured by a six-channel video camera under daylight (SERIC) and fluorescent illuminants with two aperture settings, f/5 and f/8. The spectral sensitivity of the six-band camera is shown in Fig. 9. The spectral power distributions of the two illuminations are shown in Fig. 10. The SNRs for the four combinations of two illuminations and two f-numbers are shown in Fig. 11. Because of the narrower bandwidths of the #2, #3, and #6 color channels, their SNRs are lower than those of the other color channels. Needless to say, the SNR at an aperture of f/8 is lower than that at f/5. The spectral correlation is formed assuming that spectral reflectance functions can be modeled as a 1st-order Markov sequence with correlation coefficient 0.9995, namely that the correlation between two wavelengths separated by l nm is R(l) = σ²·0.9995^l, where σ² is the variance of the reflectance ratios. The spatial correlation is calculated based on Eq. (15), where the correlation coefficients are selected so as to give a spatial resolution measurement of around 85; the complete figures are given in Table 2.
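
The experimental spectral correlation model R(l) = σ²·0.9995^l can be tabulated on a wavelength grid; the 5-nm sampling range below and the value of σ² are assumptions for illustration:

```python
import numpy as np

# Spectral correlation from a 1st-order Markov model with coefficient
# 0.9995 per nm, sampled at an assumed 5-nm grid from 380 to 700 nm.
wavelengths = np.arange(380, 701, 5)      # 65 samples
sigma2 = 0.05                             # assumed reflectance variance
dist_nm = np.abs(wavelengths[:, None] - wavelengths[None, :])
Rff = sigma2 * 0.9995 ** dist_nm          # R(l) = sigma^2 * 0.9995^l
```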

Fig. 9. Spectral sensitivity of 6-band video camera used in experiments.

Fig. 10. Spectral power distributions of daylight and fluorescent illuminants used in experiments.

Fig. 11. SNR of each of the six-channel images under the experimental conditions.

The color image under the recording illumination is calculated from the recorded six-band image through 1D, 1D+2D, 2D+1D, and 3D. For the evaluation, the measurements ΔE, σL, σa, and σb are used as in the simulations; these are calculated based on the spectra measured by a PR-650 SpectraColorimeter. The results are shown in Fig. 12. In the high-SNR case (f/5), there is no significant difference in ΔE among the four methods. On the other hand, in the low-SNR case (f/8), the ΔEs of 1D, 1D+2D, and 2D+1D increase from those of the high-SNR case; however, 3D achieves almost the same accuracy as in the high-SNR case. Although the noise is not completely reduced to the high-SNR level even by 3D, the values σa and σb are reduced by applying 3D.

Fig. 12. Experimental results under high- and low-SNR cases: (a) ΔE, (b) σL, (c) σa, and (d) σb. Each filter is designed to give approximately the same spatial resolution measurement, as shown in Table 2.

Table 2. Spatial Resolution Measures and Spatial Correlation Coefficients in Experimental Settings

Figure 13 shows some examples of the reproduced images related to fluorescent illumination. Since there is no original image, the image of f/5 by the 1D method is shown as a reference. Comparing the two images given by 1D and 3D in the low-SNR case, we can see that the saturation of the reproduced colors by 1D is slightly lower than that of the reference. On the other hand, the reproduced colors by 3D are approximately the same as the reference image of f/5. From these results, the effectiveness of the 3D method is verified based on the experimental data. At the same time, we can say that 3D gives improvement only if the SNR is insufficient.

Fig. 13. Examples of the reproduced images under fluorescent illumination: (a) f/5 (high SNR) by 1D, (b) f/8 (low SNR) by 1D, and (c) f/8 (low SNR) by 3D. The top image is shown as a reference. Comparing the two low-SNR images in the bottom row, the color reproducibility of the 3D image is improved compared to 1D, while the noise is not completely eliminated by 3D.

5. Conclusions

This paper shows that the spatio-spectral Wiener estimation can reduce color estimation error as well as color noise without perceivable resolution degradation when the SNR of the recorded multispectral images is not sufficient. For natural scene images, the average ΔE*ab is reduced to about 2/3 of that in the conventional pixel-by-pixel approach. Note that if the SNR of the recorded multispectral images is sufficient, the four methods shown in this paper give almost the same results.

Appendix A: spatial Wiener filter b for negligible noise case

Based on the definition of Eq. (4), x_cent can be written as

$$x_{\mathrm{cent}} = p^T x, \tag{17}$$

where p^T = (0,…,0,1,0,…,0). Substituting Eq. (17) and η = 0 into Eq. (7),

$$b^T = p^T \langle x x^T \rangle \langle x x^T \rangle^{-1} = p^T, \tag{18}$$

which shows that b^T = (0,…,0,1,0,…,0).

Appendix B: spatio-spectral Wiener filter C for negligible-noise and separable-correlation case

Based on the definition of Eq. (8), f_cent can be written as

$$f_{\mathrm{cent}} = P\,\mathcal{F}, \qquad P = (0_L, \dots, 0_L, I_L, 0_L, \dots, 0_L), \tag{19}$$

where P is an L×LM² matrix, and 0_L and I_L are the L×L zero and identity matrices, respectively. Substituting Eq. (19) and 𝓝 = 0 into Eq. (11),

$$C = P \langle \mathcal{F}\mathcal{F}^T \rangle (I_{M^2} \otimes H)^T \left[ (I_{M^2} \otimes H) \langle \mathcal{F}\mathcal{F}^T \rangle (I_{M^2} \otimes H)^T \right]^{-1}. \tag{20}$$

Substituting Eq. (12) into Eq. (20) and applying the arithmetic rules of the Kronecker product,

$$\begin{aligned} C &= P \left( \langle \bar{x}\bar{x}^T \rangle \otimes \langle f f^T \rangle \right) (I_{M^2} \otimes H)^T \left[ (I_{M^2} \otimes H) \left( \langle \bar{x}\bar{x}^T \rangle \otimes \langle f f^T \rangle \right) (I_{M^2} \otimes H)^T \right]^{-1} \\ &= P \left[ \langle \bar{x}\bar{x}^T \rangle \otimes \left( \langle f f^T \rangle H^T \right) \right] \left[ \langle \bar{x}\bar{x}^T \rangle \otimes \left( H \langle f f^T \rangle H^T \right) \right]^{-1} \\ &= P \left\{ I_{M^2} \otimes \left[ \langle f f^T \rangle H^T \left( H \langle f f^T \rangle H^T \right)^{-1} \right] \right\} \\ &= \left( 0_L, \dots, 0_L, \langle f f^T \rangle H^T \left( H \langle f f^T \rangle H^T \right)^{-1}, 0_L, \dots, 0_L \right). \end{aligned} \tag{21}$$

The derived C gives the same estimate as the pixel-by-pixel Wiener estimation, f̂_cent = 〈ff^T〉H^T(H〈ff^T〉H^T)^{-1} g_cent.
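
The reduction derived in this appendix can be checked numerically: with 𝓝 = 0 and a separable correlation, C consists of zero blocks except for the pixel-by-pixel Wiener matrix at the central position. A small sketch with made-up dimensions and correlation models:

```python
import numpy as np

# Noise-free, separable-correlation check of Appendix B; all sizes and
# correlation models are illustrative assumptions.
rng = np.random.default_rng(3)
L, N, M = 5, 3, 3

H = rng.random((N, L))                     # per-pixel system matrix
Rff = np.eye(L) + 0.5                      # positive-definite stand-in <ff^T>
i = np.arange(M)
Rs = 0.9 ** np.abs(i[:, None] - i[None, :])
Rxx_bar = np.kron(Rs, Rs)                  # separable normalized spatial corr.
RFF = np.kron(Rxx_bar, Rff)                # Eq. (12)
Hbig = np.kron(np.eye(M * M), H)

cent = (M * M - 1) // 2
P = np.zeros((L, L * M * M))
P[:, cent * L:(cent + 1) * L] = np.eye(L)

# Eq. (20): noise-free spatio-spectral Wiener matrix
C = P @ RFF @ Hbig.T @ np.linalg.inv(Hbig @ RFF @ Hbig.T)
# Pixel-by-pixel Wiener estimation matrix for the noise-free case
W = Rff @ H.T @ np.linalg.inv(H @ Rff @ H.T)
```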

Acknowledgments

This research is supported by the National Institute of Information and Communications Technology (NICT).

References and links

1. P. D. Burns and R. S. Berns, “Analysis of multispectral image capture,” in Proceedings of the Fourth Color Imaging Conference, 19–22 (IS&T, 1996).

2. B. Hill, “Color capture, color management and the problem of metamerism,” Proc. SPIE 3963, 3–14 (2000).

3. H. Haneishi, T. Hasegawa, A. Hosoi, Y. Yokoyama, N. Tsumura, and Y. Miyake, “System design for accurately estimating spectral reflectance of art paintings,” Appl. Opt. 39, 6621–6632 (2000).

4. M. Yamaguchi, R. Iwama, Y. Ohya, T. Obi, N. Ohyama, Y. Komiya, and T. Wada, “Natural color reproduction in the television system for telemedicine,” Proc. SPIE 3031, 482–489 (1997).

5. M. Yamaguchi, T. Teraji, K. Ohsawa, T. Uchiyama, H. Motomura, Y. Murakami, and N. Ohyama, “Color image reproduction based on the multispectral and multiprimary imaging: experimental evaluation,” Proc. SPIE 4663, 15–26 (2001).

6. M. Yamaguchi, H. Haneishi, H. Fukuda, J. Koshimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: Natural Vision system and its applications,” Proc. SPIE 6062, 129–140 (2006).

7. H. Sugiura, T. Kuno, N. Watanabe, N. Matoba, J. Hayashi, and Y. Miyake, “Development of highly accurate multispectral cameras,” presented at the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives, Chiba, Japan, 21–22 October 1999.

8. H. Fukuda, T. Uchiyama, H. Haneishi, M. Yamaguchi, and N. Ohyama, “Development of 16-band multispectral image archiving system,” Proc. SPIE 5667, 136–145 (2005).

9. K. Ohsawa, T. Ajito, H. Fukuda, Y. Komiya, H. Haneishi, M. Yamaguchi, and N. Ohyama, “Six-band HDTV camera system for spectrum-based color reproduction,” J. Imaging Sci. Technol. 48, 85–92 (2004).

10. F. H. Imai and R. Berns, “Spectral estimation using trichromatic digital cameras,” in Proceedings of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives, 42–49 (Society of Multispectral Imaging of Japan, 1999).

11. Y. Murakami, T. Obi, M. Yamaguchi, and N. Ohyama, “Nonlinear estimation of spectral reflectance based on Gaussian mixture distribution for color image reproduction,” Appl. Opt. 41, 4840–4847 (2002).

12. Y. Zhao, L. A. Taplin, M. Nezamabadi, and R. S. Berns, “Using the matrix R method for spectral image archives,” in Proceedings of AIC Colour 05, 469–472 (IS&T, 2005).

13. N. Shimano, “Recovery of spectral reflectances of objects being imaged without prior knowledge,” IEEE Trans. Image Process. 15, 1848–1856 (2006).

14. H. Shen, P. Cai, S. Shao, and J. Xin, “Reflectance reconstruction for multispectral imaging by adaptive Wiener estimation,” Opt. Express 15, 15545–15554 (2007).

15. B. R. Hunt and O. Kubler, “Karhunen-Loeve multispectral image restoration, part I: Theory,” IEEE Trans. Acoust., Speech, Signal Process. 32, 592–600 (1984).

16. N. P. Galatsanos and R. T. Chin, “Digital restoration of multichannel images,” IEEE Trans. Acoust., Speech, Signal Process. 37, 415–421 (1989).

17. L. Suryani, S. M. Yamaguchi, N. Ohyama, T. Honda, and K. Tanaka, “Estimation of an image sampled by a CCD sensor array using a color synthetic method,” Opt. Commun. 84, 133–138 (1991).

18. M. K. Ozkan, A. T. Erdem, M. I. Sezan, and A. M. Tekalp, “Efficient multiframe Wiener restoration of blurred and noisy image sequences,” IEEE Trans. Image Process. 1, 453–476 (1992).

19. A. K. Katsaggelos, K. T. Lay, and N. P. Galatsanos, “A general framework for frequency domain multichannel signal processing,” IEEE Trans. Image Process. 2, 417–420 (1993).

20. A. K. Jain, Fundamentals of Digital Image Processing (Prentice-Hall, 1989).

21. W. K. Pratt, Digital Image Processing (John Wiley & Sons, 1978).

Figures (13)

Fig. 1. Block diagrams of four different approaches for multispectral-based color image reproduction with noise restoration.

Fig. 2. Spectral sensitivity of the six-band video camera used in the simulations.

Fig. 3. Spectral power distributions of the D65 and F2 illuminants.

Fig. 4. SNR of each of the six channel images under the simulation conditions.

Fig. 5. Reconstructed peak L* value of a one-pixel white object, used as a spatial resolution measure. A smaller resolution measure implies more resolution degradation.

Fig. 6. Spectral reflectance images used in the simulations: Toy (left) and Scarf (right). Images displayed in sRGB.

Fig. 7. Simulation results of the comparative evaluation among the four methods, 1D, 1D+2D, 2D+1D, and 3D: (a) ΔE, (b) σL, (c) σa, and (d) σb as a function of the spatial resolution measure. Filters corresponding to the plots in dotted squares are applied for the reproduction of natural scene images.

Fig. 8. 65×65 pixel area of the original and reconstructed images of Scarf. Monochrome images are the B channel of the sRGB images. Random noise seen in the 1D reconstruction is reduced in the 1D+2D, 2D+1D, and 3D reconstructions. Details in the left angle are reconstructed most clearly by 3D.

Fig. 9. Spectral sensitivity of the six-band video camera used in the experiments.

Fig. 10. Spectral power distributions of the daylight and fluorescent illuminants used in the experiments.

Fig. 11. SNR of each of the six channel images under the experimental conditions.

Fig. 12. Experimental results under the high- and low-SNR cases: (a) ΔE, (b) σL, (c) σa, and (d) σb. Each filter is designed to give approximately the same spatial resolution measures, as shown in Table 2.

Fig. 13. Examples of the reproduced images under fluorescent illumination: (a) f/5 (high SNR) by 1D, (b) f/8 (low SNR) by 1D, and (c) f/8 (low SNR) by 3D. The top image is shown as a reference. Comparing the two low-SNR images in the bottom row, 3D improves the color reproducibility over 1D, although the noise is not completely eliminated.

Tables (2)

Table 1. Average and Maximum ΔE*ab for Natural Scene Images, Toy and Scarf

Table 2. Spatial Resolution Measures and Spatial Correlation Coefficients in Experimental Settings

Equations (29)

$$\mathbf{g} = \mathbf{H}\mathbf{f} + \mathbf{n},\tag{1}$$

$$\hat{\mathbf{f}} = \mathbf{A}\mathbf{g},\tag{2}$$

$$\mathbf{A} = \langle \mathbf{f}\mathbf{g}^T \rangle \langle \mathbf{g}\mathbf{g}^T \rangle^{-1} = \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \left( \mathbf{H} \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T + \langle \mathbf{n}\mathbf{n}^T \rangle \right)^{-1},\tag{3}$$

$$\mathbf{x} = \begin{pmatrix} x_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ x_{\mathrm{cent}} \\ \vdots \\ x_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \qquad \mathbf{y} = \begin{pmatrix} y_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ y_{\mathrm{cent}} \\ \vdots \\ y_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix},\tag{4}$$

$$\mathbf{y} = \mathbf{x} + \boldsymbol{\eta},\tag{5}$$

$$\hat{x}_{\mathrm{cent}} = \mathbf{b}^T \mathbf{y},\tag{6}$$

$$\mathbf{b}^T = \langle x_{\mathrm{cent}}\mathbf{y}^T \rangle \langle \mathbf{y}\mathbf{y}^T \rangle^{-1} = \langle x_{\mathrm{cent}}\mathbf{x}^T \rangle \left( \langle \mathbf{x}\mathbf{x}^T \rangle + \langle \boldsymbol{\eta}\boldsymbol{\eta}^T \rangle \right)^{-1},\tag{7}$$

$$\boldsymbol{\mathcal{F}} = \begin{pmatrix} \mathbf{f}_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ \mathbf{f}_{\mathrm{cent}} \\ \vdots \\ \mathbf{f}_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \qquad \boldsymbol{\mathcal{G}} = \begin{pmatrix} \mathbf{g}_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ \mathbf{g}_{\mathrm{cent}} \\ \vdots \\ \mathbf{g}_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix}, \qquad \boldsymbol{\mathcal{N}} = \begin{pmatrix} \mathbf{n}_{\mathrm{cent}-(M^2-1)/2} \\ \vdots \\ \mathbf{n}_{\mathrm{cent}} \\ \vdots \\ \mathbf{n}_{\mathrm{cent}+(M^2-1)/2} \end{pmatrix},\tag{8}$$

$$\boldsymbol{\mathcal{G}} = \begin{pmatrix} \mathbf{H} & & \\ & \ddots & \\ & & \mathbf{H} \end{pmatrix} \boldsymbol{\mathcal{F}} + \boldsymbol{\mathcal{N}} = (\mathbf{I}_{M^2} \otimes \mathbf{H}) \boldsymbol{\mathcal{F}} + \boldsymbol{\mathcal{N}},\tag{9}$$

$$\hat{\mathbf{f}}_{\mathrm{cent}} = \mathbf{C}\boldsymbol{\mathcal{G}},\tag{10}$$

$$\mathbf{C} = \langle \mathbf{f}_{\mathrm{cent}}\boldsymbol{\mathcal{G}}^T \rangle \langle \boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{G}}^T \rangle^{-1} = \langle \mathbf{f}_{\mathrm{cent}}\boldsymbol{\mathcal{F}}^T \rangle (\mathbf{I}_{M^2} \otimes \mathbf{H})^T \left[ (\mathbf{I}_{M^2} \otimes \mathbf{H}) \langle \boldsymbol{\mathcal{F}}\boldsymbol{\mathcal{F}}^T \rangle (\mathbf{I}_{M^2} \otimes \mathbf{H})^T + \langle \boldsymbol{\mathcal{N}}\boldsymbol{\mathcal{N}}^T \rangle \right]^{-1},\tag{11}$$

$$\langle \boldsymbol{\mathcal{F}}\boldsymbol{\mathcal{F}}^T \rangle = \langle \bar{\mathbf{x}}\bar{\mathbf{x}}^T \rangle \otimes \langle \mathbf{f}\mathbf{f}^T \rangle,\tag{12}$$

$$\mathrm{SNR} \equiv 20 \log_{10} \left( \frac{\text{signal of the white object}}{\text{standard deviation of the noise}} \right) \ [\mathrm{dB}],\tag{13}$$

$$\langle \mathbf{f}\mathbf{f}^T \rangle = \frac{1}{24} \sum_{i=1}^{24} \mathbf{m}_i \mathbf{m}_i^T,\tag{14}$$

$$\langle \mathbf{x}\mathbf{x}^T \rangle = \sigma^2 \left[ \mathbf{R}(\rho_1) \otimes \mathbf{R}(\rho_2) \right] = \sigma^2 \begin{pmatrix} \rho_1^0 \mathbf{R}(\rho_2) & \rho_1^1 \mathbf{R}(\rho_2) & \cdots & \rho_1^{M-1} \mathbf{R}(\rho_2) \\ \rho_1^1 \mathbf{R}(\rho_2) & \rho_1^0 \mathbf{R}(\rho_2) & \cdots & \rho_1^{M-2} \mathbf{R}(\rho_2) \\ \vdots & \vdots & \ddots & \vdots \\ \rho_1^{M-1} \mathbf{R}(\rho_2) & \rho_1^{M-2} \mathbf{R}(\rho_2) & \cdots & \rho_1^0 \mathbf{R}(\rho_2) \end{pmatrix},\tag{15}$$

$$\mathbf{R}(\rho) = \begin{pmatrix} \rho^0 & \rho^1 & \cdots & \rho^{M-1} \\ \rho^1 & \rho^0 & \cdots & \rho^{M-2} \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{M-1} & \rho^{M-2} & \cdots & \rho^0 \end{pmatrix},\tag{16}$$

$$x_{\mathrm{cent}} = \mathbf{p}^T \mathbf{x},\tag{17}$$

$$\mathbf{b}^T = \mathbf{p}^T \langle \mathbf{x}\mathbf{x}^T \rangle \langle \mathbf{x}\mathbf{x}^T \rangle^{-1} = \mathbf{p}^T,\tag{18}$$

$$\mathbf{f}_{\mathrm{cent}} = \mathbf{P}\boldsymbol{\mathcal{F}}, \qquad \mathbf{P} = (\mathbf{0}_L, \ldots, \mathbf{0}_L, \mathbf{I}_L, \mathbf{0}_L, \ldots, \mathbf{0}_L),\tag{19}$$

$$\mathbf{C} = \mathbf{P} \langle \boldsymbol{\mathcal{F}}\boldsymbol{\mathcal{F}}^T \rangle (\mathbf{I}_{M^2} \otimes \mathbf{H})^T \left[ (\mathbf{I}_{M^2} \otimes \mathbf{H}) \langle \boldsymbol{\mathcal{F}}\boldsymbol{\mathcal{F}}^T \rangle (\mathbf{I}_{M^2} \otimes \mathbf{H})^T \right]^{-1},\tag{20}$$

$$\begin{aligned} \mathbf{C} &= \mathbf{P} \left( \langle \bar{\mathbf{x}}\bar{\mathbf{x}}^T \rangle \otimes \langle \mathbf{f}\mathbf{f}^T \rangle \right) (\mathbf{I}_{M^2} \otimes \mathbf{H})^T \left[ (\mathbf{I}_{M^2} \otimes \mathbf{H}) \left( \langle \bar{\mathbf{x}}\bar{\mathbf{x}}^T \rangle \otimes \langle \mathbf{f}\mathbf{f}^T \rangle \right) (\mathbf{I}_{M^2} \otimes \mathbf{H})^T \right]^{-1} \\ &= \mathbf{P} \left[ \langle \bar{\mathbf{x}}\bar{\mathbf{x}}^T \rangle \otimes \left( \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \right) \right] \left[ \langle \bar{\mathbf{x}}\bar{\mathbf{x}}^T \rangle \otimes \left( \mathbf{H} \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \right) \right]^{-1} \\ &= \mathbf{P} \left\{ \mathbf{I}_{M^2} \otimes \left[ \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \left( \mathbf{H} \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \right)^{-1} \right] \right\} \\ &= \left( \mathbf{0}_L, \ldots, \mathbf{0}_L, \ \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \left( \mathbf{H} \langle \mathbf{f}\mathbf{f}^T \rangle \mathbf{H}^T \right)^{-1}, \ \mathbf{0}_L, \ldots, \mathbf{0}_L \right). \end{aligned}\tag{21–24}$$