
Improving resolution of miniature spectrometers by exploiting sparse nature of signals

Open Access

Abstract

In this paper, we present a signal processing approach to improve the resolution of a spectrometer with a fixed number of low-cost, non-ideal filters. We aim to show that the resolution can be improved beyond the limit set by the number of filters by exploiting the sparse nature of a signal spectrum. We consider an underdetermined system of linear equations as a model for signal spectrum estimation. We design a non-negative L1 norm minimization algorithm for solving the system of equations. We demonstrate that the resolution can be improved multiple times by using the proposed algorithm.

©2012 Optical Society of America

1. Introduction

Miniature spectrometers are in high demand in various industrial and domestic applications [1]. Modern miniature spectrometers are built with filter arrays to bring down their cost. A filter array is usually fabricated using CMOS or nanofabrication technology [2, 3], which reduces the size of the spectrometer in addition to its overall cost. Miniature spectrometers based on a fabricated filter array offer enhanced portability, which helps in obtaining in situ measurements. These spectrometers can be readily interfaced with personal computers and other electronic systems. In addition, filter-array-based spectrometers can capture the signal spectrum of a light source in a short duration.

In recent years, researchers have focused on improving the resolution of filter-array-based spectrometers for the following reasons. Filter-array-based spectrometers consist of a fixed set of filters, each with a different transmission function. A filter with an ideal brick-wall transmission function, which passes light only in its designed band and rejects light outside that band, is deemed best for spectrometers; this is obvious, since spectrometers with ideal filters directly give an estimate of the signal spectrum. In modern portable spectrometers, however, due to the low-cost fabrication of the filter array, the shape of each filter transmission function is not an ideal narrow brick-wall; instead, it extends with varying amplitude and phase throughout the entire spectrum. Thus, the signal spectrum obtained from such a spectrometer is severely distorted, which makes it difficult to resolve the distinct spectral components of a light source [4]. To alleviate this problem, digital signal processing (DSP) techniques that process the raw signal spectrum obtained from the spectrometer have been shown to be helpful. The current literature [4, 5] provides a few DSP techniques that improve the ability of filter-array spectrometers to resolve distinct spectral components. Analytically, these DSP techniques solve a system of linear equations in order to estimate the signal spectrum. To solve the system of linear equations, an optimization criterion must be selected. In this paper, we consider two different criteria: the relatively new L1 norm minimization (more details follow in Section 4.2) and the classical L2 norm minimization.

In [4], the authors solved a system of linear equations that is overdetermined. In an overdetermined system, the number of unknowns (spectral components) is smaller than or equal to the number of equations (the number of filters). Thus, the limit of spectral resolution is set by the number of filters in an array. For a given fixed set of non-ideal filters, improving the resolution beyond this limit amounts to solving an underdetermined system of linear equations. The authors of [4] did attempt to solve an underdetermined system of equations, using the classical L2 norm minimization criterion to obtain a unique solution. Their approach failed, however, because the L2 norm solution of the underdetermined system was far off and did not give an accurate estimate. Therefore, without defining resolution, [4] concluded that the number of filters in an array decides the maximum resolution achievable by a spectrometer. Similar to [4], the technique in [5] solves the overdetermined system using L2 norm minimization, and hence the resolution obtained in [5] is also limited by the number of filters. In a recent paper [6], the authors solved an underdetermined system using an L1 norm based minimization for the purpose of increasing reconstruction accuracy.

In this paper, we develop a new framework that determines the achievable resolution of a spectrometer together with a definition of reconstruction accuracy. We aim to show that the achievable resolution of a spectrometer can be improved multiple times beyond the limit set by the number of filters, an idea originally discussed in [7]. We show that an L1 norm minimization based approach is more suitable than classical L2 norm minimization for solving an underdetermined system of linear equations. We design a new spectrum estimation algorithm using L1 norm minimization that exploits the prior information that the signal spectrum is sparse. We demonstrate that, given a fixed set of filters with non-ideal characteristics, the resolution can be increased multiple times beyond what is suggested by [4] and [5]. The technique proposed in this paper could also be applicable to other spectrometers that rely on digital signal processing, such as those employed in the determination of rice wine composition [8], liquid signature analysis [9] and organic detection [10].

This paper is organized as follows: Section 2 presents the system model and Section 3 discusses the concept of resolution and achievable limit of resolution of a spectrometer. Section 4 presents the sparse representation of the signal spectrum and L1 norm minimization technique. Experimental results are discussed in Section 5 and Section 6 concludes the paper.

2. System description

A source of light, whose spectrum is to be estimated, falls on the filter array shown in Fig. 1. The filter array consists of M elements (filters) arranged in a 2D manner. Underneath this filter array is a photosensor array made of charge-coupled devices (CCDs), each of which converts the filtered light into an electric charge. Each filter corresponds to a pixel element in the CCD array. The combined arrangement of the filter array and the CCD array constitutes a spectral detector. The output of the spectral detector is sampled and fed into a DSP unit to estimate the spectrum.

Fig. 1. Schematic diagram of a typical miniature spectrometer.

Let x(λ) represent the spectral component of the original light source at wavelength λ. Each element of the filter array is specified in terms of its transmission function, which is a measure of the fraction of light that the filter passes at a given wavelength λ. Let f_i(λ), i = 1, 2, …, M, represent the transmission function of the i-th filter of the filter array. Figure 2 shows typical transmission functions of the filters in the filter array [4]. We note from Fig. 2 that the transmission functions are not ideal brick-wall functions because of the low-cost fabrication of the filters. As is evident from Fig. 2, the pass band of each filter transmission function leaks into the pass bands of neighboring filters as well. Therefore, each filter senses not only the light in its own pass band but also the light from neighboring bands. Although this distorts the spectral information in any particular filter band, we observe that more than one filter senses the same spectral information.

Fig. 2. Typical transmission functions of filters in an array.

Let d(λ) represent the sensitivity function of the CCD detectors (see Fig. 1), which is assumed to be the same for all elements. Let D_i(λ), i = 1, 2, …, M, represent the sensitivity of the i-th spectral detector at wavelength λ, given by D_i(λ) = d(λ) f_i(λ). We note that each D_i(λ) is a continuous function of wavelength λ. The output y_i of the i-th spectral detector is then given by y_i = ∫ D_i(λ) x(λ) dλ + w_i, where w_i is the observation or measurement noise.

We collect all M samples from the output of the spectral detector and arrange them in the form of a vector y, as y = [y_1, …, y_M]^T. Our aim is to obtain a high-resolution spectral estimate using a given, fixed number M of filter elements (and thus a fixed number of elements in the CCD array). We model the output vector y as a system of linear equations as follows:

y = Dx + w,    (1)
where D is an M × N detector sensitivity matrix given by

D = [ D_1(λ_1)  D_1(λ_2)  ⋯  D_1(λ_N)
      D_2(λ_1)  D_2(λ_2)  ⋯  D_2(λ_N)
        ⋮          ⋮        ⋱    ⋮
      D_M(λ_1)  D_M(λ_2)  ⋯  D_M(λ_N) ].    (2)

We note that a value D_i(λ_j) in Eq. (2) is obtained by uniformly sampling the sensitivity function of the i-th spectral detector along the wavelength axis. We also note that the condition number of the matrix D may be high, because the non-ideal transmission functions make the rows of D correlated with each other. We also observe that the more common implementation of spectrometers is based on a grating, an element which disperses the original light into its individual wavelengths. In the filter-array-based miniature spectrometers considered in this paper, the role of the dispersive grating is taken by the filter array; that is, each filter in the filter array acts as a grating element and passes only a few wavelengths of the original spectrum. Our method is applicable to grating-based spectrometers as well; we only need to know the transmission functions of all the grating elements (in our case, the filter transmission functions). Once they are known, they can be arranged in the rows of the matrix D in Eq. (2). We model each component of the M × 1 vector w as a Gaussian random variable with zero mean and variance σ^2.

Let x = [x_1, x_2, …, x_N]^T represent a signal spectrum vector obtained by uniformly sampling the continuous signal spectrum x(λ) at wavelengths λ_1, λ_2, …, λ_N. Let W_λ denote the total bandwidth of the signal x, and let Δλ_N = W_λ/N denote the spacing between the samples of x. Our goal is to solve for an estimate x̂ of the signal spectrum x from the observation y, given the detector sensitivity matrix D. We measure the reconstruction accuracy of the signal spectral estimate in terms of its mean square error (MSE), defined as MSE = (1/N) Σ_{i=1}^{N} (x_i − x̂_i)^2, where x_i denotes the i-th component of x. In the noiseless case, the system in Eq. (1) is overdetermined if M > N and underdetermined if M < N.
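To make the discrete model concrete, the following minimal NumPy sketch builds a synthetic version of Eq. (1) and the MSE metric. The filter shapes, their centers and widths, and the two-component test spectrum are illustrative placeholders, not the measured transmission functions of [4].

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 40, 240                          # number of filters, number of spectral samples
wavelengths = np.linspace(400, 1200, N) # lambda_1 ... lambda_N in nm

# Hypothetical smooth, overlapping (non-brick-wall) detector sensitivities D_i(lambda_j);
# a real D would be built from the measured d(lambda) * f_i(lambda) curves.
centers = np.linspace(400, 1200, M)
D = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / 120.0) ** 2)

x_true = np.zeros(N)                    # example spectrum with two nonzero components
x_true[[128, 130]] = 1.0

sigma = 1e-3
y = D @ x_true + sigma * rng.standard_normal(M)   # Eq. (1): y = D x + w

def mse(x, x_hat):
    """Reconstruction accuracy as defined above."""
    return np.mean((x - x_hat) ** 2)
```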

3. Limit on resolution of a spectrometer

The resolution of a spectrometer is determined by its ability to distinguish two closely spaced spectral components. At a given spacing Δλ_N = W_λ/N, we define the maximum achievable resolution of a spectrometer as Δλ_max = μ_N Δλ_N = μ_N W_λ/N, where μ_N is given by

μ_N := min_{μ ∈ {1, 2, …, N−1}} μ   subject to   MSE ≤ δ,    (3)
where δ > 0 is a user-defined positive number. We say that any two spectral components which are μ_N Δλ_N apart from each other are resolvable if the MSE (reconstruction accuracy as defined in Section 2) between the recovered and the input signal spectrum is less than or equal to δ. Since the spacing between the samples is given by Δλ_N = W_λ/N, for a fixed W_λ, increasing N decreases the spacing Δλ_N. We note that to find the maximum possible resolution, N needs to be large. For μ_N = 1, there must exist two consecutive non-zero spectral components of x, i.e., components Δλ_N apart from each other, that can be distinctly resolved. For μ_N = 2, there are no pairs of spectral components Δλ_N apart from each other that are resolvable, but there are some pairs of non-zero spectral components 2Δλ_N apart from each other that are distinctly resolvable.
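As an illustration of how the definition in Eq. (3) could be evaluated numerically, the sketch below searches for the smallest spacing μ (in samples) at which randomly placed pairs of components are still recovered with MSE ≤ δ. The trial-based search and the `recover` callback are our own assumptions for illustration; the paper does not prescribe this procedure.

```python
import numpy as np

def estimate_mu_N(recover, D, delta, trials=20, seed=0):
    """Approximate mu_N of Eq. (3): smallest spacing mu (in samples) such that
    pairs of components mu * dlambda_N apart are reconstructed with MSE <= delta."""
    rng = np.random.default_rng(seed)
    M, N = D.shape
    for mu in range(1, N):
        resolved = True
        for _ in range(trials):
            i = rng.integers(0, N - mu)
            x = np.zeros(N)
            x[i] = x[i + mu] = 1.0          # two components mu samples apart
            x_hat = recover(D, D @ x)       # noiseless measurement for simplicity
            if np.mean((x - x_hat) ** 2) > delta:
                resolved = False
                break
        if resolved:
            return mu
    return None                             # no spacing met the MSE target
```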

In the DSP techniques proposed in [4, 5], the resolution was μ_M W_λ/M (because N = M). The authors in [4, 5] also considered the case N > M, but concluded that it is of no use to increase N beyond M, since the MSE becomes very large whenever N > M when using the L2 norm recovery technique. With the methods in [4, 5], increasing the number of filters was therefore the only way to improve the resolution further.

In this paper, we aim to show that the resolution can be improved to μ_N W_λ/N from μ_M W_λ/M with the same fixed number M of filters. The resolution improvement factor is thus (μ_M N)/(μ_N M). When we increase N beyond M, the system in Eq. (1) becomes underdetermined (in the noiseless case). An underdetermined system that is solved using classical L2 norm minimization does not improve the spectral resolution, as shown in [4].

Alternatively, we aim to solve the underdetermined system using L1 norm minimization techniques [11–13]. To aid L1 norm minimization, we exploit the prior information that a natural signal spectrum has a number of degrees of freedom that is often much smaller than its ambient dimension. Such a signal can be represented as a sparse signal in a certain domain. A sparse signal is often represented as a vector whose elements are all zero except for a few non-zero components. For example, a time-domain signal consisting of two tones can be described as a sparse signal in the Fourier domain: its spectrum has only two non-zero components while the rest of the spectral components are zero. The theory of compressed sensing is built on the assumption that natural signals are sparse in a certain domain such as the Fourier, wavelet, or other transform domains. We discuss sparse signal modeling further in Section 4. Given that the signals are sparse in a certain domain, the advantages of using L1 norm minimization over its L2 counterpart are as follows:

  • 1. L1 norm minimization yields the sparsest solution to an underdetermined system [13].
  • 2. L1 norm minimization algorithms are shown to give robust performance against measurement noise. Usually, the L2-based solutions require an inverse of the matrix DD^T. Since the filter transmission functions are correlated (as discussed in Section 2), the condition number of the matrix DD^T will be high, which amplifies the noise in the spectral estimate [4]. Even in the noiseless case, the L2-based solution does not guarantee an improved spectral estimate. These problems can be avoided by using the L1 norm minimization approach (see the numerical sketch after this list).
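The following toy comparison illustrates point 1 on a random underdetermined system: the minimum-L2-norm (pseudoinverse) solution is dense, whereas the non-negative L1 problem, posed as a linear program, recovers the sparse vector. The random matrix and sparsity level are arbitrary choices for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N = 20, 100                                  # underdetermined: M < N
A = rng.standard_normal((M, N))
s_true = np.zeros(N)
s_true[[10, 60]] = [1.0, 0.5]                   # 2-sparse, non-negative
y = A @ s_true                                  # noiseless measurements

s_l2 = np.linalg.pinv(A) @ y                    # minimum-L2-norm solution: dense
# Non-negative L1 minimization as a linear program: min 1^T s  s.t.  A s = y, s >= 0
res = linprog(c=np.ones(N), A_eq=A, b_eq=y, bounds=(0, None))
s_l1 = res.x

print("nonzeros (L2):", np.count_nonzero(np.abs(s_l2) > 1e-6))
print("nonzeros (L1):", np.count_nonzero(s_l1 > 1e-6))   # typically 2 for this setup
```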

We observe, however, that we cannot increase N (decrease Δλ_N) indefinitely to obtain infinite resolution. Given a fixed M, there is a limit on how far N can be increased. This limit depends on the detector sensitivity matrix as well as on the sparsifying basis (explained in Section 4). Given a fixed number of non-ideal filters, we aim to find the limit of the resolution that can be achieved in practice. In the next section, we discuss sparse modeling of the signal spectrum and the L1 norm minimization technique for solving an underdetermined system of equations.

4. Sparse representation for resolution improvement

4.1 Background and sparse data modeling

Signal representations have always been at the heart of most digital signal processing techniques. The purpose of such representations is to explicitly reveal the useful information embedded in a signal. This is typically performed by using a linear (matrix) transformation. Recently, the focus has turned toward the sparse representation of a signal [12, 13]. Sparse representation of a signal is fundamental in many applied sciences such as signal, audio and image processing, biomedical sciences, seismic processing, astronomy, machine learning and the emerging area of compressive sensing [14, 15].

Any natural signal, or a vector x in Eq. (1), can be directly sparse or can be represented as sparse in a certain basis, i.e., x = Ψs. The basis Ψ is an N × N matrix called the sparsifying basis, and the signal s is K-sparse, i.e., only K components of s are non-zero and the remaining N − K components are zero. Therefore, the natural signal is a linear combination of only K columns of the matrix Ψ. We note that when Ψ = I, the identity matrix, x = s; such a signal x is called a directly sparse signal, i.e., it is inherently sparse [15].

In this paper, we model the original signal spectrum x in Eq. (1) as a linear combination of K Gaussian kernels [5, 6], i.e., x = Ψs. The reason for using Gaussian kernels is that smooth Gaussian kernels preserve the smooth features of the signal spectrum. In addition, the specification of a Gaussian kernel requires only two parameters, namely its location and width, which can be chosen according to the nature of the signal spectrum in a particular application. To construct the kernel matrix Ψ, we sample a single Gaussian kernel with a certain full-width at half-maximum (FWHM). The sampled kernel forms the first column of Ψ; the remaining N − 1 columns of Ψ are shifted versions of the first column. We observe that the spacing between the samples of the Gaussian kernel is also Δλ_N.
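A minimal sketch of such a kernel dictionary is given below. The handling of kernels near the band edges (truncated rather than wrapped) is an assumption; the paper only states that the columns are shifted copies of one sampled Gaussian.

```python
import numpy as np

def gaussian_kernel_dictionary(N, spacing_nm, fwhm_nm):
    """Build the N x N sparsifying basis Psi whose k-th column is a Gaussian
    kernel of the given FWHM centred on the k-th wavelength sample."""
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> standard deviation
    grid = np.arange(N) * spacing_nm                       # sample positions, spacing dlambda_N
    Psi = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / sigma) ** 2)
    return Psi

# Example matching the setup of Section 5: N = 240 samples over 800 nm, FWHM = 40 nm.
Psi = gaussian_kernel_dictionary(N=240, spacing_nm=800.0 / 240, fwhm_nm=40.0)
```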

Now, using the sparse model x = Ψs, Eq. (1) can be rewritten as

y = Dx + w = DΨs + w.    (4)

We note that the dimension of y is M × 1, D is M × N with M < N, and s is N × 1. Our goal is to obtain an estimate ŝ of s from y in Eq. (4). Note that the dimension of y is much smaller than the dimension of the sparse signal s. After the sparse representation, we employ the L1 norm minimization criterion for unique recovery of the sparse signal from the measurement vector y. We discuss our recovery algorithm in Section 4.3. Before that, we point out a few similarities and differences between our approach and the closely related area of compressed sensing (CS).

As we have mentioned briefly, CS also exploits the sparse nature of signals by means of sparse representation. The purpose of CS, as the name indicates, is to compress a given signal of a fixed ambient dimension; the quality of the signal degrades with aggressive compression. The papers [16] and [17] are well-known examples of compressive sensing applied to optical systems. The purpose of this paper, however, is to improve the quality (resolution) of the recovered signal given a fixed number of observations of the signal, utilizing prior information about the signal. The signal dimension is increased (by increasing N) on purpose, and we find the resolution limit beyond which any two closely spaced spectral components are no longer resolvable. In this regard, our perspective is completely different from that of CS.

In CS, a sensing matrix is designed for efficient compression. The sensing matrix is usually designed in such a way that it captures the maximum information about the signal with as few compressed samples as possible. In general, the best sensing matrices can be tailor-made to suit the needs of a particular application; the most common ones are i.i.d. Gaussian sensing matrices [15].

We note that the role of the sensing matrix in CS is taken by the sensitivity matrix D in our problem. Unlike CS, we purposely increase the number of columns in the detector sensitivity matrix for a better resolution. We note from Section 2 that the sensitivity matrix D is determined from the filter fabrication process. It is possible that the matrix D can be designed as in CS to obtain better recovery results. We may pose this as a new optical filter design problem. The transmission function of each filter is nothing but a row of the sensitivity matrix. The transmission functions of filters can be designed following the guidelines provided in the CS literature. We plan to explore this interesting research direction in the near future.

After the sparse representation is done, the next step is to uniquely recover the sparse signal s from y. In the next section, we discuss a few optimization methods that estimate the sparse signal. In Section 4.3, we present our new recovery algorithm based on L1 norm minimization.

4.2 Optimization methods

We recall that we are given only M (< N) raw spectral measurements, from which we need to estimate the N unknowns that appear in Eq. (4). Clearly, Eq. (4) is an underdetermined system. The conventional way to estimate the sparse signal s from y is by L2 norm minimization techniques [4]. However, L2 norm based techniques produce non-sparse solutions [11]; hence they are not useful for sparse signal recovery. We will return to this point shortly. In this section, we introduce L1 norm minimization techniques [18–20], which yield unique, sparse solutions for underdetermined systems of linear equations.

We recall that our signal model is y = DΨs + w, where s is a K-sparse signal. The best, brute-force approach for recovering s is to search for the sparsest vector s that is consistent with the measurement y. This leads to solving the following L0 norm minimization problem:

ŝ = min_s ‖s‖_0   such that   ‖DΨs − y‖_2 ≤ ε,    (5)
where the ‖s‖_0 operator counts the number of non-zero components of s, and ε is a small positive constant specified by the user. However, Eq. (5) is a combinatorial optimization problem which is known to be computationally intractable [15]. Interestingly, L1 norm minimization provides a tractable alternative to the problem in Eq. (5). The L1 norm minimization for sparse signal recovery is given by

ŝ = min_s ‖s‖_1   such that   ‖DΨs − y‖_2 ≤ ε.    (6)

In sparse signal estimation, the L2 solution is non-sparse, as we have mentioned already, even when the L1 solution is exact. Why? Our short answer to this important question is that the L1 technique mimics the behavior of the L0 technique, which is the optimal but brute-force method for sparse signal estimation. More elaborate answers can be found in [11, 15]. For example, Baraniuk ([15], page 119) says, "Unfortunately, L2 minimization will almost never find a K-sparse solution, returning instead a non-sparse s with many nonzero elements. The L2 norm measures signal energy and not signal sparsity."

4.3 The proposed non-negative L1 minimization (NNLM) algorithm

In order to find an optimal estimate ŝ, the above L1 norm minimization problem is usually recast as a linear program, which can be solved efficiently. We set A = DΨ and define the relaxed version of Eq. (6) as

min_s ‖s‖_1 + (τ/2) ‖y − As‖_2^2,    (7)
where τ is a non-negative parameter. We now cast the minimization in Eq. (7) as a linear programming problem with the non-negativity constraint s ≥ 0 (as the signal spectrum is non-negative):

min_s 1^T s   subject to   ‖As − y‖_2^2 ≤ ε,  s ≥ 0.    (8)

In this paper, we have adopted the modern interior point method called primal-dual approach [20] that solves the above linear programming problem in order to find the optimal signal spectrum estimate. A pseudo code of the algorithm is given in Table 1 .

Table 1. Pseudo code to solve Eq. (8) by the non-negative L1 minimization method
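Since the pseudo code of Table 1 is not reproduced here, the sketch below shows one way the program in Eq. (8) could be solved with an off-the-shelf convex solver (CVXPY). It solves the stated problem, but it is not the authors' primal-dual interior-point implementation, and the choice of ε is illustrative.

```python
import cvxpy as cp
import numpy as np

def nnlm_estimate(A, y, eps):
    """Solve Eq. (8): minimize 1^T s subject to ||A s - y||_2^2 <= eps and s >= 0."""
    s = cp.Variable(A.shape[1], nonneg=True)
    problem = cp.Problem(cp.Minimize(cp.sum(s)),
                         [cp.sum_squares(A @ s - y) <= eps])
    problem.solve()
    return s.value

# Usage with the quantities defined in the earlier sketches: A = D @ Psi.
# s_hat = nnlm_estimate(D @ Psi, y, eps=1e-6)
# x_hat = Psi @ s_hat          # spectrum estimate in the wavelength domain
```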

The following L1 minimization problem, called L1-regularized least squares, was adopted in [6] from Kim et al. [21]:

min_s ‖As − y‖_2^2 + λ Σ_{i=1}^{n} s_i   subject to   s_i ≥ 0,  i = 1, 2, …, n.    (9)

Kim et al. solved the above problem using a classical interior-point method called the logarithmic barrier approach. In this paper, we refer to this algorithm as the log-barrier interior point (LBIP) algorithm. The goal of the LBIP is to make the inequality constraints in Eq. (9) implicit in the objective function.

In our method, we solve the problem in Eq. (8), which can be categorized as an L1 minimization with a quadratic constraint. Our optimization approach is completely different from Eq. (9). First, the two objective functions are different, and the ways in which the constraints are incorporated into the objective function differ as well; hence, the solution paths taken by the two algorithms are distinct. Second, we have designed a new algorithm by adopting the modern primal-dual interior-point approach. Algorithms based on primal-dual approaches are often shown [20, 22] to be more efficient than logarithmic barrier methods, especially when high accuracy is required. As mentioned in the review article [22] on interior-point methods, the general opinion today is that primal-dual methods offer the greatest promise for faster and more reliable algorithms. We have verified these observations through simulation, whose results are shown in Section 5. The algorithmic differences between the logarithmic barrier and primal-dual approaches can be found in [20], p. 609, and [22], p. 527.
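For comparison, a minimal sketch of the L1-regularized least-squares form of Eq. (9) is shown below, solved here with a generic bound-constrained quasi-Newton routine rather than the LBIP solver of [21]; the value of λ is an illustrative placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def l1_regularized_ls(A, y, lam):
    """Eq. (9): min ||A s - y||_2^2 + lam * sum(s)  subject to  s >= 0.
    Solved with L-BFGS-B bound constraints, not the log-barrier method of [21]."""
    M, N = A.shape

    def objective(s):
        r = A @ s - y
        return r @ r + lam * s.sum()

    def gradient(s):
        # d/ds ||A s - y||^2 = 2 A^T (A s - y); d/ds lam * sum(s) = lam * 1
        return 2.0 * A.T @ (A @ s - y) + lam

    res = minimize(objective, np.zeros(N), jac=gradient,
                   bounds=[(0.0, None)] * N, method="L-BFGS-B")
    return res.x

# Example: s_hat_lbfgs = l1_regularized_ls(D @ Psi, y, lam=1e-3)
```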

5. Results and discussions

In this section, we demonstrate the performance of the proposed NNLM. We consider a filter array with M = 40 elements, as in [4]. Typical transmission functions of the 1st, 10th and 30th filters are shown in Fig. 2. The operating wavelength range of the spectrometer is from 400 nm to 1200 nm, which corresponds to W_λ = 800 nm. We vary the filter length N in order to find the maximum possible resolution Δλ_max. The value of N starts from M (which corresponds to the methods in [4, 5]) and is increased in steps of 40, i.e., N = 40, 80, 120, 160, 200, 240. For N = 40, the minimum spacing between two adjacent spectral components is 20 nm, i.e., Δλ = 20 nm (from Section 2). We generate the Gaussian kernels for the various N with FWHM = 40 nm. We have also considered non-Gaussian kernels such as Lorentzian and hyperbolic secant kernels; investigation of the method with these kernels was suggested by an anonymous reviewer during the review of this paper and solidifies our claim that the method works for generally shaped signals. We report that we obtained results similar to those obtained with the Gaussian kernels, in particular in the ability to reconstruct two closely spaced Lorentzian or hyperbolic secant peaks. We define the signal-to-noise ratio (SNR) in dB for the model in Eq. (4) as SNR = ‖DΨs‖_2^2 / σ^2.
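As a small utility matching this SNR definition (reconstructed here as ‖DΨs‖_2^2 / σ^2), the following sketch draws Gaussian noise at a prescribed SNR in dB; the function name and interface are ours, not the paper's.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Return (signal + w, sigma), with w ~ N(0, sigma^2 I) chosen so that
    ||signal||_2^2 / sigma^2 equals the requested SNR (given in dB)."""
    rng = rng or np.random.default_rng(0)
    sigma = np.linalg.norm(signal) / np.sqrt(10.0 ** (snr_db / 10.0))
    return signal + sigma * rng.standard_normal(signal.shape), sigma

# Example: y_noisy, sigma = add_noise(D @ Psi @ s_true, snr_db=60)
```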

We first consider the filter length N = 240. The value of δ required in Eq. (3) is set to 1e-3. The original signal spectrum has a sparsity of K = 2, i.e., two spectral components. We choose two consecutive spectral components, since resolution is precisely the question of whether two spectral components that are close to each other can be resolved. Figure 3(a) shows the original signal spectrum, which is a linear combination of two closely spaced spectral components located at wavelengths of 828.46 nm and 831.8 nm, respectively, as shown in Fig. 3(b).

Fig. 3. (a) Original signal spectrum. (b) Components of original signal spectrum.

Since the two spectral components are close to each other, they are not clearly visible in the original signal spectrum. We aim to resolve these components in the sparse domain.

Figure 4(a) shows the output samples of the spectral detector at 60 dB SNR from which we need to estimate the spectral components. Figure 4(b) shows the reconstructed signal spectrum using the NNLM approach. It is evident from Fig. 4(b) that NNLM is able to clearly reconstruct the original signal spectrum.

Fig. 4. (a) Output samples of the spectral detector. (b) Estimated signal spectrum.

In order to resolve the closely spaced spectral components, we show the reconstructed signal in the sparse domain in Fig. 5(a). It is evident from Fig. 5(a) that the dominant spectral components are clearly resolved. However, two additional spectral components appear at 805.02 nm and 871.97 nm, whose magnitudes are very small. These additional spectral components are due to error in the reconstruction of the sparse signal. We note that these additional components do not affect the resolvability of the dominant spectral components originally present in the signal spectrum. Since the two spectral components that are Δλ_N apart from each other are distinctly resolved, the value of μ_N is 1. Hence, the maximum spectral resolution obtained using N = 240 is Δλ_max = 3.33 nm, which is 6 times better than the conventional limit of 20 nm (assuming μ_M = 1).

Fig. 5. Estimated signal spectrum in the sparse domain: (a) using NNLM; (b) using the L2 technique.

It is by now well known in the sparse representation and compressed sensing literature [15] that conventional L2 norm solutions of underdetermined systems do not provide an accurate estimate. This fact has been verified in our experiments as well: it is not possible to resolve the closely spaced spectral components in the sparse domain using the classical L2 techniques. To illustrate this, we show in Fig. 5(b) a portion of a signal in the sparse domain recovered using the L2 technique. It is evident from the figure that the L2 technique does not give a sparse signal and is not able to resolve the two closely spaced spectral components in the sparse domain.

We now illustrate the performance of the proposed NNLM in the reconstruction of generally shaped spectra with multiple peaks at 40 dB SNR. Figure 6 shows an original spectrum with peaks at 564 nm, 761.5 nm and 875.3 nm. It is evident from Fig. 6 that NNLM is able to resolve the three dominant spectral components. We also include in Fig. 6 the estimate of the signal spectrum obtained by the LBIP algorithm. It is evident from Fig. 6 that, unlike our NNLM algorithm, the LBIP algorithm underestimates the original signal spectrum, especially along the tails and at the peaks of the signal spectrum. For instance, the dominant peak of the original signal spectrum at 875.3 nm is not detected as dominant by the LBIP algorithm. The MSE (defined in Section 2) for the proposed NNLM algorithm is 2.03e-5, whereas for the LBIP algorithm it is 5.6e-3.

Fig. 6. Reconstruction of a generally shaped signal spectrum by various algorithms.

Figure 7 shows the original and reconstructed signals in the sparse domain using the proposed NNLM and the LBIP algorithm. The original signal spectrum shown in Fig. 6 is modeled as a linear combination of three Gaussian kernels; the locations of the non-zero components of the sparse vector are 1, 60 and 240. Note from Fig. 7 that the proposed NNLM detects all of the original non-zero components, whereas the LBIP algorithm does not detect the second dominant non-zero component at all. Both the NNLM and the LBIP make some detection errors because of the presence of observation noise, but the errors made by NNLM are less significant than those made by the LBIP. Based on these results and observations, we conclude that the proposed NNLM outperforms the LBIP algorithm in terms of MSE performance.

Fig. 7. Reconstruction of a generally shaped signal spectrum in the sparse domain.

6. Summary and conclusions

In this paper, we discussed a signal processing approach to improve the resolution of portable spectrometers beyond the limit set by the number of filters. We modeled signal spectrum estimation as the solution of an underdetermined system of linear equations. We designed a novel non-negative L1 norm minimization (NNLM) algorithm that exploits the sparse nature of the signal spectrum. We illustrated the performance of the proposed NNLM in improving the resolution of the signal spectrum through various experiments.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (Do-Yak Research Program, No. 2011-0016496).

References and links

1. D. J. Brady, Optical Imaging and Spectroscopy, (John Wiley and Sons, 2009).

2. S. W. Wang, X. Chen, W. Lu, L. Wang, Y. Wu, and Z. Wang, “Integrated optical filter arrays fabricated by using the combinatorial etching technique,” Opt. Lett. 31(3), 332–334 (2006). [CrossRef]   [PubMed]  

3. S. W. Wang, C. Xia, X. Chen, W. Lu, M. Li, H. Wang, W. Zheng, and T. Zhang, “Concept of a high-resolution miniature spectrometer using an integrated filter array,” Opt. Lett. 32(6), 632–634 (2007). [CrossRef]   [PubMed]  

4. C. C. Chang and H. N. Lee, “On the estimation of target spectrum for filter-array based spectrometer,” Opt. Express 16(2), 1056–1061 (2008). [CrossRef]  

5. U. Kurokawa, B. I. Choi, and C.-C. Chang, “Filter-based miniature spectrometers: spectrum reconstruction using adaptive regularization,” IEEE Sens. J. 11(7), 1556–1563 (2011). [CrossRef]  

6. C. C. Chang, N. T. Lin, U. Kurokawa, and B. I. I. Choi, “Spectrum reconstruction for filter-array spectrum sensor from sparse template selection,” Opt. Eng. 50(11), 114402 (2011). [CrossRef]  

7. H. N. Lee, Introduction to Compressed Sensing (Lecture notes; Spring Semester, GIST, South Korea, 2011).

8. H. Y. Yu, X. Y. Niu, H. J. Lin, Y. B. Ying, B. B. Li, and X. X. Pan, “A feasibility study on on-line determination of rice wine composition by Vis-NIR spectroscopy and least-squares support vector machines,” Food Chem. 113(1), 291–296 (2009). [CrossRef]  

9. H. Chen and H. Tang, “Application of miniature spectrometer in liquid signature analysis technology,” Appl. Opt. 50(26), 5093–5098 (2011). [CrossRef]   [PubMed]  

10. N. J. Chanover, D. A. Glenar, D. G. Voelz, X. Xiao, R. Tawalbeh, P. J. Boston, W. B. Brinckerhoff, P. R. Mahaffy, S. Getty, I. Ten Kate, and A. McAdam, “An AOTF-LDTOF spectrometer suite for in situ organic detection and characterization,” in Proceedings of IEEE Aerospace Conference (2011), pp. 1–13.

11. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Rev. 43(1), 129–159 (2001). [CrossRef]  

12. D. L. Donoho, M. Elad, and V. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans. Inf. Theory 52(1), 6–18 (2006).

13. D. L. Donoho, “For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution,” Commun. Pure Appl. Math. 59(7), 907–934 (2006). [CrossRef]

14. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

15. R. Baraniuk, “Compressive sensing,” IEEE Signal Process. Mag. 24(4), 118–121 (2007). [CrossRef]  

16. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 43–52 (2006).

17. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

18. D. L. Donoho and Y. Tsaig, “Fast solution of L1-norm minimization problems when the solution may be sparse,” IEEE Trans. Inf. Theory 54(11), 4789–4812 (2008).

19. J. A. Tropp, “Just relax: Convex programming methods for identifying sparse signals,” IEEE Trans. Inf. Theory 52, 1030–1051 (2006).

20. S. Boyd and L. Vandenberghe, Convex Optimization, (Cambridge University Press, 2009).

21. S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale l1-regularized least squares,” IEEE J. Sel. Top. Signal Process. 1, 606–617 (2007).

22. A. Forsgren, P. E. Gill, and M. H. Wright, “Interior methods for nonlinear optimization,” SIAM Rev. 44(4), 525–597 (2002). [CrossRef]  
