
Bispectral coding: compressive and high-quality acquisition of fluorescence and reflectance

Open Access

Abstract

Fluorescence widely coexists with reflectance in the real world, and an accurate representation of these two components of a scene is vitally important. Despite the rich knowledge of fluorescence mechanisms and behaviors, traditional fluorescence imaging approaches are quite limited in efficiency and quality. To address these two shortcomings, we propose a bispectral coding scheme for capturing fluorescence and reflectance: a multiplexing code is applied to the excitation spectra to raise the signal-to-noise ratio, and a compressive sampling code is applied to the emission spectra for high efficiency. For computational reconstruction from the sparse coded measurements, the redundancy in both components promises recovery from sparse measurements, and the difference between their redundancies promises accurate separation. Mathematically, we cast the reconstruction as a joint optimization whose solution can be derived by the Augmented Lagrange Method. In our experiments, results on both synthetic data and real data captured by our prototype validate the proposed approach, and we also demonstrate its advantages in two computer vision tasks: photorealistic relighting and segmentation.

© 2014 Optical Society of America

1. Introduction

Fluorescence exists widely in the real world and is of great importance in microscopy, so accurately describing the reflective and fluorescent behaviors of such scenes matters for modeling, rendering, and other related tasks. Despite this importance, capturing the reflectance and fluorescence of a scene is highly challenging.

First, the fluorescent component at each point is described by a traverse of dense excitation and emission wavelength bands [1], which can be measured by a commercial fluorescence spectrometer. However, a fluorescence spectrometer is inapplicable for capturing the fluorescence properties of a structured scene. One naive approach is to use two groups of bandpass filters to traverse all of the excitation–emission wavelength pairs, but this capturing process is inefficient, and the results tend to be noisy because fluorescence is weak. Capturing fluorescence behaviors efficiently and with high quality is therefore an important but rarely explored direction. Another challenge is that most fluorescent materials are both reflective and fluorescent, and there is no clean-cut wavelength threshold between the two components, so separating them from one input image is difficult. Previous studies mainly capture multiple images under different illuminations [2] and exploit the different illumination dependencies of reflectance and fluorescence for separation. Although such methods can separate the fluorescence from images of a scene, the separation is not performed in the high-dimensional excitation–emission space and falls far short of describing the scene's reflective and fluorescent behaviors.

Focusing on efficient and high-quality capture of the fluorescent and reflective components of a whole scene, this paper explores the intrinsic redundancy within reflectance and fluorescence and proposes an approach that reconstructs them from mixed compressive measurements. The fluorescent component usually covers a wide spectrum, and the Kasha–Vavilov rule [3] states that the spectral distribution of the fluorescence emitted by a given material under different monochromatic lights remains unchanged up to a scaling factor. In other words, if we represent the excitation–emission values of each point as a matrix, the matrix rows are identical except for scaling factors. Therefore we formulate the fluorescence as a low-rank matrix, which facilitates reconstruction from compressive samples. On the other hand, the reflectance does not change the spectrum of the input illumination; thus it is quite sparse in the excitation–emission space, and it overlaps somewhat [4] with the fluorescent component due to the Stokes shift. Based on the above analysis, we cast the reconstruction as a joint optimization of the nuclear norm of the fluorescent component and the ℓ1 norm of the reflective component, and we resort to convex optimization for the solution.

Combining two programmable spectrum filters with a commercial camera, we build a prototype shown in Fig. 1(a). We can capture the coded measurements like Fig. 1(b) and recover both reflective and fluorescent components of the scene in high spectral resolution, as shown in Figs. 1(c) and 1(d). Such descriptions model the behaviors of the fluorescent objects with high accuracy and would also assist in related tasks in computer vision, biology, and medical science.

Fig. 1 The system and results of our approach on one exemplary scene. (a) Prototype setup. (b) One coded image. (c, d) Reconstructed reflectance and fluorescence, respectively.

In summary, the proposed approach contributes mainly in the following ways:

  • Explore the redundancy in reflectance and fluorescence and propose an efficient acquisition and computational reconstruction approach for both components.
  • Formulate the reconstruction of two components from sparse multiplexed measurements as joint optimization, which is solved as derived in later sections.
  • Build a setup for effective capturing of reflectance and fluorescence in real scenes at high spectral resolution.

2. Related work

2.1. Fluorescence

Some substances emit light, which we call fluorescence, after absorbing light or other electromagnetic radiation. Physically, fluorescence occurs when an orbital electron of a molecule, atom, or nanostructure, excited to a higher quantum state by some form of energy (e.g., illumination), emits a photon as it relaxes back to the ground state. In most cases, the emitted light has a longer wavelength than the excitation illumination. The discovery of the fluorescence phenomenon dates back centuries, but it has drawn the attention of computer science researchers only recently. Accurate description of fluorescent components is helpful in photorealistic rendering: Johnson and Fairchild [5] first incorporated fluorescence into the rendering model, and Hullin et al. [6] further modeled and rendered fluorescent objects by acquiring their bispectral bidirectional reflectance and reradiation distribution functions (BRRDFs), with significantly improved fluorescent object modeling. Considering the high dimensionality of fluorescence, researchers have also worked on dimension reduction of fluorescence [7] or used partial measurements for specific applications, such as diagnosis [8, 9]. Benefiting from its unique properties, fluorescence has been introduced to assist computer vision tasks such as photometric stereo [10] and camera response calibration [11]. Some researchers have also attempted to capture the fluorescent component itself; usually a multiplexing strategy is adopted to increase the signal-to-noise ratio (SNR), as in [12, 13] and [14]. The two studies most closely related to ours are [2] and [15]. The former performs only fluorescent and reflective component separation, whereas we aim at a full description of the excitation–emission properties of a scene. We differ from the latter in that we capture the reflective and fluorescent behaviors of each point in a scene, whereas [15] gives an integral description. In sum, we focus on acquiring a complete description of the fluorescent and reflective components in a scene efficiently and with high quality, and we hope this will push forward high-accuracy modeling and synthesis of fluorescent scenes.

2.2. Compressive spectrum imaging

The redundancy of visual information is widely known and has been exploited to build next-generation spectral imaging systems, which usually combine randomly coded image acquisition with sparse reconstruction. Horisaki et al. [18, 19] propose to apply compressive sensing to spectral imaging under a multiplexing framework, and representative works also include [16, 17] and [20]. In a similar framework, August and Stern [21] use a liquid crystal device and a single-pixel sensor for compressive spectrometry. However, these works are mainly limited to reflectance spectra and are inapplicable to fluorescence capture. This paper shows that the fluorescent and reflective components are both highly redundant, but in different forms, and proposes to reconstruct them from sparsely captured measurements.

2.3. Multiplexing capturing

Multiplexing is a widely used technique for imaging plenoptic information or its subsets; readers can refer to [22] for a theoretical analysis and examples. The previous works most related to our approach address multiplexed illumination [23, 24, 25] for raising SNR, multispectral imaging [12, 14], and relighting [26]. Although inspired by the above paradigm, our approach differs from previous work in two main respects. First, we sample the multiplexed excitation–emission matrix compressively to raise efficiency; the captured multiplexed measurements are thus incomplete, which makes demultiplexing ill posed and much more challenging. Second, capturing fluorescent and reflective components in the high-dimensional excitation–emission space has rarely been explored in the above-referenced work.

3. Formulation

3.1. Derivation of the optimization for a single pixel

Discretizing the spectra of the incident (excitation) and outgoing (fluorescence and reflectance) light into m and n levels, respectively, we can represent the high-spectral-resolution fluorescence of each scene point as a matrix F̂ ∈ ℝ^{m×n} and its reflectance as R̂ ∈ ℝ^{m×n}. Suppose we shed combinations of illuminations and capture the accumulated responses M̂ ∈ ℝ^{m×n}, which are composed of three components,

$$\hat{M} = \hat{R} + \hat{F} + \hat{N}, \tag{1}$$

with N̂ being the noise, which we assume to be Gaussian white noise N(μ, σ²).

Roughly, fluorescence has a longer wavelength than the excitation illumination, and its spectral distribution is generally independent of the excitation illumination except for a scaling factor. Therefore we can often assume no overlap between excitation and emission spectra, as shown in Fig. 2(a). However, for many materials the fluorescence spectrum overlaps with that of the excitation illumination, as shown in Fig. 2(b); separating the reflective and fluorescent components at the overlapped wavelengths is worth studying and helps accurate appearance modeling. This paper deals with the general case with overlap between reflectance and fluorescence and treats the non-overlapping form as a special case. In the overlapping range we cannot discriminate the two components by wavelength thresholding, so we need to introduce priors on both the fluorescent and reflective components for unmixing. (i) We assume that reflectance does not change the spectrum of the incident light, so the reflective component R̂ forms a diagonal matrix, which is naturally sparse, as shown in Fig. 2(b). (ii) The independence between the distribution of emitted fluorescence and the excitation spectrum indicates that F̂ is a low-rank coefficient matrix, whose rank is 1 if the Kasha–Vavilov rule is strictly followed.
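To make the two priors concrete, the following minimal numpy sketch builds a 6 × 10 excitation–emission matrix in which R̂ is diagonal and F̂ is exactly rank 1. The spectra and albedos (`a`, `f`, `rho`) are illustrative values of our own, not measured data:

```python
import numpy as np

m, n = 6, 10  # excitation / emission bands (Sec. 5 uses a 6 x 10 grid)

# Fluorescence: one emission spectrum f, rescaled per excitation band -> rank 1.
a = np.array([0.9, 1.0, 0.7, 0.4, 0.2, 0.1])          # excitation efficiencies
f = np.array([0, 0, 0, .1, .3, .8, 1., .6, .3, .1])   # red-shifted emission
F_hat = np.outer(a, f)                                # m x n, rank 1

# Reflectance: the wavelength is unchanged, so only excitation==emission
# entries are nonzero (the first m emission bands align with the m
# excitation bands on this grid).
rho = np.array([0.5, 0.6, 0.4, 0.3, 0.5, 0.2])        # per-band albedo
R_hat = np.zeros((m, n))
R_hat[np.arange(m), np.arange(m)] = rho

M_hat = R_hat + F_hat + 0.01 * np.random.randn(m, n)  # Eq. (1) with Gaussian noise
print(np.linalg.matrix_rank(F_hat))                   # -> 1
```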

Fig. 2 Visualization of the excitation–emission matrix. The strength of the matrix entries is illustrated by intensity. The vertical and horizontal color bars respectively illustrate the excitation wavelength λin and the emission wavelength, which includes the fluorescent component λout^fluo and the reflective component λout^ref. (a) No overlap between excitation and emission. (b) Slight overlap between excitation and emission.

As previously mentioned, fluorescence behaviors need a bispectral description, and we can adopt a coding paradigm on both excitation and emission sides. To raise the efficiency and quality of capturing simultaneously, we use multiplexed excitation illumination to raise SNR and random subsampling on the emission side to reduce the number of necessary measurements.

Suppose we shed p coded illuminations and record q narrowbands at the CCD; the illumination code and recording code are Î ∈ ℝ^{p×m} and Ô ∈ ℝ^{n×q}, respectively. Accordingly, the reconstruction can be performed by the following optimization:

$$(\hat{F}^*, \hat{R}^*, \hat{N}^*) = \operatorname{argmin}\ \|\hat{F}\|_* + \alpha\|\hat{R}\|_1 \quad \text{s.t.}\ \ \pi_{\hat{\Omega}}(\hat{C}) = \pi_{\hat{\Omega}}\big(\hat{I}(\hat{F}+\hat{R})\hat{O} + \hat{N}\big),\ \ |\hat{N} - \mu| < 3\sigma. \tag{2}$$

Here || · ||∗ is the nuclear norm for rank minimization; || · ||1 is the ℓ1 norm, widely used to force the sparsity of R̂; α is a weighting factor balancing the energy terms that describe the two priors; Ĉ(i, j) is the measurement from the ith illumination pattern in Î and the jth recording wavelength in Ô; and πΩ̂: ℝ^{p×q} → ℝ^{p×q} is a linear operator that subsamples the entries out of all p × q possible measurements by taking the component-wise product with a binary matrix Ω̂. As for N̂, the three-sigma rule is used to impose the noise constraint.
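Continuing the sketch above, the coded measurements of Eq. (2) can be simulated as follows; the roughly 50% multiplexing code and 30% sampling mask are illustrative choices of ours, not the paper's actual codes:

```python
p, q = 6, 10
I_hat = (np.random.rand(p, m) > 0.5).astype(float)   # multiplexed excitation code
O_hat = (np.random.rand(n, q) > 0.5).astype(float)   # emission recording code
Omega = (np.random.rand(p, q) < 0.3).astype(float)   # binary subsampling mask

N_hat = 0.01 * np.random.randn(p, q)                 # Gaussian sensor noise
C_hat = I_hat @ (F_hat + R_hat) @ O_hat + N_hat      # full p x q measurements
C_obs = Omega * C_hat                                # pi_Omega: captured entries
```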

3.2. From single pixel to image lattice

The formulations in the above subsection are defined on the appearance of a single pixel, and capturing the fluorescence and reflectance of a whole image pixel by pixel would still be quite time consuming. Here we concatenate the data at w different pixels horizontally as follows:

$$[\hat{C}_1\ \hat{C}_2\ \cdots\ \hat{C}_w] = \hat{I}\Big([\hat{F}_1\ \hat{F}_2\ \cdots\ \hat{F}_w] + [\hat{R}_1\ \hat{R}_2\ \cdots\ \hat{R}_w]\Big)\begin{bmatrix}\hat{O} & & \\ & \ddots & \\ & & \hat{O}\end{bmatrix} + [\hat{N}_1\ \hat{N}_2\ \cdots\ \hat{N}_w], \tag{3}$$

which is further simplified as

$$C = I(F + R)O + N. \tag{4}$$

Here F and R are of size m × (n·w) and C and N of size p × (q·w), with w being the number of pixels; I is identical to Î; and the coding matrix O is the diagonal replication of the n × q coding matrix Ô in Eq. (2).
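The lattice form of Eqs. (3)–(4) in the same sketch: pixels are stacked horizontally and the recording code becomes block diagonal. The per-pixel data here are random placeholders, and `scipy` is assumed to be available:

```python
from scipy.linalg import block_diag

w = 3                                                       # number of pixels
F_px = [np.outer(np.random.rand(m), f) for _ in range(w)]   # rank-1 per pixel
R_px = [R_hat] * w

F = np.hstack(F_px)                    # m x (n*w)
R = np.hstack(R_px)                    # m x (n*w)
O = block_diag(*([O_hat] * w))         # (n*w) x (q*w), diagonal replication of O_hat
C = I_hat @ (F + R) @ O                # p x (q*w); noise omitted for brevity
```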

Because the fluorescence properties differ across positions, the low-rankness of the concatenated matrix is destroyed, as illustrated in Fig. 3(a). However, we can introduce scaling factors to normalize away the differences and make batch processing feasible, as shown in Fig. 3(b). Let a = [a1, a2, ···, am], b = [b1, b2, ···, bm], f, and g denote four row vectors, and let [a1f; a2f; ···; amf] and [b1g; b2g; ···; bmg] denote the low-rank fluorescence matrices at two pixels. Normalizing each row by the corresponding scaling factor and concatenating yields the low-rank matrix [f g; f g; ···; f g]; the factor matrix needs to be estimated automatically. In addition, concatenating multiple image pixels does not change the sparsity of the reflective component. The optimization for reconstructing the fluorescent and reflective components of a whole image can then be rewritten as

$$(F^*, R^*, N^*) = \operatorname{argmin}\ \|F\|_* + \alpha\|R\|_1 \quad \text{s.t.}\ \ \pi_\Omega(C) = \pi_\Omega\big(I(F+R)O + N\big),\ \ |N - \mu| < 3\sigma, \tag{5}$$

where ⊙ denotes the component-wise product, i.e., (A ⊙ B)ᵢⱼ = AᵢⱼBᵢⱼ for any two matrices A and B (it appears below in the noise constraint). Note that the subsampling matrix Ω is the horizontal replication of Ω̂ in Eq. (2).
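The effect of the normalization in Fig. 3 can be checked numerically: two rank-1 pixel matrices with different emission spectra concatenate to a rank-2 matrix, but dividing each row by its per-pixel scaling factor restores rank 1. This is a sketch with synthetic factors; `g` is simply a shifted copy of `f`:

```python
a = np.random.rand(m) + 0.1                      # scaling factors, pixel 1
b = np.random.rand(m) + 0.1                      # scaling factors, pixel 2
g = np.roll(f, 1)                                # a second emission spectrum
F1, F2 = np.outer(a, f), np.outer(b, g)

print(np.linalg.matrix_rank(np.hstack([F1, F2])))   # -> 2 in general

# After row-wise normalization every row equals [f, g]: rank 1 again.
F_norm = np.hstack([F1 / a[:, None], F2 / b[:, None]])
print(np.linalg.matrix_rank(F_norm))                # -> 1
```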

Fig. 3 Extending the reconstruction of a single pixel to image lattice by normalization. Here we use different line colors to differentiate excitation wavelengths.

4. Optimization

The optimization defined in Eq. (5) is nonconvex, and we propose an iterative algorithm for its numerical solution. To simplify the optimization, following the idea in [27], we introduce a slack variable ε (εᵢⱼ ≥ 0 ∀ i, j) to convert the inequality |N − μ| < 3σ into the equality constraint (N − μ) ⊙ (N − μ) − 9σ² + ε² = 0. In addition, F and R are duplicated as auxiliary variables S1 and S2, respectively, so that closed-form updates for F and R become available. The objective then turns into

$$\begin{aligned} \min\ & \|S_1\|_* + \alpha\|S_2\|_1 \\ \text{s.t.}\ & S_1 = F,\quad S_2 = R,\quad C = IFO + IRO + N + E, \\ & E(i,j)\,\big|_{(i,j)\in\Omega} = 0,\quad (N-\mu)\odot(N-\mu) - 9\sigma^2 + \varepsilon^2 = 0. \end{aligned} \tag{6}$$

The problem in Eq. (6) is a typical sparse optimization with equality constraints. A number of efficient methods can address it, e.g., the Proximal Gradient method and the Augmented Lagrange Multiplier method; for an extensive review, one may refer to [28, 29]. Here we prefer Alternating Direction Minimization (ADM) due to its efficiency and effectiveness [27]. By ADM, Eq. (6) yields the following augmented Lagrangian:

$$\begin{aligned} \mathrm{Lag} =\ & \|S_1\|_* + \alpha\|S_2\|_1 + \langle Y_0, S_2 - R\rangle + \frac{\beta}{2}\|S_2 - R\|_F^2 + \langle Y_1, S_1 - F\rangle + \frac{\beta}{2}\|S_1 - F\|_F^2 \\ & + \langle Y_2, IFO + IRO + N + E - C\rangle + \frac{\beta}{2}\|IFO + IRO + N + E - C\|_F^2 \\ & + \langle Y_3, (N-\mu)\odot(N-\mu) - 9\sigma^2 + \varepsilon^2\rangle + \frac{\beta}{2}\|(N-\mu)\odot(N-\mu) - 9\sigma^2 + \varepsilon^2\|_F^2, \end{aligned} \tag{7}$$

where ⟨·, ·⟩ denotes the inner product. The matrices F, R, and N are optimization variables; the matrices Y0∼3 are the Lagrange multipliers; and the other matrices, i.e., I, O, and C, are known. The above objective is analytically tractable by distributed optimization as used in [28]. To solve Eq. (7), we need to derive the update rules for all the unknowns; in the following derivations, we omit the superscript (k) or (k + 1) on the right-hand side.

For S1, the Lagrangian can be rewritten as

$$f(S_1) = \|S_1\|_* + \frac{\beta}{2}\left\|S_1 - (F - \beta^{-1}Y_1)\right\|_F^2 + C', \tag{8}$$

where C′ is a constant irrelevant to S1. According to [30], the update rule of such a nuclear norm optimization can be written as

$$S_1^{(k+1)} = U\, s_{\beta^{-1}}(S_{\mathrm{temp}})\, V^T, \tag{9}$$

where U S_temp V^T is the singular value decomposition of (F − β⁻¹Y1) and

$$s_{\beta^{-1}}(x) = \begin{cases} x - \beta^{-1}, & x > \beta^{-1} \\ x + \beta^{-1}, & x < -\beta^{-1} \\ 0, & \text{otherwise.} \end{cases} \tag{10}$$
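A direct numpy transcription of Eqs. (9)–(10), and, with a different threshold, of the S2 update in Eq. (12) below; this is a minimal sketch, and the function names are ours:

```python
import numpy as np

def soft_threshold(x, tau):
    """Entrywise shrinkage s_tau(x) of Eq. (10)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_S1(F, Y1, beta):
    """Eq. (9): threshold the singular values of (F - Y1/beta) at 1/beta."""
    U, s, Vt = np.linalg.svd(F - Y1 / beta, full_matrices=False)
    return (U * soft_threshold(s, 1.0 / beta)) @ Vt   # scales columns of U

def update_S2(R, Y0, alpha, beta):
    """Eq. (12): entrywise shrinkage of (R - Y0/beta) at alpha/beta."""
    return soft_threshold(R - Y0 / beta, alpha / beta)
```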

Similarly, we rewrite the Lagrangian for S2 as

$$f(S_2) = \alpha\|S_2\|_1 + \frac{\beta}{2}\left\|S_2 - (R - \beta^{-1}Y_0)\right\|_F^2. \tag{11}$$

Referring to the solution of the ℓ1 problem in [30], S2 can be updated as

$$S_2^{(k+1)} = s_{\alpha\beta^{-1}}(R - \beta^{-1}Y_0). \tag{12}$$

For E and ε, we set the corresponding partial derivatives of the Lagrangian to zero and thus obtain the updates in closed form. The partial derivative with respect to E is

$$\frac{\partial f(E)}{\partial E} = \beta\left[E - (C - IFO - IRO - N - \beta^{-1}Y_2)\right], \tag{13}$$

and the update rule is

$$E^{(k+1)} = C - IFO - IRO - N - \beta^{-1}Y_2. \tag{14}$$

Similarly, we can get the update rule for ε,

$$\varepsilon^{(k+1)} = \sqrt{9\sigma^2 - (N-\mu)\odot(N-\mu) - \beta^{-1}Y_3}. \tag{15}$$
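A sketch of the closed-form updates of Eqs. (14)–(15). Two implementation details here are our own assumptions, not stated in the text: E is zeroed on the observed entries Ω to satisfy the constraint in Eq. (6), and the argument of the square root is clipped at zero for numerical safety:

```python
def update_E(C, I, F, R, O, N, Y2, beta, Omega):
    """Eq. (14), then projection: E(i,j) = 0 for (i,j) in Omega."""
    E = C - I @ F @ O - I @ R @ O - N - Y2 / beta
    return E * (1.0 - Omega)

def update_eps(N, Y3, mu, sigma, beta):
    """Eq. (15), with the radicand clipped at zero."""
    val = 9 * sigma**2 - (N - mu) * (N - mu) - Y3 / beta
    return np.sqrt(np.maximum(val, 0.0))
```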

Algorithm 1: Reconstruct reflective and fluorescent components in a scene.

As for F, R, and N, it is difficult to obtain closed-form solutions to the three equations, so we use the gradient descent method [29] for the updates:

$$F^{(k+1)} = F^{(k)} - \gamma_1 \frac{\partial f(F)}{\partial F},\qquad R^{(k+1)} = R^{(k)} - \gamma_2 \frac{\partial f(R)}{\partial R},\qquad N^{(k+1)} = N^{(k)} - \gamma_3 \frac{\partial f(N)}{\partial N}, \tag{16}$$

where γ1∼3 are the step-size parameters. The corresponding partial derivatives are

$$\begin{aligned} \frac{\partial f(F)}{\partial F} &= \beta\left[F + I^T I F O O^T - (S_1 + \beta^{-1}Y_1) - I^T(C - IRO - N - E - \beta^{-1}Y_2)O^T\right], \\ \frac{\partial f(R)}{\partial R} &= \beta\left[R + I^T I R O O^T - (S_2 + \beta^{-1}Y_0) - I^T(C - IFO - N - E - \beta^{-1}Y_2)O^T\right], \\ \frac{\partial f(N)}{\partial N} &= \beta\left[2(N-\mu)^{\odot 3} - 2(N-\mu)\odot(9\sigma^2 - \varepsilon^2 - \beta^{-1}Y_3) + N - (C - IFO - IRO - E - \beta^{-1}Y_2)\right]. \end{aligned} \tag{17}$$
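The gradients of Eq. (17), transcribed into numpy (a sketch of our reading of the derivatives; the step sizes γ are left to the caller):

```python
def grad_F(F, R, N, S1, E, C, I, O, Y1, Y2, beta):
    return beta * (F + I.T @ (I @ F @ O) @ O.T - (S1 + Y1 / beta)
                   - I.T @ (C - I @ R @ O - N - E - Y2 / beta) @ O.T)

def grad_R(F, R, N, S2, E, C, I, O, Y0, Y2, beta):
    return beta * (R + I.T @ (I @ R @ O) @ O.T - (S2 + Y0 / beta)
                   - I.T @ (C - I @ F @ O - N - E - Y2 / beta) @ O.T)

def grad_N(F, R, N, E, eps, C, I, O, Y2, Y3, mu, sigma, beta):
    D = N - mu                                  # component-wise, mu is scalar
    return beta * (2 * D**3 - 2 * D * (9 * sigma**2 - eps**2 - Y3 / beta)
                   + N - (C - I @ F @ O - I @ R @ O - E - Y2 / beta))

# One descent step of Eq. (16):
# F -= gamma1 * grad_F(...); R -= gamma2 * grad_R(...); N -= gamma3 * grad_N(...)
```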

The updates of the multipliers Y0∼3 can be derived in closed form as listed in Algorithm 1, and the remaining parameters are kept fixed during the optimization: ρ = 1.05, β = 10⁻², and β_max = 10⁶.
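Putting the pieces together, below is a skeletal ADM loop under the stated constants; the per-variable updates are the sketches above. The multiplier rule Y ← Y + β·(constraint residual) together with β ← min(ρβ, β_max) is the standard ALM recipe and is our reading of Algorithm 1, which is not reproduced in the text:

```python
def reconstruct(C, I, O, Omega, mu, sigma, alpha, gammas, n_iter=500):
    """ADM sketch for Eq. (6); gammas = (gamma1, gamma2, gamma3)."""
    p, qw = C.shape
    m, nw = I.shape[1], O.shape[0]
    F, R = np.zeros((m, nw)), np.zeros((m, nw))
    N, E, eps = (np.zeros((p, qw)) for _ in range(3))
    Y0, Y1 = np.zeros_like(R), np.zeros_like(F)
    Y2, Y3 = np.zeros((p, qw)), np.zeros((p, qw))
    beta, rho, beta_max = 1e-2, 1.05, 1e6
    for _ in range(n_iter):
        S1 = update_S1(F, Y1, beta)                       # Eq. (9)
        S2 = update_S2(R, Y0, alpha, beta)                # Eq. (12)
        E = update_E(C, I, F, R, O, N, Y2, beta, Omega)   # Eq. (14)
        eps = update_eps(N, Y3, mu, sigma, beta)          # Eq. (15)
        F = F - gammas[0] * grad_F(F, R, N, S1, E, C, I, O, Y1, Y2, beta)
        R = R - gammas[1] * grad_R(F, R, N, S2, E, C, I, O, Y0, Y2, beta)
        N = N - gammas[2] * grad_N(F, R, N, E, eps, C, I, O, Y2, Y3, mu, sigma, beta)
        Y0 += beta * (S2 - R)                             # multiplier ascent
        Y1 += beta * (S1 - F)
        Y2 += beta * (I @ F @ O + I @ R @ O + N + E - C)
        Y3 += beta * ((N - mu) * (N - mu) - 9 * sigma**2 + eps**2)
        beta = min(rho * beta, beta_max)
    return F, R, N
```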

5. Experiments

In this section, we conduct a series of experiments to test the proposed approach, including quantitative accuracy analysis on synthetic data, performance in capturing real scenes with both fluorescence and reflectance, and the advantages in some related applications, including relighting and segmentation.

Figures 1(a) and 4 show the prototype and its light path, respectively. The data are captured in a dark room to avoid interference from ambient light; alternatively, the whole setup can be sealed. The multiplexed excitation illumination is implemented by placing a VariSpec liquid crystal tunable filter in front of a xenon lamp. For compressive sampling, we add the same type of filter in front of the lens of a Point Grey FL2G-13S2C-C camera, which is synchronized with the filters automatically. The VariSpec transmittance is wavelength dependent, so we normalize the final results by scaling according to the transmittance curve provided by the supplier.

Fig. 4 The light path of the proposed imaging system. The corresponding real setup is shown in Fig. 1(a).

In the following experiments, we evenly discretize the wavelength range from 400 nm to 700 nm into 10 levels. Considering that excitation illuminations have shorter wavelengths, we vary the excitation illumination only from 400 nm to 580 nm. One can easily verify that the excitation–emission matrix is of dimension 6 × 10 and that traverse sampling needs 45 shots here.
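A one-line sanity check of the shot count: traversing all pairs whose emission band is not below the excitation band takes 10 + 9 + 8 + 7 + 6 + 5 shots:

```python
m_exc, n_emi = 6, 10
shots = sum(n_emi - i for i in range(m_exc))   # emission band >= excitation band
print(shots)                                   # -> 45
```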

5.1. Synthetic data

To test the performance of the proposed approach quantitatively, we first capture the ground-truth reflectance and fluorescence of several fluorescent materials by the exhaustive strategy, i.e., traversing all possible narrowband pairs, excluding combinations whose emission wavelength is shorter than the excitation wavelength. Here we average multiple shots for each setting to exclude the effects of noise.

Later, noise-free coded measurements are simulated by summing multiple responses under narrowband excitation illuminations and subsampling the emission spectra. Then we impose noise according to the rule [24] that the noise level increases linearly with the number of photons, with the noise parameters estimated from the repeated acquisitions above. Because the optimum multiplexing codes proposed in [24, 31] are applicable only if (m + 1)/4 is an integer, we use full-rank random multiplexing codes without loss of generality; the optimum rate for coded illumination is around 50% according to [24, 31]. For the sampling rate, we compare the reconstructed fluorescence at different rates against the true values. The results for three randomly selected points are plotted in Fig. 5.
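A sketch of this simulation pipeline; the noise coefficients `k0` and `k1` are hypothetical placeholders for the parameters estimated from the repeated captures:

```python
def simulate_measurements(EEM, I_code, O_code, Omega, k0=1e-3, k1=5e-3, seed=0):
    """EEM: ground-truth excitation-emission matrix (reflectance + fluorescence)."""
    rng = np.random.default_rng(seed)
    clean = I_code @ EEM @ O_code       # multiplexed, narrowband-summed responses
    sigma = k0 + k1 * clean             # noise level grows linearly with signal [24]
    noisy = clean + sigma * rng.standard_normal(clean.shape)
    return Omega * noisy                # keep only the subsampled entries
```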

Fig. 5 The performance on synthetic data, including three materials (horizontal) and three sampling rates (vertical). These results are averaged over five different random codes.

From the results we can see that our algorithm successfully recovers the fluorescence at full wavelength resolution from subsampled measurements and is applicable across different cases. Comparing the performance at different sampling rates, one can observe slight degradation in accuracy as the number of measurements decreases, but the high accuracy at a 30% sampling rate is still quite promising.

5.2. Real data

In this experiment we capture the coded measurements of a scene including both reflective and fluorescent components using the prototype and reconstruct the two components computationally. Since the filter can generate only a single 20 nm narrowband at a time, we code the spectrum temporally to generate the multiplexed illuminations. In data capture, we set the sampling rate to 30%; that is, 15 snapshots are taken compared with 45 for traverse capturing. For the scene, we bought fluorescent toys coated with fluorescent paint (such as the balls and the car) and also created a target by writing several letters on a background with either MK1800 fluorescent pigments or nonfluorescent paint. We selected fluorescent paints and toys that can be excited by short-wavelength visible light.

Figure 6 gives a comparison between the results of demultiplexing and those of traditional capturing. In the left and middle columns, we respectively display the reflective and fluorescent components at specific excitation–emission spectrum pairs. From the comparison one can see that the proposed approach recovers both components successfully while largely suppressing the noise, especially in dark regions. We also show a comparison between the simulated image under a specific multiplexed illumination and that from exhaustive capturing in the rightmost column; their high similarity further validates the accuracy of our reconstruction. In addition, we can see that the proposed algorithm is applicable for regions either with or without fluorescence. To quantitatively measure the benefits of multiplexing, we average multiple measurements under the above three illuminations as ground truth and compare the mean absolute percentage error of the reconstructions, as labeled in Fig. 6. The consistently smaller reconstruction error clearly validates the multiplexing strategy.
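For reference, the error metric as we read it (MAPE against the averaged ground truth; the small floor avoiding division by zero in dark pixels is our addition):

```python
def mape(pred, gt, floor=1e-6):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs(pred - gt) / np.maximum(np.abs(gt), floor))
```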

Fig. 6 Performance of our algorithm on a real scene and comparison with that of traverse capturing. The left column contains only reflectance, the middle column is the fluorescence excited by a single band illumination, and the right column gives the result under a mixture-spectrum illumination. The mean absolute percentage error (MAPE) of each result is labeled in the top right corner.

Another noticeable phenomenon is that the tiger stuffed toy on the right is not itself fluorescent, yet an apparent fluorescent component shows up in the data from both traverse capturing and our approach. This is caused by the physical light transport: fluorescent emission behaves like diffuse reflection, so the fluorescence emitted from the truck falls on the nonfluorescent tiger and is then reflected into the camera.

We also compare our reconstructed results with those obtained by traverse capturing quantitatively at three randomly selected points, as plotted in Fig. 7. The small difference between the true values (solid curves) and the predictions (dashed curves) validates the effectiveness of our approach. In each subfigure, one can also see a large difference among the curves of different excitation illuminations in the left half, and the consistency increases toward longer wavelengths. The reason is that the fluorescent part is low rank and lies at longer wavelengths, so its consistency gains dominance there.

Fig. 7 Quantitative evaluation on real data. Here three representative points are selected, and we differentiate the excitation illuminations with different colors.

5.3. Advantages in other applications

5.3.1. Relighting

The model in this paper describes the fluorescent and reflective properties of a scene at each incident and each outgoing wavelength, from which we can easily perform photorealistic relighting according to the rendering model [5, 32]. Besides synthesizing the appearance under various illumination coding patterns as shown in Fig. 7(c), the proposed model can also generate images under different light sources given their spectral distributions. Figures 8(a)–8(c) respectively display the relighting results of a fluorescent scene illuminated by three different light sources, whose spectra are calibrated using the VariSpec liquid crystal tunable filter and a digital camera. The top row of Fig. 8 shows the true appearances (including both fluorescence and reflectance) under the three illuminations, alongside the corresponding reconstructed appearances. Their high degree of similarity validates our rendering results. In the bottom row, we also show the relighting results with only the fluorescent component; they clearly show that the letters C and E are nonfluorescent.
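In this model, relighting a pixel reduces to weighting the rows of the recovered excitation–emission matrices by the source spectrum and summing. A sketch, reusing `F_hat` and `R_hat` from the per-pixel example in Sec. 3.1; the flat illuminant is an arbitrary example, not one of the calibrated sources:

```python
def relight_pixel(F_hat, R_hat, light_spectrum):
    """Outgoing n-band spectrum of one pixel under an m-band illuminant."""
    return light_spectrum @ (F_hat + R_hat)

# Example: a flat (white) illuminant over the m = 6 excitation bands.
out_spectrum = relight_pixel(F_hat, R_hat, np.ones(6))
```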

Fig. 8 Relighting results under three different types of light sources and comparison with true results. (a) Noon sunlight. (b) Tungsten lamp. (c) Mercury vapor lamp.

5.3.2. Segmentation

As is well known, fluorescence behavior is closely related to the material, so our high-resolution description of fluorescent properties can help discriminate regions of different materials that share similar RGB values and thus highly confuse traditional color-based segmentation approaches. In Fig. 9(a), we show a scene including several fluorescent objects that challenge segmentation algorithms in RGB space. The door of the toy car is coated with a fluorescent paint different from that of the body, while the yellow ball and the round region in the background are also of different materials but have similar appearances under daylight illumination, as shown in Fig. 9(b).

Fig. 9 Segmentation assisted by high-spectral-resolution fluorescent components. (a) A scene under daylight. (b) RGB values of five labeled regions. (c) Top three discriminative features between regions 1 and 2. (d) Segmentation of car parts. (e) Top three discriminative features among regions 3, 4, and 5. (f) Segmentation of toy ball and fluorescent paint.

Using 2-tuples to denote the excitation–emission wavelength pairs, we visually select the top three discriminative features, (430 nm, 550 nm), (430 nm, 610 nm), and (460 nm, 610 nm), and adopt simple hard thresholding for region labeling, as sketched below. One can see a clear separation between the door and the body of the car, as shown in Figs. 9(c) and 9(d). The fluorescence behaviors of the ball and the painted round region also differ, so we can easily separate region 4 from regions 3 and 5. However, regions 3 and 5 are indistinguishable even in the high-dimensional excitation–emission space, as shown in Figs. 9(e) and 9(f); this is mainly because the fluorescence emitted from the background falls on the left part of the ball, which thus exhibits similar fluorescence behavior.
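A sketch of the hard-thresholding labeler on the selected excitation–emission channels; the band-index mapping follows the 30 nm grid of Sec. 5, and the thresholds and input here are hypothetical:

```python
def segment(cube, pairs, thresholds):
    """cube: H x W x m x n recovered fluorescence; pairs: [(i_exc, j_emi), ...]."""
    feats = np.stack([cube[..., i, j] for i, j in pairs], axis=-1)
    return np.all(feats > np.asarray(thresholds), axis=-1)   # boolean label map

# (430 nm, 550 nm), (430 nm, 610 nm), (460 nm, 610 nm) -> band index pairs
mask = segment(np.random.rand(4, 4, 6, 10), [(1, 5), (1, 7), (2, 7)], [.5, .5, .5])
```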

Recall that we capture the reflectance and fluorescence spectra at each pixel, and the fluorescence can be emitted either by a single material or by a mixture of materials. However, we cannot unmix such a composite emission, which would require additional information about the behaviors of the constituent fluorescent materials.

6. Conclusions and discussions

6.1. Conclusions

We present an approach for capturing the fluorescent and reflective components of a scene efficiently and with high quality. The proposed approach is promising for many applications that rely on accurate fluorescence measurements and is applicable to general cases—reflective, fluorescent, or both. We also validate the approach with a prototype and apply it successfully to several computer vision tasks.

6.2. Limitations and potential extensions

Our algorithm is mainly limited by the Gaussian assumption on the system noise. Because the noise on a CCD is highly complicated (e.g., photon noise, dark noise, and read noise) and can be signal dependent, Gaussian white noise is not accurate enough for higher-quality acquisition. Considering a more complex noise model would benefit both theoretical analysis and system building, so this is one direction for our future work.

Furthermore, the efficiency and accuracy of the current implementation are limited by the tunable filter and the light source. The adopted filter needs around 50 ms for each wavelength transition, and its narrowband profile is Gaussian shaped, so the prototype could be improved further with higher-end optics; a light source with a flat spectrum would also be preferable. So far, the dependence between adjacent pixels is not exploited, and another extension is to introduce spatial constraints to raise efficiency and quality further; the definition of spatial smoothness constraints on reflectance and fluorescence can be borrowed from that of natural images.

Fluorescence is of great importance in microscopy, and an extension of our approach to microscopy could obtain a high-resolution excitation–emission description of the specimen. First, the weakness of fluorescent emission makes the multiplexing strategy especially important. Second, the filters are applied to the whole light source and CCD without technically demanding modifications, so the approach can be extended to microscopy systems easily. Still, the direct extension is nontrivial due to the particularities of microspecimens, e.g., bleaching and scattering. Combining microimaging strategies such as confocal techniques may help address such problems but is beyond the scope of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China, Nos. 61171119, 61120106003, and 61327902. The authors thank Prof. Imari Sato and Prof. Yoichi Sato for their constructive discussions, and also wish to thank the editor and the anonymous reviewers for their insightful comments on the manuscript.

References and links

1. R. Donaldson, “Spectrophotometry of fluorescent pigments,” Br. J. Appl. Phys. 5(6), 210–214 (1954). [CrossRef]  

2. I. Sato and C. Zhang, “Image-based separation of reflective and fluorescent components using illumination variant and invariant color,” IEEE Trans. Pattern Anal. Mach. Intell. 35(12), 2866–2877 (2013). [CrossRef]  

3. A. D. McNaught and A. Wilkinson, Compendium of Chemical Terminology (Blackwell Science, 1997).

4. A. Springsteen, “Introduction to measurement of color of fluorescent materials,” Anal. Chim. Acta 380(2), 183–192 (1999). [CrossRef]  

5. G. M. Johnson and M. D. Fairchild, “Full-spectral color calculations in realistic image synthesis,” IEEE Comput. Graphics Appl. 19(4), 47–53 (1999). [CrossRef]  

6. M. B. Hullin, J. Hanika, B. Ajdin, H.-P. Seidel, J. Kautz, and H. P. A. Lensch, “Acquisition and analysis of bispectral bidirectional reflectance and reradiation distribution functions,” ACM Trans. Graphics 29(4), 1–7 (2010). [CrossRef]  

7. M. Soriano, W. Oblefias, and C. Saloma, “Fluorescence spectrum estimation using multiple color images and minimum negativity constraint,” Opt. Express 10(25), 1458–1464 (2002). [CrossRef]   [PubMed]  

8. Q. Liu, K. Chen, M. Martin, A. Wintenberg, R. Lenarduzzi, M. Panjehpour, B. F. Overholt, and T. Vo-Dinh, “Development of a synchronous fluorescence imaging system and data analysis methods,” Opt. Express 15(20), 12583–12594 (2007). [CrossRef]   [PubMed]  

9. T. Vo-Dinh, “Principle of synchronous luminescence (SL) technique for biomedical diagnostics,” Proc. SPIE 3911, 42–49 (2000). [CrossRef]  

10. I. Sato, T. Okabe, and Y. Sato, “Bispectral photometric stereo based on fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 270–277.

11. S. Han, Y. Matsushita, I. Sato, T. Okabe, and Y. Sato, “Camera spectral sensitivity estimation from a single image under unknown illumination by using fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 805–812.

12. C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int. J. Comput. Vision 86(2–3), 140–151 (2010). [CrossRef]  

13. M. Alterman, Y. Schechner, and A. Weiss, “Multiplexed fluorescence unmixing,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2010), pp. 1–8.

14. J. Park, M. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2007), pp. 1–8.

15. E. Fuchs, “Separating the fluorescence and reflectance components of coral spectra,” Appl. Opt. 40(21), 3614–3621 (2001). [CrossRef]  

16. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef]   [PubMed]  

17. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47(10), B44–B51 (2008). [CrossRef]   [PubMed]  

18. R. Horisaki, X. Xiao, J. Tanida, and B. Javidi, “Feasibility study for compressive multi-dimensional integral imaging,” Opt. Express 21(4), 4263–4279 (2013). [CrossRef]   [PubMed]  

19. R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010). [CrossRef]   [PubMed]  

20. Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, “Development of a digital-micromirror-device-based multishot snapshot spectral imaging system,” Opt. Lett. 36(14), 2692–2694 (2011). [CrossRef]   [PubMed]  

21. Y. August and A. Stern, “Compressive sensing spectrometry based on liquid crystal devices,” Opt. Lett. 38(23), 4996–4999 (2013). [CrossRef]   [PubMed]  

22. G. Wetzstein, I. Ihrke, and W. Heidrich, “On plenoptic multiplexing and reconstruction,” Int. J. Comput. Vision 101(2), 384–400 (2013). [CrossRef]  

23. N. Ratner and Y. Y. Schechner, “Illumination multiplexing within fundamental limits,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

24. Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1339–1354 (2007). [CrossRef]  

25. C. Chen, D. Vaquero, and M. Turk, “Illumination demultiplexing from a single image,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2011), pp. 17–24.

26. F. Moreno-Noguer, S. Nayar, and P. Belhumeur, “Optimal illumination for image and video relighting,” in Proceedings of IEE European Conference on Visual Media Production (IEE, 2005), pp. 201–210.

27. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3(1), 1–122 (2011). [CrossRef]  

28. A. Yang, S. Sastry, A. Ganesh, and Y. Ma, “Fast ℓ1-minimization algorithms and an application in robust face recognition: A review,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 1849–1852.

29. Y. Deng, Q. Dai, and Z. Zhang, “An overview of computational sparse models and their applications in artificial intelligence,” Artif. Intell. Evol. Comput. Metaheuristics 427, 345–369 (2012). [CrossRef]  

30. Z. Lin, M. Chen, L. Wu, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,” in Technical Report UILU-ENG-09-2215 (UIUC, 2009).

31. M. Harwit and N. J. A. Sloane, Hadamard Transform Optics (Academic, 1979).

32. A. Lam and I. Sato, “Spectral modeling and relighting of reflective-fluorescent scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1452–1459.
