
Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging


Abstract

An instantaneous multispectral imaging setup based on the frequency recognition algorithm for multiple exposures (FRAME) is presented and demonstrated experimentally. With this implementation of FRAME, each light source is uniquely encoded with a spatial modulation and the corresponding fluorescent responses maintain the same unique encoding. This allows the extraction of each source response from a single captured image by filtering in the Fourier domain. As a result, a multispectral imaging system based on FRAME performs all the illumination and the corresponding fluorescence detection simultaneously, with the latter recorded in a single exposure on a single detector, and is thus capable of recording true ‘snapshot’ multispectral images. The results presented here demonstrate that the technique is capable of distinguishing source responses for well separated and co-localized fluorophores, as well as providing z-sectioning capabilities. This implementation of FRAME demonstrates its viability as a tool for multispectral imaging of dynamic samples. Additionally, since all the spectral images are captured simultaneously, the method has potential for studying samples prone to photobleaching. Finally, this application of FRAME makes it possible to discriminate between signals due to infinitely spectrally close sources which, to the best of the authors’ knowledge, has not been possible in snapshot multispectral imaging schemes before.

© 2017 Optical Society of America

1. Introduction

Multispectral imaging – capturing image information in specific spectral regions – is an important tool for investigating the structure and composition of a variety of objects. It is used for quality control in the food [1] and pharmaceutical [2] industries, as well as in fluorescence microscopy [3,4], among numerous other applications. Classically, multispectral imaging is achieved by recording a sequence of images at different wavelengths of interest and compiling them into a single image. This approach makes the acquisition time consuming and prone to errors, for example due to sample movement between exposures. In addition, there is a general scientific interest in investigating dynamic samples such as combustion processes [5,6] or the fluorescent probes used in biological and biomedical imaging of tissue and cells [7]. With conventional multispectral imaging schemes it does not appear to be possible, to the best of the authors’ knowledge, to obtain accurate multispectral images of such samples.

Various ‘snapshot’ multispectral imaging devices have been developed to address the need for high temporal resolution, a requirement for imaging dynamic samples. These include methods which use lens arrays [8], both with [9] and without [9,10] individual spectral filters, spectrum slicers [11], birefringent spectral demultiplexers (BSD) [12], and encoded illumination [13] and emission [9,14] light. The MAFC (multiaperture filtered camera) proposed by Shogenji et al. uses an array of lenses aligned with an array of spectral filters, which are in turn aligned with a monolithic detector array [15]. SHIFT (snapshot hyperspectral imaging Fourier transform spectrometer) [10] is similar to the MAFC [15] but does not involve individual spectral filters. Both of these methods are division-of-aperture approaches. The IRIS (Image-Replicating Imaging Spectrometer) system [12] relies on separating the spectral channels using a BSD incorporating Wollaston prisms. In the narrowband and broadband cases the spectral dispersion may result in the optical point spread function covering less than one detector pixel or more than 10 pixels, respectively. The system is therefore sensitive to overlap of the different spectral images, leading to a form of cross talk. CASSI (coded aperture snapshot spectral imager) [14] applies an encoding to the emitted light by placing a coded aperture in the image plane. Prisms are subsequently used to disperse the light such that each spectral element hits the detector at a slightly different horizontal position. Through calibration of the system it is then possible to computationally extract each overlapping region corresponding to each of the different encoded spectral channels. In [14] it is noted that saturation problems occur due to higher intensities in certain wavelength channels, which subsequently cause errors in the neighboring channels. This highlights the importance of being able to balance separate wavelength channels, which FRAME, the snapshot scheme presented in this paper, is able to do thanks to the individual source control. The multiplexed structured illumination (SI) scheme presented by S. Dong et al. [13] demonstrates the use of SI for obtaining either z sample information or spectral information in addition to higher spatial resolution, the classical use of SI in imaging. This is achieved through speckle illumination where, experimentally, 114 different translations are used. As a result, the method does not allow for snapshot imaging.

In this paper we present a variation on a recently introduced method, called FRAME, which uses multiplexed SI to encode several images into a single image. Using a computational algorithm based on the lock-in detection principle, the different images can then be separated in the post-processing stage. The technique can be implemented in various experimental setups in order to target certain parameters. To date, two implementations of FRAME have been presented, demonstrating its capability of multiplexing images containing information from different locations in space [16] or from different instances in time [17]. Specifically, in [16] snapshot 3D imaging of flame species using planar laser-induced fluorescence (PLIF) imaging is demonstrated, and in [17] ultrafast videography of the propagation of a laser pulse through a Kerr-sensitive medium is presented. Both of these past implementations used a single wavelength to probe the sample and thereby demonstrated one additional important technical aspect: the FRAME image-coding concept is color-independent, i.e. the technique is compatible with spectroscopy. In this paper we expand on this feature even further and demonstrate, for the first time, how the technique can be used to multiplex images carrying different color information, providing a new means for snapshot multispectral imaging.

In the present work the FRAME concept is used as part of a new experimental scheme where each encoded image corresponds to different spectral information. In the setup, each illumination source (wavelength) is uniquely encoded such that the fluorescence pertaining to each can be computationally separated after detection. This is achieved by spatially modulating each wavelength such that, when the Fourier transform is calculated, the information is contained within distinct regions (specific clusters of frequencies) in the Fourier domain. In this way it is possible to ‘place’ the sample information corresponding to the different wavelengths into distinct, well separated regions of the Fourier domain. As a result, instantaneous multispectral imaging becomes possible since all the different wavelengths can be captured at the same time. In addition, the use of a single camera circumvents any spatial overlap errors which can occur in schemes that employ multiple cameras in parallel. Furthermore, since the presented method provides control over the illumination through the encoding, such that the corresponding emissions are equally well controlled, there is no theoretical limit on how spectrally close the sources can be, in contrast to techniques which employ dispersive elements, such as CASSI [14]. The presented methodology is therefore particularly powerful for imaging processes involving both narrowband and broadband absorbers, e.g. in combustion research. Similarly, the presented FRAME implementation can distinguish between spectrally overlapping sample emissions resulting from different probe sources, regardless of how spectrally close those sources are. Finally, due to the snapshot capability, this multispectral FRAME approach allows dynamic samples to be imaged at the frame rate provided by the detector (unlike the results presented in [17]). In this paper, proof-of-concept results are presented for the implementation of the FRAME concept for snapshot multispectral imaging, including a demonstration of the ability to distinguish signals from infinitely spectrally close sources.

2. FRAME technique

This section gives a conceptual description of FRAME followed by the analysis method for spectral image extraction from the raw data collected. See also [18,19] for more information on how to analyze modulated images.

Figure 1 illustrates three different illumination schemes. Figure 1(a) illustrates a uniformly illuminated sample (spatial domain) and the corresponding Fourier transform (Fourier domain) of the collected image. It can be seen that the frequencies of which the image is composed are unevenly weighted, with greater contributions at lower frequencies, i.e. centered around the origin. This can be understood in a similar way to digital image compression methods, where it is known that lower frequencies are more important than higher frequencies when representing an image [20]. The solid black circle represents the resolution limit of the detection system.

Fig. 1 (a) A uniformly illuminated sample in the spatial domain (upper image) and the cluster of frequencies the collected image is comprised of in the Fourier domain (lower image). (b) A sample illuminated by a spatially modulated light source (of a single wavelength) in the spatial domain (upper image) and its corresponding Fourier transform (lower image) where additional copies of the sample information are located at higher frequency regions. (c) A sample illuminated by three different spatially modulated and rotated light sources (of three different wavelengths) in the spatial domain (upper image) and its corresponding Fourier transform (lower image) where additional sample copies from each wavelength are located at higher frequency regions and indicated by matching colors.

Figure 1(b) shows a sample illuminated with a spatially modulated (encoded) wavelength and the corresponding data in the Fourier domain. Due to the encoding of the light the sample information becomes superposed multiplicatively with the spatial modulation and is therefore shifted to the region in the Fourier domain corresponding to the applied modulation. The angular position (rotation) of the two modulated regions corresponds directly to the rotation of the line grating used to spatially modulate the illumination light. In this way the encoding of the different wavelengths can be chosen such that the placement, of the recorded data, in the Fourier domain can be controlled.

Figure 1(c) shows a sample illuminated with three different wavelengths, where each has been assigned a unique combination of spatial modulation frequency and rotation. The data from each wavelength is found in the cluster pairs (indicated by matching colors) on either side of the origin, as well as at the origin itself. Since all the wavelengths produce a data cluster at the origin, it is not possible to extract the individual wavelength images there, since they all overlap. If, however, one of the cluster pairs around it is selected, then the image corresponding to that single wavelength can be viewed, separately from all other probed wavelengths. Each of the modulated clusters should lie within the detector's resolution limit and be well separated from the others in order to avoid cross talk. The maximum number of images that can be acquired simultaneously using the FRAME technique is dictated by the sample investigated, the resolution limit of the detector and possibly by further, currently unidentified factors.
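To make this placement of information in the Fourier domain concrete, the following Python sketch builds a synthetic image of a sample illuminated by three spatially modulated sources and computes its Fourier transform. It is an illustration only; the synthetic sample, the carrier frequency and the fringe orientations are arbitrary choices and not the experimental values used in this work.

# Minimal sketch (synthetic sample, arbitrary carrier parameters) showing how a
# sinusoidal illumination pattern shifts the sample spectrum to distinct regions
# of the Fourier domain, one cluster pair per fringe orientation.
import numpy as np

N = 512
y, x = np.mgrid[0:N, 0:N] / N                              # normalized pixel coordinates
sample = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)     # stand-in sample structure

def modulated_image(sample, freq, angle_deg):
    """Sample multiplied by a sinusoidal fringe pattern at the given carrier."""
    a = np.deg2rad(angle_deg)
    carrier = 1 + np.cos(2 * np.pi * freq * (x * np.cos(a) + y * np.sin(a)))
    return sample * carrier

# Three wavelengths -> three unique fringe orientations, summed on one detector
raw = sum(modulated_image(sample, freq=60, angle_deg=ang) for ang in (0, 60, 120))

spectrum = np.fft.fftshift(np.fft.fft2(raw))
# |spectrum| now shows a central cluster plus three pairs of side clusters,
# each pair located at +/- the carrier frequency of one illumination source,
# as sketched in Fig. 1(c).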

In order to extract the different spectral images, which are all captured in a single exposure by the detector, the data is handled in the following way, illustrated by Fig. 2. The raw data, shown in Fig. 2(a) together with the corresponding zoomed regions, contains all the signals resulting from the different excitation sources, which appear as modulation patterns in different orientations. This data is then Fourier transformed, giving the Fourier domain image shown in Fig. 2(b). Due to the applied spatial encoding, the spectral image information pertaining to the different sources is found in the corresponding spatial frequency regions of the Fourier domain. The single raw data image is then analyzed using the FRAME computer algorithm to separate each of the spectral images contained within it. The FRAME computer algorithm performs, for each spectral image encoded in the raw data, the following set of operations:

Fig. 2 (a) Raw data captured by the detector of fluorescent shapes each containing different fluorophores. Modulations pertaining to each of the three different excitation sources are visible, with enlargements shown in the colored boxes. (b) The Fourier transform of the image in (a). (c) The Fourier transform where one of the modulated regions has been shifted to the center of the Fourier domain, hence demodulating the image information contained within. The black circle defines the limit of the applied Gaussian filter. (d) The resulting spatial domain image after inverse Fourier transformation of the filtered Fourier domain, pertaining to one of the probe wavelengths. (e) The multispectral image obtained after performing steps (b), (c) and (d) for each of the different modulated regions. The image is false colored.

  • 1) Demodulation of the encoded spectral image by multiplication of the raw data with a computer-generated reference signal containing a frequency-matched fringe pattern.
  • 2) Fourier transformation of the demodulated image.
  • 3) Multiplication of the Fourier transformed image with a low-pass (Gaussian) filter.
  • 4) Inverse Fourier transformation of the filtered image, revealing the decoded spectral image.

A mathematical description of the analysis procedure, which is analogous to temporal lock-in amplification [21] (albeit here in two dimensions), is provided in the Appendix at the end of this paper. Each demodulated spectral image is false colored before recombination into a multispectral image (Fig. 2(e)). For the extraction of one spectral image, Fig. 2(c) illustrates step 2), where the black circle indicates the filter used in step 3), and the result of step 4) is shown in Fig. 2(d).
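As a complement to the description above, the Python sketch below illustrates steps 1) to 4) for a single spectral channel. It follows the quadrature lock-in formulation given in the Appendix; the function names, the carrier parameters (freq, angle_deg) and the use of NumPy are illustrative assumptions and not the authors' actual implementation.

# Hedged sketch of the four-step extraction listed above, using two quadrature
# references as in the Appendix. Not the authors' code; parameter conventions
# are chosen for illustration only.
import numpy as np

def gaussian_lowpass(shape, sigma):
    """Centered Gaussian low-pass filter for an fftshifted 2-D spectrum."""
    ny, nx = shape
    fy = np.arange(ny) - ny // 2
    fx = np.arange(nx) - nx // 2
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    return np.exp(-(FX**2 + FY**2) / (2 * sigma**2))

def extract_channel(raw, freq, angle_deg, sigma):
    """Demodulate one spectral channel from a single FRAME exposure.
    freq: carrier frequency in cycles across the image width (approximately,
    for rotated fringes); angle_deg: fringe orientation; sigma: filter width."""
    ny, nx = raw.shape
    y, x = np.mgrid[0:ny, 0:nx]
    a = np.deg2rad(angle_deg)
    phase = 2 * np.pi * freq * (x * np.cos(a) + y * np.sin(a)) / nx
    lp = gaussian_lowpass(raw.shape, sigma)

    quadratures = []
    for ref in (np.sin(phase), np.sin(phase + np.pi / 2)):       # step 1: two quadrature references
        spec = np.fft.fftshift(np.fft.fft2(raw * ref))           # step 2: FFT of demodulated image
        filt = np.fft.ifft2(np.fft.ifftshift(spec * lp))         # steps 3-4: Gaussian low pass, inverse FFT
        quadratures.append(np.abs(filt))
    # Combining the two quadratures removes the dependence on the unknown fringe phase
    return np.sqrt(quadratures[0]**2 + quadratures[1]**2)

For the data in this paper one would then call, for example, extract_channel(raw, freq, angle_deg, sigma=175) once per excitation source, with freq and angle_deg matching that source's fringe pattern.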

The treatment of the data during the analysis affects the extracted images. Figure 3 illustrates the different outcomes obtained when four differently sized Gaussian filters are applied to the Fourier transform of an imaged sample. Figure 3(a) shows the Fourier transform of a raw data image. Figure 3(b) shows a zoomed region of the Fourier transform onto which four circles, representing the full width at half maximum (FWHM) of four Gaussian filters, have been superimposed. As seen in Figs. 3(c)-3(f), the filter size affects the specificity and spatial resolution of the extracted wavelength channel images; it determines which compromise between the two is selected. A smaller Gaussian filter gives increased specificity since cross talk from neighboring clusters becomes negligible. As the filter size is increased, the spatial resolution of the resulting image increases; however, as shown in Fig. 3(f) and indicated by the arrow, the cross talk can cease to be negligible. There, it is possible to see the outline of additional letters (U and D) and numbers (2 and 1), which result from a different fluorophore excited by a different probe wavelength. Each sample requires individual parameter optimization to determine the optimum filter size, depending on the user's requirements, such that the best images are extracted.
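As an illustration of this trade-off, the filter width could simply be swept over the sigma values shown in Fig. 3, reusing the extract_channel sketch given earlier in this section; apart from those sigma values, everything in the snippet is an assumption for illustration.

# Illustrative sweep over the Gaussian filter width (sigma values from Fig. 3),
# reusing the extract_channel sketch above. 'raw', 'freq' and 'angle_deg' are
# assumed to be the raw FRAME exposure and the carrier parameters of the
# channel of interest.
for sigma in (30, 75, 200, 350):
    channel = extract_channel(raw, freq=60, angle_deg=0, sigma=sigma)
    # small sigma:  high spectral specificity, low spatial resolution
    # large sigma:  high spatial resolution, growing risk of cross talk
    #               from neighboring Fourier clusters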

Fig. 3 (a) The Fourier transform of the raw image data. (b) Zoomed region of the Fourier transform where different filter sizes have been marked in red, blue, green and yellow. (c-f) A single probe wavelength fluorescence image using: (c) a small filter size (σ = 30) where the specificity is high but the spatial resolution is low, (d) a mid-sized filter (σ = 75) where the specificity and spatial resolution are good, (e) a large filter size (σ = 200) where the specificity starts to become reduced due to some signal cross talk but where the spatial resolution is high, and (f) a very large filter size (σ = 350) where the specificity is poor due to clear cross talk, indicated by the arrow, but where the spatial resolution is high. It is possible to see the outline of the letters U and D and numbers 2 and 1, which originate from other fluorophores not pertaining to the probed wavelength in the given example.

3. Experimental setup

Figure 4 illustrates the FRAME setup used for this proof-of-concept paper. Each laser source is a continuous wave diode laser. The operating wavelengths (and output powers) of the lasers are 405nm (500mW), 450nm (2W), 532nm (200mW) and 447nm (500mW). To balance the signals on the detector, neutral density filters are used at the output of each source. The beams are initially expanded and collimated before propagating through a line pattern grating, which provides the encoding. Each grating (Edmund Optics #66-347) has a line frequency of 30 lp/mm and is rotated (about the axis of propagation) to provide a unique encoding for each source. The diffracted beams are then combined by spatially overlapping them in a recombination arm, consisting of a series of dichroic mirrors and a beam splitter. Upon exiting the recombination arm, the beams are focused by a lens (f = 250 mm) onto a spatial filter which only transmits the +/−1 orders from each grating. As the diffracted orders overlap in space they create, by interference, a sinusoidal intensity pattern with a line frequency of double the grating frequency. The benefit of this approach over imaging the grating onto the sample is that the spatial modulation remains consistent over an extended z range. The sample is illuminated with the modulated line patterns and, due to the linear response, the fluorescence emissions maintain the same modulations. The detector records all the encoded fluorescence signals simultaneously. Notch filters are used to block any stray excitation light from reaching the detector.
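The frequency doubling mentioned above follows directly from the interference of the two transmitted orders. The short sketch below is a minimal illustration of this point only (not the authors' code, and it neglects the magnification of the relay optics): two plane waves representing the +1 and −1 orders of a 30 lp/mm grating produce an intensity fringe at 60 lp/mm.

# Minimal sketch of why the interfering +/-1 diffraction orders give a fringe
# pattern at twice the grating frequency (relay magnification neglected):
# E+ ~ exp(+i*2*pi*f_g*x), E- ~ exp(-i*2*pi*f_g*x)
# |E+ + E-|^2 = 2 + 2*cos(2*pi*(2*f_g)*x)
import numpy as np

f_g = 30.0                           # grating line frequency, lp/mm
x = np.linspace(0.0, 0.5, 5000)      # transverse position, mm

E_plus = np.exp(1j * 2 * np.pi * f_g * x)
E_minus = np.exp(-1j * 2 * np.pi * f_g * x)
intensity = np.abs(E_plus + E_minus) ** 2

fringe_frequency = 2 * f_g           # intensity maxima occur at 60 lp/mm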

Fig. 4 Schematic of the FRAME setup where three different wavelengths have been aligned. PBS = Polarizing beam splitter, NF = Notch filter, DM = Dichroic Mirror, G = Grating.

A variety of stationary samples are initially investigated to test the FRAME setup and to determine the correct parameters, such as the laser intensities and the Gaussian filter size. The stationary samples are imaged under FRAME conditions as well as under sequential illumination, such that ground truth data is collected for comparison. Subsequently, using the determined parameters, the setup is used to image dynamic samples.

Stationary samples

  • ▪ Excitation sources 405nm, 450nm and 532nm using the full beam diameter.
  • ▪ Excitation sources 447nm and 450nm where each beam is 50% blocked.
  • ▪ Fluorescent paper shapes and fluorescent dye stamped onto paper used as the samples.
  • ▪ FRAME images obtained in a single exposure with all wavelengths illuminating the sample simultaneously.
  • ▪ Ground truth images obtained through individual sequential sample illumination and multiple exposures.

Z-sectioning

  • ▪ A uniformly fluorescent paper sample was imaged at different z positions to determine the z-sectioning capabilities of the setup (f# 5.6).

Dynamic samples

  • ▪ Excitation sources 450nm and 532nm using the full beam diameter.
  • ▪ Liquid samples with two fluorescent dyes sequentially added are imaged.
  • ▪ A sequence of 100 instantaneous multispectral images is recorded at 50Hz.

4. Results and discussion

Multispectral fluorescence imaging requires the determination of the fluorescent responses pertaining to different excitation wavelengths, often achieved through individual sequential illumination. Here, the results of the FRAME technique for snapshot multispectral imaging of fluorescent samples are presented. The results demonstrate the capability of FRAME to separate the fluorescent responses due to different excitation wavelengths for both well separated and co-localized fluorophores. Two cases demonstrating how FRAME can separate spectrally close excitation sources (3nm) are presented. In addition, the z-sectioning capability inherent to SI imaging schemes is demonstrated for the FRAME setup. Finally, dynamic sample results are given.

4.1 Spectral discrimination

Figure 5 shows the ground truth and extracted images for a stationary sample comprised of various different fluorescent shapes exhibiting different fluorophores. All the images have been analyzed using a Gaussian filter with a sigma value of 175 since this was concluded to give the optimum results, i.e. highest resolution while eliminating cross-talk. These results demonstrate that the FRAME technique is capable of distinguishing the fluorescence emissions resulting from individual excitation sources collected in a single exposure by a single detector. As a result, this demonstrates the ability to illuminate a sample with multiple different wavelengths simultaneously without the need to chromatically filter at either the excitation or detection ends of the imaging scheme in order to discriminate the fluorescence responses.

Fig. 5 The top row shows FRAME extracted images for 405nm, 450nm and 532nm, shown in grayscale and combined in a false colored multispectral image. The corresponding ground truth images are shown in the bottom row. The results demonstrate how FRAME can determine the sample responses pertaining to each illumination source.

4.2 Co-localized fluorophores

Figure 6 shows results which demonstrate the capability of FRAME to distinguish various fluorophores from each other, under simultaneous multispectral illumination, when the fluorophores are co-localized in the sample. Since the fluorescence due to each different excitation wavelength is distinguishable using the FRAME analysis, it becomes possible to use the technique in conjunction with linear unmixing for quantitative information.
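Since linear unmixing is only suggested here as a possible extension, the sketch below merely illustrates how FRAME-extracted channel images could feed a standard least-squares unmixing step. The response matrix M, its calibration and all variable names are hypothetical assumptions for illustration and are not part of the presented work.

# Hedged sketch of linear unmixing applied to FRAME-extracted channel images.
# M[i, j] is assumed to be the (calibrated) response of fluorophore j to
# excitation source i; channel_stack holds one extracted image per source.
import numpy as np

def unmix(channel_stack, M):
    """channel_stack: (n_sources, H, W) FRAME images; M: (n_sources, n_fluorophores)."""
    n_src, H, W = channel_stack.shape
    pixels = channel_stack.reshape(n_src, -1)                  # one column per pixel
    abundances, *_ = np.linalg.lstsq(M, pixels, rcond=None)    # solve M @ a = signal per pixel
    return abundances.reshape(M.shape[1], H, W)                # per-fluorophore abundance maps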

Fig. 6 The top row shows FRAME extracted images for 405nm, 450nm and 532nm, shown in grayscale and combined in a false colored multispectral image. The corresponding ground truth images are shown in the bottom row. These results demonstrate that FRAME can determine the sample responses pertaining to each different illumination source when the fluorophores are co-localized in the sample.

4.3 Spectrally close sources

In some cases it is useful to be able to use very spectrally close sources, for example in order to distinguish narrowband and broadband absorbers, a common problem in, e.g., combustion research. To identify the spatial distribution of a particular combustion species, flames are often imaged at two excitation wavelengths; one matching the narrowband absorption line of the species of interest, the other slightly off this line. The latter image is then subtracted from the first [22–24]. However, due to the turbulent nature of flames, snapshot visualizations based on this approach become experimentally challenging, usually requiring two intensified cameras that acquire data sequentially. With FRAME, sequential imaging can be circumvented thanks to the source encoding, which remains in the corresponding fluorescence. Additionally, since the current implementation of FRAME encodes each excitation source uniquely, there is no theoretical limit on how spectrally close the sources can be, provided that they are not spatially coherent with each other. Figure 7 shows the results for a stationary case where the excitation sources are separated by only 3nm. Here, each excitation source is partially blocked such that each illuminates the sample with only a semicircle of light. In the top right quarter circle both sources illuminate the sample, and in the bottom quarter circle there is no illumination. The remaining two quarter circles are each illuminated by a single source. By comparing the extracted images to the ground truth it can be seen that FRAME is capable of distinguishing both spectrally close-lying and well separated sources.

Fig. 7 The top row shows FRAME extracted images for 447nm and 450nm, shown in grayscale and combined in a false colored multispectral image. The corresponding ground truth images are shown in the bottom row. The semicircles indicate the areas illuminated by the two sources. In the top right quarter the sample is illuminated by both wavelengths. These results demonstrate that due to the encoding used in FRAME it is possible to determine the fluorescence responses from the same fluorophore illuminated by two very spectrally close sources simultaneously.

Figure 8 shows another sample under the same illumination as the sample in Fig. 7. Two line slices through the image have been extracted, with the specific cuts superimposed on the multispectral extracted and ground truth images. The intensity profiles of the extracted and ground truth images are compared. As can be seen in the plots in Fig. 8 the extracted images correspond very well with the ground truth images such that it is possible to distinguish the fluorescence contributions from the fluorophores under the two different illuminations from each other, in the overlapping region.

Fig. 8 The top image shows the multispectral FRAME image of a sample illuminated by two semicircular beam profiles of sources at 447nm and 450nm. The bottom image shows the corresponding ground truth image. The top plot shows the FRAME and ground truth intensity profiles across the line AB marked in the top image. The bottom plot shows the same for the line slice CD. The R2 values are given for the line cuts AB and CD for wavelengths 447nm and 450nm. These results illustrate how FRAME can determine the fluorescence responses from the same fluorophore illuminated by two very spectrally close sources simultaneously.

4.4 Z-sectioning

The FRAME technique, like other SI imaging schemes, has z-sectioning abilities, i.e. it allows FRAME to reject light originating from out-of-focus regions. The results were obtained by illuminating a flat, uniformly fluorescent sample with a single spatially modulated source and recording images at different z positions. Figure 9 illustrates the z-sectioning capability of this FRAME system for a sigma value of 175, where the FWHM of the curve is approximately 7mm for a field of view of 59x50mm. The measurements were performed using a relatively small aperture (f# 5.6); if an aperture with a lower f# were used, the sectioning capability would be expected to improve.

Fig. 9 A plot showing the normalized intensities of the images captured at different z positions. The Gaussian filter size defined by σ = 175 is used since this is the value all the images presented in this paper have been analyzed with.

4.5 Dynamic samples

Figure 10(a) shows 5 different frames from a data set of 100 images of a dynamic sample in which the two fluorescent dyes can be seen. A movie animating the entire data set is available online (see Visualization 1). One dye, false colored red in the analysis stage, is added first. A second dye, false colored green, is then added and the dynamics of its mixing with the first are followed using FRAME. The zoomed region illustrates the level of detail that can be obtained using the current FRAME setup. Figure 10(b) shows the same 5 frames, but only for the second dye; in these images the dynamics of the dyes are even clearer.

Fig. 10 (a) Multispectral FRAME images of a dynamic sample of two different fluorescent dyes. Five frames are shown to illustrate the dynamic nature of the sample. A zoomed portion of the final frame is shown which highlights the structures resolvable using FRAME. A movie of the entire data set is available online (see Visualization 1). (b) Five frames showing the second dye only as it mixes with the first dye. (σ = 175)

Figure 11(a) shows another series of images, from a total of 100 acquired, in which the injection of a second dye shortly after a first dye can be seen. A movie animating the entire data set is available online (see Visualization 2). The syringe is still visible in the first few selected frames. It is therefore possible to observe the initial injection of the second dye and to follow it as it spreads and mixes with the first in a ‘swirling’ pattern. Figure 11(b) shows 5 frames, chosen to highlight the dynamic process, of the second dye only.

Fig. 11 (a) Multispectral FRAME images of a dynamic sample of two different fluorescent dyes. Nine frames are shown to illustrate the dynamic nature of the sample. Here a second dye is added to the first dye using a syringe. A movie of the entire data set is available online (see Visualization 2). (b) Five frames showing the first dye only as the second dye pushes it out and mixes with it. (σ = 175)

The extracted images in Figs. 10 and 11 are all obtained using a sigma value of 175, as previously motivated.

5. Conclusion

In summary, in this implementation FRAME encodes each excitation source, which makes it possible to determine the exact source corresponding to each detected signal when a sample is illuminated by multiple sources at the same time. As a result, this method is able to separate the sample responses of infinitely spectrally close sources acquired simultaneously. To the best of the authors’ knowledge these two features do not exist in any snapshot multispectral imaging technique today.

In this work it has been shown that FRAME is capable of distinguishing emission signals due to different excitation sources, both in a sample where the fluorophores are well separated and in one where they are co-localized. Since all the spectral information is acquired simultaneously and in a single exposure, FRAME can be used for snapshot multispectral imaging of stationary and dynamic samples. Additionally, the method can be used for samples prone to photobleaching, allowing them to be multispectrally imaged before degradation of the sample.

Since the method allows for control of the individual sources in intensity, wavelength and spatial modulation, it can easily be tailored to suit different samples. This makes it possible to avoid the saturation, as well as the cross talk, seen in other techniques which instead employ dispersive elements for signal discrimination [12,14]. The ability to separate sample signals due to spectrally close sources allows the snapshot multispectral FRAME method to be used to probe samples where narrow excitation sources are required. Additionally, spectrally overlapping sample emissions originating from different excitation sources are also resolvable. The method could also be combined with linear unmixing, making it possible to perform quantitative measurements on dynamic samples.

The snapshot multispectral imaging FRAME method can be used in various optical configurations, e.g. schemes based on absorption, scattering and transmission, and is not limited to fluorescence imaging. One technical benefit when using fluorescence imaging, however, is that the method does not filter out broad spectral regions of the fluorescence emission. Consequently, the presented approach can, with the exception of the narrow regions where notch filters are used to remove stray source light, collect the entire spectral responses of all the different fluorophores.

Appendix

The intensity profile of each laser beam is modulated with a sinusoidal pattern whose frequency is given by $\nu$ and whose phase is given by $\rho$. The incident light, originating from three different sources, interacts with the sample and, as a result, the modulated fields and the sample structures ($A$) are superimposed multiplicatively, such that the detected image follows:

$$I_{mod} = B_{DC} + A_1\sin\!\left(2\pi\nu_{x_1}x + 2\pi\nu_{y_1}y + \rho_1(x,y)\right) + A_2\sin\!\left(2\pi\nu_{x_2}x + 2\pi\nu_{y_2}y + \rho_2(x,y)\right) + A_3\sin\!\left(2\pi\nu_{x_3}x + 2\pi\nu_{y_3}y + \rho_3(x,y)\right)$$

$B_{DC}$ represents the DC component, which contains the background signal as well as the unmodulated sample structure amplitudes. For simplification, $\nu_y$ is set to zero and $I_{mod}$ for only one of the modulated components is considered:

$$I_{mod} = 0.5\left(A + A\sin(2\pi\nu_x x + \rho_x)\right)$$

In order to extract the sample information, the detected image needs to be demodulated. It is known that a sinusoidal modulation was applied and therefore two reference matrices ($R_1$, $R_2$) can be constructed and multiplied with $I_{mod}$:

$$R_1 = \sin(2\pi\nu_x x + \rho)$$
$$R_2 = \sin(2\pi\nu_x x + \rho + \pi/2)$$
$$R_1 I_{mod} = 0.5A\sin(2\pi\nu_x x + \rho) + 0.5A\sin(2\pi\nu_x x + \rho_x)\sin(2\pi\nu_x x + \rho)$$
$$R_2 I_{mod} = 0.5A\sin(2\pi\nu_x x + \rho + \pi/2) + 0.5A\sin(2\pi\nu_x x + \rho_x)\sin(2\pi\nu_x x + \rho + \pi/2)$$
The phase, $\rho_x$, is assumed to be constant in this analysis. Expanding the second term of each product using the identity $\sin a \sin b = 0.5\left(\cos(a-b) - \cos(a+b)\right)$ gives:

$$R_1 I_{mod} = 0.5A\sin(2\pi\nu_x x + \rho) + 0.5A \cdot 0.5\left(\cos(\rho_x - \rho) - \cos(4\pi\nu_x x + \rho_x + \rho)\right)$$
$$R_2 I_{mod} = 0.5A\sin(2\pi\nu_x x + \rho + \pi/2) + 0.5A \cdot 0.5\left(\cos(\rho_x - \rho - \pi/2) - \cos(4\pi\nu_x x + \rho_x + \rho + \pi/2)\right)$$
A low-pass filter with a cutoff below $\nu_x$ is then applied, removing all oscillating terms and retaining only the demodulated sample information, where $\tilde{A}$ denotes the low-pass filtered sample amplitude and the overbar marks the filtered products:

$$\overline{R_1 I_{mod}} = 0.5^2\,\tilde{A}\cos(\rho_x - \rho)$$
$$\overline{R_2 I_{mod}} = 0.5^2\,\tilde{A}\sin(\rho_x - \rho)$$
$$\sqrt{\overline{R_1 I_{mod}}^{\,2} + \overline{R_2 I_{mod}}^{\,2}} = 0.5^2\,\tilde{A}$$

The final expression is independent of the unknown fringe phase $\rho_x$, which is why the two quadrature references are combined.
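As a cross-check of this result, the short Python sketch below (an illustration only, not part of the original work) verifies numerically that low-pass filtering $R_1 I_{mod}$ and $R_2 I_{mod}$ and combining the two quadratures recovers the sample amplitude up to the constant factor $0.5^2$, independently of the fringe phase. The ideal FFT-based low pass and all parameter values are arbitrary choices.

# Numerical sanity check (illustration only, not the authors' code) of the
# quadrature lock-in result derived above: low-pass filtering R1*I_mod and
# R2*I_mod and combining them recovers 0.5^2 * A regardless of the fringe
# phase rho_x. An ideal FFT-based low pass stands in for the Gaussian filter.
import numpy as np

N = 4096
x = np.arange(N)
nu = 256 / N                     # fringe frequency, cycles/pixel (integer number of periods)
A = 2.0                          # sample amplitude, taken constant for simplicity
rho_x, rho = 0.7, 0.1            # fringe phase and reference phase

I_mod = 0.5 * (A + A * np.sin(2 * np.pi * nu * x + rho_x))
R1 = np.sin(2 * np.pi * nu * x + rho)
R2 = np.sin(2 * np.pi * nu * x + rho + np.pi / 2)

def lowpass(signal, cutoff):
    """Zero out all frequency components above the cutoff (cycles/pixel)."""
    spec = np.fft.fft(signal)
    freqs = np.fft.fftfreq(signal.size)
    spec[np.abs(freqs) > cutoff] = 0.0
    return np.real(np.fft.ifft(spec))

recovered = np.sqrt(lowpass(R1 * I_mod, nu / 2) ** 2 + lowpass(R2 * I_mod, nu / 2) ** 2)
assert np.allclose(recovered, 0.5**2 * A, rtol=1e-3)   # ~0.25 * A everywhere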

Funding

Vetenskapsrådet (Agency ID: 501100004359, Award #121892).

References and links

1. D. Wu and D.-W. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: A review – Part I: Fundamentals,” Innov. Food Sci. Emerg. Technol. 19, 1–14 (2013). [CrossRef]  

2. M. Klukkert, J. X. Wu, J. Rantanen, J. M. Carstensen, T. Rades, and C. S. Leopold, “Multispectral UV imaging for fast and non-destructive quality control of chemical and physical tablet attributes,” Eur. J. Pharm. Sci. 90, 85–95 (2016). [CrossRef]   [PubMed]  

3. T. Zimmermann, “Spectral imaging and linear unmixing in light microscopy,” Adv. Biochem. Eng. Biotechnol. 95, 245–265 (2005). [CrossRef]   [PubMed]  

4. T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS Lett. 546(1), 87–92 (2003). [CrossRef]   [PubMed]  

5. J. Hunicz and D. Piernikarski, “Investigation of combustion in a gasoline engine using spectrophotometric methods,” Proc. SPIE 4516, 307–314 (2001). [CrossRef]  

6. P. Kauranen, S. Andersson-Engels, and S. Svanberg, “Spatial mapping of flame radical emission using a spectroscopic multi-colour imaging system,” Appl. Phys. B-Photo. 53(4), 260–264 (1991).

7. C. E. Volin, B. K. Ford, M. R. Descour, J. P. Garcia, D. W. Wilson, P. D. Maker, and G. H. Bearman, “High-speed spectral imager for imaging transient fluorescence phenomena,” Appl. Opt. 37(34), 8112–8119 (1998). [CrossRef]   [PubMed]  

8. R. T. Kester, N. Bedard, L. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16(5), 056005 (2011). [CrossRef]   [PubMed]  

9. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]  

10. M. W. Kudenov and E. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express 20(16), 17973–17986 (2012). [CrossRef]   [PubMed]  

11. M. Tamamitsu, Y. Kitagawa, K. Nakagawa, R. Horisaki, Y. Oishi, S.-y. Morita, Y. Yamagata, K. Motohara, and K. Goda, “Spectrum slicer for snapshot spectral imaging,” Opt. Eng. 54(12), 123115 (2015). [CrossRef]  

12. A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express 18(6), 5602–5608 (2010). [CrossRef]   [PubMed]  

13. S. Dong, K. Guo, S. Jiang, and G. Zheng, “Recovering higher dimensional image data using multiplexed structured illumination,” Opt. Express 23(23), 30393–30398 (2015). [CrossRef]   [PubMed]  

14. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17(8), 6368–6388 (2009). [CrossRef]   [PubMed]  

15. R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, “Multispectral imaging using compact compound optics,” Opt. Express 12(8), 1643–1655 (2004). [CrossRef]   [PubMed]  

16. E. Kristensson, Z. Li, E. Berrocal, M. Richter, and M. Aldén, “Instantaneous 3D imaging of flame species using coded laser illumination,” Proc. Combust. Inst. 36(3), 4585–4591 (2017). [CrossRef]  

17. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl. (accepted).

18. E. Kristensson, J. Bood, M. Aldén, E. Nordström, J. Zhu, S. Huldt, P.-E. Bengtsson, H. Nilsson, E. Berrocal, and A. Ehn, “Stray light suppression in spectroscopy using periodic shadowing,” Opt. Express 22(7), 7711–7721 (2014). [CrossRef]   [PubMed]  

19. E. Berrocal, J. Johnsson, E. Kristensson, and M. Aldén, “Single scattering detection in turbid media using single-phase structured illumination filtering,” J. Europ. Opt. Soc. Rap. Public. 7, 12015 (2012). [CrossRef]  

20. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression (Springer US, 1992).

21. M. L. Meade, “Advances in lock-in amplifiers,” J. Phys. E Sci. Instrum. 15(4), 395–403 (1982). [CrossRef]  

22. M. G. Allen, K. R. McManus, D. M. Sonnenfroh, and P. H. Paul, “Planar laser-induced-fluorescence imaging measurements of OH and hydrocarbon fuel fragments in high-pressure spray-flame combustion,” Appl. Opt. 34(27), 6287–6300 (1995). [CrossRef]   [PubMed]  

23. S. Böckle, J. Kazenwadel, T. Kunzelmann, D.-I. Shin, C. Schulz, and J. Wolfrum, “Simultaneous single-shot laser-based imaging of formaldehyde, OH, and temperature in turbulent flames,” Proc. Combust. Inst. 28(1), 279–286 (2000). [CrossRef]  

24. R. Suntz, H. Becker, P. Monkhouse, and J. Wolfrum, “Two-dimensional visualization of the flame front in an internal combustion engine by laser-induced fluorescence of OH radicals,” Appl. Phys. B 47(4), 287–293 (1988). [CrossRef]  

Supplementary Material (2)

Visualization 1: A movie animating the entire data set corresponding to Fig. 10, where two fluorophores are mixing together.
Visualization 2: A movie animating the entire data set corresponding to Fig. 11, where two fluorophores are mixing together.
