Optica Publishing Group

A versatile, low-cost, snapshot multidimensional imaging approach based on structured light

Open Access

Abstract

The behaviour and function of dynamic samples can be investigated using optical imaging approaches with high temporal resolution and multidimensional acquisition. Snapshot techniques have been developed in order to meet these demands; however, they are often designed to study a specific parameter, such as spectral properties, limiting their applicability. Here we present and demonstrate a frequency recognition algorithm for multiple exposures (FRAME) snapshot imaging approach, which can be reconfigured to capture polarization, temporal, depth-of-focus and spectral information by simply changing the filters used. FRAME is implemented by splitting the emitted light from a sample into four channels, filtering the light and then applying a unique spatial modulation encoding before recombining all the channels. The multiplexed information is collected in a single exposure using a single detector and extracted in post-processing of the Fourier transform of the collected image, where each channel image is located in a distinct region of the Fourier domain. The approach allows for individual intensity control in each channel, has easily interchangeable filters and can be used in conjunction with, in principle, all 2D detectors, making it a low-cost and versatile snapshot multidimensional imaging technique.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Multidimensional optical imaging [1,2] is an approach which gathers multiple dimensions, up to nine [1], of information about a sample. Depth-of-focus, polarization, temporal and spectral information are some of the dimensions of interest in optical imaging applications. Depth-of-focus information is required in volumetric imaging, such as that obtained using scanning confocal microscopy [3]. Polarization imaging is useful for separating specular and diffuse reflections [4], for 3-dimensional reconstruction and biological imaging [5] and for studying transparent objects [6], e.g. stress, strain and imperfections. Temporal imaging is useful for studying highly dynamic samples such as combustion processes [7,8] and molecular dynamics. Spectral imaging has applications in combustion diagnostics [9], fluorescence imaging [10–12], in vivo biological imaging [13] and remote sensing [14]. Approaches able to capture more than 2 dimensions, conventionally the intensity distribution in x and y, have emerged but are accompanied by a variety of limitations, such as high cost, limited use or a poor point spread function, and, in non-snapshot cases, may not be suitable for dynamic samples.

In order to image dynamic samples, high temporal resolution is required, which is met by snapshot imaging techniques [1,15]. These techniques acquire all the different dimensional information in a single exposure or through the use of parallelized detectors. Snapshot techniques typically discriminate the light emitted from a sample [5,6,16–18], although there are some techniques which encode the illumination source [19–22] and therefore discriminate based on the light incident on the sample. CASSI [16,18] and LATIS [17] are both snapshot multispectral techniques which employ dispersive elements for spectral discrimination. CASSI applies a coded pattern to the emitted light before spectral dispersion onto the detector. Each spectral element carries the same pattern but at slightly shifted horizontal positions on the detector, such that the elements can be differentiated from each other. LATIS uses a lenslet array to generate sub-images, with void spaces in between, which are then spectrally dispersed using a prism. At the detector the spectrally dispersed images fill the void spaces, and the lenslet array can be rotated in order to fill them most efficiently and compactly. As an alternative to dispersive elements, there are commercial systems, from IMEC and Polarsens, where camera-integrated filter arrays are employed. IMEC's snapshot hyperspectral cameras use an on-chip spectral filter array with up to 32 spectral channels, depending on the model used. Polarsens has two models, for either polarization or combined polarization and multispectral image capture in a snapshot. The first has a polarization filter array of four polarization filters (0°, 45°, 90° and 135°) on the camera sensor, and the second adds a Bayer filter (RGB) overlaid over the polarization filter array.
In methods where dispersive elements are used, the point spread function spreads non-linearly between the shorter and longer wavelengths, which results in a variation in spectral resolution with respect to wavelength. Camera-integrated systems overcome this, but limit users to single-purpose (fixed-dimension) equipment, often at high cost.

Ultrafast videography methods have temporal limitations [23] imposed by various elements within the setup. FDMI [24] and TMSD [25] use devices to encode individual frames of a dynamic scene with spatial modulations. Their temporal limitation is set by the update speed of a spatial light modulator (SLM), for FDMI, and a digital micromirror device (DMD), for TMSD. COSUP [26] applies a binary pseudo random pattern to the image of a dynamic scene using a DMD, however its temporal resolution is set by the mechanical rotation speed of a galvanometer scanner which temporally shears the encoded scene along the pixels of a camera. T-CUP [27] applies a similar encoding to COSUP, however it uses a streak camera to shear the scene across the pixels. This transition from mechanical limitations in the form of rotating mirrors to electronic limitations in the form of the streak camera, allows for T-CUP to enter the Tfps regime. MUSIC [28] applies spatial modulations on individual optically delayed paths that, in conjunction with a gated camera, has demonstrated a frame rate of about 1 Gfps. STAMP [29] and its follow-up SF-STAMP [30] are illumination techniques, thus limited by illumination pulse length. The techniques obtain high temporal resolution by dividing short laser pulses into a pulse train of discrete spectral components. Each individual component is then spatially separated with a dispersive optical element and as a result, they are, in their current state, restricted to the monitoring of coherence-maintaining events. The temporal resolution achievable in all these videography methods is rigid, requiring custom adjustments and calibration to vary the temporal properties, therefore lacking versatility for imaging a broad variety of dynamic scenes.

FDMI [24,31], FRAME [19–22], TMSD [25] and MUSIC [28,32] are all image multiplexing approaches based on frequency encoding. Using these techniques, high speed videography [20,24,25,28,31], field of view extension [32], and volumetric and multispectral imaging [19,21,22] have been demonstrated from single exposure acquisition. Experimentally, the encoding of the different images obtained in the single exposure is achieved by spatially modulating either the illumination sources or the light emitted from a scene. The sub-images can then be extracted from the multiplexed image using a computational analysis approach which is similar for FDMI, FRAME, TMSD and MUSIC. Recent work by our group [19–22] introduced and demonstrated FRAME (frequency recognition algorithm for multiple exposures) and its applicability in multidimensional imaging, capturing temporal, spatial and spectral information in different experimental setups. In these demonstrations, multiplexed images are obtained experimentally by using illumination sources with a spatial modulation which is maintained by the light from the sample. Each modulated signal corresponds to either spatial, temporal or spectral information and is collected using a single detector and in a single exposure. Through computational analysis using the FRAME algorithm, the multiplexed images can be separated by demodulating the different sub-images, found in distinct regions of the Fourier domain. These demonstrations presented multispectral imaging, where arbitrarily spectrally close source signals could be separated, and temporal imaging of a pulse of light with 200 femtosecond resolution.

In this paper we present a versatile, snapshot multidimensional optical imaging approach based on spatial frequency encoding. This implementation of spatial frequency encoding - herein referred to as passive FRAME - works experimentally by splitting the light from the sample into different optical paths (channels), where the light is filtered and then uniquely encoded before being recombined and imaged onto the detector. Due to the encoding of the light, passive FRAME does not suffer from false light readings, an advantage it shares with other structured detection approaches and also integrated camera systems (e.g. Polarsens™). This paper presents results from a single experimental setup where, by switching filters, we demonstrate depth-of-focus, polarization, temporal or spectral imaging. To the best of the authors’ knowledge, unlike previous experimental realisations of image multiplexing approaches based on frequency encoding, this setup is the first with this versatility, as well as the first to demonstrate depth-of-focus imaging, channel balancing, spectral imaging with linear unmixing analysis and polarization imaging. Additionally, since it is not an integrated camera approach, it can be easily added to existing setups and used in conjunction with, in principle, any 2D detector, such as high-speed, RGB or intensified (time-gated) cameras.

2. Passive FRAME approach

FRAME is an imaging technique based on encoding light. Until now, the demonstrations of FRAME by our group [19–22] have been based on applying spatial modulations to the illumination sources in each setup, where spatial, temporal and spectral information have been encoded. In this paper we present passive FRAME, an implementation of FRAME where, instead, the light emitted from a sample is spatially encoded.

Figure 1 illustrates the camera view, i.e. the raw data, as well as the individual modulated images from each channel, 1-4, of which it is comprised. If the Fourier transform of the spatial domain image for each individual channel is taken, three distinct regions are seen in the Fourier domain. The central region corresponds to the frequency distribution of the image as obtained in standard non-modulated imaging. The two outer clusters correspond to the same image information but superposed multiplicatively with the spatial modulation frequency, ν, and thus shifted to the correspondingly higher frequency positions of that applied modulation. Since it is possible to control both the frequency and rotation of the spatial modulation applied to the light, it is correspondingly possible to control where the pair of outer clusters, or modulated information, is ‘placed’ in the Fourier domain. Each channel, 1-4, has a unique encoding applied such that when all channels are captured simultaneously by the detector in a single exposure, the resulting information is spatially separated in the Fourier domain. It is therefore possible to use a computational algorithm, based on the lock-in detection principle [9,22,33], to separate each of the image cluster pairs from the raw image, demodulate them and transform them back to the spatial domain. As a result, multiple encoded images can be captured in a snapshot and separated in post-processing. For more information on how to analyze modulated images see [1,21,34].
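The encoding and lock-in extraction described above can be sketched numerically. The following is a minimal illustrative simulation, not the authors' code: two images are multiplied by cosine carriers of different orientations, summed into one 'exposure', and each is then recovered by cropping its cluster in the Fourier domain, shifting it to the origin and inverse transforming. The image size, carrier frequencies and filter radius are arbitrary choices for the sketch.

```python
import numpy as np

def encode(img, fx, fy):
    """Multiply an image with a cosine carrier of (fx, fy) cycles per frame."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    return img * 0.5 * (1 + np.cos(2 * np.pi * (fx * x / nx + fy * y / ny)))

def demodulate(raw, fx, fy, radius):
    """Lock-in style extraction: crop the Fourier cluster at (fx, fy),
    shift it to the origin and inverse transform back to the spatial domain."""
    ny, nx = raw.shape
    F = np.fft.fftshift(np.fft.fft2(raw))
    ky, kx = np.mgrid[0:ny, 0:nx]
    kx = kx - nx // 2
    ky = ky - ny // 2
    # keep only the circular region around the carrier frequency (the cluster)
    mask = (kx - fx) ** 2 + (ky - fy) ** 2 < radius ** 2
    cluster = np.where(mask, F, 0)
    # shift the cluster to the origin, i.e. demodulate
    cluster = np.roll(cluster, (-fy, -fx), axis=(0, 1))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(cluster)))

# Two 'channel' images multiplexed into a single exposure with carriers
# at different rotations (horizontal vs vertical gratings)
ny = nx = 128
yy, xx = np.mgrid[0:ny, 0:nx]
img1 = np.exp(-((xx - 40) ** 2 + (yy - 64) ** 2) / 200.0)
img2 = np.exp(-((xx - 90) ** 2 + (yy - 64) ** 2) / 200.0)
raw = encode(img1, 20, 0) + encode(img2, 0, 20)  # camera view, one snapshot
rec1 = demodulate(raw, 20, 0, radius=10)
rec2 = demodulate(raw, 0, 20, radius=10)
```

The cluster radius plays the role of a low-pass filter: it must be large enough to contain the image bandwidth but small enough to exclude the neighbouring clusters, which is the same trade-off that sets the spatial-resolution cost discussed in the appendix.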


Fig. 1. Passive FRAME image encoding. The unique modulation and corresponding sample information in each channel (1-4) is illustrated in both the spatial and Fourier domains. Copies of the sample information are located in the centre of the Fourier domain as well as at the higher frequency regions corresponding to the applied modulation. The first image pair in the figure illustrates the camera view in a single exposure where all four channels are recorded in a snapshot, and its corresponding Fourier domain. Copies of all the channels can be found overlapping in the centre of the Fourier domain as well as non-overlapping copies in each higher frequency region and rotation angle (indicated by matching colors) corresponding to the unique spatial encoding of each channel (1-4).


Experimentally in this paper, passive FRAME is achieved by splitting the light collected from the sample into separate channels, where each channel contains a filter and a line grating (Fig. 2). Beam splitters (BS) are used to split the light into four channels, and the filter (polarization, spectral or glass plate) in each channel transmits only specific light, e.g. a certain polarization orientation. After the filter there is a line grating (20 lp/mm) onto which an image of the sample is incident, such that it becomes encoded with the grating modulation and corresponding rotation. The four channels are then recombined using beam splitters so that all the filtered images of the sample are spatially overlapped and imaged onto the detector.


Fig. 2. Passive FRAME experimental setup. Schematic showing the four uniquely modulated and filtered (color, polarization or glass plate) channels through which light from the sample is recorded in the experimental setup. BS = Beam splitter, F = Filter (Color filter/Polarization filter/Glass Plate), M = Mirror, G = Grating, L = Imaging lens.


3. Results and discussion

Through the simple switching of filters in each of its channels, we show experimentally that the passive FRAME setup can be used to image different depth-of-focus, polarization, temporal or spectral properties instantaneously. Static samples are imaged using the depth-of-focus, polarization and spectral setups, where the results are verified against their ground truths. Videography of a rotating computer fan is captured at 40 times the acquisition speed of the camera in the temporal setup. Finally, qualitative linear unmixing is demonstrated using fluorescent dyes in both static and dynamic samples.

3.1 Depth-of-focus imaging

Volumetric imaging requires in-focus image capture at a variety of depths, and acquisition of such data in stationary samples can be time consuming. If the sample is dynamic, sequential capture of the different depths is not feasible and therefore snapshot approaches are required. By replacing the filters in the passive FRAME setup with glass plates, each channel is reconfigured so that its imaging plane is shifted along the z-direction, thus capturing different depths-of-focus of a sample, in this case an arrangement of optical elements (Fig. 3).


Fig. 3. Snapshot imaging of multiple depths-of-focus. An arrangement of optical components, the object, are illuminated with white light and imaged with each channel, 1-4, corresponding to a different z plane. (a) Raw data image comprised of all four modulated channels. (b) Extracted image corresponding to a z = 0 cm imaging plane. (c) Extracted image corresponding to a z = 3 cm imaging plane. (d) Extracted image corresponding to a z = 5 cm imaging plane. (e) Extracted image corresponding to a z = 10 cm imaging plane.


3.2 Polarization imaging

To demonstrate the polarization imaging capability of passive FRAME, a white light source was used in a transmission imaging arrangement. The light was linearly polarized vertically (0°) before being incident on the sample, in this case partially overlapping shapes cut from transparent tape. Figure 4 shows the raw data, ground truth and extracted images for the sample. In this case each of the channels of the setup has a linear polarization filter, corresponding to light polarized at 0°, 45°, 90° and 135°. As expected, the signals in channels 1 and 3 are inverted, since channel 1 (0°) is fully transmissive and channel 3 (90°) is cross-polarized. In addition, comparing the extracted images with the ground truth images, we obtain R2 values above 0.99, confirming that the passive FRAME approach is accurate and well suited for polarization imaging.
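The R2 comparison between extracted and ground-truth images can be computed, for example, as the squared Pearson correlation of the flattened intensity vectors, which is insensitive to the arbitrary overall scale of the demodulated images. The paper does not spell out the exact definition used, so the metric below is an assumption for illustration:

```python
import numpy as np

def r_squared(extracted, truth):
    """Squared Pearson correlation of flattened intensities (scale-invariant)."""
    e = np.asarray(extracted, dtype=float).ravel()
    t = np.asarray(truth, dtype=float).ravel()
    return float(np.corrcoef(e, t)[0, 1] ** 2)

# Synthetic check: a rescaled, slightly noisy copy of a ground-truth image
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
extracted = 2.0 * truth + 0.1 + 0.02 * rng.standard_normal((64, 64))
r2 = r_squared(extracted, truth)
```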


Fig. 4. Transmission polarization images of layered transparent tape shapes. Sample transmission illuminated by linearly polarized (0°) white light. A raw data image captured from all four modulated channels. The Fourier Transform of the raw data image. Magnified regions of the raw data image where the modulations from the different channels are visible. The demodulated images of channels 1, 2, 3 and 4, extracted from the raw data image, corresponding to linear polarizations of 0°, 45°, 90° and 135° respectively. Ground truth images for each polarization channel.


3.3 Videography by time multiplexing

The passive FRAME setup was used for temporally resolved imaging of a dynamic sample, in this case a rotating computer fan, illuminated by a multi-colored pulse train, in transmission mode. Spectral filters in each channel were wavelength matched to each of the four pulses in the pulse train (see Fig. 13 in the appendix). The images from each channel therefore correspond to four different points in time. A single exposure image capturing the time sequence of the moving blade as it blocks the individual pulses was taken (Fig. 5(a)). Upon demultiplexing, a 2 kHz, four frame video, with a temporal resolution of 500 µs, was extracted (lower panel of Fig. 5). The results when using spectral filters paired with the multi-colored pulse train show how passive FRAME can be used to increase the maximum frame-rate of a camera, in this case by a factor of 40, demonstrating its temporal imaging capabilities.
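The frame-rate arithmetic follows directly from the pulse-train parameters: a 500 µs pulse separation gives a 2 kHz effective frame rate. The native camera frame rate is not stated in this section, so the value below is inferred from the reported 40x gain and is an assumption.

```python
# Effective frame rate from the pulse-train parameters reported in the paper.
pulse_separation_s = 500e-6               # 500 us between consecutive pulses
effective_fps = 1.0 / pulse_separation_s  # pulse-train (multiplexed) frame rate
camera_fps = effective_fps / 40.0         # native rate implied by the 40x gain
```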


Fig. 5. 40x increase in camera speed using passive FRAME. (a) A single-exposure transmission mode image of a rotating computer fan, illuminated by a multi-color pulse train. The dotted lines show the leading edge of the fan blade moving in the direction of the arrow as it is captured at four different points in time. (b) A magnification of the raw data. Four regions are visible corresponding to different combinations of the modulations. (c) The Fourier transform of the raw image where each cluster pair corresponds to a temporally separated snapshot image of the rotating fan blade. Upon demultiplexing, a 2 kHz time series of a rotating fan blade is shown in the lower panel. Each frame corresponds to the transmission of a pulse of given wavelength separated in time.


3.4 Spectral imaging and signal intensity balancing across channels

To demonstrate the spectral sensitivity of the passive FRAME approach, an X-Rite Color checker nano target was imaged using white light in an absorption/reflection arrangement. Figure 6(a) shows the raw data image of the sample, where, in the magnified region, the applied modulations can be seen. Since each of the squares in the X-Rite target is a different color, it is pertinent to examine their spectral characteristics, as obtained using the passive FRAME setup, more closely. Four different colored squares were selected and their spectral signatures are presented in the bar plots (Fig. 6). The spectral signatures of the regions differ, as we would expect given their origin from different colored regions, demonstrating the setup's spectral sensitivity and subsequent applicability for snapshot, multispectral imaging. Additionally, it can be noted that the variation in signal strength across the channels, e.g. when looking at region of interest (ROI) 1, is quite large. When there is such a mixture of low and high intensity signals, the low intensity signals can suffer from cross talk from the neighbouring signals with higher intensity. Despite this, passive FRAME still achieves R2 values of 0.97 or higher (Fig. 6) when comparing the extracted and ground truth images for the unbalanced results. Due to the construction of the experimental setup it is, however, possible to improve the results by using OD filters in the different channels in order to balance the signals. By doing so the cross talk can be reduced, yielding R2 values of 0.99 for all channels in the balanced results. Since the OD filter values are known, the channel intensities can then be rescaled to their original relative intensities, after extraction in the Fourier domain, in order to maintain the spectral sensitivity of the approach.
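The balancing scheme can be illustrated as follows: known OD (optical density) attenuations compress the dynamic range at the sensor, and, because the OD values are known, the original relative spectral signature is recovered exactly after extraction. The channel intensities and the 0.1-OD filter granularity below are hypothetical.

```python
import numpy as np

# Hypothetical relative channel signals for one ROI (strong mixed with weak)
true_signal = np.array([1.00, 0.04, 0.55, 0.08])

# Choose OD filters that attenuate the stronger channels toward the weakest
# one, rounded down to 0.1-OD steps (transmission = 10^-OD)
od = np.floor(np.log10(true_signal / true_signal.min()) * 10) / 10
recorded = true_signal * 10.0 ** (-od)  # balanced intensities at the sensor

# After Fourier-domain extraction, undo the known attenuation to restore
# the original relative spectral signature
restored = recorded * 10.0 ** od
```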


Fig. 6. Signal intensity balancing. An X-rite Color Checker imaged using white light illumination and four different spectral filters, one in each channel. The raw image of the target with all four modulated channels. The applied modulation patterns are visible in the magnified region. Regions of interest showing the spectral information of four regions of interest corresponding to four unique colors in the sample. Signal intensities for each of the channels are shown for both the unbalanced and balanced cases for the extracted and ground truth images. Demodulated channel images which are false colored images for channels 1 to 4 for the balanced and unbalanced extracted FRAME images compared with their corresponding ground truth images. R2 values are given for the balanced and unbalanced cases.


3.5 Fluorescence imaging and linear unmixing

Linear unmixing is a mathematical method for separating different constituents in a sample and is a powerful technique for fluorophore identification in fluorescence imaging [13,35]. If a sample contains several different fluorophores which each have a characteristic spectral emission, linear unmixing can be used to determine which of these fluorophores the sample contains and in which relative quantities. It is therefore possible to accurately image samples with multiple fluorophores whose fluorescence signals overlap and which are co-localized. Linear unmixing analysis is possible if one has access to a database of reference spectra for each of the different fluorophores that the sample may contain. A series of simultaneous equations, representing the captured spectrum in terms of the reference spectra, can then be solved in a least-squares sense for the coefficients of the reference spectra, and thereby the relative concentrations of each fluorophore.
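Stated as a linear algebra problem: with the reference spectra as columns of a matrix R, a measured four-channel spectrum m is modelled as m = Rc, and the concentrations c follow from a least-squares solve. The reference spectra and mixture below are invented for illustration; a non-negative solver (e.g. scipy.optimize.nnls) would enforce physical concentrations directly, whereas here negative estimates are simply clipped.

```python
import numpy as np

# Reference spectra: per-fluorophore intensity across the four spectral
# channels (rows = channels, columns = fluorophores). Values are illustrative.
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.2, 0.6],
              [0.0, 0.0, 0.3]])

# Measured four-channel spectrum of one pixel: a mixture of the dyes
true_conc = np.array([0.5, 0.0, 1.0])
measured = R @ true_conc

# Least-squares unmixing; concentrations are physical, so negative
# estimates (from noise, in practice) are clipped to zero
conc, *_ = np.linalg.lstsq(R, measured, rcond=None)
conc = np.clip(conc, 0.0, None)
```

Applied per pixel across the four demodulated channel images, this yields the relative-concentration maps used for the false-colored multispectral images.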

For the fluorescence imaging a blue LED was used in an absorption/reflection arrangement to image three different fluorescent dyes. The multiplexed image is shown in Fig. 7(a) along with the extracted images for each of the spectral channels. Analysis of the intensities of each of the dyes across the four spectral channels can be used to generate a catalogue of their spectral intensity profiles, or reference spectra. It is then possible to formulate simultaneous equations to describe the multiplexed image of the different dyes, in terms of these reference spectra, therefore linearly unmixing the dye letters. The result of this linear unmixing is that it becomes possible to identify which dye is present where, which is demonstrated in Fig. 7(b).


Fig. 7. Three fluorescent letters linearly unmixed. A sample of three fluorescent dyes illuminated with a blue LED (450 nm). The raw image of the dye letters L, T and H, captured using all four modulated channels with spectral filters. The inserts show the four channel intensities for each dye, i.e. the spectral intensity profile of each dye. Multispectral image of the linearly unmixed dyes, which are false colored red, green or blue.


Figures 8, 9 and 10 show a dynamic sample where the same three fluorescent dyes, characterised in Fig. 7, were dropped into water. A series of images was captured as a time series, whereby each image in the time series was an encoded snapshot image from all four spectral channels, i.e. allowing multispectral video capture. Figure 8(a) shows the raw data image from one of the moments in time captured in the time series. The Fourier transform of the image is seen in the inset, where it is possible to see four pairs of modulated data corresponding to light captured from each of the four channels. The data is demodulated, as seen in Fig. 8(c), (d), (e) and (f), then linearly unmixed and combined into a single multispectral image, as seen in Fig. 8(b). It can be seen from the linearly unmixed sub-images that, at the chosen moment in time, two of the three dyes are present, fluorophores 2 and 3. Figure 9 shows another moment from the same time series where only two of the three dyes are present, now fluorophores 1 and 3. Figure 10 shows 10 demodulated and linearly unmixed images from the time series of the dynamic sample, and a video animating the data set can be seen in Visualization 1.


Fig. 8. Linearly unmixed dynamic sample. An image of a dynamic sample of three fluorescent dyes dropped into water (side view). Two of the three dyes are visible in the selected image from the time series. (a) A raw data image captured from all four modulated channels. (b) A multispectral image of the demodulated and linearly unmixed data. (c) The demodulated image from channel 1 only. (d) The demodulated image from channel 2 only. (e) The demodulated image from channel 3 only. (f) The demodulated image from channel 4 only.



Fig. 9. Fluorescent dyes falling through water. An image of a dynamic sample of three fluorescent dyes dropped into water (side view). Only two dyes are visible in the selected image from the time series. (a) A raw data image captured from all four modulated channels. (b) A multispectral image of the demodulated and linearly unmixed data. (c) The demodulated image from channel 1 only. (d) The demodulated image from channel 2 only. (e) The demodulated image from channel 3 only. (f) The demodulated image from channel 4 only.



Fig. 10. Time series of linearly unmixed dynamic sample. A time series of multispectral snapshot images of a dynamic sample of three fluorescent dyes dropped into water (side view). At time 0s only one dye is present whereas at time 0.3s two are present and then at time 1.5s the first dye has passed and the latter two dyes are present. A video animating the data set is available online (Visualization 1).


Another dynamic sample was recorded, again using the same three fluorescent dyes, this time dropped into shallow water and viewed from above. An image showing a moment in time where the first two dyes have already been dropped and allowed to disperse in the water is shown in Fig. 11, along with a third dye seen as a droplet on the surface of the water. Shortly after this moment the droplet rapidly explodes across the water, and the full sequence can be seen in Visualization 2.


Fig. 11. Fluorescent dye droplet and two dispersed dyes. An image of a dynamic sample of three fluorescent dyes, two dropped and dispersed into water and one as a droplet on the water (top view). All three dyes are visible in the selected image from the time series. (a) A raw data image captured from all four modulated channels. (b) A multispectral image of the demodulated and linearly unmixed data. (c) The demodulated image from channel 1 only. (d) The demodulated image from channel 2 only. (e) The demodulated image from channel 3 only. (f) The demodulated image from channel 4 only. A video animating the data set is available online (Visualization 2).


4. Conclusion

In summary, this paper presents passive FRAME, a versatile multidimensional imaging approach. Imaging of depth-of-focus, polarization, temporal and spectral properties has been demonstrated experimentally. Imaging of these different dimensions was achieved by splitting light from a sample into four channels, where it was then filtered and spatially encoded. The multidimensionality of the approach demonstrates its applicability for imaging a broad variety of samples, from biological fluorescence imaging to molecular dynamics.

Since passive FRAME is a snapshot technique, it is much less sensitive to errors caused by movement in each of the extracted images; should the sample move during the acquisition, all channels are equally affected. It also means that multiple parameters can be acquired instantly, so dynamic samples can be imaged, enabling multidimensional videography. Conversely, if the samples are in fact stationary but in large quantities (many similar samples, e.g. biomedical samples), then the snapshot approach can reduce the total acquisition time, i.e. it has a high throughput.

A frame rate of 2 kHz has been demonstrated with the setup, corresponding to a 40x increase over the maximum frame rate of the camera. The setup is not restricted to transmission visualization, since the only requirement is that temporally separated events are filtered in each of the channels. In addition, since the temporal limitation is set by the properties of the pulse train, broadband femtosecond laser sources could, in principle, push this technique into the GHz regime.

In the multispectral imaging format of passive FRAME, linear unmixing, used for fluorescence imaging involving multiple different fluorophores, was demonstrated. To identify different fluorophores, specific filter sets are used to accurately determine their spectral intensity profiles. Since a large catalogue of fluorophores exists, imaging systems are often developed around such specific filter sets, i.e. for specific sample/fluorophore applications. The presented approach is therefore particularly compatible with fluorescence imaging, since any filters can be used, and quickly and easily changed as required, for a broad variety of applications.

Due to the encoding being achieved in the four individual channels, it is not only possible to vary the filter types to achieve different dimensional imaging, but also to control the intensities in each channel by adding ND filters. This is useful when studying samples where there are large variations in the emission intensities. Since, for snapshot imaging, acquisition is performed in a single exposure, these weaker signals would normally be overwhelmed, however the combination of the encoding and the individual channels in passive FRAME means this can be overcome.

Finally, beyond its multidimensional imaging properties, the setup is also versatile in its construction and in combination with other lab equipment. Since it is not integrated into the detector system, it is compatible with, in principle, all 2D detectors. As a result, the approach can be used in conjunction with users’ current detectors, reducing the cost of implementing it in existing research and industry environments. The setup is also fully static and constructed from standard optical components. As technological advances come about, in both detectors and optical elements, the passive FRAME setup can be upgraded to take advantage of such improvements without the need to invest in an entirely new imaging system. Moreover, passive FRAME is not fundamentally limited to four channels, and various solutions are available to increase the optical throughput and the number of images/dimensions instantaneously achievable, making it an even more powerful snapshot, multidimensional imaging approach.

Appendix

A1. Spatial resolution trade-off

Since FRAME applies a spatial modulation in order to encode the different dimensional information, it sacrifices spatial resolution in the resulting images. In some applications this compromise may prove too costly. The results in Fig. 12, showing an image of a resolution target (Edmund Optics 38256) as acquired using the proof-of-concept passive FRAME setup, demonstrate that 11.30 lp/mm was resolvable, together with the corresponding field of view (FOV) for this resolution. It should be noted, however, that depending on the application of the passive FRAME approach, the detector used and other optical elements, the achievable FOV and resolution will vary, with the potential for even higher resolutions.


Fig. 12. Spatial resolution and field of view. Resolution target imaged using the passive FRAME setup. The demodulated image from channel 1 of the setup (left) along with magnified regions of the higher resolution area of the target (right).


A2. Pulse train characteristics

A multi-colored pulse train was used for temporally resolved imaging in transmission mode. It was generated by modulating individual diode lasers of wavelengths 405 nm, 450 nm, 532 nm and 850 nm with a pulse separation of 500 µs. The pulse train was optimized such that the maximum frame rate could be achieved while still respecting a reasonable definition of temporal resolution: the arrival of a pulse should coincide with the fall time, ${\tau _{1/2}}$, of the previous pulse. The measurement was performed using a Thorlabs DET10A/M fast photodetector, yielding an average fall time of ${\tau _{1/2}} \approx 370\,\mu s$ for all four laser diodes (Fig. 13). This investigation shows that the setup is indeed limited by the pulse train characteristics.
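The fall-time definition used above, the time from the pulse peak until the signal crosses the half-maximum line, is straightforward to extract from a sampled photodetector trace. The sketch below applies it to a synthetic exponential decay (the detector and laser diodes themselves are not modelled, and the decay constant is chosen for illustration):

```python
import numpy as np

def fall_time_half_max(t, v):
    """Time from the pulse peak until the trace first falls below half maximum.

    Mirrors the definition above: intersect the trailing edge of the pulse
    with the half-maximum intensity line, using linear interpolation between
    the two bracketing samples for sub-sample accuracy.
    """
    i_peak = int(np.argmax(v))
    half = v[i_peak] / 2.0
    tail = v[i_peak:]
    below = np.nonzero(tail < half)[0]
    if below.size == 0:
        raise ValueError("signal never falls below half maximum")
    j = below[0]                       # first sample under half maximum
    t0, t1 = t[i_peak + j - 1], t[i_peak + j]
    v0, v1 = tail[j - 1], tail[j]
    t_cross = t0 + (half - v0) * (t1 - t0) / (v1 - v0)
    return t_cross - t[i_peak]
```

For an exponential decay with time constant $\tau$, the half-maximum crossing occurs at $\tau \ln 2$ after the peak, so a 370 µs fall time corresponds to $\tau \approx 534$ µs; with 500 µs pulse separation, the criterion that each pulse arrives at the previous pulse's fall time is approximately satisfied.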


Fig. 13. Pulse train characteristics. Four individual, color-coded pulses incident on the computer fan, measured using a Thorlabs DET10A/M photodetector. The fall time, calculated by finding the intersection between the trailing edge of each pulse and the half-maximum intensity line, is on average 370 µs for all four laser diodes.


Funding

Vetenskapsrådet (121892); European Research Council (803634).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. L. Gao and L. V. Wang, “A review of snapshot multidimensional optical imaging: measuring photon tags in parallel,” Phys. Rep. 616, 1–37 (2016). [CrossRef]  

2. A. Manakov, J. Restrepo, O. Klehm, R. Hegedus, E. Eisemann, H.-P. Seidel, and I. Ihrke, “A Reconfigurable Camera Add-On for High Dynamic Range, Multispectral, Polarization, and Light-Field Imaging,” ACM Trans. Graph. 32(4), 1 (2013). [CrossRef]  

3. J. T. Fredrich, “3D imaging of porous media using laser scanning confocal microscopy with application to microscale transport processes,” Physics and Chemistry of the Earth, Part A: Solid Earth and Geodesy 24(7), 551–561 (1999). [CrossRef]  

4. J. Kim and A. Ghosh, “Polarized Light Field Imaging for Single-Shot Reflectance Separation,” Sensors 18(11), 3803 (2018). [CrossRef]  

5. S. G. Demos and R. R. Alfano, “Optical polarization imaging,” Appl. Opt. 36(1), 150–155 (1997). [CrossRef]  

6. X. Xu, Y. Qiao, and B. Qiu, “Reconstructing the surface of transparent objects by polarized light measurements,” Opt. Express 25(21), 26296–26309 (2017). [CrossRef]  

7. J. Hunicz and D. Piernikarski, “Investigation of combustion in a gasoline engine using spectrophotometric methods,” Proc. SPIE 4516, 307–314 (2001). [CrossRef]  

8. P. Kauranen, S. Andersson-Engels, and S. Svanberg, “Spatial mapping of flame radical emission using a spectroscopic multi-colour imaging system,” Appl. Phys. B: Photophys. Laser Chem. 53(4), 260–264 (1991). [CrossRef]  

9. P. S. Hsu, D. Lauriola, N. Jiang, J. D. Miller, J. R. Gord, and S. Roy, “Fiber-coupled, UV–SWIR hyperspectral imaging sensor for combustion diagnostics,” Appl. Opt. 56(21), 6029–6034 (2017). [CrossRef]  

10. J. W. Lichtman and J.-A. Conchello, “Fluorescence microscopy,” Nat. Methods 2(12), 910–919 (2005). [CrossRef]  

11. D. M. Chudakov, S. Lukyanov, and K. A. Lukyanov, “Fluorescent proteins as a toolkit for in vivo imaging,” Trends Biotechnol. 23(12), 605–613 (2005). [CrossRef]  

12. C. E. Volin, B. K. Ford, M. R. Descour, J. P. Garcia, D. W. Wilson, P. D. Maker, and G. H. Bearman, “High-speed spectral imager for imaging transient fluorescence phenomena,” Appl. Opt. 37(34), 8112–8119 (1998). [CrossRef]  

13. T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS Lett. 546(1), 87–92 (2003). [CrossRef]  

14. A. Jung, R. Michels, and G. Rainer, “Portable snapshot spectral imaging for agriculture,” Acta agrar. Debr. 150, 221–225 (2018). [CrossRef]  

15. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]  

16. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008).

17. J. G. Dwight and T. S. Tkaczyk, “Lenslet array tunable snapshot imaging spectrometer (LATIS) for hyperspectral fluorescence microscopy,” Biomed. Opt. Express 8(3), 1950–1964 (2017). [CrossRef]  

18. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17(8), 6368–6388 (2009). [CrossRef]  

19. E. Kristensson, Z. Li, E. Berrocal, M. Richter, and M. Aldén, “Instantaneous 3D imaging of flame species using coded laser illumination,” Proc. Combust. Inst. 36(3), 4585–4591 (2017). [CrossRef]  

20. A. Ehn, J. Bood, Z. Li, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light: Sci. Appl. 6(9), e17045 (2017). [CrossRef]  

21. K. Dorozynska and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Opt. Express 25(15), 17211–17226 (2017). [CrossRef]  

22. Z. Li, J. Borggren, E. Berrocal, A. Ehn, M. Aldén, M. Richter, and E. Kristensson, “Simultaneous multispectral imaging of flame species using Frequency Recognition Algorithm for Multiple Exposures (FRAME),” Combust. Flame 192, 160–169 (2018). [CrossRef]  

23. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5(9), 1113–1127 (2018). [CrossRef]  

24. S. R. Khan, M. Feldman, and B. K. Gunturk, “Extracting sub-exposure images from a single capture through Fourier-based optical modulation,” Signal Process. Image Commun. 60, 107–115 (2018). [CrossRef]  

25. M. Gragston, C. D. Smith, and Z. Zhang, “High-speed flame chemiluminescence imaging using time-multiplexed structured detection,” Appl. Opt. 57(11), 2923–2929 (2018). [CrossRef]  

26. X. Liu, J. Liu, C. Jiang, F. Vetrone, and J. Liang, “Single-shot compressed optical-streaking ultra-high-speed photography,” Opt. Lett. 44(6), 1387–1390 (2019). [CrossRef]  

27. J. Liang, L. Zhu, and L. V. Wang, “Single-shot real-time femtosecond imaging of temporal focusing,” Light: Sci. Appl. 7(1), 42 (2018). [CrossRef]  

28. M. Gragston, C. Smith, D. Kartashov, M. N. Shneider, and Z. Shang, “Single-shot nanosecond-resolution multiframe passive imaging by multiplexed structured image capture,” Opt. Express 26(22), 28441–28452 (2018). [CrossRef]  

29. K. Nakagawa and A. Iwasaki, “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014). [CrossRef]  

30. T. Suzuki and R. Hida, “Single-shot 25-frame burst imaging of ultrafast phase transition of Ge2Sb2Te5 with a sub-picosecond resolution,” Appl. Phys. Express 10(9), 092502 (2017). [CrossRef]  

31. B. K. Gunturk and M. Feldman, “Frequency division multiplexed imaging,” Proc. SPIE 8660, 86600P (2013). [CrossRef]  

32. M. Gragston, C. D. Smith, J. Harrold, and Z. Zhang, “Multiplexed structured image capture to increase the field of view for a single exposure,” OSA Continuum 2(1), 225–235 (2019). [CrossRef]  

33. E. Kristensson, A. Ehn, and E. Berrocal, “High dynamic spectroscopy using a digital micromirror device and periodic shadowing,” Opt. Express 25(1), 212–222 (2017). [CrossRef]  

34. E. Berrocal, J. Johnsson, E. Kristensson, and M. Aldén, “Single scattering detection in turbid media using single-phase structured illumination filtering,” J. Europ. Opt. Soc. Rap. Public. 7, 12015 (2012). [CrossRef]  

35. T. Zimmermann, “Spectral imaging and linear unmixing in light microscopy,” Adv. Biochem. Eng./Biotechnol. 95, 245–265 (2005). [CrossRef]  

Supplementary Material (2)

Name        Description
Visualization 1       Three fluorescent dyes are dropped into a cuvette of water and their dynamic motion as they fall is captured in a series of multiplexed, multispectral snapshot images (side view). The images are computationally separated and linearly unmixed before f
Visualization 2       Three fluorescent dyes are dropped into shallow water and captured from above. A series of multiplexed, multispectral snapshot images capture the first two dyes spreading through, and the droplet of the third dye exploding across, the water. The imag


