
Novel time-resolved camera based on compressed sensing


Abstract

Time-resolved cameras with high temporal resolution (down to ps) enable a huge set of novel applications ranging from biomedicine and environmental science to material and device characterization. In this work, we propose, and experimentally validate, a novel detection scheme for time-resolved imaging based on a compressed sampling approach. The proposed scheme unifies into a single element all the required operations, i.e. space modulation, space integration and time-resolved detection, paving the way to dramatic cost reduction, performance improvement and ease of use.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light sampling in space and time, with high temporal resolution (down to ps), has a wealth of potential applications in many fields requiring the study of processes characterized by fast dynamics with spatial variations. This occurs whenever fluorescence is exploited as a reporter in biology, medicine or material science. Further applications can be envisaged in environmental monitoring or industrial research. In the biomedical field, beyond fluorescence imaging, the possibility of imaging the light propagating in a highly scattering medium, such as a biological tissue, allows one to recover its optical properties, which can correlate with tissue composition (absorption), microstructure (scattering) and endogenous/exogenous markers. These parameters provide, in turn, useful information for diagnostic (e.g. tumor detection), functional (e.g. brain) and molecular imaging studies either in experimental medicine or clinical practice [1,2]. Time-Gated Fluorescence Imaging has been used in experimental medicine to discriminate nanoparticle emission from tissue autofluorescence [3]. In cell biology, Fluorescence Lifetime Imaging (FLIM) is widely used to study the interaction of fluorophores with the microenvironment [4,5]. This includes, in turn, highly relevant biological information like pH, ion (Ca$^{2+}$, Mg$^{2+}$, etc.) concentration and molecular coupling (FRET) [6,7]. Beyond biomedical applications, it is worth mentioning environmental monitoring, LIDAR systems for automotive applications, and FLIM for the characterization of combustion processes [8].

Intensified cameras (ICCDs) allow one to directly acquire a time-gated image, but the temporal resolution is rather low (minimum gate of about 100 ps and electronic jitter higher than 50 ps), the dynamic range is limited and the cost is high. Alternatively, multianode photomultipliers provide a higher temporal resolution, but with a restricted spatial sampling (e.g. 32×32 elements [9]). Recently, a new generation of detector arrays has become available, such as Single-Photon Avalanche Diode (SPAD) arrays and Silicon Photomultipliers (SiPM), featuring single photon sensitivity and high temporal resolution [10–12]. Though the number of elements (pixels) has rapidly increased during the last few years (e.g. 192×128 [13]), also thanks to the emergence of 3D stacking technology [14], their maximum number is still limited by the electronics required to sense the avalanche ignition and to time-tag each photon [15,16].

In a different implementation, where a single time-resolved detector serves the entire area of interest, a scanning system can be employed. This approach exploits the higher performance, in terms of temporal resolution and spectral sensitivity, of a point-like detector compared to an array. Generally speaking, the temporal sampling of the light emission is obtained by coupling the detector to a Time Correlated Single Photon Counting (TCSPC) board or to a Streak Camera system, while the region of interest is scanned over one or two dimensions [17]. The main drawback is the longer acquisition time compared to parallel detectors. It is worth mentioning that, beyond the spatial and temporal resolution, the cost, ease of use and reliability are fundamental parameters to take into consideration.

In the last 10 years, the topic of compressed sensing has received a lot of attention due to its potential to exploit the sparsity of images to significantly reduce the number of measurements required to capture the information in a data set, compared to what is required by the Shannon-Nyquist sampling theorem for conventional data acquisition approaches [18]. This has strong implications, as an example, in those research areas where the total radiation exposure has to be limited not to damage the sample. The implementation of this idea resulted in the development of the Single Pixel Camera, and was later extended to a wide variety of applications [19,20]. The basic idea is to replace the parallel detector with a spatial modulator, e.g. a Digital Micromirror Device (DMD) or Spatial Light Modulator (SLM), coupled to a single detector. An optical system forms the image of the sample on the spatially modulating device, which displays a specific pattern. The light exiting from it, which is the product of the image and the pattern, is focused on a single-pixel detector. By changing the modulation patterns (typically belonging to an orthonormal basis set), it is possible to sample the image in the spatial frequency domain, rather than raster scanning the field of view with the light beam or the detector. Finally, by applying an inversion algorithm, the image of the sample can be recovered.
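As a minimal formalization of this acquisition scheme (the notation is introduced here only for illustration and is not taken from the cited references), the k-th measurement is the inner product between the vectorized image $x \in \mathbb{R}^N$ and the k-th pattern $p_k$:
\[
y_k = \langle p_k, x \rangle = \sum_{i=1}^{N} p_{k,i}\, x_i, \qquad k = 1,\dots,M.
\]
With an orthonormal basis and $M = N$ patterns the image is recovered exactly by the inverse transform $x = \sum_k y_k\, p_k$, while compressed sensing exploits sparsity to allow $M \ll N$ at the cost of a nonlinear reconstruction algorithm [18].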

In particular, this approach is quite useful when the problem is sparse or the spatial information has a rather limited bandwidth, as in the case of optical imaging of highly scattering media [21]. In these cases, sampling can be conveniently performed by directly capturing only the coefficients of the spatial frequencies of interest, rather than sampling in direct space. It is worth noting that such an approach allows one to image a sample using a point-like detector, which generally has superior performance with respect to a parallel detection matrix. Recently, this scheme has been coupled to a TCSPC chain to perform optical measurements resolved in space and time [22].

In this work, we propose a novel gated camera scheme that greatly simplifies the typical operation of a single pixel camera by removing the need for a spatial modulator. This makes it possible to build an all-in-one fast gated camera featuring picosecond sampling, single photon sensitivity, and ease of operation.

2. Materials and methods

The basic idea of the proposed method is sketched in Fig. 1: it relies on the use of a high-density array of detection elements, operating in the single photon regime, which can be selectively enabled/disabled. All pixels are connected to one single timing circuit (e.g. a Time-to-Digital Converter (TDC) or a Time-to-Amplitude Converter (TAC)) operating for the whole detector array. The method can be schematically divided into the following steps: i) the subject under investigation is imaged onto the detector array by means of an optical system; ii) an enabling pattern is applied to the detector array by a suitable activation scheme; iii) the output signals corresponding to single photon ignitions, from all active elements, are conveyed by a single line (wired OR) to the timing circuit; iv) the single photon signal of the first firing element triggers the timing circuit to store the photon event in the proper time bin; v) a multitude of single photon events are arranged in a time histogram to obtain the temporal profile of the light detected within the pattern of step ii). By collecting many single photon signals for each activation pattern and repeating the measurement for a proper set of patterns, the optical signal is sampled in space and time and high temporal resolution images can be reconstructed via an inverse transform or more complex computational techniques [23], as sketched in the example below. In particular, Walsh-Hadamard (WH) patterns have been used in the present work [24,25]. This basis, made up of binary masks, is quite suitable for digital spatial modulation, as required by our method. Since WH patterns take values −1 and +1, each pattern was acquired as two measurements with complementary binary masks. By applying an inverse WH transform, it is possible to obtain a stack of gated images as a function of time.
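To make the acquisition and reconstruction steps concrete, the following sketch simulates the whole pipeline under simplifying assumptions: a noiseless synthetic scene, an ideal detector, and the naturally ordered Hadamard basis provided by scipy in place of the sequency-ordered WH masks used in the paper; all variable names are illustrative and do not refer to the actual device interface.

```python
import numpy as np
from scipy.linalg import hadamard

# --- Illustrative sizes (assumptions): 16x16 image, 128 TDC time bins ---
N = 16                                   # image side, pixels
T = 128                                  # number of time bins
H = hadamard(N * N)                      # (N^2 x N^2) +/-1 Hadamard matrix (natural order)

# Synthetic time-resolved scene: a bright rectangle with a 1 ns exponential decay.
scene = np.zeros((N, N))
scene[4:10, 5:12] = 1.0
t = np.arange(T) * 0.1                   # time axis in ns (100 ps bins, arbitrary)
cube = scene[..., None] * np.exp(-t / 1.0)[None, None, :]   # (N, N, T) ground truth

# Acquisition: each +/-1 pattern is realized as two complementary binary masks
# (enabled/disabled pixels); one time histogram is recorded per mask.
x = cube.reshape(N * N, T)               # vectorized scene
pos = (H + 1) // 2                       # binary mask selecting the +1 entries
neg = 1 - pos                            # complementary mask (the -1 entries)
hist_pos = pos @ x                       # (N^2, T) histograms with the "+" masks
hist_neg = neg @ x                       # (N^2, T) histograms with the "-" masks
y = hist_pos - hist_neg                  # equivalent +/-1 WH measurements

# Reconstruction: inverse Hadamard transform applied to every time bin
# (H is symmetric and H @ H = N^2 * I, so the inverse is simply H / N^2).
recon_cube = ((H @ y) / (N * N)).reshape(N, N, T)    # stack of time-gated images

print(np.allclose(recon_cube, cube))     # True in this ideal, noiseless example
```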

Fig. 1. Scheme of the proposed detection approach. i) detector array; ii) enable/disable pattern circuit; iii) active elements summed to a single Common Line; iv) timing circuit; v) time histogram.

Figure 2 schematically shows the experimental setup. We employed a supercontinuum laser source (SuperK Extreme, NKT Photonics), emitting light pulses ($\sim$10 ps width) in the 400-2400 nm spectral range, with a variable repetition rate (2 to 80 MHz). The supercontinuum is spectrally dispersed by a Pellin-Broca prism followed by a lens focusing the dispersed beam on a 50 $\mu$m graded-index fiber, which acts as a spatial filter. The distal end of the fiber is connected to a collimator providing a beam of about 2 mm diameter, which is delivered to the sample. More details on the system can be found in [26]. An objective lens (f = 25 mm) images an area of 50×50 mm$^2$ in the output plane of the sample onto the 4×4 mm$^2$ detector area. As a detector we employed a digital silicon photomultiplier (dSiPM, DPC3200-22-44, Philips Digital Photon Counting, Aachen, Germany). A detailed description of the detector and its implementation is reported in Appendix A. The activation patterns were 16×16 WH binary masks implemented by properly binning 4×4 elements on a 64×64 SPAD area, as sketched below. The choice of a 16×16 WH basis is compatible with the spatial bandwidth of the objects imaged in the experiments presented in this work. In order to obtain both positive and negative values of the WH patterns, the number of measurements is doubled, leading to a total of 512 measurements. This implementation has the advantage of being insensitive to the optical crosstalk between enabled and disabled SPADs, because a disabled SPAD cannot count photons at all. Moreover, due to the connection through the wired OR, crosstalk between active elements does not affect the image quality because no direct image is acquired.
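The mapping from a 16×16 WH mask to the 64×64 SPAD enable map can be sketched as follows (a minimal illustration: the function names, the natural-order Hadamard basis and the 0/1 encoding are assumptions, and the actual loading format of the device is not reproduced here):

```python
import numpy as np
from scipy.linalg import hadamard

def wh_mask(k, n=16):
    """Return the k-th n x n +/-1 Hadamard mask (natural ordering, illustrative choice)."""
    return hadamard(n * n)[k].reshape(n, n)

def activation_pair(mask16):
    """Expand a 16x16 +/-1 mask to a 64x64 SPAD enable map by binning 4x4 SPADs,
    returning the complementary pair of binary (0/1) activation patterns."""
    enable64 = np.kron(mask16, np.ones((4, 4), dtype=int))   # 64x64 map of +/-1
    pos = (enable64 + 1) // 2            # SPADs enabled for the +1 entries
    neg = 1 - pos                        # complementary pattern for the -1 entries
    return pos, neg

pos, neg = activation_pair(wh_mask(5))
print(pos.shape, pos.sum() + neg.sum())  # (64, 64) and 4096: every SPAD enabled exactly once
```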

Fig. 2. Experimental set-up: a) time-resolved fluorescence imaging; b) time-resolved imaging through a scattering medium.

As a preliminary test, we acquired the image of a target consisting of a white capital letter “F” printed on a black background, see Dataset 1 [27]. The reconstruction shows a good imaging capability of the system.

First, we carried out a fluorescence lifetime measurement in order to experimentally demonstrate the time-resolved imaging capability of the proposed system (see Fig. 2(a)). The laser emission was set at 630 nm with 5 nm bandwidth, a light power of about 100 $\mu$W and a repetition rate of 2 MHz. The light beam traveled twice through a cuvette filled with a fluorescent dye (Nile Blue). The double passage was achieved by means of a delay line set by a mirror placed 40 cm away from the cuvette. The reflected beam was slightly tilted; hence, the fluorescence signal originating from it was temporally and spatially separated from that of the incoming beam. A long-pass filter with cut-off wavelength at 650 nm was placed in front of the detector to remove the excitation light. 512 WH binary patterns, corresponding to 256 pairs of complementary patterns for the 16×16 basis, were sequentially loaded on the device. The signal was acquired immediately after the loading process of each pattern. Due to the hardware limitations explained in Appendix A, the maximum achievable collection rate was 17 kcps, which led to an acquisition time-per-pattern of about 23 s to achieve an average count of 400,000 photons. The Instrumental Response Function (IRF), mainly due to the temporal jitter among the SPADs composing the device, was around 470 ps FWHM.
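A mono-exponential re-convolution fit against such an IRF, as used for the lifetime analysis in Section 3, can be sketched as follows (a minimal example on synthetic data: the Gaussian IRF shape, the time axis and all parameter values are assumptions chosen only to mimic the numbers quoted in the paper, not the actual analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative TCSPC axis and a Gaussian IRF with ~470 ps FWHM (sigma ~ 0.2 ns).
dt = 0.1                                          # bin width, ns
t = np.arange(0, 25, dt)                          # time axis, ns
irf = np.exp(-0.5 * ((t - 5.0) / 0.2) ** 2)
irf /= irf.sum()                                  # normalized IRF

def model(t, amplitude, tau):
    """Mono-exponential decay re-convolved with the measured IRF."""
    decay = np.exp(-t / tau)
    return amplitude * np.convolve(irf, decay)[: len(t)]

# Synthetic "measured" histogram with Poisson noise; true lifetime 1.05 ns.
rng = np.random.default_rng(0)
data = rng.poisson(model(t, 4e5, 1.05))

popt, _ = curve_fit(model, t, data, p0=(1e5, 0.5))
print(f"fitted lifetime: {popt[1]:.2f} ns")
```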

A second set of measurements was carried out with the aim of assessing the optical parameters (absorption coefficient $\mu _a$ and reduced scattering coefficient $\mu _s'$) of a highly scattering sample, mimicking a typical biological tissue. For this experiment (see Fig. 2(b)), the laser emission was set at 675 nm with 5 nm bandwidth, a light power of about 500 $\mu$W and a repetition rate of 2 MHz, which allowed us to reach the same photon counts as in the fluorescence experiment. The sample consisted of a homogeneous slab (64×64×12 mm$^3$), whose optical properties ($\mu _a$ = 0.01 mm$^{-1}$, $\mu _s'$ = 1 mm$^{-1}$) had previously been characterized with a state-of-the-art time-resolved spectroscopy system [26]. The sample was analyzed in transillumination geometry. The light exiting the sample was imaged on the detector with the same optical system used for the first experiment. 16×16 WH patterns were used here as well to reconstruct the temporal and spatial distribution of the diffused light.

3. Results and discussion

Figure 3 displays two images of the fluorescence signal emitted by the Nile Blue dye inside the cuvette at two different time windows, centered on the two fluorescence peaks, showing the passage of the incoming and returning beams, before and after the reflection by the mirror (as shown by the movie in Visualization 1). The temporal delay between the maximum intensity of the two beams is 2.7 ns, in agreement with the additional travel distance (80 cm) of the beam reflected by the mirror. A mono-exponential fit taking into account the IRF is also shown, with fitted decay times of 1.04 ns and 1.09 ns for the incoming and reflected beam, respectively. The measured lifetimes are very similar to each other, apart from a minor difference (within 5$\%$) due to the slightly higher noise level of the reflected beam. The measured values are in fairly good agreement with the literature, even if different lifetimes are reported for different mixtures of solvents.
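As a quick consistency check of the measured delay (using the round-trip path length quoted above and approximating the speed of light in air by $c$):
\[
\Delta t \approx \frac{2 \times 0.40\ \mathrm{m}}{c} = \frac{0.80\ \mathrm{m}}{3.0 \times 10^{8}\ \mathrm{m/s}} \approx 2.7\ \mathrm{ns},
\]
consistent with the observed separation between the two fluorescence peaks.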

Fig. 3. Images (temporal bin width of 400 ps) taken at the first (a) and second (b) passage of the beam in the cuvette (depicted in overlay) after the reflection on the mirror (see Visualization 1). The images correspond to the two fluorescence peaks, which occurred at 6.4 ns and 9.1 ns after excitation, respectively. c) Temporal profile (dots) extracted from the highest intensity pixel of the images together with the fitted mono-exponential model (line). The slightly negative counts in the second decay are due to the low contribution of the second trace to the overall signal in the 6-8 ns interval, where it is overwhelmed by the first one.

Figures 4(a) and 4(b) show the reconstructed output images, derived from the experiment on the diffusive phantom, at two different temporal windows (width of 400 ps) taken 500 ps and 1 ns after the center of gravity of the IRF, which was obtained by replacing the slab under analysis with a thin paper sheet. We clearly observe the broadening of the spatial profile as the delay increases, due to the longer paths followed by photons inside the highly scattering medium. The raw data of both experiments are available in Dataset 1 [27], together with a code performing the WH inversion and generating the time-resolved images.

Fig. 4. Images (temporal window width of 400 ps) taken at 500 ps (a) and 1 ns (b) after the IRF center of gravity. c) Temporal profiles of two spatial frequencies (dot) together with the fitted analytical model (line). The IRF is also plotted.

In order to characterize the optical parameters of the phantom by exploiting both the spatial and temporal capabilities of the proposed method, we fitted the time-resolved data to an analytical model describing the temporal propagation of the K-vectors of sinusoidal functions through a medium [21]. In particular, the Fourier transform of the light pattern in the output plane was calculated at every time bin and two temporal profiles at different wave vectors ($K_x = K_y =$ 0 and $K_x = K_y =$ 0.135 mm$^{-1}$) were selected. Then, the equation describing the amplitude of a Fourier component transmitted through a diffusive medium as a function of time (Eq. (4) of Ref. [21]) was fitted, after convolution with the IRF, to the two profiles shown in Fig. 4. This allowed us to recover the optical properties of the homogeneous phantom ($\mu _a$ = 0.011 mm$^{-1}$ and $\mu _s'$ = 1.04 mm$^{-1}$), in good agreement with the ones previously measured.
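The extraction of the temporal profile of a single spatial-frequency component from the reconstructed image stack can be sketched as follows (an illustrative snippet: the image cube is synthetic, the frequency indexing convention and the field-of-view value are assumptions, and the diffusion model of Eq. (4) in Ref. [21] used for the actual fit is not reproduced here):

```python
import numpy as np

def k_profile(cube, fov_mm, nx, ny):
    """Temporal profile of one spatial-frequency component of an (N, N, T) image stack.

    cube   : reconstructed time-gated images, shape (N, N, T)
    fov_mm : field-of-view side length in mm (frequency sampling step 2*pi/fov_mm)
    nx, ny : integer frequency indices along x and y
    """
    spectrum = np.fft.fft2(cube, axes=(0, 1))          # 2D FFT at every time bin
    k = (2 * np.pi * nx / fov_mm, 2 * np.pi * ny / fov_mm)
    return np.abs(spectrum[nx, ny, :]), k              # amplitude vs. time, (K_x, K_y)

# Example on a synthetic stack: 16x16 images, 128 time bins, 50 mm field of view.
cube = np.random.rand(16, 16, 128)
dc_profile, k0 = k_profile(cube, fov_mm=50.0, nx=0, ny=0)   # K_x = K_y = 0 component
ac_profile, k1 = k_profile(cube, fov_mm=50.0, nx=1, ny=1)   # lowest non-zero K_x = K_y
print(k1)   # ~ (0.126, 0.126) mm^-1 with these illustrative values
```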

The proposed time-resolved camera implements the typical advantages of compressed sensing methods in simple and compact hardware. All detection elements (space modulation, space integration and time-resolved detection) have been unified into a single device, without the need of an external spatial light modulator. Furthermore, this approach is based on a detector array whose most complex element, i.e. the timing circuit (e.g. the Time-to-Digital Converter, TDC), is shared among all pixels, while still preserving precise photon timing across all sensitive elements. This allows one to simplify the circuitry required for each pixel and hence to increase the fill factor of the detector. In particular, the number of pixels can be greatly increased, while the fabrication cost is reduced, owing to the smaller silicon area required with respect to implementations that use a timing circuit for each pixel [16]. Moreover, the switching rate among different patterns can be much higher (up to the MHz regime) with respect to a DMD, which is limited to tens of kHz by the mechanical switching of the individual reflecting elements. Another advantage of this acquisition scheme is the limited volume of data to be transferred, compared to the configuration employing one timing circuit per pixel. The proposed method, being a single photon technique, provides ultimate sensitivity and dynamic range, definitely not achievable with fast gated cameras. A further advantage is the possibility to tune the acquisition time to the required spatial resolution. In fact, a few WH patterns already give a rough reconstruction of the spatial profile, which might be sufficient in several applications, while a larger number of patterns can provide high resolution when needed (e.g. in microscopy). Adaptive methods could also be implemented to optimize the acquisition time by tailoring the patterns to the morphology of the sample. Furthermore, in order to overcome the pile-up limit imposed by single photon detection, we foresee the use of a limited number of timing circuits, each of them coupled to a set of pixels, while preserving the advantages in terms of fill factor and production cost.

Finally, we note that the specific device used to experimentally demonstrate the proposed method is far from optimized. The chip, mainly designed for high-energy physics applications, shows significant limitations for our purposes in terms of detection area (a small portion of a single tile) and acquisition rate (maximum photon counting rate of 17 kHz to avoid statistical distortion in the histogram). However, considering the recent developments in the field of SPAD arrays, the design of an optimized device does not pose any significant challenge. In the present paper we aim at demonstrating the feasibility of a novel time-resolved camera scheme, while the optimization of the hardware is beyond the scope of this work.

4. Conclusions

In conclusion, in this work we have proposed a novel time-resolved camera scheme based on a compressive sensing approach, and we have experimentally validated it by temporally and spatially monitoring the light propagating through a fluorescent dye and a highly scattering medium. The proposed method includes all the required elements in a single chip, making the device much simpler and more compact compared to the implementations proposed so far. Moreover, the possibility of capturing time-resolved images using a single timing circuit makes it possible to save memory on the device and to increase performance. The method can be conveniently employed in many applications where a gated camera or a time-resolved scanning system is currently used to acquire the spatial distribution of a time-resolved optical pattern. It is worth stressing that a detector array where each element is capable of timing individual photons would overrule the need for compression; yet, the proposed scheme with a single (or few) TDC/TAC is driven by simplicity and cost, rather than performance. In particular, the potential cost reduction, ease of use and simple integration in a more complex system make our time-resolved imaging scheme an ideal detector for biomedical applications, LIDAR, environmental monitoring and material science. Furthermore, emerging research fields, like self-driving cars, are eager for new technologies for range finding and mapping of fast-changing scenarios. Currently, we are working on the realization of optimized hardware and on the implementation of advanced compressive sensing schemes based on adaptive and non-adaptive patterns, in order to reach the goal of sub-second acquisition of time-of-flight images.

A. Appendix: details on the device

A.1 Device description

In order to show a possible implementation of the proposed method, we employed a digital silicon photomultiplier (dSiPM, DPC3200-22-44, Philips Digital Photon Counting, Aachen, Germany). Differently from standard analog SiPMs, a dSiPM can also integrate CMOS logic on the same silicon die, thus allowing on-chip data processing [28,29]. It is worth noting that this very versatile device is designed for high-energy physics applications, and is therefore not optimized for our purpose, embedding functions that are surely relevant for other end users but not useful for our particular application of time-correlated single-photon counting. Still, to the best of our knowledge, it is presently the only microelectronic detector available on the market that can be used to demonstrate the feasibility of our method. To facilitate the replication of our results and the reuse of our data, the manufacturer's terminology is adopted throughout Appendix A.

As shown in Fig. 5(a), the DPC3200-22-44 detector consists of an assembly (named a “tile” by the manufacturer) of 4×4 “dice”, each one divided into 2×2 identical elements (named “pixels” by the manufacturer). Each pixel is an array of 3200 (50×64) avalanche photodiodes (named “cells” by the manufacturer) operated in the so-called Geiger mode (i.e. single-photon avalanche diodes, SPADs). Each pixel is, in turn, divided into 2×2 identical elements (named “sub-pixels” by the manufacturer) made up of 25×32 SPADs. Table 1 summarizes all the sensor parts with their relevant characteristics.

Fig. 5. (a) Structure of the DPC3200-22-44. Each die, integrating a single common TDC and counter, is divided into 4 pixels. (b) Example of an illumination pattern superimposed on a die of the DPC3200-22-44. In the current implementation the pattern is superimposed on die 9.

Table 1. DPC3200-22-44 characteristics

All the SPADs of a die are connected to the same integrated time-to-digital converter (TDC), which measures the arrival time of the first photon triggering one of the SPADs, as well as to an integrated counter, which counts the number of photons detected by the whole die within a given temporal window starting from the first detected photon. This feature is particularly useful in high-energy physics applications, as the number of photons generated by a particle in a scintillator crystal can be related to its energy. However, in our application we had to keep the die photon detection rate lower than one photon per laser pulse in order to limit the pile-up distortion of time-correlated single-photon counting (TCSPC) reconstructions.
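The pile-up constraint can be quantified with standard Poisson statistics (a generic TCSPC argument, not a device specification): if $\mu$ is the mean number of detected photons per laser pulse, the fraction of recorded events in which more than one photon actually arrived, with only the first one being time-stamped, is
\[
P(\geq 2 \mid \geq 1) = 1 - \frac{\mu\, e^{-\mu}}{1 - e^{-\mu}} \approx \frac{\mu}{2} \quad \text{for } \mu \ll 1,
\]
so keeping the detection rate well below one photon per pulse (e.g. $\mu$ of the order of a few percent) keeps the histogram distortion at the percent level.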

As mentioned in the main text, the activation patterns were 16×16 Walsh-Hadamard (WH) binary masks, implemented by binning 4×4 SPADs and thus using an array of 64×64 SPADs on a single die. Since each pixel of a die is composed of an array of 50×64 cells (see Table 1), each WH pattern was implemented using an entire pixel as well as a portion (14×64) of the adjacent one, as shown in Fig. 5(b).

Each activation pattern was loaded before the acquisition in the “inhibit memory” of the device, which allows the physical enabling/disabling of each cell.

A.2 TCSPC implementation

In our experimental system, the laser source (SuperK Extreme, NKT Photonics) provides Nuclear Instrumentation Module (NIM) standard trigger-out pulses at a minimum repetition rate of 2 MHz, while the external synchronization (SYNC-EXT) input of the DPC3200-22-44 requires a Low-Voltage Transistor-Transistor Logic (LV-TTL) standard signal at a maximum rate of 1 MHz. Hence, the laser NIM output was converted into an LV-TTL signal using a custom printed circuit board featuring a negligible time jitter (< 6 ps), and the converted signal was provided to the DPC3200-22-44 SYNC-EXT input. Half of the laser trigger pulses were simply neglected by the DPC3200-22-44. It is worth noting that each die of the DPC3200-22-44 is designed to allow the triggering of the time-stamp generation by the TDC not only when a SPAD of the die is ignited by a photon, but also when a SYNC-EXT pulse arrives at the die; only the first occurrence between these two events can trigger the TDC. The DPC3200-22-44 was configured to distribute the external synchronization to all the 16 dice; however, only a single die was enabled to detect photons. In this way, the active die can be triggered by the first occurring event between a SPAD ignition and the SYNC-EXT pulse, while the other 15 disabled dice can be triggered only by the SYNC-EXT pulse. Time stamps generated by the TDCs are microscopic times (datasheet specifications: 19.5 ps bin width, 512 time bins, 10 ns full-scale range, 44 ps timing resolution) referred to the DPC3200-22-44 internal 200 MHz clock. Hence, if a die is active, its TDC provides the microscopic time delay between an internal clock pulse and the first occurrence between a photon detection and a SYNC-EXT pulse. On the contrary, the TDCs of disabled dice provide the microscopic time delay between an internal clock pulse and the SYNC-EXT pulse. Thus, to compute the microscopic delay of the detected photon with respect to the laser pulse synchronization, we compute the difference between the time stamps provided by the active die and those provided by a disabled die.
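A minimal sketch of this time-stamp differencing (the data layout, the variable names and the wrap-around handling are assumptions made for illustration; the actual raw-data format of the device is not reproduced here):

```python
import numpy as np

TDC_BIN_PS = 19.5          # TDC bin width (datasheet), ps
N_BINS = 512               # TDC full scale: 512 bins (~10 ns)

def photon_delay_ps(code_active, code_reference):
    """Microscopic photon delay with respect to the laser SYNC-EXT pulse.

    code_active    : TDC code from the enabled die (first photon or SYNC-EXT)
    code_reference : TDC code from a disabled die (SYNC-EXT only)
    The difference is wrapped into [0, N_BINS) to handle a possible roll-over
    between consecutive edges of the internal 200 MHz clock.
    """
    return ((np.asarray(code_active) - np.asarray(code_reference)) % N_BINS) * TDC_BIN_PS

# Example with made-up TDC codes for a handful of events.
active = np.array([130, 305, 77, 410])
reference = np.array([100, 100, 100, 100])
print(photon_delay_ps(active, reference))   # delays in ps
```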

In high-energy physics, the DPC3200-22-44 is used to estimate the energy of a particle travelling inside a scintillator crystal. Thus, the first photon arriving on a die also enables the die photon counter to count the number of SPAD ignitions after the first one. Usually, to retain only significant signals, events are processed only if the number of ignited SPADs on the die is large enough, making it possible to distinguish simple dark counts from events generated by a high-energy particle. To this purpose, the device allows setting a “trigger threshold” (i.e. the minimum number of ignited SPADs on a die required for the event to be processed by the following electronics, see the DPC3200-22-44 user manual). In our case, this threshold was set to “1” in order to allow the processing of single photons. Additionally, in high-energy physics it might be useful to allow the digital electronics to process an event only if it occurs within a given temporal window; thus, the DPC3200-22-44 also requires a timing validation condition before data processing. When a TDC of one die is triggered by the SYNC-EXT pulse, the time stamp is automatically processed by the attached electronics with no need to fulfill a validation condition, while photon ignitions are handled by a so-called “validation logic”. In our case, as a validation condition we set a maximum time delay of 40 ns between a photon detection and the simultaneous ignition of all the 4 sub-pixels composing a pixel. Since the device works in the single-photon regime, this event can only happen when the SYNC-EXT signal arrives at the tile, triggering the 15 disabled dice. In this way, the DPC3200-22-44 processed only SPAD ignitions occurring during a time window of 40 ns before the synchronization arrival time, thus limiting the burden on the following electronics of processing dark counts occurring outside this 40 ns observation window. The DPC3200-22-44 can provide “validated” time stamps at a maximum rate of 122 kcps per die, due to FPGA limitations.

The DPC3200-22-44 is connected to a dedicated laptop via a USB 2.0 connection, which reduces the effective event collection rate to 17 kcps. This leads to an acquisition time-per-pattern of about 23 s in order to achieve an average count of 400,000 photons.

Among these, only $\sim$30,000 counts over a time interval of 2.5 ns are usable for the experiment; the remaining counts correspond to events triggered only by SYNC-EXT, which generate a high spike at 0 ns, and to dark counts.

Funding

H2020 European Research Council ERC Starting Grant SOLENALGAE (679814).

Acknowledgments

The research was partially supported by the ERC Starting Grant SOLENALGAE (679814). We thank Philips and in particular Dr. Ralf Schulze for the support in setting up the device.

Disclosures

The authors declare no conflicts of interest.

References

1. A. T. Eggebrecht, S. L. Ferradal, A. Robichaux-Viehoever, M. S. Hassanpour, H. Dehghani, A. Z. Snyder, T. Hershey, and J. P. Culver, “Mapping distributed brain function and networks with diffuse optical tomography,” Nat. Photonics 8(6), 448–454 (2014). [CrossRef]  

2. S. R. Cherry, “In vivo molecular and genomic imaging: new challenges for imaging physics,” Phys. Med. Biol. 49(3), R13–R48 (2004). [CrossRef]  

3. L. Gu, D. J. Hall, Z. Qin, E. Anglin, J. Joo, D. J. Mooney, S. B. Howell, and M. J. Sailor, “In vivo time-gated fluorescence imaging with biodegradable luminescent porous silicon nanoparticles,” Nat. Commun. 4(1), 2326 (2013). [CrossRef]  

4. K. Suhling, P. M. French, and D. Phillips, “Time-resolved fluorescence microscopy,” Photochem. Photobiol. Sci. 4(1), 13 (2005). [CrossRef]  

5. K. Abe, L. Zhao, A. Periasamy, X. Intes, and M. Barroso, “Non-invasive in vivo imaging of near infrared-labeled transferrin in breast cancer cells and tumors using fluorescence lifetime FRET,” PLoS One 8(11), e80269 (2013). [CrossRef]  

6. M. Y. Berezin and S. Achilefu, “Fluorescence Lifetime Measurements and Biological Imaging,” Chem. Rev. 110(5), 2641–2684 (2010). [CrossRef]  

7. Q. Pian, R. Yao, N. Sinsuebphon, and X. Intes, “Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging,” Nat. Photonics 11(7), 411–414 (2017). [CrossRef]  

8. J. Tachella, Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, S. McLaughlin, and J.-Y. Tourneret, “Bayesian 3D Reconstruction of Complex Scenes from Single-Photon Lidar Data,” SIAM J. Imaging Sci. 12(1), 521–550 (2019). [CrossRef]  

9. V. Popov, “Advanced data readout technique for Multianode Position Sensitive Photomultiplier Tube applicable in radiation imaging detectors,” J. Instrum. 6(1), C01061 (2011). [CrossRef]  

10. R. A. Colyer, O. H. W. Siegmund, A. S. Tremsin, J. V. Vallerga, S. Weiss, and X. Michalet, “Phasor imaging with a widefield photon-counting detector,” J. Biomed. Opt. 17(1), 016008 (2012). [CrossRef]  

11. S. Burri, Y. Maruyama, X. Michalet, F. Regazzoni, C. Bruschini, and E. Charbon, “Architecture and applications of a high resolution gated SPAD image sensor,” Opt. Express 22(14), 17573 (2014). [CrossRef]  

12. S. Burri, C. Bruschini, and E. Charbon, “LinoSPAD: A Compact Linear SPAD Camera System with 64 FPGA-Based TDC Modules for Versatile 50 ps Resolution Time-Resolved Imaging,” Instruments 1(1), 6 (2017). [CrossRef]  

13. R. K. Henderson, N. Johnston, F. Mattioli Della Rocca, H. Chen, D. Day-Uei Li, G. Hungerford, R. Hirsch, D. Mcloskey, P. Yip, and D. J. S. Birch, “A 192 × 128 Time Correlated SPAD Image Sensor in 40-nm CMOS Technology,” IEEE J. Solid-State Circuits 54(7), 1907–1916 (2019). [CrossRef]  

14. E. Charbon, C. Bruschini, and M. J. Lee, “3D-Stacked CMOS SPAD Image Sensors: Technology and Applications,” in 2018 25th IEEE International Conference on Electronics Circuits and Systems (ICECS, 2019).

15. E. Charbon, “Single-photon imaging in complementary metal oxide semiconductor processes,” Philos. Trans. R. Soc., A 372(2012), 20130100 (2014). [CrossRef]  

16. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. Dalla Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, “CMOS Imager With 1024 SPADs and TDCs for Single-Photon Timing and 3-D Time-of-Flight,” IEEE J. Sel. Top. Quantum Electron. 20(6), 364–373 (2014). [CrossRef]  

17. W. Becker, Advanced Time-Correlated Single Photon Counting Techniques, vol. 81 of Springer Series in Chemical Physics (Springer Berlin Heidelberg, Berlin, Heidelberg, 2005).

18. E. Candes and M. Wakin, “An Introduction To Compressive Sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

19. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

20. N. Sarhangnejad, N. Katic, Z. Xia, M. Wei, N. Gusev, G. Dutta, R. Gulve, H. Haim, M. M. Garcia, D. Stoppa, K. N. Kutulakos, and R. Genov, “5.5 Dual-Tap Pipelined-Code-Memory Coded-Exposure-Pixel CMOS Image Sensor for Multi-Exposure Single-Frame Computational Imaging,” in 2019 IEEE International Solid- State Circuits Conference - (ISSCC), (IEEE, 2019), pp. 102–104.

21. A. Bassi, C. D’Andrea, G. Valentini, R. Cubeddu, and S. Arridge, “Temporal propagation of spatial information in turbid media,” Opt. Lett. 33(23), 2836–2838 (2008). [CrossRef]  

22. A. Farina, M. Betcke, L. di Sieno, A. Bassi, N. Ducros, A. Pifferi, G. Valentini, S. Arridge, and C. D’Andrea, “Multiple-view diffuse optical tomography system based on time-domain compressive measurements,” Opt. Lett. 42(14), 2822 (2017). [CrossRef]  

23. F. Rousset, N. Ducros, A. Farina, G. Valentini, C. D’Andrea, and F. Peyrin, “Adaptive Basis Scan by Wavelet Prediction for Single-pixel Imaging,” IEEE Trans. Comput. Imaging 3(1), 36–46 (2017). [CrossRef]  

24. N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26 (2016). [CrossRef]  

25. F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive imaging with a single photodiode,” Sci. Rep. 5(1), 14300 (2015). [CrossRef]  

26. S. Konugolu Venkata Sekar, A. Dalla Mora, I. Bargigia, E. Martinenghi, C. Lindner, P. Farzam, M. Pagliazzi, T. Durduran, P. Taroni, A. Pifferi, and A. Farina, “Broadband (600-1350 nm) Time-Resolved Diffuse Optical Spectrometer for Clinical Use,” IEEE J. Sel. Top. Quantum Electron. 22(3), 406–414 (2016). [CrossRef]  

27. A. Farina, A. Candeo, A. Dalla Mora, A. Bassi, R. Lussana, F. Villa, G. Valentini, S. Arridge, and C. D’Andrea, “Novel time-resolved camera based on compressed sensing: Dataset 1,” figshare (2019) [retrieved 17 October 2019], https://doi.org/10.6084/m9.figshare.9916517.

28. T. Frach, G. Prescher, C. Degenhardt, and B. Zwaans, “The digital silicon photomultiplier — System architecture and performance evaluation,” in IEEE Nuclear Science Symposium & Medical Imaging Conference, (IEEE, 2010), pp. 1722–1727.

29. D. R. Schaart, E. Charbon, T. Frach, and V. Schulz, “Advances in digital SiPMs and their application in biomedical imaging,” Nucl. Instrum. Methods Phys. Res., Sect. A 809, 31–52 (2016). [CrossRef]  

Supplementary Material (2)

Dataset 1: This archive contains raw data and reconstruction code related to all the experiments described in the paper: an IRF measurement; data related to Fig. 3 and Visualization 1; data related to the measurement on a diffusive medium mimicking a biological tissue.

Visualization 1: This movie shows the reconstructed time-resolved images related to the fluorescence experiment (see Fig. 3 of the paper). Here we see a collimated laser beam entering a cuvette, and entering it again after a reflection on a mirror.
