Optica Publishing Group

Dynamic time domain near-infrared optical tomography based on a SPAD camera

Open Access

Abstract

In many clinical applications it is relevant to observe dynamic changes in oxygenation. The ability to image dynamically with time domain (TD) near-infrared optical tomography (NIROT) is therefore important, but fast imaging remains a challenge. We consequently accelerated the data acquisition of our handheld TD NIROT system, which is based on a single-photon avalanche diode (SPAD) camera and 11 light sources. We tested the system on a tissue-simulating diffusive medium with a moving object embedded in it. With 3D image reconstruction, the moving object was correctly located using an exposure time of only 0.2 s per source.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Time domain (TD) near-infrared optical tomography (NIROT) is a novel method that quantitatively images the oxygenation of tissue. Other non-invasive techniques for measuring tissue oxygenation, including magnetic resonance imaging (MRI), electron paramagnetic resonance (EPR), and positron emission tomography (PET), either cannot be applied at the bedside or require injection of a contrast agent. Near-infrared spectroscopy (NIRS) can monitor oxygenation at the bedside [1], but it lacks spatial information. NIROT is the 3D extension of NIRS; it uses non-ionizing radiation and is safe for repeated use. When NIROT is applied, e.g., to early detection of neonatal brain injury and ischemic strokes [1] or to functional brain imaging [2], the ability to detect dynamic changes in oxygenation is critical, because oxygenation may change quickly, and such changes may rapidly lead to adverse consequences. There is therefore a need for an instrument that quantitatively images oxygenation with sufficient time resolution. Among the three NIROT modalities, continuous wave (CW) [2], frequency domain (FD) [3] and time domain (TD) [4,5], TD data contain significantly richer information, which enables higher image quality. However, dynamic imaging with TD NIROT is a challenge. The 32-channel TD NIROT system MONSTIR took 2 h to acquire the data for reconstructing a 3D image [6]. Its second generation, MONSTIR II, reduced the acquisition time to 99 s [4], but only 2D images were reconstructed. In addition, systems based on photomultiplier tubes, such as the MONSTIRs, can only employ a limited number of sources and detectors and feature a bulky design; MONSTIR is approximately the size of a fridge. Scaling up the number of detectors would increase the system to a size incompatible with clinical application.
Recent advances in single-photon avalanche diode (SPAD) array technology not only provide a compact solution for TD NIROT design but also reduce the acquisition time to minutes or even seconds. Lyons et al. developed a computational imaging method based on a SPAD camera [7]. It requires 1 s of acquisition time, which enables the proposed reconstruction method to obtain a 2D reconstruction with correct object localization. However, the method requires a homogeneous background and prior knowledge of the object depth. Moreover, it was tested in transmission mode only. In contrast, clinical applications commonly require reflection mode, and the a priori information is limited. To overcome all these problems, we developed a new handheld TD NIROT system called Pioneer that functions in reflection mode. It utilizes a $32\times 32$ SPAD camera measuring time of flight (ToF) over 12.5 ns with a time resolution of 48.8 ps, and 11 sources of picosecond laser radiation [8]. The system provides $2.88 \times 10^6$ measurement points and at the same time features a compact design that allows it to be handheld and used at the bedside [5]. We have previously presented our system and showed successful reconstruction of two objects embedded at different depths in a diffusive medium [5,9].

In this work, we present an updated system with optimized control software that drastically reduces the idle time between two exposures. In addition, we exploit data acquired on the scale of seconds to locate a deeply embedded object in a diffusive medium. We designed a silicone phantom with a movable inclusion and performed measurements on this phantom with the inclusion at several positions. A 3D image reconstruction method was employed to locate the inclusion. The aim was to achieve fast acquisition while still obtaining an accurate image.

2. Methods and materials

2.1 System description

Figure 1(a) illustrates the TD NIROT system Pioneer applied to a baby head model. Pioneer employs a super-continuum laser, SuperK Extreme EXR-15 (NKT, Denmark), to generate picosecond light pulses with a broad spectrum in the near-infrared (NIR) range. A set of wavelengths in the 650-900 nm range was selected with an acousto-optic tunable filter (AOTF), enabling spectral measurements. A custom-designed $32\times 32$-pixel single-photon avalanche diode (SPAD) sensor called Piccolo was used to acquire the time-of-flight (ToF) information of photons [10]. It incorporates 128 time-to-digital converters (TDCs) with 50 ps time resolution. Photon timestamps and addresses acquired by Piccolo were transferred to a Xilinx Kintex 7 FPGA board (XEM7360, Opal Kelly, USA) via 32 serial outputs at 160 MHz, which allowed up to 220 million photon counts per second at a maximum data rate of 640 MB/s. The single-photon data were converted into histograms (16-bit $\times$ 256 bins per histogram) on a per-pixel basis in the FPGA block RAM, and the histograms were then transferred to a PC. This drastically reduced the bandwidth of the communication between the FPGA and the computer from a maximum of 640 MB/s to 1-10 MB/s. Thus, a single USB 3.0 connection was sufficient for data transmission and camera control in real time. A NIR lens, SY110M 1.68 mm F1.8 (Theia Technologies, US), projected the object surface onto the SPAD sensor, with each pixel corresponding to a square with a side length of 1.06 mm. To ensure the stability of the laser output, a small fraction of the light ($\sim$1%) was delivered by a custom-made fiber splitter (Thorlabs, US) to a PM100USB power meter (Thorlabs, US). The majority of the laser output was delivered to a fiber optical switch (Laser Components, Germany), which directed the light to each of the 11 channels in turn.
Eleven optical fibers were fixed into a custom-designed tube-shaped probe and arranged equidistantly in a circle of $\sim$22 mm radius around the camera field of view (FoV) [5,8]. The probe had a bio-compatible silicone surface, and the tube structure shielded the FoV from ambient light.
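To make the on-FPGA histogramming concrete, the conversion of photon timestamps and pixel addresses into per-pixel ToF histograms can be sketched as follows. This is an illustrative NumPy model, not the Piccolo firmware; the function name and the synthetic input are assumptions.

```python
import numpy as np

def histogram_timestamps(timestamps_ps, addresses,
                         n_pixels=1024, n_bins=256, bin_width_ps=48.8):
    """Bin photon timestamps (in ps) into per-pixel ToF histograms.

    Mirrors the 16-bit x 256-bin per-pixel format described in the
    text; hypothetical sketch of the on-FPGA accumulation.
    """
    hist = np.zeros((n_pixels, n_bins), dtype=np.uint16)
    # map each timestamp to its 48.8 ps time bin, clamped to the last bin
    bins = np.minimum((timestamps_ps / bin_width_ps).astype(int), n_bins - 1)
    # accumulate one count per (pixel, bin) event
    np.add.at(hist, (addresses, bins), 1)
    return hist

# synthetic example: 5 photons distributed over 2 pixels
ts = np.array([100.0, 150.0, 5000.0, 48.9, 12000.0])
px = np.array([0, 0, 1, 1, 1])
h = histogram_timestamps(ts, px)
```

Shipping 1024 such histograms (512 B each, i.e. ~0.5 MB per frame) instead of raw 640 MB/s timestamp streams is what brings the link down to the 1-10 MB/s range quoted above.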


Fig. 1. (a) Photo of the Pioneer imager probe applied to a baby head model. (b) Flowchart of the Pioneer control software. Error handling is omitted from the flowchart for simplicity.


A GUI control software based on Qt/C++ was designed to conduct experiments with the Pioneer device. As illustrated in the flowchart in Fig. 1(b), this software incorporates objects that control the hardware components, i.e., the laser, SPAD camera, optical switch and power meter, through serial communication. To achieve simultaneous control of all these devices, a finite state machine was employed. Error handling was added for all devices to achieve reliable data acquisition; for example, the status of each device was examined before data acquisition and the feedback was automatically written to a log file.
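The device-sequencing logic can be illustrated with a minimal finite state machine. This is a hypothetical Python sketch of the pattern, not the Qt/C++ implementation; the states, device stubs, and class name are assumptions.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CHECK = auto()     # verify all devices before acquiring
    ACQUIRE = auto()
    SAVE = auto()
    ERROR = auto()

class AcquisitionFSM:
    """Toy finite state machine coordinating several devices.

    Each device is represented by a callable returning True when the
    device reports a healthy status (a stand-in for serial queries).
    """
    def __init__(self, devices):
        self.devices = devices
        self.state = State.IDLE
        self.log = []  # mimics the automatic status log

    def step(self):
        if self.state == State.IDLE:
            self.state = State.CHECK
        elif self.state == State.CHECK:
            ok = all(check() for check in self.devices.values())
            self.log.append(("status_check", ok))
            self.state = State.ACQUIRE if ok else State.ERROR
        elif self.state == State.ACQUIRE:
            self.state = State.SAVE
        elif self.state == State.SAVE:
            self.state = State.IDLE
        return self.state

fsm = AcquisitionFSM({"laser": lambda: True, "camera": lambda: True,
                      "switch": lambda: True, "power_meter": lambda: True})
states = [fsm.step() for _ in range(4)]
```

The advantage of this pattern is that every transition, including failures, passes through an explicit status check that is recorded, matching the logging behavior described above.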

2.2 Phantom fabrication

To validate the NIROT system, a silicone phantom with a movable inclusion was designed and fabricated in our lab. The aim was to have a small cylindrical inclusion of Ø$1\,cm \times 2\,cm$ (shown on the left in Fig. 2(a)) that is freely movable inside a large homogeneous silicone phantom. We used Silpuran 2420 two-component silicone (Wacker Chemie AG, Germany) to create this phantom. We made two silicone mixtures: mixture A ($\mu _a = 0.0097$ $mm^{-1}$, $\mu _s' = 0.40$ $mm^{-1}$ at 689 nm) for the bulk material and mixture B ($\mu _a = 0.042$ $mm^{-1}$, $\mu _s' = 0.40$ $mm^{-1}$ at 689 nm) for the inclusion material. The scattering was controlled by adding white ink ELASTOSIL RAL 9010 (Wacker, Germany), and absorption was adjusted with carbon black powder.


Fig. 2. (a) From left to right: the inclusion, a rod phantom with the inclusion embedded, and a silicone mold to cast this rod phantom; (b) mold for the bulk phantom; (c) silicone phantom components used in the experiment: the rod phantom and the bulk phantom; (d) Pioneer probe placed on the phantom with the movable inclusion during a measurement.


First, the small inclusion was cast from mixture B in a 3D-printed mold of indented cylindrical shape (Fig. 2(a) left). Another cylindrical silicone mold (Fig. 2(a) right) was used to cast a silicone rod (Fig. 2(a) middle). The silicone rod was cast in two steps: (1) we first filled half of the cylindrical mold with mixture A and cured it in an oven at 65$^{\circ }C$ for 1 hour; (2) we then inserted the inclusion (Fig. 2(a) left) into the mold, filled the remaining volume with mixture A, and kept it in the oven at 65$^{\circ }C$ for another hour. We used a 3D-printed mold (Fig. 2(b)) for the bulk phantom; it consisted of three separate parts, including a rod part. With this mold we cast a homogeneous silicone phantom of Ø$11.4\,cm \times 6.2\,cm$ from mixture A, with a through hole of Ø$1.5\,cm$ at a depth of 15 mm below the top surface. Thus, the desired phantom with a movable inclusion consisted of two silicone parts: the inclusion-embedded silicone rod and the bulk silicone phantom (Fig. 2(c)).

2.3 Phantom experiment and data processing

We tested the speed of the system by means of repetitive measurements on a stationary phantom. We placed the probe at the center of the phantom (Fig. 2(d)) and performed 200 repetitions of data acquisition at 3 source positions and 2 wavelengths, resulting in 1200 TD data files of 1024 histograms each. The start and finish times were recorded to estimate the system idle time, i.e., the time spent in transition between two exposure states of the SPAD camera, including switching sources, changing wavelengths, saving files, etc.
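The idle-time estimate from the recorded timestamps amounts to taking the median gap between consecutive exposures. A minimal sketch, with hypothetical function and variable names and synthetic timing data:

```python
import numpy as np

def median_idle_time(starts, finishes):
    """Median gap (s) between the end of one exposure and the start of
    the next, from per-acquisition start/finish timestamps."""
    starts = np.asarray(starts)
    finishes = np.asarray(finishes)
    idle = starts[1:] - finishes[:-1]  # transition time between exposures
    return float(np.median(idle))

# synthetic example: 0.2 s exposures with ~85-90 ms gaps
starts = np.array([0.0, 0.285, 0.570, 0.860])
finishes = starts + 0.2
idle_s = median_idle_time(starts, finishes)
```

Using the median rather than the mean makes the estimate robust to occasional long gaps, e.g., when a file save or wavelength change takes unusually long.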

Next, we tested the ability of the system to visualize the inclusion in the diffusive medium with an exposure time of only 0.2 s per source. We moved the cylindrical inclusion to positions -25 mm, -20 mm, -10 mm, 0 mm, 10 mm, 20 mm and 25 mm away from the center of the phantom, where the minus sign indicates that the inclusion was positioned to the left of the phantom center. At each location of the inclusion, ToF information from all 11 sources at 689 nm was captured by the image sensor Piccolo and converted into histograms of the different ToFs. The inclusion was kept still during each measurement. Histograms from all detectors, together with the associated experimental parameters, e.g., power meter values and acquisition times, were transferred to a GPU workstation for data analysis and image reconstruction. The raw histograms were corrected by removing dark photon counts and deconvolved with the system response function. The corrected TD data were used for image reconstruction.
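The first correction step, removing dark counts, can be sketched as follows. Because SPAD dark counts are uniformly distributed over the time bins (see Section 3), a per-bin average of a dark measurement can be subtracted; this simplified model and the function name are assumptions, not the exact pipeline.

```python
import numpy as np

def remove_dark_counts(hist, dark_hist):
    """Subtract the average dark-count level from a ToF histogram.

    dark_hist: histogram recorded with the sources off. The dark level
    is assumed uniform across bins, so its per-bin mean is subtracted
    and negative values are clipped to zero.
    """
    dark_level = dark_hist.mean()
    return np.clip(hist - dark_level, 0.0, None)

# toy example: a peak at bin 2 on a flat dark background of 10 counts
hist = np.array([10.0, 10.0, 100.0, 10.0])
dark = np.full(4, 10.0)
corrected = remove_dark_counts(hist, dark)
```

The subsequent deconvolution with the system response function is typically done in the frequency domain, where it reduces to a per-frequency division.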

In this paper, physical model-based image reconstruction was employed. The forward process of light transport in tissue is described by the diffusion equation (DE), where the photon density distribution $\phi (r,t)$ at position $r$ and time $t$ is given by [11]:

$$[-\nabla\cdot\kappa(r)\nabla+\mu_a(r)+\frac{1}{c_0}\frac{\partial}{\partial t}]\phi(r,t)=q(r,t),$$
with Robin boundary conditions [12], where $q$ is the source term, $\mu _a$ is the absorption coefficient, $\kappa = [3(\mu _a+\mu _s')]^{-1}$ is the diffusion coefficient with reduced scattering coefficient $\mu _s'$, and $c_0$ is the speed of light in the medium. Finite element methods (FEM) are widely used to solve the DE numerically on complex geometries. We utilized NIRFASTer, a GPU-accelerated, FEM-based open-source software, to model the forward and inverse problems [13,14,15]. We selected 216 detectors within the FoV and transformed the corresponding histograms into the frequency domain with the discrete Fourier transform [16]. To facilitate the image reconstruction, data at three frequencies, 100 MHz, 200 MHz and 500 MHz, were chosen.
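Extracting the frequency-domain data from a measured histogram amounts to evaluating its Fourier component at each chosen modulation frequency, yielding an amplitude and phase per source-detector pair. A sketch under assumed names (the actual implementation follows Ref. [16]):

```python
import numpy as np

def tpsf_to_frequency(tpsf, bin_width_s, freqs_hz):
    """Fourier component of a ToF histogram (TPSF) at given modulation
    frequencies. Returns {frequency: (amplitude, phase)}."""
    t = np.arange(len(tpsf)) * bin_width_s  # bin center times (s)
    out = {}
    for f in freqs_hz:
        z = np.sum(tpsf * np.exp(-2j * np.pi * f * t))
        out[f] = (np.abs(z), np.angle(z))
    return out

# 256 bins of 48.8 ps, evaluated at the three frequencies used here
tpsf = np.zeros(256)
tpsf[0] = 1000.0  # toy impulse at t = 0
fd = tpsf_to_frequency(tpsf, 48.8e-12, [100e6, 200e6, 500e6])
```

Restricting the reconstruction to a few such frequencies keeps most of the TD information content while drastically shrinking the data passed to the FEM solver.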

To simulate accurate forward results, a fine cylindrical mesh of Ø$90\,mm \times 40\,mm$ with 56648 nodes was generated, and 11 sources and 216 detectors were placed on its surface as shown in Fig. 4(a). The image reconstruction, i.e., the inverse problem, minimizes the quadratic difference between the simulated and measured data, regularized by a modified Tikhonov method [17]. A coarse mesh of 2752 nodes was used for the inverse problem to speed up the calculation and reduce the memory burden [15]. The initial guess was a homogeneous volume with the same optical properties as the bulk phantom. The 3D distribution of the absorption coefficient was reconstructed, while the reduced scattering coefficient was fixed to the value of the bulk phantom (mixture A). We set the regularization parameter to 10. The reconstruction ended after 7 or 8 iterations, when the criterion of no further improvement was reached. The whole process took less than 10 minutes per object position. The reconstructed images were segmented by thresholding to obtain the final results.
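To make the regularized minimization concrete, a single update step of a generic Tikhonov-regularized least-squares scheme is sketched below. This is the standard form $\Delta\mu = (J^T J + \lambda I)^{-1} J^T r$ with Jacobian $J$ and residual $r$, not the modified Tikhonov scheme of Ref. [17]; the names are illustrative.

```python
import numpy as np

def tikhonov_update(J, residual, lam):
    """One regularized Gauss-Newton-type update:
    solves (J^T J + lam * I) delta = J^T r for the parameter update."""
    JtJ = J.T @ J
    rhs = J.T @ residual
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)

# toy 2-parameter example with an identity Jacobian
J = np.eye(2)
r = np.array([1.0, 2.0])
delta = tikhonov_update(J, r, lam=1.0)
```

In the toy case the update is simply the residual shrunk by the factor $1/(1+\lambda)$, showing how a larger regularization parameter damps each step and stabilizes the ill-posed inversion.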

3. Results and discussion

We first estimated the speed of the system. From the 1200 TD data files, the median idle time between two exposures was calculated to be only 85 ms.

In the tracking task, the SPAD camera collected $1\times 10^6$ photons during an exposure time of 0.2 s. The total exposure time for the 11 sources was $\sim$2 s, and the complete measurement (exposure + idle periods) for the 11 sources took $\sim$3 s. We estimated the signal-to-noise ratio (SNR) as the ratio of the expected values of signal and noise, $SNR = E[S]/E[N]$, where $S$ is the array of photon counts in the discernible range (30th to 80th time bin) and $N$ is the array of dark photon counts. Since the probability of registering a dark count is equal for all time bins of a SPAD, $E[\cdot]$ here is the simple average. Figure 3(a) plots the SNR values of the pixels along the central column. Increasing the source-detector distance reduced the number of detected photons and therefore decreased the SNR. Figure 3(b) shows three ToF histograms at different pixels, whose locations are indicated in the inset of Fig. 3(a). The SPAD array was specifically optimized for NIROT, with a particularly high photon detection efficiency of 3.4% at 800 nm, which is $\geqslant 17$ times higher than that of comparable previous CMOS SPAD chips [10].
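The SNR definition above translates directly into code: average the counts in the discernible range and divide by the average dark count. A sketch with hypothetical names and synthetic data:

```python
import numpy as np

def snr(hist, dark_hist, signal_bins=slice(30, 80)):
    """SNR = E[S]/E[N]: mean counts in the discernible range (30th to
    80th time bin) over the mean dark count, as defined in the text."""
    return hist[signal_bins].mean() / dark_hist.mean()

# toy example: 100 counts/bin in the signal window, 5 dark counts/bin
hist = np.zeros(256)
hist[30:80] = 100.0
dark = np.full(256, 5.0)
value = snr(hist, dark)
```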


Fig. 3. (a) SNR of pixels in the central column. The inset picture illustrates the positions of the source and detectors. (b) An example of ToF histograms captured with the same pixels.


The reconstruction results are shown in Figs. 4(b)-(h). The positions of the embedded object were correctly reconstructed from -25 mm to 25 mm. Artifacts appear in the regions far from the center, e.g., at -25 mm (Fig. 4(b)) and 25 mm (Fig. 4(h)), where the reconstructed inclusion is closer to the center than its actual location. This artifact is caused by the arrangement of the sources and the SPAD camera: the most sensitive area lies near the center of the FoV, where the more accurate reconstructions were achieved. Moreover, for all locations, the reconstruction was more accurate closer to the surface, due to the decrease of sensitivity with depth [18]. Interestingly, even when the inclusion was located outside the camera FoV (radius $\sim$12 mm; the sources lie on a circle of $\sim$22 mm radius), its location was still recovered. This is due to the late photons, i.e., photons that take longer paths because of more scattering events. They carry the most information about the object because they traveled the farthest, and the reconstruction results show that they even provide information about the region outside the camera FoV. This rich data enables Pioneer to reconstruct images of more complex objects, as demonstrated in [9], where the system imaged a silicone phantom with two inclusions at different depths, one at 10 mm and the other at 15 mm.


Fig. 4. (a) Mesh with 11 sources (red) and $\sim$200 selected detectors (blue) placed on the top surface. Only $\sim$0.2 s was needed to obtain enough photons for each source. Note that, to speed up the reconstruction, we used a cylindrical mesh (90 mm diameter, 40 mm thickness) that is smaller than the actual phantom. Reconstruction results are shown for the moving object at distances of (b) -25 mm, (c) -20 mm, (d) -10 mm, (e) 0 mm, (f) 10 mm, (g) 20 mm, (h) 25 mm from the center of the phantom. The gray areas represent the ground truth and the blue points the reconstructed regions.


There is room for improvement in the acquisition speed. In particular, SPAD camera-based TD NIROT can be further accelerated by adding microlenses to the SPAD sensor to increase the sensitivity and decrease the exposure time. Additionally, the data processing time can be shortened: the current image reconstruction took 10 minutes and can be sped up by parallel computing techniques [19]. Moreover, other types of reconstruction algorithms can be applied, e.g., statistical methods such as deep learning [20]. With a well-trained network, the 3D reconstruction can be obtained almost in real time. However, the challenge of finding a reliable way to train such neural networks remains.

4. Conclusion

We developed a handheld TD NIROT system, Pioneer, based on a SPAD camera and programmed custom GUI software to achieve dynamic 3D imaging. We tested Pioneer in a phantom experiment and showed that the system is able to track a relatively fast-moving object embedded in a diffusive medium. To the best of our knowledge, this is the fastest TD NIROT device that operates in reflection mode and can accurately track an embedded object based on a full 3D image reconstruction. This brings the system closer to real clinical applications, where the accurate detection of dynamic changes in tissue is important.

Funding

Consejo Nacional de Ciencia y Tecnología (CVU-627802); National Competence Center in Biomedical imaging (NCCBI); Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (159490); Krebsforschung Schweiz (KFS-3732-08-2015).

Acknowledgement

We would like to thank Jonas Künzli for optimizing the control software, and Josua Graf for testing the system.

Disclosures

The authors declare the following conflicts of interest: MW is president of the board and co-founder of OxyPrem AG.

References

1. T. Arri, S. Muehlemann, M. Biallas, H. Bucher, and M. Wolf, “Precision of cerebral oxygenation and hemoglobin concentration measurements in neonates measured by near-infrared spectroscopy,” J. Biomed. Opt. 16(4), 047005 (2011). [CrossRef]  

2. A. T. Eggebrecht, S. L. Ferradal, A. Robichaux-Viehoever, M. S. Hassanpour, H. Dehghani, A. Z. Snyder, T. Hershey, and J. P. Culver, “Mapping distributed brain function and networks with diffuse optical tomography,” Nat. Photonics 8(6), 448–454 (2014). [CrossRef]  

3. M. Doulgerakis, A. T. Eggebrecht, and H. Dehghani, “High-density functional diffuse optical tomography based on frequency-domain measurements improves image quality and spatial resolution,” Neurophotonics 6(3), 035007 (2019). [CrossRef]  

4. R. J. Cooper, E. Magee, N. Everdell, S. Magazov, M. Varela, D. Airantzis, A. P. Gibson, and J. C. Hebden, “Monstir ii: A 32-channel, multispectral, time-resolved optical tomography system for neonatal brain imaging,” Rev. Sci. Instrum. 85(5), 053105 (2014). [CrossRef]  

5. A. Kalyanov, J. Jiang, S. Lindner, L. Ahnen, A. Di Costanzo, J. Mata Pavia, S. Sanchez Majos, and M. Wolf, “Time domain near-infrared optical tomography with time-of-flight spad camera: The new generation,” Biophotonics Congress: Biomedical Optics Congress 2018 (2018).

6. F. E. W. Schmidt, M. E. Fry, E. M. C. Hillman, J. C. Hebden, and D. T. Delpy, “A 32-channel time-resolved instrument for medical optical tomography,” Rev. Sci. Instrum. 71(1), 256–265 (2000). [CrossRef]  

7. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019). [CrossRef]  

8. J. Jiang, A. di Costanzo, S. Lindner, M. Wolf, and A. Kalyanov, “Tracking objects in a diffusive medium with time domain near infrared optical tomography,” in Biophotonics Congress: Biomedical Optics 2020 (Translational, Microscopy, OCT, OTS, BRAIN), (Optical Society of America, 2020), p. JTu3A.18.

9. J. Jiang, A. Di Costanzo Mata, S. Lindner, C. Zhang, E. Charbon, M. Wolf, and A. Kalyanov, “Image reconstruction for novel time domain near infrared optical tomography: towards clinical applications,” Biomed. Opt. Express 11(8), 4723–4734 (2020). [CrossRef]  

10. S. Lindner, C. Zhang, I. M. Antolovic, A. Kalyanov, J. Jiang, L. Ahnen, A. di Costanzo, J. M. Pavia, S. S. Majos, E. Charbon, and M. Wolf, “A novel 32 × 32, 224 mevents/s time resolved spad image sensor for near-infrared optical tomography,” in Biophotonics Congress: Biomedical Optics Congress 2018 (Microscopy/Translational/Brain/OTS), (Optical Society of America, 2018), p. JTh5A.6.

11. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15(2), R41–R93 (1999). [CrossRef]  

12. S. R. Arridge and M. Schweiger, “Photon-measurement density functions. part 2: Finite-element-method calculations,” Appl. Opt. 34(34), 8026–8037 (1995). [CrossRef]  

13. “Nirfaster,” https://github.com/nirfaster/NIRFASTer.

14. M. Doulgerakis-Kontoudis, A. Eggebrecht, S. Wojtkiewicz, J. Culver, and H. Dehghani, “Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on gpu and cpu,” J. Biomed. Opt. 22(12), 125001 (2017). [CrossRef]  

15. H. Dehghani, M. E. Eames, P. K. Yalavarthy, S. C. Davis, S. Srinivasan, C. M. Carpenter, B. W. Pogue, and K. D. Paulsen, “Near infrared optical tomography using nirfast: Algorithm for numerical model and image reconstruction,” Commun. Numer. Meth. Engng. 25(6), 711–732 (2009). [CrossRef]  

16. J. Jiang, M. Wolf, and S. S. Majos, “Fast reconstruction of optical properties for complex segmentations in near infrared imaging,” J. Mod. Opt. 64(7), 732–742 (2017). [CrossRef]  

17. P. K. Yalavarthy, B. W. Pogue, H. Dehghani, and K. D. Paulsen, “Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography,” Med. Phys. 34(6Part1), 2085–2098 (2007). [CrossRef]  

18. A. D. Mora, D. Contini, S. Arridge, F. Martelli, A. Tosi, G. Boso, A. Farina, T. Durduran, E. Martinenghi, A. Torricelli, and A. Pifferi, “Towards next-generation time-domain diffuse optics for extreme depth penetration and sensitivity,” Biomed. Opt. Express 6(5), 1749–1760 (2015). [CrossRef]  

19. J. Jiang, L. Ahnen, A. Kalyanov, S. Lindner, M. Wolf, and S. S. Majos, “A new method based on graphics processing units for fast near-infrared optical tomography,” Adv. Exp. Med. Biol. 977, 191–197 (2017). [CrossRef]  

20. H. Ben Yedder, A. BenTaieb, M. Shokoufi, A. Zahiremami, F. Golnaraghi, and G. Hamarneh, “Deep learning based image reconstruction for diffuse optical tomography,” Mach. Learn. for Med. Image Reconstr. Mlmir 2018 11074, 112–119 (2018).

