
Coaxial interferometry for camera-based ultrasound-modulated optical tomography with paired illumination

Open Access

Abstract

Ultrasound-modulated optical tomography (UOT), which combines the advantages of both light and ultrasound, is a promising imaging modality for deep-tissue high-resolution imaging. Among existing implementations, camera-based UOT achieves a substantial gain in modulation depth through parallel detection. However, limited by the long exposure time and slow framerate of modern cameras, the measurement of UOT signals has always required holographic methods with additional reference beams. This requirement increases system complexity and makes the measurement susceptible to environmental disturbances. To overcome this challenge, we develop coaxial interferometry for camera-based UOT. This coaxial scheme is enabled by paired illumination with slightly different optical frequencies. To measure the UOT signal, the conventional phase-stepping method in holography can be directly transplanted to coaxial interferometry. Specifically, we performed both numerical investigations and experimental validations of camera-based UOT under the proposed coaxial scheme, and demonstrated one-dimensional imaging of an absorptive target buried inside a scattering medium. With coaxial interferometry, this work presents an effective way to reduce system complexity and cope with environmental disturbances in camera-based UOT.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ultrasound-modulated optical tomography (UOT), also known as acousto-optic imaging, is a hybrid imaging modality that integrates light and ultrasound [1–4]. To probe optical information inside a scattering medium, focused ultrasound locally modulates the scattered light. Through the acousto-optic effect, a portion of the scattered light within the ultrasonic focus is frequency shifted and is then measured outside the scattering medium. Since the number of frequency-shifted photons, also called tagged photons, is proportional to the optical intensity within the ultrasonic focus, optical absorption contrast with acoustic resolution can be realized. Because ultrasound attenuates weakly in biological tissue, UOT has been demonstrated at penetration depths on the centimeter scale, or tens of transport mean free paths [5–9]. Recently, UOT was demonstrated to image through a heavily scattering human skull [10] and even to achieve a resolution beyond the acoustic diffraction limit [11]. Moreover, the acousto-optic effect has also been exploited in other optical imaging modalities to extend their imaging depths [12–14]. Despite these accomplishments, UOT needs to detect a weak signal overwhelmed by a huge background. This condition stems from the relatively weak light-sound interaction, which is restricted by safety constraints on both light and ultrasound. As this condition deteriorates further at greater depths and higher resolutions, research on how to efficiently measure these tagged photons has become a central topic in UOT.

In the early years, UOT employed single-pixel detectors, such as photodiodes [15] and photomultiplier tubes [16,17], with direct detection. These detectors respond quickly, with bandwidths up to the gigahertz range, allowing direct measurement of the interference between the tagged signal and the untagged background. This advantage in response speed also enables improved imaging performance through coded ultrasound [18], as well as attempts to employ UOT for in vivo studies [17]. However, UOT with single-pixel detectors generally cannot handle scattered light with low modulation depth, owing to the limited dynamic range of electronic devices [3]. Various efforts, including Fabry-Perot interferometers [7,19], spectral-hole burning [20], laser cavities [21], and amplification through a gain medium [22], have been proposed to improve the modulation depth by either filtering out the untagged background or enhancing the tagged signal. However, these approaches require special layouts and only work over a very narrow spectral range. Alternatively, parallel detection can effectively boost the modulation depth by replacing single-pixel detectors with cameras [23–25]. It has been shown that the modulation depth can be enhanced by a factor of $N^{1/2}$, where N is the number of employed camera pixels or captured speckles. Considering that high-performance cameras with millions of pixels are now readily available, camera-based UOT gains a huge improvement in modulation depth. For this reason, parallel detection with cameras is becoming the most efficient implementation of UOT [24] and has been demonstrated to image blood flow inside scattering media [26]. Besides, parallel detection is also adopted in many diffuse optical imaging modalities for the same reason [27,28].

Compared with single-pixel detectors, cameras usually have a slow framerate and a long exposure time. As a few camera shots are generally required to determine the scattered wavefront of tagged light [24,29], camera-based UOT is often criticized for its inability to handle fast speckle decorrelation in live animals. Nonetheless, this shortcoming has been alleviated by newly developed single-exposure holographic wavefront measurement methods, including off-axis holography [30], lock-in detection [31], and speckle statistical analysis [32]. Besides employing cameras for parallel detection, certain types of fast photorefractive crystals have been demonstrated to process a large number of speckles in UOT with holographic approaches, but they typically work over a narrow spectral range [22,33,34]. While holographic approaches can employ a strong reference beam to achieve shot-noise-limited detection in theory [25], environmental disturbances can easily deteriorate the imaging performance in practice. In contrast, direct detection is insensitive to environmental disturbances such as phase drifting, but it is not compatible with camera-based UOT. This limitation originates from the fact that cameras cannot respond fast enough to resolve the interference term between the tagged signal and the untagged background, which oscillates in the megahertz range. Attempts have been made to employ direct detection for camera-based UOT. For example, Ref. [23] modulates the laser source at the same frequency as the ultrasound, allowing the camera to act as a low-pass filter that detects the beating term between the tagged signal and the untagged background. However, directly modulating the laser source introduces temperature fluctuations, causing instability of the laser source. Another example is to exploit the contrast of laser speckles, which is affected by the presence of ultrasound [35,36]. However, without utilizing the interference term between the tagged signal and the untagged background, both the detected signal and the contrast are quite small. As a result, nearly all existing camera-based UOT employs an additional reference beam to measure the tagged signal through holographic approaches.

In this work, we fill this gap by developing camera-based UOT with coaxial interferometry, which keeps the interferometric design while being insensitive to environmental disturbances. This new scheme requires paired illumination with two co-propagating beams. These two beams have slightly different optical frequencies, so that mutual interference occurs between the tagged signal from one beam and the untagged background from the other. To measure UOT signals, conventional wavefront measurement methods such as phase stepping can be adopted [29]. Moreover, the single-shot method with a lock-in camera can be applied to coaxial interferometry as well, which can cope with fast speckle decorrelation [31]. In the following, we demonstrate coaxial interferometry for camera-based UOT through both numerical simulations and experimental results.

2. Operational principle of coaxial interferometry with paired illumination

We first describe the operational principle of coaxial interferometry with paired illumination, which is shown schematically in Fig. 1. Paired illumination requires the generation of two co-propagating light beams with distinct frequencies, which can be realized with an acousto-optic modulator (AOM). For example, one can drive the AOM with the superposition of two sinusoidal waves with frequencies fc + fs and fc - fs. In this condition, only the two first-order diffracted beams with frequencies f0 + fc + fs and f0 + fc - fs are kept, while the other diffraction orders are discarded. Here, fc is the central frequency of the AOM and fs denotes half the frequency separation between the paired illuminating beams. For convenience, we name them the ‘upper’ beam and the ‘lower’ beam according to their frequencies. The divergence angle between these two beams is given by 2λfs/v, where λ is the optical wavelength and v is the acoustic velocity in the AOM. When the distance between the AOM and the sample, multiplied by this divergence angle, is smaller than the beam waist, the two beams can be considered spatially inseparable. To perform UOT, the focused ultrasonic transducer operates at fus = 2fs + fb. In this condition, the scattered field $E(t )$ arriving at a given camera pixel comprises six terms (Fig. 1), which can be described as

$$\begin{aligned}E(t)&= E_{\textrm{upper}}^\textrm{u}{e^{i2\pi ({{f_0} + {f_\textrm{c}} + {f_\textrm{s}}} )t}} + E_{\textrm{upper}}^\textrm{t}{e^{i2\pi ({{f_0} + {f_\textrm{c}} + 3{f_\textrm{s}} + {f_\textrm{b}}} )t}} + E_{\textrm{upper}}^\textrm{t}{e^{i2\pi ({{f_0} + {f_\textrm{c}} - {f_\textrm{s}} - {f_\textrm{b}}} )t}} \\ &+ E_{\textrm{lower}}^\textrm{u}{e^{i2\pi ({{f_0} + {f_\textrm{c}} - {f_\textrm{s}}} )t}} + E_{\textrm{lower}}^\textrm{t}{e^{i2\pi ({{f_0} + {f_\textrm{c}} + {f_\textrm{s}} + {f_\textrm{b}}} )t}} + E_{\textrm{lower}}^\textrm{t}{e^{i2\pi ({{f_0} + {f_\textrm{c}} - 3{f_\textrm{s}} - {f_\textrm{b}}} )t}}\end{aligned}$$

Here, E denotes the field and the exponential term that follows describes the fast-oscillating phase. The superscripts ‘u’ and ‘t’ represent untagged and tagged light, respectively, while the subscripts ‘upper’ and ‘lower’ denote their beam of origin. Given the relatively long exposure time of the camera (millisecond scale), the measured intensity $I(t )$ becomes

$$\begin{aligned}I(t )= {\langle|{E(t )} |^2\rangle} = &\textrm{Background} + 2A_{\textrm{upper}}^\textrm{u}A_{\textrm{lower}}^\textrm{t}\textrm{cos}({2\pi {f_\textrm{b}}t + {\varphi_1}} )\\&+ 2A_{\textrm{upper}}^\textrm{t}A_{\textrm{lower}}^\textrm{u}\textrm{cos}({2\pi {f_\textrm{b}}t + {\varphi_2}} )\end{aligned}$$
where A denotes the amplitude and $\langle\cdots\rangle $ denotes the time average over the camera exposure. ${\varphi _1}$ denotes the phase difference between $E_{\textrm{upper}}^\textrm{u}$ and $E_{\textrm{lower}}^\textrm{t}$, while ${\varphi _2}$ denotes the phase difference between $E_{\textrm{upper}}^\textrm{t}$ and $E_{\textrm{lower}}^\textrm{u}$. As we can see from Eq. (2), only two interference terms produce slowly varying holograms that are measurable. These two terms also carry information about the local optical properties at the ultrasonic focus. The other interference terms change rapidly with time and are thus averaged out as informationless background. The direct-current terms, which do not depend on time, also contribute to the background. Thus, the useful holograms that contribute to the UOT signal come from the interference between the ultrasonically tagged light from one illuminating beam and the untagged light from the other illuminating beam. The sum of these two interference terms is depicted pictorially in the upper right corner of Fig. 1, leading to a net phasor that represents the UOT signal in this scheme. Thanks to the nonzero fb introduced, the UOT signal oscillates slowly at frequency fb and can be extracted through the conventional four-step phase-shifting method [29]. In particular, a lock-in camera operating at fb can also measure the UOT signal with a single shot [37] when handling fast speckle decorrelation is required.
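A minimal single-pixel sketch of this principle is given below. The frequency values follow the experimental section (fs = 2.5 MHz, fb = 12.5 Hz, 0.5 ms exposure, 50 Hz framerate), while the field amplitudes and phases are arbitrary illustrative values; the sketch integrates Eq. (1) over four camera exposures and checks that four-step phase shifting recovers the beat amplitude predicted by Eq. (2).

```python
# Single-pixel sketch of Eqs. (1)-(2): only the two beat terms at f_b survive the
# camera's temporal integration, and four-step phase shifting recovers their
# combined amplitude. All field amplitudes and phases are illustrative values.
import numpy as np

rng = np.random.default_rng(0)

# Frequency offsets (Hz) relative to the common carrier f0 + fc, which cancels in |E|^2.
f_s, f_b = 2.5e6, 12.5
offsets = np.array([+f_s,            # untagged, upper beam
                    +3*f_s + f_b,    # tagged,   upper beam (up-shifted)
                    -f_s - f_b,      # tagged,   upper beam (down-shifted)
                    -f_s,            # untagged, lower beam
                    +f_s + f_b,      # tagged,   lower beam (up-shifted)
                    -3*f_s - f_b])   # tagged,   lower beam (down-shifted)

# Complex field of each term at this pixel (tagged contributions much weaker).
amps = np.array([1.0, 0.05, 0.05, 1.0, 0.05, 0.05])
fields = amps * np.exp(1j * rng.uniform(0, 2*np.pi, 6))

def camera_frame(t_start, exposure=0.5e-3, dt=2e-9):
    """Intensity averaged over one camera exposure (discrete-time approximation)."""
    t = t_start + np.arange(0.0, exposure, dt)
    E = (fields[:, None] * np.exp(1j * 2*np.pi * offsets[:, None] * t)).sum(axis=0)
    return np.mean(np.abs(E)**2)

# Four frames at a 50 Hz framerate, i.e. spaced by a quarter of the 12.5 Hz beat period.
I = np.array([camera_frame(k / (4*f_b)) for k in range(4)])
beat_measured = 0.5 * np.hypot(I[0] - I[2], I[3] - I[1])

# Amplitude of the slow intensity modulation predicted by Eq. (2).
beat_expected = 2 * np.abs(fields[4]*np.conj(fields[0]) + fields[3]*np.conj(fields[2]))
print(beat_measured, beat_expected)   # the two values should agree closely
```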


Fig. 1. Operational principle of coaxial interferometry with paired illumination. Optical fields with different frequencies are labeled with different colors in the top left corner. The resultant UOT signal is depicted in the top right corner. AOM: acousto-optic modulator; UT: ultrasonic transducer.


3. Simulation results to investigate the performance of coaxial interferometry

The feasibility of camera-based UOT with coaxial interferometry is first examined numerically. To match experimental conditions, the camera is assumed to have 2,560 × 2,160 pixels. Moreover, for the convenience of modeling shot noise, the measured intensity at each camera pixel is converted into photon numbers. The paired illuminating beams are assumed to have the same average intensity. The tagging efficiency is fixed so that, on the camera sensor, the total number of tagged photons is 0.5% of the total number of scattered photons. For each camera pixel, the field amplitudes of both tagged light and untagged light from both illuminating beams are drawn from Rayleigh distributions, while the phase values are drawn from uniform distributions. Figure 2(a) plots the determined UOT signal as a function of the average number of scattered photons per pixel. Each data point is averaged over 300 independent runs in which only shot noise is considered. As we can see from the figure, the UOT signal increases linearly with the total number of scattered photons, which is consistent with Eq. (2). Correspondingly, the standard deviation across these independent runs, which originates purely from the shot noise, is taken as the noise. Since the level of shot noise scales with the square root of the total number of photons collected, the signal-to-noise ratio (SNR) in Fig. 2(b) increases accordingly with the average number of scattered photons per pixel. It is worth emphasizing that this SNR only quantifies the measurement accuracy as limited by shot noise.
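A minimal Monte Carlo sketch of this procedure follows. The scaled-down sensor size, the reduced run count, the photon levels, and the summed beat-magnitude estimator are illustrative assumptions rather than the exact implementation used for Fig. 2.

```python
# Monte Carlo sketch of the shot-noise study in Fig. 2(a)-(b). Sensor size, run
# count, photon levels, and the summed-beat-magnitude estimator are assumptions
# made for illustration only.
import numpy as np

rng = np.random.default_rng(1)
N_PIXELS, TAG_FRACTION = 256 * 216, 0.005        # scaled-down sensor; 0.5% tagged photons

def speckle_field(mean_photons, n=N_PIXELS):
    """Circular-Gaussian speckle: Rayleigh amplitudes, uniform phases (units: sqrt(photons))."""
    amp = rng.rayleigh(np.sqrt(mean_photons / 2), n)
    return amp * np.exp(1j * rng.uniform(0, 2*np.pi, n))

def draw_speckle(photons_per_pixel):
    """Untagged and tagged speckle fields for the upper and lower illuminating beams."""
    n_untag = 0.5 * photons_per_pixel * (1 - TAG_FRACTION)
    n_tag = 0.5 * photons_per_pixel * TAG_FRACTION
    return (speckle_field(n_untag), speckle_field(n_untag),
            speckle_field(n_tag), speckle_field(n_tag))

def four_step_signal(fields):
    """One noisy four-step acquisition; returns the beat magnitude summed over pixels."""
    Eu_up, Eu_lo, Et_up, Et_lo = fields
    background = sum(np.abs(E)**2 for E in fields)
    beat = Et_lo * np.conj(Eu_up) + Eu_lo * np.conj(Et_up)         # per-pixel beat, Eq. (2)
    frames = [rng.poisson(np.maximum(background + 2*np.real(beat * np.exp(1j*k*np.pi/2)), 0))
              for k in range(4)]                                    # shot noise on each frame
    H = (frames[0] - frames[2]) + 1j*(frames[3] - frames[1])        # recovered complex beat
    return np.sum(np.abs(H)) / 4      # note: this simple estimator carries a shot-noise bias

# SNR case: the speckle realization is fixed; only the shot noise changes between runs.
for n in (100, 1000, 10000):
    fields = draw_speckle(n)
    runs = np.array([four_step_signal(fields) for _ in range(50)])
    print(f"{n:>6} photons/pixel: signal {runs.mean():.3g}, SNR {runs.mean()/runs.std():.1f}")
```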


Fig. 2. Numerical simulations of the camera-based UOT with coaxial interferometry. Each data point is obtained from 300 independent runs to minimize statistical errors. (a) The UOT signals as a function of the averaged number of scattered photons per pixel. (b) The signal-to-noise ratio (SNR) as a function of the averaged number of scattered photons per pixel. Only shot noise is considered. (c) The contrast-to-noise ratio (CNR) as a function of the averaged number of scattered photons per pixel. Identical independent Rayleigh distributions are assigned to the speckles in these independent runs.


We further investigate how fluctuations in the speckles affect the measured UOT signal. In this case, independent Rayleigh distributions with identical parameters are assigned to the speckles in each of the 300 independent runs, so that not only shot noise but also variations in the speckle realizations affect the measured data. Such a condition mirrors the determination of the contrast-to-noise ratio (CNR) in experiments, in which the fluctuations among different parts of the target with the same optical contrast are examined. Here, the contrast and the noise are taken as the mean and the standard deviation of these 300 independent runs, respectively. Figure 2(c) plots the CNR as a function of the average number of scattered photons per pixel. Different from the behavior of the SNR, the CNR remains flat, indicating that variations in the speckle patterns, rather than shot noise, dominate the total fluctuations in the measured data. We note that similar observations have been made in holography-based UOT with an additional reference beam. Before proceeding, we emphasize that the signal, the SNR, and the CNR presented here are calculated using all 2,560 × 2,160 pixels. Thus, these values reflect the collective behavior of all pixels rather than that of a single pixel.
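Assuming the draw_speckle and four_step_signal helpers from the sketch above are still in scope, the CNR case corresponds to redrawing the speckle realization in every run:

```python
# CNR case of Fig. 2(c): redraw the speckle realization in every run, so that
# speckle statistics rather than shot noise dominate the run-to-run spread.
for n in (100, 1000, 10000):
    runs = np.array([four_step_signal(draw_speckle(n)) for _ in range(50)])
    print(f"{n:>6} photons/pixel: CNR {runs.mean()/runs.std():.1f}")   # roughly flat in n
```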

4. Experimental demonstrations on camera-based UOT

Having described the principle and validated its feasibility numerically, we then built an experimental setup for UOT with coaxial interferometry, as depicted schematically in Fig. 3(a). A continuous-wave laser (MSL-FN-473-30 mW, CNI) with frequency f0 served as the light source, and a variable neutral density filter (NDC-50C-4M, Thorlabs) was used to control the power delivered to the system. To create the paired illumination, the superposition of two sinusoidal waves at frequencies of 47.5 MHz and 52.5 MHz was sampled at 500 MHz, resulting in a series of 199 digitized numbers. The produced digital signal was then fed to a function generator, which replayed it periodically as an analog signal. After amplification, the analog signal drove the AOM (AOM-505AF1, IntraAction) to diffract light. In this condition, two first-order diffracted beams with frequencies of f0 + 47.5 MHz and f0 + 52.5 MHz were generated with a relative divergence angle of about 0.0016 rad. In practice, these two beams have slightly different intensities, which can be balanced by adjusting the weights in the digital signal. Nonetheless, such an adjustment is unnecessary, as this discrepancy has almost no effect on the imaging performance. Light in the other diffraction orders was blocked with an iris diaphragm. The paired probing light illuminated the sample and was scattered. Experimentally, we found that, although the two beams are spatially inseparable, the speckle patterns they generate are quite different due to their different incident angles. A focused ultrasonic transducer (UT, ShanChao 5Z10SJ30DJ; 5 MHz central frequency; 3 cm focal length; 0.918 mm focal spot size) was employed to define the imaging position. The UT was driven by another amplified sinusoidal signal with frequency fUT, which was generated by the same function generator. After interacting with the sample, the scattered light was collected by a lens (AC254-050-A, Thorlabs) and measured by a scientific CMOS camera (pco.edge 5.5, PCO-TECH; 2,560 × 2,160 pixels; 0.5 ms exposure time).
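A minimal sketch of the two-tone drive waveform described above is given below; the equal tone weights and the use of exactly one 400 ns period (200 samples, versus the 199 digitized numbers quoted above) are simplifying assumptions.

```python
# Two-tone AOM drive waveform: 47.5 MHz + 52.5 MHz sampled at 500 MHz. One period
# of the composite signal (400 ns) spans 200 samples; the paper quotes 199 digitized
# numbers, a difference in endpoint handling that does not matter for this sketch.
import numpy as np

f_sample = 500e6                 # sampling rate of the digitized waveform (Hz)
f1, f2 = 47.5e6, 52.5e6          # the two AOM drive tones (Hz)
n_samples = 200                  # one full period of the 2.5 MHz beat envelope
t = np.arange(n_samples) / f_sample
waveform = 0.5 * (np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t))   # equal weights assumed
# 'waveform' would then be loaded into the function generator, replayed periodically,
# amplified, and used to drive the AOM so that both first-order beams are generated.
```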


Fig. 3. Schematics of the coaxial interferometry for camera-based UOT. (a) Experimental setup. NDF: neutral density filter; AOM: acousto-optic modulator; ID: iris diaphragm; UT: ultrasonic transducer; P: phantom; L: lens; sCMOS: scientific CMOS camera; PA: power amplifier; FG: function generator. (b) Illustration of the sample, which consists of two pieces of tissue-mimicking phantoms separated by 55 mm. (c) Illustration of the absorptive target, which has a width of 3.2 mm.


The sample we used consists of two pieces of tissue-mimicking phantom separated by 55 mm. The thicknesses of P1 and P2 are 2 mm and 3 mm, respectively, as illustrated in Fig. 3(b). These phantoms were made by mixing intralipid, gelatin, and water with a mass ratio of 1.6:10:88.4, following the recipe described in Ref. [38]. The scattering coefficient and the reduced scattering coefficient at 473 nm were measured to be about 16 mm⁻¹ and 1.6 mm⁻¹, respectively. As shown in Fig. 3(c), a rectangular area (3.2 mm wide) was marked with a black pen on the inner side of P2, serving as the absorptive target to be imaged.

The four-step phase-shifting method was employed to obtain a one-dimensional image of the absorptive target. In particular, we set fUT = 5 MHz + 12.5 Hz so that the beat signal oscillates at 12.5 Hz. Therefore, a camera framerate of 50 Hz allows extraction of the UOT signal through the four-step phase-shifting method. Figure 4 plots the UOT signal (red asterisks) obtained with a scanning step size of 0.25 mm, faithfully reproducing the one-dimensional profile of the absorptive target. The lateral resolution of the imaging system ${r_{\textrm{FWHM}}}$, defined as the full width at half maximum of the line spread function, can be estimated by fitting the experimental data to $\textrm{y}(x )= a\textrm{erf}\left[ {2\sqrt {\textrm{ln}2} ({x - {x_1}} )/{r_{\textrm{FWHM}}}} \right] - b\textrm{erf}\left[ {2\sqrt {\textrm{ln}2} ({x - {x_2}} )/{r_{\textrm{FWHM}}}} \right]$ [31,32]. In this formula, a and b represent the heights of the flat shoulders on each side, while ${x_1}$ and ${x_2}$ describe the starting and end points of the dip. $\textrm{erf}(x )$ is the error function. Numerically, ${r_{\textrm{FWHM}}}$ was estimated to be about 956 µm, which is consistent with the size of the ultrasonic focus (918 µm). As a control, when the ultrasound is off, the absorptive target is barely visible in a direct point-by-point transmission image. These results confirm the validity of camera-based UOT with coaxial interferometry. To further enable two-dimensional imaging, high axial resolution is also required for UOT. Such a capability has been realized by employing pulsed ultrasound with single-pixel detection [7,8,18,19,22]. In contrast, cameras cannot respond fast enough to resolve the position of ultrasonic pulses based on the time of flight. Therefore, both pulsed optical illumination and pulsed ultrasound are required for camera-based UOT [14,26,39]. This configuration can be conveniently realized by programming the function generators that drive the AOM and the ultrasonic transducer.
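A minimal sketch of this resolution estimation is given below; it fits the double-error-function model to a synthetic one-dimensional scan, and both the data points and the added constant offset are illustrative assumptions rather than the measured profile.

```python
# Sketch of estimating r_FWHM by fitting a 1D scan to the double-error-function
# model quoted above. The scan data are synthetic placeholder values (a ~3.2 mm
# wide dip plus noise), and the constant offset c is added here to accommodate
# the normalized signal; neither reproduces the actual measurement.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def model(x, a, b, x1, x2, r_fwhm, c):
    """y(x) = a*erf[2*sqrt(ln2)*(x-x1)/r_FWHM] - b*erf[2*sqrt(ln2)*(x-x2)/r_FWHM] + c"""
    k = 2.0 * np.sqrt(np.log(2.0)) / r_fwhm
    return a * erf(k * (x - x1)) - b * erf(k * (x - x2)) + c

# Synthetic scan: positions in mm, 0.25 mm steps, dip between x = 2.0 and 5.2 mm.
rng = np.random.default_rng(2)
x = np.arange(0.0, 8.0, 0.25)
y = model(x, -0.5, -0.5, 2.0, 5.2, 0.95, 1.0) + rng.normal(0, 0.02, x.size)

popt, _ = curve_fit(model, x, y, p0=[-0.5, -0.5, 1.5, 5.5, 1.0, 1.0])
print(f"estimated r_FWHM = {popt[4] * 1000:.0f} um")   # ~950 um for this synthetic scan
```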


Fig. 4. Experimental results of performing one-dimensional imaging for camera-based UOT with coaxial interferometry. The grey squares and the red asterisks represent experimental data obtained through the direct point-by-point transmission method and the four-step phase-shifting method. The fitting curve is also provided. All signals were normalized to their maximum values.


5. Discussion and conclusions

In this work, one-dimensional imaging is performed as a demonstration of principle. Nonetheless, the coaxial scheme for camera-based UOT can be directly extended to two- or even three-dimensional imaging. To improve the axial resolution, one can encode the ultrasound, sweep frequencies, or modulate the duty cycles of both light and ultrasound [18,26,40]. Given the robustness brought about by coaxial interferometry, the enhanced modulation depth from parallel detection, and the potentially fast response with lock-in detection [37], this system is promising for future in vivo studies. It is also worth noting that, while holographic methods can employ a strong reference beam to elevate the weak signal and approach shot-noise-limited detection in UOT, the coaxial interferometry developed here does not have this luxury. This is because the illuminating intensity on the surface of a biological sample is capped by safety limits. Nonetheless, in conditions where phase disturbance is dominant, the developed coaxial scheme can be beneficial.

In conclusion, we demonstrated coaxial interferometry for camera-based UOT, enabled by paired illumination with slightly different optical frequencies. Conventional phase-stepping methods that exploit a beat signal are compatible with the coaxial scheme for determining the UOT signal. We further performed one-dimensional imaging of an absorptive target buried inside tissue-mimicking phantoms to validate our claims. The coaxial scheme with paired illumination is insensitive to environmental disturbances, such as the phase drifting caused by air turbulence. As a result, coaxial interferometry gains robustness and reduces system complexity for camera-based UOT, thus holding promise for future applications.

Funding

National Key Research and Development Program of China (2019YFA0706301); National Natural Science Foundation of China (12004446, 92150102); Fundamental and Applied Basic Research Project of Guangzhou (202102020603); Open Fund of State Key Laboratory of Information Photonics and Optical Communications (IPOC2020A003).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. V. Wang, “Ultrasound-mediated biophotonic imaging: a review of acousto-optical tomography and photo-acoustic tomography,” Dis. Markers 19(2-3), 123–138 (2004). [CrossRef]  

2. D. S. Elson, R. Li, C. Dunsby, R. Eckersley, and M. X. Tang, “Ultrasound-mediated optical tomography: a review of current methods,” Interface Focus 1(4), 632–648 (2011). [CrossRef]  

3. S. G. Resink, “State-of-the art of acousto-optic sensing and imaging of turbid media,” J. Biomed. Opt. 17(4), 040901 (2012). [CrossRef]  

4. J. Gunther and S. Andersson-Engels, “Review of current methods of acousto-optical tomography for biomedical applications,” Front. Optoelectron. 10(3), 211–238 (2017). [CrossRef]  

5. J. Li, S. Sakadžić, G. Ku, and L. V. Wang, “Transmission- and side-detection configurations in ultrasound-modulated optical tomography of thick biological tissues,” Appl. Opt. 42(19), 4088–4094 (2003). [CrossRef]  

6. G. Rousseau, A. Blouin, and J.-P. Monchalin, “Ultrasound-modulated optical imaging using a powerful long pulse laser,” Opt. Express 16(17), 12577–12590 (2008). [CrossRef]  

7. G. Rousseau, A. Blouin, and J.-P. Monchalin, “Ultrasound-modulated optical imaging using a high-power pulsed laser and a double-pass confocal Fabry–Perot interferometer,” Opt. Lett. 34(21), 3445–3447 (2009). [CrossRef]  

8. P. Lai, X. Xu, and L. Wang, “Ultrasound-modulated optical tomography at new depth,” J. Biomed. Opt. 17(6), 066006 (2012). [CrossRef]  

9. Z. Hua, K. Zhu, Y. Wang, Z. Zeng, and Y. Tan, “Contrast-enhanced ultrasound-modulated laser feedback imaging with microbubbles,” Opt. Lasers Eng. 157, 107134 (2022). [CrossRef]  

10. Y. Liu, R. Cao, J. Xu, H. Ruan, and C. Yang, “Imaging through highly scattering human skulls with ultrasound-modulated optical tomography,” Opt. Lett. 45(11), 2973–2976 (2020). [CrossRef]  

11. D. Doktofsky, M. Rosenfeld, and O. Katz, “Acousto optic imaging beyond the acoustic diffraction limit using speckle decorrelation,” Commun. Phys. 3(1), 5 (2020). [CrossRef]  

12. J. O. Schenk and M. E. Brezinski, “Ultrasound induced improvement in optical coherence tomography (OCT) resolution,” Proc. Natl. Acad. Sci. U. S. A. 99(15), 9761–9764 (2002). [CrossRef]  

13. M. Jang, H. Ko, J. H. Hong, W. K. Lee, J.-S. Lee, and W. Choi, “Deep tissue space-gated microscopy via acousto-optic interaction,” Nat. Commun. 11(1), 710 (2020). [CrossRef]  

14. H. Ruan, Y. Liu, J. Xu, Y. Huang, and C. Yang, “Fluorescence imaging through dynamic scattering media with speckle-encoded ultrasound-modulated light correlation,” Nat. Photonics 14(8), 511–516 (2020). [CrossRef]  

15. M. Kempe, M. Larionov, D. Zaslavsky, and A. Z. Genack, “Acousto-optic tomography with multiply scattered light,” J. Opt. Soc. Am. A 14(5), 1151–1158 (1997). [CrossRef]  

16. L. Wang, S. L. Jacques, and X. Zhao, “Continuous-wave ultrasonic modulation of scattered laser light to image object in turbid media,” Opt. Lett. 20(6), 629–631 (1995). [CrossRef]  

17. A. Lev and B. Sfez, “In vivo demonstration of the ultrasound-modulated light technique,” J. Opt. Soc. Am. A 20(12), 2347–2354 (2003). [CrossRef]  

18. A. Levi, S. Monin, E. Hahamovich, A. Lev, B. G. Sfez, and A. Rosenthal, “Increased SNR in acousto-optic imaging via coded ultrasound transmission,” Opt. Lett. 45(10), 2858–2861 (2020). [CrossRef]  

19. S. Sakadžić and L. V. Wang, “High-resolution ultrasound-modulated optical tomography in biological tissues,” Opt. Lett. 29(23), 2770–2772 (2004). [CrossRef]  

20. Y. Li, H. Zhang, C. Kim, K. H. Wagner, P. Hemmer, and L. V. Wang, “Pulsed ultrasound-modulated optical tomography using spectral-hole burning as a narrowband spectral filter,” Appl. Phys. Lett. 93(1), 011111 (2008). [CrossRef]  

21. K. Zhu, B. Zhou, Y. Lu, P. Lai, S. Zhang, and Y. Tan, “Ultrasound-modulated laser feedback tomography in the reflective mode,” Opt. Lett. 44(22), 5414–5417 (2019). [CrossRef]  

22. B. Jayet, J. P. Huignard, and F. Ramaz, “Optical phase conjugation in Nd:YVO4 for acousto-optic detection in scattering media,” Opt. Lett. 38(8), 1256–1258 (2013). [CrossRef]  

23. S. Lévêque, A. C. Boccara, M. Lebec, and H. Saint-Jalmes, “Ultrasonic tagging of photon paths in scattering media: parallel speckle modulation processing,” Opt. Lett. 24(3), 181–183 (1999). [CrossRef]  

24. J. Li and L. V. Wang, “Methods for parallel-detection-based ultrasound-modulated optical tomography,” Appl. Opt. 41(10), 2079–2084 (2002). [CrossRef]  

25. M. Gross, P. Goy, and M. Al-Koussa, “Shot-noise detection of ultrasound-tagged photons in ultrasound-modulated optical imaging,” Opt. Lett. 28(24), 2482–2484 (2003). [CrossRef]  

26. A. Hussain, W. Steenbergen, and I. M. Vellekoop, “Imaging blood flow inside highly scattering media using ultrasound modulated optical tomography,” J. Biophotonics 11(1), e201700013 (2018). [CrossRef]  

27. J. Xu, A. K. Jahromi, J. Brake, J. E. Robinson, and C. Yang, “Interferometric speckle visibility spectroscopy (ISVS) for human cerebral blood flow monitoring,” APL Photonics 5(12), 126102 (2020). [CrossRef]  

28. J. Xu, A. K. Jahromi, and C. Yang, “Diffusing wave spectroscopy: A unified treatment on temporal sampling and speckle ensemble methods,” APL Photonics 6(1), 016105 (2021). [CrossRef]  

29. F. Le Clerc, L. Collot, and M. Gross, “Numerical heterodyne holography with two-dimensional photodetector arrays,” Opt. Lett. 25(10), 716–718 (2000). [CrossRef]  

30. M. Gross, “Selection of the tagged photons by off axis heterodyne holography in ultrasound-modulated optical tomography,” Appl. Opt. 56(7), 1846–1854 (2017). [CrossRef]  

31. Y. Liu, Y. Shen, C. Ma, J. Shi, and L. V. Wang, “Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media,” Appl. Phys. Lett. 108(23), 231106 (2016). [CrossRef]  

32. D. Yuan, J. Luo, D. Wu, R. Zhang, P. Lai, Z. Li, and Y. Shen, “Single-shot ultrasound-modulated optical tomography with enhanced speckle contrast,” Opt. Lett. 46(13), 3095–3098 (2021). [CrossRef]  

33. T. W. Murray, L. Sui, G. Maguluri, R. A. Roy, A. Nieva, F. Blonigen, and C. A. DiMarzio, “Detection of ultrasound-modulated photons in diffuse media using the photorefractive effect,” Opt. Lett. 29(21), 2509–2511 (2004). [CrossRef]  

34. M. Lesaffre, F. Jean, F. Ramaz, A. C. Boccara, M. Gross, P. Delaye, and G. Roosen, “In situ monitoring of the photorefractive response time in a self-adaptive wavefront holography setup developed for acousto-optic imaging,” Opt. Express 15(3), 1030–1042 (2007). [CrossRef]  

35. J. Li, G. Ku, and L. V. Wang, “Ultrasound-modulated optical tomography of biological tissue by use of contrast of laser speckles,” Appl. Opt. 41(28), 6030–6035 (2002). [CrossRef]  

36. S. G. Resink and W. Steenbergen, “Tandem-pulsed acousto-optics: an analytical framework of modulated high-contrast speckle patterns,” Phys. Med. Biol. 60(11), 4371–4382 (2015). [CrossRef]  

37. Y. Liu, C. Ma, Y. Shen, and L. V. Wang, “Bit-efficient, sub-millisecond wavefront measurement using a lock-in camera for time-reversal based optical focusing inside scattering media,” Opt. Lett. 41(7), 1321 (2016). [CrossRef]  

38. P. Lai, X. Xu, and L. V. Wang, “Dependence of optical scattering from Intralipid in gelatin-gel based tissue-mimicking phantoms on mixing temperature and time,” J. Biomed. Opt. 19(3), 035002 (2014). [CrossRef]  

39. L. J. Nowak and W. Steenbergen, “Reflection-mode acousto-optic imaging using a one-dimensional ultrasound array with electronically scanned focus,” J. Biomed. Opt. 25(09), 096002 (2020). [CrossRef]  

40. G. Yao, S. Jiao, and L. V. Wang, “Frequency-swept ultrasound-modulated optical tomography in biological tissue by use of parallel detection,” Opt. Lett. 25(10), 734–736 (2000). [CrossRef]  
