
Ultrafast acousto-optic imaging with ultrasonic plane waves


Abstract

Due to multiple light scattering inside biological tissues, deep non-invasive optical medical imaging is very challenging. Acousto-optic imaging is a technique coupling ultrasound and light that allows recovering optical contrast at depths of a few centimeters with a millimeter resolution. Recent acousto-optic imaging schemes use short focused ultrasound pulses, often averaged over several hundreds or thousands of pulses. As the pulsing rate of commercial probes is limited to about a few ultrasound cycles every 100 μs, acquiring an acousto-optic image usually takes several tens of seconds because of the large number of acoustic excitations required. We propose here a new acousto-optic imaging technique based on the use of ultrasound plane waves instead of focused ones, which drastically increases the imaging rate.

© 2016 Optical Society of America

1. Introduction

Biological tissues are very strong light scattering media, so deep optical imaging is impossible unless invasive techniques are used. Many techniques have been developed recently to overcome these difficulties, such as diffuse optical tomography (DOT) [1], but the signal extraction is not straightforward since it often requires solving a light transport inverse problem. Acousto-optic (AO) imaging [2, 3] is a multi-wave imaging technique [4] that couples ultrasound (US) and light in order to recover an optical contrast deep inside biological tissues with an ultrasonic resolution of a few millimeters or less. AO imaging can be coupled with a commercial US scanner [5] and recently showed promising first results concerning potential medical applications, such as ex vivo results on real pathologies [6], spectroscopic sensing of blood oxygenation [7] or attempts to beat speckle decorrelation in vivo [8, 9]. However, all these experiments suffer from the low imaging rate of AO imaging, due to two main limitations.

The first limitation concerns the waiting time between two US pulses. With commercial US scanners, it is usually necessary to let the probe cool down between two pulses in order not to damage the transducers. This cooling delay depends on the type of transducers but, for MHz probes, it usually limits the firing rate to one pulse of a few cycles (2 or 3) every 100 μs. Using more resistant probes may allow decreasing the waiting time down to the ultimate physical limit of the technique, namely the US time of flight across the sample at 1.5 mm · μs−1, still of the order of several tens of μs. Classical AO US sequences use focused US pulses and need about 100 lines, i.e. as many pulses, to get an image, so that the theoretical framerate using standard US scanners is of the order of 100 fps.

Though this theoretical limit is acceptable for video-rate display, and thus for most in vivo applications, the achievable rate is dramatically reduced by the low amount of useful signal in AO imaging and the resulting need for extensive averaging. This second limitation implies that images obtained a few centimeters deep along the light beam direction (z-axis) require several thousand averaged pulses for each line [5, 6], so that acquiring an image takes at least a few tens of seconds.
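As an order-of-magnitude illustration, combining the 100 μs firing period with the focused-US sequence parameters used later in section 5 (188 lines, 2000 averages per line):

$$100\ \text{lines} \times 100\ \mu\text{s} = 10\ \text{ms} \;\Rightarrow\; \sim 100\ \text{fps}, \qquad 188\ \text{lines} \times 2000\ \text{averages} \times 100\ \mu\text{s} \approx 38\ \text{s per image}.$$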

It appears here that current AO imaging rates are not compatible with real-time in vivo experiments. Since the waiting delay between two pulses can hardly be reduced, it is absolutely necessary to reduce the number of US pulses before any clinical application can be considered. Two approaches, which can easily be combined, can be considered in order to achieve that goal. The first one consists in improving the detection scheme so that the signal-to-noise ratio (SNR) is sufficient to reduce the number of averaged pulses. The second one consists in changing the emitted US pattern in order to reduce the number of US pulses needed to obtain an image. As an example, in [10] Lai et al. used an optimized photorefractive-based detection relying on, among other things, a very high numerical aperture fiber bundle and long US pulses (from 10 to 100 cycles) in order to decrease the number of averages down to 64, at the cost of the longitudinal resolution (along the US propagation direction). Though this method reaches unprecedented depths, its main drawback is the use of a single-element transducer at a relatively low repetition rate (100 Hz).

In this paper, we propose a new pattern based on transducer arrays and US plane waves that reduces the number of pulses necessary to create an image while maintaining the firing rate at the limit of the probe heat damage threshold. Coupled with a tomographic reconstruction, this pattern needs about 50 times fewer US pulses at equal SNR. The idea was first suggested in [11] for monochromatic waves, and the experimental demonstration of tomographic techniques was performed in [12] with single-element transducers. Here, the novelty lies in the use of a transducer array at a fixed position instead of a translating single transducer in order to recover a projection in one pulse.

Similarly to the ultrafast acousto-electric imaging technique [13], it will be shown below that high imaging framerates can be reached at the cost of an image distortion and a resulting loss of resolution in the lateral direction, the amplitude of which will be quantified.

2. Theoretical considerations

Let us consider a static sample, with negligible Brownian motion, illuminated by a monochromatic optical wave. The irradiance (W·m−2) inside the scattering medium, in a plane perpendicular to the optical beam axis (z-axis) at a fixed depth z, is Ψ(x, y). The aim of AO imaging is to recover Ψ(x, y). The light is modulated by an arbitrary US wave PUS(r, t) propagating in this plane, with r = (x, y):

$$P_{US}(\mathbf{r},t) = P_0\!\left(t - \frac{\mathbf{K}_{US}\cdot\mathbf{r}}{\omega_{US}}\right)\sin(\omega_{US} t) \qquad (1)$$
where we considered the retarded potential. P0 represents the temporal shape of the US pulse. In practice, the US wave is geometrically focused along the z-direction (in the elevation direction) so that we will consider that it propagates within a 2D plane (x, y) at a fixed position z. This ultrasonic pressure field induces a phase modulation of the light along one path l that can be expressed as [14]:
$$\Delta\varphi_{US}(t,l) = s_{l,0} + \delta s_l \sin(\omega_{US} t) \qquad (2)$$
where sl,0 is the static optical length of the path l and δsl is the modulation amplitude. This last coefficient is proportional to the acoustical pressure [15]:
$$\delta s_l = \alpha\, P_0\!\left(t - \frac{\mathbf{K}_{US}\cdot\mathbf{r}}{\omega_{US}}\right) \qquad (3)$$
where both the scatterers' displacement and the refractive index modulation are taken into account in the efficiency coefficient α and are considered, in a first approximation, as uncorrelated effects. α is usually small enough so that $\delta s_l \ll s_{l,0}$ and $2\pi\delta s_l/\lambda \ll 1$.

Following the derivation of Kempe et al. [16], it can be shown that the autocorrelation function of the electric field inside the static scattering medium at time t, measured on a detector outside the medium at the position r′, is:

$$G(\mathbf{r}',t,\tau) = A_0\,\beta \iint \Omega(\mathbf{r},\mathbf{r}')\,\Psi(\mathbf{r})\,P_0^2\!\left(t - \frac{\mathbf{K}_{US}\cdot\mathbf{r}}{\omega_{US}}\right)\left(1 - \cos(\omega_{US}\tau)\right) dx\,dy \qquad (4)$$
where we only considered the first order in the US contribution. Here A0 stands for the light power flux collected on the detector at r′ in the absence of US, Ψ(r) is the light irradiance distribution inside the scattering sample in the (x, y) imaged plane and Ω(r, r′) is the probability of having a tagged photon at r reaching the detector at r′. β is a proportionality coefficient that depends on α. According to the Wiener-Khintchine theorem, the previous equation implies that the power spectral density of the scattered field contains two main components: the untagged photons at the light frequency and the tagged photons shifted by the US frequency. In the subsequent paragraphs, only the tagged photons intensity will be considered. In order to simplify the following expressions, we will get rid of Ω(r, r′) without changing the physics of the phenomenon by introducing the AO image to be recovered I(r, r′) = Ω(r, r′)Ψ(r). Indeed, acousto-optic imaging actually images a section of the scattering envelope, the shape of which depends on the position of the detector. Therefore, it measures Ω(r, r′)Ψ(r) instead of Ψ(r) only:
$$I_T = \beta \iint I(\mathbf{r},\mathbf{r}')\,P_0^2\!\left(t - \frac{\mathbf{K}_{US}\cdot\mathbf{r}}{\omega_{US}}\right) dx\,dy \qquad (5)$$

The r′ variable will be dropped in the subsequent paragraphs by considering that the detector is at a fixed position, which defines a given shape of the scattering envelope. I(r) is thus the measured light irradiance, which implicitly depends on the real light irradiance inside the medium, the position of the detector and the probability that a photon at r reaches it. This formula simply expresses the fact that photons are tagged over the entire US pulse volume and indistinctly counted on the detector.

Let us now consider that the US pattern we use consists of a plane wave generated by a commercial scanner and propagating within the sample with an angle θ, see Fig. 1 and [17]. As photons are tagged along the entire US volume, the single detector indistinctly counts all tagged photons coming from the plane wavefront. Here, we use a wavefront-adaptive detection based on a photorefractive crystal (PRC) in a negative gain configuration [14, 18] with Sn2P2S6:Te crystals (SPS-Te) [19, 20] in order to record a voltage proportional to the tagged photon intensity on the photodiode. When interfering in a PRC, a signal beam (the multiply scattered light) and a reference plane wave engrave a dynamic hologram in the form of a refractive index grating, i.e. a thick phase hologram. This dynamic hologram is then self-read by these very same beams as they diffract on it during their propagation through the crystal. This phenomenon is called two-wave mixing (TWM) and gives a signal proportional to the light irradiance inside the medium integrated along the US wavefront:

$$s(t,\theta) \propto \iint I(x,y)\,P_0^2\!\left(t - \frac{x\cos\theta + y\sin\theta}{V_{US}}\right) dx\,dy \qquad (6)$$
where VUS is the sound velocity inside the medium. It is possible to write Eq. (6) as
$$s(t,\theta) \propto \iint I(x,y)\left[P_0^2(t) * \delta\!\left(t - \frac{x\cos\theta + y\sin\theta}{V_{US}}\right)\right] dx\,dy \qquad (7)$$
where δ represents the Dirac distribution and * stands for the convolution product. Apart from the term $P_0^2(t)$, this is very close to the Radon transform of the light irradiance distribution inside the medium. This problem is similar to X-ray computed tomography (CT), and its solution is known and widely studied in this field [21].
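As an illustration of Eq. (7), the following sketch (Python/NumPy; a toy numerical model rather than the authors' acquisition code, with an arbitrary Gaussian light envelope, inclusion position and pulse envelope) builds the plane-wave AO signal s(t, θ) by summing a synthetic irradiance map along the tilted wavefront and convolving the result with the squared pulse envelope P0²(t):

```python
import numpy as np

# Synthetic irradiance map I(x, y): Gaussian diffuse-light envelope with an absorbing inclusion.
# All sizes and positions are arbitrary illustration values.
n, dx = 256, 0.1                                  # grid points and pixel size (mm)
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
I_map = np.exp(-(X**2 + Y**2) / (2 * 8.0**2))     # diffuse envelope, 8 mm waist (assumed)
I_map[np.hypot(X - 2.0, Y + 3.0) < 1.25] *= 0.3   # 2.5 mm diameter absorbing inclusion

v_us, f_us = 1.5, 6.0                             # sound velocity (mm/us) and US frequency (MHz)
dt = dx / v_us                                    # time step matched to the grid (us)
t = np.arange(0.0, 25.0, dt)                      # acquisition time axis (us)
t_pulse = np.arange(0.0, 0.5, dt)
p0_sq = np.where(t_pulse < 2.0 / f_us, 1.0, 0.0)  # squared envelope of a 2-cycle burst (toy model)

def plane_wave_signal(theta_deg):
    """s(t, theta) of Eq. (7): line integrals of I(x, y) along the tilted wavefront,
    convolved with the squared pulse envelope P0^2(t)."""
    th = np.deg2rad(theta_deg)
    tau = (X * np.cos(th) + Y * np.sin(th)) / v_us   # wavefront arrival time at each pixel
    tau -= tau.min()                                 # t = 0 when the wave enters the grid
    proj, _ = np.histogram(tau, bins=np.append(t, t[-1] + dt), weights=I_map)
    return np.convolve(proj, p0_sq, mode="same")

signals = np.stack([plane_wave_signal(a) for a in range(-20, 21)])  # 41 angles, 1 degree steps
```

Each row of `signals` is one projection; the set of 41 projections at ±20° is the kind of data set that the tomographic reconstruction described in section 4 operates on.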

Fig. 1 Schematic of the US plane wave propagating inside an illuminated sample with an angle θ. The US are focused along the z-direction so that they propagate in a plane perpendicular to light propagation direction. The figure represents this section defined by the US probe position. The three plain blue lines represent the ultrasonic pulse propagating at VUS. At time t, the wave is at the position of the dashed blue line. A voxel (black square) is located through its coordinates (x, y). Photons are indistinctly tagged along the entire wavefront so that the signal recorded on a single detector is the integrated light intensity along the dashed line.

In order to proceed further with the study of the AO signal obtained with US plane waves, we will use the projection-slice theorem [22], which states that the temporal Fourier transform (FT) of one projection at an angle θ is a slice of the spatial 2D Fourier transform (ℱ𝒯, to be distinguished from the temporal FT) of the unknown image I(x, y) along the straight line at the same angle, see Figs. 2(a) and 2(b):

$$\tilde{s}(\nu,\theta) = FT\{s(t,\theta)\} = \tilde{P}(\nu) \iint I(x,y)\exp\!\left(-2i\pi\nu\,\frac{x\cos\theta + y\sin\theta}{V_{US}}\right) dx\,dy \qquad (8)$$
where $\tilde{P}(\nu) = FT\{P_0^2(t)\}$. Apart from this term derived from the frequency response of the US excitation, one recognizes here the spatial ℱ𝒯 of the irradiance along the line defined by the couple $(k_{x,\theta}, k_{y,\theta})$ such that $k_{x,\theta} = \frac{\nu}{V_{US}}\cos\theta$ and $k_{y,\theta} = \frac{\nu}{V_{US}}\sin\theta$. This is a slice of the spatial 2D ℱ𝒯 $\tilde{I}(k_x, k_y)$ of the image inside the medium in the direction θ. It is therefore possible to reconstruct the latter by recording different projections at different angles and iteratively filling the 2D Fourier plane back. The image is eventually recovered thanks to an inverse 2D ℱ𝒯, noted ℱ𝒯−1 in the following, see Fig. 2(b).

Fig. 2 Illustration of the projection-slice theorem and the limitation of the Fourier space (a) The temporal FT of the signal recorded on the photodiode is taken and represents a slice at θ of spatial 2D ℱ𝒯 of the light irradiance. (b) The slice is placed back in its rightful place in the spatial 2D Fourier space and it is done for all angles. The inverse 2D ℱ𝒯 of the reconstructed Fourier space gives the expected image. (c) The receiving and emitting bandwidths limit the highest accessible frequency modulus and the probe limits the angular range so that the accessible region is limited to a segment of the Fourier plane (darker gray portion).

In order to take into account the electronic system beyond the photodiode that may alter the tagged photon signal s(t, θ), let us introduce the temporal bandwidth $\tilde{R}(\nu)$ of the detection scheme:

$$\tilde{s}(k,\theta) = \tilde{R}(k)\,\tilde{P}(k)\,\tilde{I}(k_{x,\theta}, k_{y,\theta}) \qquad (9)$$
where $k = \frac{\nu}{V_{US}} = \sqrt{k_x^2 + k_y^2}$, so that the reconstructed image $\mathcal{I}(x, y)$ of the light irradiance within the scattering sample is obtained by summing over all angles:
$$\mathcal{I}(x,y) = \mathcal{FT}^{-1}\Big\{\sum_\theta \tilde{s}(k,\theta)\Big\} = \mathcal{FT}^{-1}\{\tilde{R}(k)\tilde{P}(k)\} * \mathcal{FT}^{-1}\Big\{\sum_\theta \tilde{I}(k_{x,\theta}, k_{y,\theta})\Big\} \qquad (10)$$
The quality of the reconstructed image is then influenced by:
  • The receiving bandwidth of the optical detection scheme. In order not to degrade the image, it is important that the system can follow the propagation of the plane wave pulse. Ultimately, this bandwidth is limited by the sampling rate of the acquisition system.
  • The shape of the US pulse. The broader it is, the lower the resolution is.
  • The number of angles used for the reconstruction. This is the most critical parameter: the 2D Fourier plane must be sampled densely enough to ensure images of good quality. The influence of the angular extent is studied in the next section.

3. Influence of the angular exploration

In an ideal case, angles should be scanned over π with carefully chosen steps. As the whole angular domain is not accessible, the resulting image will necessarily be degraded. Elements on a US commercial flat probe have a typical directivity of ±20° so that the angular domain is limited to 40°. The goal of this section is to derive the point spread function (PSF) of the system as a function of different angular configurations.

Let us consider that the 2D Fourier plane is now truncated by a limited angle exploration range ±θm. It means that all angles outside the angular domain [−θm, θm] are not physically accessible through the system. The image obtained is:

$$\mathcal{I}(x,y) = \mathcal{FT}^{-1}\Big\{\sum_{\theta\,\in\,[-\theta_m,\,\theta_m]} \tilde{s}(\nu,\theta)\Big\} \qquad (11)$$

In order to assess the PSF of the system, let us consider a point object δ(x, y). The receiving and emitting bandwidths $\tilde{R}(k)$ and $\tilde{P}(k)$ limit the maximum accessible frequency modulus, so that the accessible portion of the Fourier plane is a sector of finite radius km between ±θm, see Fig. 2(c). As $\tilde{R}(k)$ and $\tilde{P}(k)$ result in a homogeneous broadening in both the x and y directions, they will be left aside in the following and taken into account, in a first approximation, through the finite radius km of the Fourier spectrum. The PSF can then be expressed as:

$$PSF(x,y) = \mathcal{FT}^{-1}\Big\{\sum_{\theta\,\in\,[-\theta_m,\,\theta_m]} \tilde{\delta}(k_{x,\theta}, k_{y,\theta})\Big\} \qquad (12)$$
where δ̃(kx, ky) is the 2D Fourier transform of a Dirac distribution and is equal to 1 all over the Fourier plane. In order to find an analytical solution, we will simplify the problem by replacing the discrete sum over θ with a truncation of the Fourier space. The problem is now reduced to finding the inverse 2D Fourier transform of a cone between ±θm.
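This truncation picture can be checked numerically with a few lines (a sketch, not the authors' code; the grid size and the value of km are arbitrary choices). Since each projection covers the full line through the origin (both positive and negative frequencies), the accessible region is the symmetric double cone of half-angle θm, and the PSF is simply the inverse 2D FFT of the corresponding binary mask:

```python
import numpy as np

n, dx = 512, 0.05                              # grid points and pixel size (mm), arbitrary
k = np.fft.fftshift(np.fft.fftfreq(n, d=dx))   # spatial frequency axis (mm^-1)
KX, KY = np.meshgrid(k, k, indexing="ij")
k_mod = np.hypot(KX, KY)
theta = np.arctan2(KY, KX)                     # angle of each Fourier-plane sample

k_m = 0.67                                     # e.g. 1 MHz / 1.5 mm/us (assumed value)
theta_m = np.deg2rad(20.0)

# Each projection fills a full line through the origin (k runs from -k_m to +k_m),
# so the accessible region is the symmetric double cone of half-angle theta_m.
cone = (np.abs(theta) <= theta_m) | (np.pi - np.abs(theta) <= theta_m)
mask = cone & (k_mod <= k_m)

# The 2D FT of a point object delta(x, y) equals 1 everywhere, so the PSF is simply
# the inverse FFT of the binary mask itself.
psf = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(mask.astype(float)))))
```

The resulting map shows the laterally elongated spot discussed below.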

Let us switch to polar coordinates, where the coordinates of the real space are r and ϕ and the corresponding transform variables are k and θ. Equation (12) can thus be rewritten:

$$PSF(r,\varphi) = \int_{-k_m}^{k_m}\int_{-\theta_m}^{\theta_m} \tilde{\delta}(k,\theta)\,\exp\!\left[2i\pi k r\cos(\theta - \varphi)\right]\,|k|\,dk\,d\theta \qquad (13)$$
This expression could be used in order to compute the PSF with MATLAB (MathWorks, Natick, MA, USA). Yet, as polar Fast Fourier Transform (FFT) algorithms are not easily available, computing the double integral takes a lot of time.

However, this expression can be decomposed as a sum of Bessel functions:

$$PSF(r,\varphi) = \sum_{n=-\infty}^{+\infty} e^{in\left(\frac{\pi}{2} - \varphi\right)} \int_{-\theta_m}^{\theta_m} e^{in\theta}\,d\theta \int_{-k_m}^{k_m} J_n(2\pi k r)\,|k|\,dk \qquad (14)$$
where Jn is the nth order Bessel function of the first kind. When the integral over θ is calculated:
$$PSF(r,\varphi) = \sum_{n=-\infty}^{+\infty} \frac{2}{n}\sin(n\theta_m)\, e^{in\left(\frac{\pi}{2} - \varphi\right)} \int_{-k_m}^{k_m} J_n(2\pi k r)\,|k|\,dk \qquad (15)$$

The PSF can then be developed as follows thanks to the substitution k = kmu:

$$PSF(r,\varphi) = 2\theta_m k_m^2 \int_{-1}^{1} J_0(2\pi k_m r u)\,|u|\,du + k_m^2 \sum_{n=1}^{+\infty} \frac{2}{n}\sin(n\theta_m)\left(e^{in\left(\frac{\pi}{2} - \varphi\right)} + (-1)^n e^{-in\left(\frac{\pi}{2} - \varphi\right)}\right) \int_{-1}^{1} J_n(2\pi k_m r u)\,|u|\,du \qquad (16)$$

Due to the parity of the Bessel functions, the integrals over all odd orders vanish. The PSF is then equal to:

$$PSF(r,\varphi) = 4\theta_m k_m^2 \int_{0}^{1} J_0(2\pi k_m r u)\,|u|\,du + k_m^2 \sum_{p=1}^{+\infty} \frac{4}{p}(-1)^p \sin(2p\theta_m)\cos(2p\varphi) \int_{0}^{1} J_{2p}(2\pi k_m r u)\,|u|\,du \qquad (17)$$

As expected, if the angular range is ±π/2, i.e. the whole angular domain is scanned, only the first term remains. As the integral of the 0th order Bessel function can be rewritten thanks to the properties of the Bessel functions, the PSF is in this case:

$$\frac{PSF(r,\varphi)}{4\theta_m k_m^2} = \frac{J_1(2\pi k_m r)}{2\pi k_m r} \qquad (18)$$

This is the Airy disk, and the spreading is due to the bandwidth limitation km. Equation (17) above can be assessed numerically. As the integral over $J_{2p}$ also takes time to compute, it is interesting to use the properties of the Bessel functions to derive a recurrence relationship between the different orders. The PSF can eventually be expressed as (see Appendix A):

$$\frac{PSF(r,\varphi)}{4\theta_m k_m^2} = A(r) + \sum_{p=1}^{\infty} B_p(r,\varphi) + \sum_{p=1}^{\infty} C_p(r,\varphi) \qquad (19)$$
in which each term can be assessed separately and obeys the following relationships:
$$\begin{cases} A(r) = \dfrac{J_1(2\pi k_m r)}{2\pi k_m r}\\[2mm] B_p(r,\varphi) = 2(-1)^{p+1}\,\mathrm{sinc}(2p\theta_m)\cos(2p\varphi)\,\dfrac{J_{2p-1}(2\pi k_m r)}{2\pi k_m r}\\[2mm] C_p(r,\varphi) = \dfrac{2p(-1)^p}{\pi k_m r}\,\mathrm{sinc}(2p\theta_m)\cos(2p\varphi)\,c_p(r) \end{cases} \qquad (20)$$
where $c_p(r)$ can be calculated by recursion thanks to the following relationships:
$$\begin{cases} c_0(r) = \dfrac{J_0(2\pi k_m r) - 1}{2\pi k_m r}\\[2mm] c_{p+1}(r) = c_p(r) - \dfrac{1}{\pi k_m r}\left(J_{2p}(2\pi k_m r) - J_{2p}(0)\right) \end{cases} \qquad (21)$$
The relationships above are much faster to compute than the integral. The functions $J_{2p}(2\pi k_m r)/(2\pi k_m r)$ decrease very fast with p and r. They rapidly become negligible around the center of the PSF, so that higher orders only affect regions further from the center. In our case, a dozen orders or fewer are enough to obtain a good approximation (with an error of less than 0.1%) of the PSF over a disk area of 4 mm diameter.
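A direct transcription of Eqs. (19)-(21) reads as follows (Python/SciPy sketch; the values of km and θm in the example lines are assumptions chosen to mimic the experimental configuration, and n_orders reflects the dozen orders mentioned above):

```python
import numpy as np
from scipy.special import jv    # Bessel function of the first kind, J_n

def psf_series(r, phi, k_m, theta_m, n_orders=12):
    """Normalized PSF(r, phi) / (4 * theta_m * k_m**2) from Eqs. (19)-(21).
    r in mm, phi and theta_m in rad, k_m in mm^-1."""
    a = 2 * np.pi * k_m * np.asarray(r, dtype=float)
    a = np.where(a == 0, 1e-12, a)              # avoid 0/0 exactly at the centre
    out = jv(1, a) / a                          # A(r): the Airy-disk term
    c_p = (jv(0, a) - 1.0) / a                  # c_0(r), first line of Eq. (21)
    for p in range(1, n_orders + 1):
        # advance the recursion of Eq. (21): c_p from c_{p-1} (note 1/(pi k_m r) = 2/a)
        c_p = c_p - (2.0 / a) * (jv(2 * (p - 1), a) - (1.0 if p == 1 else 0.0))
        s = np.sinc(2 * p * theta_m / np.pi)    # numpy sinc is sin(pi x)/(pi x)
        out += 2 * (-1) ** (p + 1) * s * np.cos(2 * p * phi) * jv(2 * p - 1, a) / a  # B_p
        out += 4 * p * (-1) ** p / a * s * np.cos(2 * p * phi) * c_p                 # C_p
    return out

# Example: radial cuts along the x- and y-axes for theta_m = 20 deg and an assumed k_m
r = np.linspace(0.0, 2.0, 400)                        # radial axis (mm)
cut_x = psf_series(r, 0.0, k_m=0.67, theta_m=np.deg2rad(20.0))
cut_y = psf_series(r, np.pi / 2, k_m=0.67, theta_m=np.deg2rad(20.0))
```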

Figure 3(a) shows the PSF for an angular range of ±20°, which is the typical range for a commercial linear probe. One can see that the finite angular range distorts the PSF in the lateral direction compared to the isotropic case. Slices along the ϕ = 0° (x-axis) and ϕ = 90° (y-axis) directions are plotted on Figs. 3(b) and 3(c), respectively. The full width at half maximum (FWHM) of the main lobe in each direction was assessed thanks to these two plots, and it was found that the lateral size is about 3 times larger than the longitudinal one. One way of interpreting this fact is to consider the Fourier plane represented in Fig. 2(c). The maximum spatial frequency accessible in the x-direction (US propagation axis) is km, given by the bandwidth of the system. In the y-direction however, the maximum accessible ky is km sin(θm), which is approximately equal to 0.34 km in the case of an angular range of ±20°. We thus expect a distortion of future images in the lateral direction by a factor of about 3.

Fig. 3 (a) PSF for an angular range of ±20° which is the typical range for a commercial linear ultrasound probe. One can see here that the limited angular range degrades the lateral resolution. (b) Slice along the ϕ = 0° direction (x-axis). (c) Slice along the ϕ = 90° direction (y-axis). (d) Plot of the resolution (FWHM) along both directions as a function of the angular range. Crosses stand for the computed values. Plain lines correspond to the FWHM of the main lobe assuming the 1D PSF is a section of an Airy disk.

The radial PSF in the longitudinal and lateral directions can be computed for different angular ranges, so that the resolution is assessed in both directions and plotted on Fig. 3(d). The green and blue crosses respectively correspond to the computed values of the lateral and longitudinal resolution. A simple way of physically understanding the evolution of the two curves is to assimilate the 1D PSF to a section of an Airy disk corresponding to a Fourier space of radius $k_{x,m}$ and $k_{y,m}$ respectively, where $k_{x,m}$ and $k_{y,m}$ are the maximum spatial frequency values accessible in the Fourier plane in each direction. The FWHM of the primary lobe is given in this simple case by:

$$\begin{cases} \Delta x = \dfrac{0.7}{k_m}\\[2mm] \Delta y = \dfrac{0.7}{k_m \sin\theta_m} \end{cases} \qquad (22)$$

This simple model is plotted with plain lines on Fig. 3(d) and is in good agreement with the computed values of the resolution. It is interesting to note that the lateral FWHM is even more degraded than in the isotropic case, but its evolution is quite consistent with the model. The fact that the 1D PSF is not perfectly an Airy function also appears in the longitudinal direction, along which the secondary lobes are important. In this case the computed FWHM is below the predicted value. An interesting point here is that the longitudinal size of the PSF actually increases with the angular range, converging toward the isotropic PSF radius. Qualitatively, one can expect the final images to also be slightly distorted in the longitudinal direction, leading to images slightly sharper in this direction.
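For the ±20° range used throughout this paper, Eq. (22) directly gives the expected anisotropy,

$$\frac{\Delta y}{\Delta x} = \frac{1}{\sin\theta_m} = \frac{1}{\sin 20°} \approx 2.9,$$

consistent with the factor of about 3 measured on the PSF of Fig. 3(a).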

Here it is important to notice that, contrary to the diffraction-limited case for instance, the lateral spatial cutoff frequency of the process depends on the spectral content of the object. The result is an object-dependent distortion rather than a simple anisotropic blurring, although this distortion also translates into a loss of resolution in the lateral direction.

4. Reconstruction method

The two previous sections showed that AO imaging with US plane waves is a physical problem very close to the CT scanner technique. It is known that several tomographic methods can then be used in order to recover the final image. Theoretically, the most immediate way is to calculate $\mathcal{I}(x, y)$ directly thanks to a 2D inverse polar Fourier transform of the reconstructed Fourier plane. However, polar FFT algorithms are not easy to implement numerically and often need to interpolate the polar Fourier transform onto a square grid. The filtered backprojection [23], which is mathematically equivalent, is usually preferred. This is the technique used in this paper.

The idea of the filtered backprojection is the following. Instead of replacing each projection in the Fourier plane and then calculating the inverse ℱ𝒯, the backprojection calculates the contribution of each projection in real space and sums over all contributions. However, in order to take into account the fact that the Fourier plane is in polar coordinates, one must filter each projection with the Jacobian of the change of coordinates, |k|. In practice, this filtering step is critical due to the presence of noise in the projection signals. In order not to overweight high frequencies dominated by white noise, this filter is usually multiplied by a rectangle function with a width corresponding to the bandwidth of the image. In order to smooth the image, the ramp is sometimes also windowed so that remaining high frequencies are attenuated (for instance with cosine, sinc or Hamming windows). The resulting filter is noted f(k).

The principle of the filtered backprojection in AO imaging is the following. First, the temporal FT $\tilde{s}(\nu, \theta)$ is calculated for each temporal AO signal s(t, θ). As shown in section 2, $\tilde{s}(\nu, \theta)$ is also a slice $\tilde{s}(k, \theta)$ of the spatial 2D ℱ𝒯. It is filtered by the correcting filter f(k), and each filtered 1D signal $s_f(r, \theta) = s_f(V_{US}\, t, \theta)$ is then back-projected in real space using a delay-and-sum beamforming algorithm very similar to what is used in conventional US. Briefly, for each voxel of the image, the one-way time of flight of the US plane wave is calculated. This time of flight targets a portion of the 1D temporal signal, which defines the value of the corresponding voxel. The values of the voxel obtained for the different angles are then summed in order to recover the final image.

In our case, each projection was filtered with the following filter f(k) before being placed back in the real space:

$$f(k) = |k| \cos\!\left(\frac{\pi k}{2 k_m}\right)\,\mathrm{rect}_{k_m}(k) \qquad (23)$$
where $\mathrm{rect}_{k_m}(k)$ represents the rectangle function whose value is 1 over [−km, km] and 0 elsewhere. In practice, $k_m = BW/V_{US}$, where BW is the temporal bandwidth of the detection scheme and VUS is the sound velocity in the medium.
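The whole chain (temporal FT, filtering by f(k), delay-and-sum backprojection) can be sketched in a few lines of Python/NumPy. This is an illustrative implementation written from Eqs. (8), (10) and (23), not the code running on the scanner; the choice of the time origin for the plane wave and the nearest-neighbour interpolation are simplifying assumptions:

```python
import numpy as np

def filtered_backprojection(signals, angles_deg, t, x, y, v_us=1.5, bw_mhz=1.0):
    """signals: (n_angles, n_t) array of AO signals s(t, theta); t in us, x and y in mm.
    Returns the reconstructed image on the (x, y) grid."""
    n_t = t.size
    dt = t[1] - t[0]
    nu = np.fft.rfftfreq(n_t, d=dt)          # temporal frequencies (MHz)
    k = nu / v_us                            # corresponding spatial frequencies (mm^-1)
    k_m = bw_mhz / v_us
    f_k = np.abs(k) * np.cos(np.pi * k / (2 * k_m)) * (np.abs(k) <= k_m)  # filter of Eq. (23)

    filtered = np.fft.irfft(np.fft.rfft(signals, axis=1) * f_k, n=n_t, axis=1)

    X, Y = np.meshgrid(x, y, indexing="ij")
    image = np.zeros_like(X)
    for s_f, th in zip(filtered, np.deg2rad(angles_deg)):
        # one-way time of flight of the tilted plane wave to each voxel
        tof = (X * np.cos(th) + Y * np.sin(th)) / v_us
        tof -= tof.min()                     # assumed time origin: wave entering the grid
        idx = np.clip(np.round(tof / dt).astype(int), 0, n_t - 1)
        image += s_f[idx]                    # delay-and-sum over all angles
    return image
```

Applied to the synthetic signals of the sketch in section 2, filtered_backprojection(signals, np.arange(-20, 21), t, x, x) returns a laterally elongated map of the irradiance, illustrating the distortion quantified in section 3.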

5. Experimental results

The previous sections showed that it is possible to recover an AO image thanks to a tomographic reconstruction. However, the limited angular range accessible with commercial ultrasound probes degrades the lateral resolution by a factor of the order of $1/\sin\theta_m$. The goal of this section is to experimentally demonstrate the capability of this method in terms of contrast and framerate.

The setup used in this section is the same as in [6]. The AO imaging setup is coupled with a commercial US scanner (Aixplorer, SuperSonic Imagine, Aix-en-Provence, France). The ultrasound probe is a commercial US transducer array (SL10-2, 192 elements, 0.2 mm pitch, SuperSonic Imagine) with a central frequency of 6 MHz. The main difference from the setup described in [6] is that the illumination and the collection of scattered light are performed through multimode optical fibers, see Fig. 4. The illuminating beam exits from a Thorlabs multimode fiber terminated with a collimator, leading to an illumination power of the order of 1.2 W over a disc area of 1 cm diameter. The collection is performed with a liquid-core multimode fiber with a numerical aperture of 0.4 and a collection area of 1 cm2. The temporal detection bandwidth is 5 MHz and was chosen to be as low as possible without degrading the shape of the inclusion.

Fig. 4 Schematic of the experimental setup. The laser source is a laser diode combined with a 2 W tapered amplifier in order to form a MOPA system (Sacher Lasertechnik GmbH). The wavelength is 790 nm. The light beam is split into two beams thanks to a beam splitter (BS). The signal beam is coupled into a Thorlabs multimode optical fiber (MMOF) through a commercial collimator and guided to the scattering sample. The scattered light is collected through a liquid core optical fiber (LCOF) with a collection area of 1 cm2 and 0.4 NA and guided to the SPS photorefractive crystal (PRC). The reference beam (a few tens of mW) is used to perform two-wave mixing (TWM) and filter out the untagged photons [6]. The AO signal is measured on a photodiode (PD) and processed on a computer.

The sample studied here is an absorbing ink inclusion embedded within a scattering gel matrix. The gel was made according to the method described in [6]. The embedding gel is 4 × 4 × 3 cm3 with a reduced scattering coefficient μ′s = 10 cm−1 and negligible absorption. The cylindrical inclusion is located approximately in the middle of the gel. It has a diameter of 2.5 mm and an absorption coefficient of the order of 5 cm−1. A first image was taken using 2-cycle US pulses at 6 MHz, focused at a depth of 20 mm along the US propagation direction (x-axis). The whole image needed 188 lines and each line was averaged 2000 times, leading to a total of 376,000 US pulses. The US sequence therefore typically lasts 40 s at a 100 μs pulsing period. The corresponding image is shown on Fig. 5(a) with a picture of the real inclusion at the bottom. The inclusion is indicated by a white arrow. A vertical profile along the inclusion is plotted on Fig. 5(b). The AO profile was fitted with a combination of two Gaussian curves:

$$\Sigma = \Sigma_{\mathrm{diff}} - \Sigma_{\mathrm{incl}} + C \qquad (24)$$
where Σdiff represents a Gaussian fit of the envelope, Σincl a Gaussian fit of the inclusion and C is a fitted constant. This represents a good approximation of an absorbing inclusion in the middle of a diffused light pattern [24]. The size of the inclusion can be measured from the FWHM of Σincl and has a value of 2.5 mm, which is the expected value. Let us define the image contrast as the ratio between the inclusion signal depth and the maximum value of the profile. With focused US, we find a contrast of 0.34 for the inclusion. The contrast-to-noise ratio (CNR) is defined as the ratio between the inclusion signal depth and the amount of noise in the image. This definition physically comes from the fact that the presence of an inclusion is detected through a decrease of the AO signal compared to the diffuse light envelope. Here, the inclusion is detected with a CNR of 9.
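A minimal sketch of this profile analysis (Python/SciPy; the fit model follows Eq. (24), while the initial-guess vector and the use of the standard deviation of a signal-free region as the noise estimate are assumptions, not a description of the authors' processing):

```python
import numpy as np
from scipy.optimize import curve_fit

def profile_model(z, a_d, z_d, w_d, a_i, z_i, w_i, c):
    """Eq. (24): Gaussian diffuse-light envelope minus a Gaussian inclusion dip, plus a constant."""
    diff = a_d * np.exp(-((z - z_d) ** 2) / (2 * w_d ** 2))
    incl = a_i * np.exp(-((z - z_i) ** 2) / (2 * w_i ** 2))
    return diff - incl + c

def inclusion_metrics(z, profile, noise_rms, p0):
    """Fit an AO profile with Eq. (24) and return (FWHM, contrast, CNR).
    noise_rms: noise estimate, e.g. the standard deviation of a signal-free region (assumed choice).
    p0: initial guess (a_d, z_d, w_d, a_i, z_i, w_i, c)."""
    popt, _ = curve_fit(profile_model, z, profile, p0=p0)
    a_i, w_i = popt[3], popt[5]
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(w_i)   # FWHM of the inclusion Gaussian
    contrast = a_i / profile.max()                        # inclusion dip depth over profile maximum
    cnr = a_i / noise_rms                                 # inclusion dip depth over noise level
    return fwhm, contrast, cnr
```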

Fig. 5 (a) AO image using focused pulses. The white arrow points to the inclusion. (b) 1D vertical profile along the black dashed line. (c) Picture of the inclusion taken before it was embedded. (d) 1D horizontal profile along the white dashed line.

The image obtained with focused pulses is compared to an image obtained with plane waves. The plane waves are also 2-cycle pulses at 6 MHz. The plane wave angles ranged from −20° to 20° in 1° steps (41 angles) and each 1D signal was also averaged over 2000 pulses, thus decreasing the number of pulses needed by almost a factor of 5. The bandwidth of the ramp filter was defined in post-processing and set to 1 MHz. The image is presented on Fig. 6(a). As expected, the horizontal resolution is degraded compared to the isotropic one. A vertical profile along the black dashed line and a horizontal one along the white dotted line are plotted on Figs. 6(b) and 6(c), respectively. It is visible here that the vertical profile is sharper than what was obtained with focused pulses. This is due to the small distortion that also occurs in the longitudinal direction, as noticed in section 3. The size of the inclusion can be calculated in both directions thanks to the same kind of fit as presented in Eq. (24). One then finds a vertical size of the inclusion of 2 mm, which is still consistent with the expected size, though a bit smaller. The horizontal size is 7.8 mm, which is consistent with the expected distortion (3 × 2.5 mm). The contrast can be measured the same way as for focused waves. Here we find 0.49, which is higher than in the focused wave image. The CNR is here equal to 24, almost 3 times higher. This suggests that, in order to obtain an image with a CNR equivalent to that of focused US pulses, the number of averages can be divided by almost 10. In order to confirm this hypothesis, the number of averages was decreased down to 1000, 200 and 100 pulses and the CNR was calculated in each case. It is plotted on Fig. 6(d) and shows the intuitive and expected evolution as the square root of the number of averaged pulses. As expected, at equivalent CNR, only 200 averaged pulses are needed with plane waves. The US sequence in itself then needs 8200 pulses and lasts 0.82 s at a 100 μs pulsing period, which is almost 50 times faster than a focused wave sequence. The computation time must be added in order to estimate the overall time needed to get an image. In the example reported here, it took 1.8 s to compute the image using MATLAB, leading to an overall imaging time of 2.6 s. However, the computation time can be drastically reduced with parallel computation, for instance through the use of the GPU provided in the US scanner.
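The speed-up quoted above follows directly from the pulse counts and the 100 μs firing period:

$$188 \times 2000 = 376\,000\ \text{pulses} \Rightarrow 37.6\ \text{s}, \qquad 41 \times 200 = 8200\ \text{pulses} \Rightarrow 0.82\ \text{s}, \qquad \frac{37.6}{0.82} \approx 46.$$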

Fig. 6 (a) AO image using plane waves ranging from −20° to +20° with 1° steps. (b) Vertical profile along the black dashed line. (c) Horizontal profile along the white dotted line. (d) Evolution of the CNR as a function of the number of averages. The CNR level for focused US averaged 2000 times is indicated by the blue line for comparison purposes.

Another interesting aspect of the use of plane waves is the decrease of the mechanical index (MI) of the sequence. The MI is defined as the ratio of the peak negative pressure (PNP, in MPa) in tissues to the square root of the frequency (in MHz) and quantifies the risk of creating cavitation bubbles. FDA guidelines limit the mechanical index to 1.9 for medical applications. The PNP in water was measured for our two sequences thanks to an optical interferometer and a calibrated membrane. In the case of ultrasonic focused waves the corresponding MI in tissues is 1.14, much higher than with ultrasonic plane waves, for which the MI is only 0.43, well below the FDA limit. Interestingly, the acoustic pressure can be increased up to the safety limits if necessary, leading to an increase of the AO signal, scaling as the square of the pressure increase, that is much higher for plane waves than for focused ones.

This increased CNR, which allows reducing the number of averages, can be interpreted by considering the amount of US energy delivered to the sample. Due to the high MI of focused sequences and the directivity of the transducers, only small apertures of a few tens of transducers can be used to produce focused US. On the contrary, plane wave sequences use the whole probe: the number of pulsing transducers is increased by a factor of almost 5, and so is the US energy delivered to the system. In other words, the US tagging volume is much bigger with plane waves than with focused waves, while the US pressure is only 2.5 times weaker. With focused waves, the tagging volume is of the order of the pulse size at focus, that is to say of the order of $\lambda_{US}^3 \sim 2\times 10^{-2}\ \text{mm}^3$. For plane waves, the tagging volume extent is of the order of $\lambda_{US}$ in the x- and z-directions but equals the length of the probe in the y-direction, which is about 150 times the wavelength: the tagging volume is thus of the order of $L_{\mathrm{probe}}\lambda_{US}^2 \sim 2\ \text{mm}^3$. Hence the number of tagged photons is expected to be higher with plane waves than with focused waves.
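With the probe parameters of this section (192 elements, 0.2 mm pitch, 6 MHz, VUS = 1.5 mm·μs−1), these orders of magnitude are recovered as:

$$\lambda_{US} = \frac{V_{US}}{f_{US}} = \frac{1.5\ \text{mm}\cdot\mu\text{s}^{-1}}{6\ \text{MHz}} = 0.25\ \text{mm}, \qquad \lambda_{US}^3 \approx 1.6\times 10^{-2}\ \text{mm}^3, \qquad L_{\mathrm{probe}}\,\lambda_{US}^2 = (192\times 0.2\ \text{mm})\times(0.25\ \text{mm})^2 \approx 2.4\ \text{mm}^3.$$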

6. Improvement of the lateral resolution

The distortion of the image in the lateral direction can be compensated by adding another probe, perpendicular to the first one, in order to access the missing side of the sample. A proof of principle was realized with another sample containing two inclusions placed side by side in the same imaging plane. The second sample is 4 × 4 × 4 cm3 with a reduced scattering coefficient μ′s = 10 cm−1. The two inclusions have a diameter of 2 mm and are separated by about 5 mm. A schematic of the experiment is presented on Fig. 7(a). The two inclusions are close enough to overlap on plane wave images acquired with probe number 1 and are impossible to distinguish. Probe number 2 is added perpendicularly to probe number 1 and is also used to perform plane wave AO imaging. Figure 7(b) presents a picture of the two inclusions. The focused US AO image using probe number 1 is presented on Fig. 7(c), and the two inclusions are visible and well separated. The plane wave AO image using probe number 1 and an angular range of ±20° with 1° steps is presented on Fig. 7(d). As expected from the sample design, the two inclusions are too close to each other to be separated and appear as a single big inclusion. The plane wave AO image obtained from the two probes and the same angular domain as above is presented on Fig. 7(e). Some distortions still remain due to the absence of information along the diagonals of the sample, but the two inclusions clearly appear.

Fig. 7 (a) Schematic of the sample with two inclusions and the positions of the two probes. Numbers 1 and 192 on each probe represent the locations of the first and last elements. The two inclusions are close enough so that they can't be distinguished in the plane wave image using only probe 1. (b), (c), (d) and (e) show the results on this gel. The colorbar is the same for each AO image and is shown on the left side of (c). The black dashed line on each image is the line along which the profiles of Fig. 8 were plotted. (b) Picture of the gel with the two black inclusions. (c) AO image using focused US. (d) AO image using plane waves from one probe. (e) AO image using plane waves from two perpendicular probes.

Profiles along the black dashed lines visible on Figs. 7(c), 7(d) and 7(e) are plotted on Fig. 8. The expected positions of the inclusions are pointed out with two black arrows. The green profile corresponds to the inclusions on the focused US AO image, which are well separated. The red profile corresponds to the inclusions on the plane wave image using only probe number 1 and confirms that the two inclusions are not separated. The distortion of the image clearly appears here since the scattered light envelope is almost flat, though it is expected to be Gaussian-shaped, as explained with Eq. (24) and visible on the green profile. The blue profile corresponds to the inclusions on the plane wave AO image using two probes and confirms that the two inclusions are well separated. The DC level on this profile is a residual distortion due to the fact that information is still missing in the diagonal directions of the image. Indeed, the cross shape of the image responsible for this DC level can be understood as a remnant of the elongated shape of the scattered light envelope that is not properly compensated, since angles are not scanned over 360°. The two inclusions on the two-probe image appear slightly shifted compared to their positions in the image obtained with focused US. This may be due to a small inaccuracy in the determination of the position of probe number 2 relative to probe number 1, a wrong estimation of which would directly affect the lateral position of the inclusions. This issue can easily be addressed by using a rigid structure holding the two probes in the same imaging plane at fixed and controlled positions.

Fig. 8 Profiles along the black dashed lines of Fig. 7. The black arrows point to the positions of the two inclusions. As expected, the two inclusions cannot be distinguished with one probe, whereas they clearly appear with two probes.

7. Conclusion

This paper showed that it is possible to obtain AO images using US plane waves emitted from a commercial US scanner together with a tomographic reconstruction. With a maximum angular range of ±20°, it is possible to recover an image with equivalent CNR using almost 50 times fewer pulses. We showed that this increased framerate and CNR comes at the cost of a lateral distortion of the image and a resulting loss of resolution. However, distorted images could be used for fast preliminary imaging in order to detect the position of suspicious regions before using slower and more precise techniques. Furthermore, this paper showed that the distortion can be addressed by using two perpendicular probes.

Reconstruction with a priori knowledge of the shape of the objects can eventually be imagined in order to correct the distortion. This a priori knowledge could be extracted for example from classical US scans.

Appendix A: Derivation of the inductive relationship of section 3

Let us recall the formula for the PSF that was derived above in the article.

$$\frac{PSF(r,\varphi)}{4\theta_m k_m^2} = \frac{J_1(2\pi k_m r)}{2\pi k_m r} + \sum_{p=1}^{+\infty} 2(-1)^p\,\mathrm{sinc}(2p\theta_m)\cos(2p\varphi) \int_0^1 J_{2p}(2\pi k_m r u)\,u\,du \qquad (25)$$

This formula can be computed numerically in order to derive further properties of the PSF. Here, the presence of the integral of $J_{2p}$ considerably slows down the computation. It is interesting to develop this term and use the properties of the Bessel functions in order to find a recurrence relationship between the different orders.

Thanks to the following property of the Bessel functions:

$$J_{n+1}(x) = \frac{n}{x} J_n(x) - J_n'(x) \qquad (26)$$
it is possible to write for all values of p:
$$\int_0^1 J_{2p}(2\pi k_m r u)\,u\,du = \frac{2p-1}{2\pi k_m r}\int_0^1 J_{2p-1}(2\pi k_m r u)\,du - \frac{1}{2\pi k_m r}\int_0^1 \frac{d}{du}\!\left\{J_{2p-1}(2\pi k_m r u)\right\} u\,du \qquad (27)$$

The second term of this expression can be calculated analytically through an integration by parts:

$$\int_0^1 \frac{d}{du}\!\left\{J_{2p-1}(2\pi k_m r u)\right\} u\,du = J_{2p-1}(2\pi k_m r) - \int_0^1 J_{2p-1}(2\pi k_m r u)\,du \qquad (28)$$
so that:
$$\int_0^1 J_{2p}(2\pi k_m r u)\,u\,du = \frac{2p}{2\pi k_m r}\int_0^1 J_{2p-1}(2\pi k_m r u)\,du - \frac{J_{2p-1}(2\pi k_m r)}{2\pi k_m r} \qquad (29)$$
The second term of Eq. (29) can easily be calculated with MATLAB and gives the term Bp in Eq. (20). Let us now focus on the remaining integral, which will lead to the term Cp in Eq. (20). The Bessel functions obey the following recurrence relationship:
$$J_{n-1}(x) = J_{n+1}(x) + 2 J_n'(x) \qquad (30)$$
By defining $c_p(r)$:
$$c_p(r) = \int_0^1 J_{2p-1}(2\pi k_m r u)\,du \qquad (31)$$
we now have:
$$c_p(r) = \int_0^1 J_{2p+1}(2\pi k_m r u)\,du + \frac{1}{\pi k_m r}\int_0^1 \frac{d}{du}\!\left\{J_{2p}(2\pi k_m r u)\right\} du \qquad (32)$$
hence the recurrence relationship of Eq. (21):
$$c_{p+1}(r) = c_p(r) - \frac{1}{\pi k_m r}\left(J_{2p}(2\pi k_m r) - J_{2p}(0)\right) \qquad (33)$$
and the first term:
$$c_0(r) = \int_0^1 J_{-1}(2\pi k_m r u)\,du = \frac{J_0(2\pi k_m r) - 1}{2\pi k_m r} \qquad (34)$$

Hence the expression of the PSF given in section 3, with the terms defined in Eq. (20) and Eq. (21).

Acknowledgments

We thank Jean Provost for useful discussions. This work was supported by the Pierre-Gilles de Gennes foundation and LABEX WIFI (Laboratory of Excellence ANR-10-LABX-24) within the French Program Investments for the Future under reference ANR-10-IDEX-0001-02 PSL.

References and links

1. S. R. Arridge, "Optical tomography in medical imaging," Inverse Probl. 15(2), R41 (1999).

2. W. Leutz and G. Maret, "Ultrasonic modulation of multiply scattered light," Physica B 204(1–4), 14–19 (1995).

3. G. Yao and L. V. Wang, "Theoretical and experimental studies of ultrasound-modulated optical tomography in biological tissue," Appl. Opt. 39(4), 659–664 (2000).

4. M. Fink and M. Tanter, "Multiwave imaging and super resolution," Phys. Today 63(2), 28–33 (2010).

5. E. Bossy, L. Sui, T. W. Murray, and R. A. Roy, "Fusion of conventional ultrasound imaging and acousto-optic sensing by use of a standard pulsed-ultrasound scanner," Opt. Lett. 30(7), 744–746 (2005).

6. J.-B. Laudereau, E. Benoit à la Guillaume, V. Servois, P. Mariani, A. A. Grabar, M. Tanter, J.-L. Gennisson, and F. Ramaz, "Multi-modal acousto-optic/ultrasound imaging of ex vivo liver tumors at 790 nm using a Sn2P2S6 wavefront adaptive holographic setup," J. Biophotonics 8, 429–436 (2014).

7. J. E. P. Honeysett, E. Stride, J. Deng, and T. S. Leung, "An algorithm for sensing venous oxygenation using ultrasound-modulated light enhanced by microbubbles," Proc. SPIE 8223, 82232Z (2012).

8. S. Resink, E. Hondebrink, and W. Steenbergen, "Solving the speckle decorrelation challenge in acousto-optic sensing using tandem nanosecond pulses within the ultrasound period," Opt. Lett. 39(22), 6486–6489 (2014).

9. Y. Li, P. Hemmer, C. Kim, H. Zhang, and L. V. Wang, "Detection of ultrasound-modulated diffuse photons using spectral-hole burning," Opt. Express 16(19), 14862–14874 (2008).

10. P. Lai, X. Xu, and L. V. Wang, "Ultrasound-modulated optical tomography at new depth," J. Biomed. Opt. 17(6), 066006 (2012).

11. P. Kuchment and L. Kunyansky, "Synthetic focusing in ultrasound modulated tomography," Inverse Probl. Imaging 4(4), 665–673 (2010).

12. J. Li and L. V. Wang, "Ultrasound-modulated optical computed tomography of biological tissues," Appl. Phys. Lett. 84(9), 1597 (2004).

13. J. Provost, W. Kwiecinski, M. Fink, M. Tanter, and M. Pernot, "Ultrafast acoustoelectric imaging," in Proceedings of IEEE 11th International Symposium on Biomedical Imaging (IEEE, 2014), pp. 702–705.

14. M. Gross, F. Ramaz, B. C. Forget, M. Atlan, A. C. Boccara, P. Delaye, and G. Roosen, "Theoretical description of the photorefractive detection of the ultrasound modulated photons in scattering media," Opt. Express 13(18), 7097–7112 (2005).

15. L. V. Wang, "Mechanisms of ultrasonic modulation of multiply scattered coherent light: an analytic model," Phys. Rev. Lett. 87(4), 043903 (2001).

16. M. Kempe, M. Larionov, D. Zaslavsky, and A. Z. Genack, "Acousto-optic tomography with multiply scattered light," J. Opt. Soc. Am. A 14(5), 1151–1158 (1997).

17. G. Montaldo, M. Tanter, J. Bercoff, N. Benech, and M. Fink, "Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography," IEEE Trans. Ultrason. Ferr. 56(3), 489–506 (2009).

18. F. J. Blonigen, A. Nieva, C. A. DiMarzio, S. Manneville, L. Sui, G. Maguluri, T. W. Murray, and R. A. Roy, "Computations of the acoustically induced phase shifts of optical paths in acoustophotonic imaging with photorefractive-based detection," Appl. Opt. 44(18), 3735–3746 (2005).

19. S. Farahi, G. Montemezzani, A. A. Grabar, J.-P. Huignard, and F. Ramaz, "Photorefractive acousto-optic imaging in thick scattering media at 790 nm with a Sn2P2S6:Te crystal," Opt. Lett. 35(11), 1798–1800 (2010).

20. T. Bach, M. Jazbinsek, G. Montemezzani, P. Günter, A. A. Grabar, and Y. M. Vysochanskii, "Tailoring of infrared photorefractive properties of Sn2P2S6 crystals by Te and Sb doping," J. Opt. Soc. Am. B 24(7), 1535–1541 (2007).

21. D. C. Solmon, "The X-ray transform," J. Math. Anal. Appl. 56(1), 61–83 (1976).

22. S. Dunne, S. Napel, and B. Rutt, "Fast reprojection of volume data," in Proceedings of the First Conference on Visualization in Biomedical Computing (IEEE, 1990), pp. 11–18.

23. G. N. Ramachandran and A. V. Lakshminarayanan, "Three-dimensional reconstruction from radiographs and electron micrographs: application of convolutions instead of Fourier transforms," PNAS 68(9), 2236–2240 (1971).

24. M. Lesaffre, S. Farahi, F. Ramaz, and M. Gross, "Experimental study of z resolution in acousto-optical coherence tomography using random phase jumps on ultrasound and light," Appl. Opt. 52(5), 949–957 (2013).
