Optica Publishing Group

Optimized SNR simultaneous multispectral photoacoustic imaging with laser diodes

Open Access

Abstract

Multispectral photoacoustic laser diode systems have multiple wavelengths available simultaneously. Beyond multispectral imaging, this can be exploited to increase the signal to noise ratio (SNR) by combining these wavelengths into a single fused image, albeit at the cost of spectral information. Here, a novel signal processing concept is introduced that optimizes the SNR of single wavelength reconstructions from combined acquisitions while simultaneously permitting a higher SNR fused image to be obtained from the same data. The concept is derived for an arbitrary number of wavelengths and is also applicable at low pulse repetition frequencies. It is applied in an experiment using two wavelengths, verifying the theoretical results.

© 2015 Optical Society of America

1. Introduction

For clinical and preclinical applications of photoacoustic imaging (PAI), such as molecular imaging [1,2], cancer imaging [3], and blood vessel imaging [4], typically multiple wavelengths are applied to resolve exogenous or endogenous contrast agents [5]. For example, multispectral photoacoustic imaging has been demonstrated to aid in osteoarthritis detection without the need for exogenous contrast agents [6], and it can also be used to resolve other intrinsic contrast agents such as oxy-/deoxyhemoglobin and melanin [7], which in turn yields derived information, such as spatially resolved blood oxygenation [8].

In most conventional systems, which typically use a single tunable laser source (e.g. Nd:YAG with an optical parametric oscillator), multiple wavelengths can only be applied sequentially. In other systems, multiple wavelengths are available simultaneously, for example systems based upon broadband light sources using a pumped dye cell [9], stimulated Raman scattering inside a single mode fiber [10], or laser diode based setups [8,11–13]. Mienkina et al. [13] utilized a Golay code to apply multiple wavelengths at once and recover the single wavelength information later; Beckmann et al. [14] used pseudorandom codes instead. Wang et al. [9] used a digital micromirror device in photoacoustic microscopy both for fast switching between wavelengths and for applying a Hadamard code to the active wavelengths, activating multiple wavelengths at once and recovering them later by decoding. Laser diode systems additionally allow for compact, cheap, and portable imaging systems if they are integrated with the acoustic detector, such as in [11,12].

The laser safety standards [15, 16] can be used as a guideline for safe application of photoacoustic imaging [4]. Assuming a constant irradiated area, laser safety limits both the single pulse energy and the average optical power that is applied to the patients’ skin. For example, at 800 nm, it limits the average irradiance to about 300 mW/cm² and the radiant exposure of a single pulse to 30 mJ/cm².

State of the art laser diode systems apply single pulses of about 1 mJ/cm² [17]. This is far below the single pulse limit and not high enough to obtain images with sufficient SNR. Consequently, either the pulse energy needs to be increased or multiple pulses have to be averaged. The former is more effective if the total energy is fixed, but it typically requires additional laser diodes, increasing the cost of the system. Averaging, however, is also limited: at 1 mJ/cm², for example, only about 300 acquisitions per second are allowed due to the maximum average irradiance.

To increase the SNR, we propose to utilize the simultaneous availability of multiple wavelengths in multispectral laser diode systems and combine multiple simultaneously active wavelengths into a single high SNR “fused” image. Even if we do not increase the average irradiance, which would counteract laser safety precautions, the simultaneous application of multiple wavelengths instead of applying them sequentially can lead to an increase in SNR.

Most tissue absorbs over a wide range of wavelengths. Hence, the application of multiple wavelengths at once to form one “fused” image can provide enhanced SNR, which can be very useful for structural imaging and was already demonstrated for increasing the depth of field in optical resolution photoacoustic microscopy by Hajireza et al. [10]. The fused image has a range of applications: Besides the primary application of providing a structural image with high SNR and better penetration depth, such an image could be used as a mask for the single wavelength data. Because its SNR is better, it could potentially serve as a more reliable basis for segmentation and thresholding than the regular data. For example, blood vessels, which are the primary absorbing structures and absorb over a large range of wavelengths in the near infrared spectrum, could be displayed using the high SNR fused image. The noisier hemoglobin concentration or oxygenation maps could then be improved by removing regions known from the high SNR image to contain only noise, and overlaid on the high SNR image to provide better background separation or to allow spatial smoothing. Also, since the fused image is based on the same structural information as the single wavelength images, it could potentially be used as a background image while the simultaneously acquired single wavelength data serve as a functional overlay (again, for example, for oxygenation information).

However, illuminating with multiple wavelengths in one acquisition also means that spectroscopic information is lost for that acquisition. Here, to recover single wavelength information at acceptable SNR and simultaneously optimize the SNR of the fused image, we propose a new pulsing and reconstruction concept at limited average power. A special case of this concept has been demonstrated in [18], where Golay codes were used for the single wavelength reconstruction. Here, we apply a generalization that is applicable to any pulse combination and delivers an optimal reconstruction in a least squares sense. We also demonstrate the improvement achieved by this approach.

The remainder of the paper is structured as follows: First, the necessary theory is developed for optimized single wavelength recovery and maximizing the fused image SNR. Next, the theory is applied to an illustrative example, in which all wavelengths are treated equally. Finally, an experimental setup for two wavelengths is used to demonstrate the theoretical results from the example in practice.

2. Theory

2.1. Least squares optimized separation of wavelengths

The topic of single wavelength reconstruction from pulse codes has been examined for various pulse codes [19–21]. However, for a limited average irradiance and higher single pulse radiant exposures, the considerations are different: It is not necessary to consider ambiguity, because it can be assumed that the repetition rate is low enough to stay below laser safety limits. In the following, we will show that the optimal reconstruction can be calculated from the pulsing matrix C. The matrix C consists of ones and zeros and has one row per acquisition and one column per wavelength applied. If c_ij is one, this corresponds to an active wavelength λ_j in acquisition i.

Let us consider a system of N_λ wavelengths. In general, the reconstruction m(λ_l,t) of single wavelength data from multiple acquisitions a_i can be performed by using a weighted mean approach:

$$m(\lambda_l,t)=\sum_{i=1}^{M} w_{il}\,a_i=\sum_{i=1}^{M} w_{il}\left(\sum_{j=1}^{N_\lambda} c_{ij}\,s_{\lambda_j}(t)+n_i(t)\right). \tag{1}$$

Here, w_il are the unknown weights applied to each acquisition a_i, c_ij are the elements of the matrix C, s_λj is the signal excited by wavelength λ_j, and n_i is the noise added in acquisition a_i. M is the number of acquisitions. From the reconstructions, a vector m(t) can be formed, which then fulfills:

$$\mathbf{m}(t)=W^{\mathrm{T}}\,\mathbf{a}. \tag{2}$$

Our goal is to optimize the weights in W such that the variance of m is minimized and we obtain an unbiased estimator of s_λl(t). That is, E[m(λ_l,t)] = s_λl(t), where E[m(λ_l,t)] is the expected value of m(λ_l,t). This ensures that the reconstruction result does not introduce systematic errors. Unbiasedness requires the side condition

$$\sum_{i=1}^{M} w_{il}\,c_{ij}=\begin{cases}1 & \text{for } j=l\\ 0 & \text{for } j\neq l,\end{cases}$$

while the variance to be minimized is

$$\operatorname{Var}\!\left(m(\lambda_l,t)\right)=\sum_{i=1}^{M} w_{il}^{2}\,\sigma_i^{2}. \tag{3}$$

Typically, it can be assumed that σ_i² = σ² for all i ∈ [1,M], where σ_i² is the variance of the noise in acquisition i. The optimization problem can be solved, for example, using Lagrange multipliers, resulting in

$$W=C\,(C^{\mathrm{T}}C)^{-1}. \tag{4}$$

Thus, the optimal weights W can be calculated from the Moore-Penrose pseudoinverse of the matrix C^T. The performance of the resulting code can be assessed by calculating the variance of the reconstructions using Eq. (3) or, including the covariances between different reconstructions, by

$$V=(C^{\mathrm{T}}C)^{-1}\,\sigma^{2}. \tag{5}$$

Here, the variances are the diagonal elements of the matrix V, while the remainder of the matrix describes the covariances. If one is only interested in the mean squared error (MSE) for a single wavelength, it can be calculated by

$$\mathrm{MSE}_{\mathrm{SC},l}=\delta_l^{\mathrm{T}}\,(C^{\mathrm{T}}C)^{-1}\,\delta_l\;\sigma^{2}. \tag{6}$$

Here, δ_l is a column vector with N_λ elements containing zeros except for a one in row l.
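As a small numerical illustration of Eqs. (4) and (6), the decoding weights and per-wavelength MSE can be computed for a hypothetical two-wavelength code (this pulsing matrix is an assumption for illustration, not one used in the paper, and unit noise variance σ² = 1 is assumed):

```python
import numpy as np

# Hypothetical pulsing matrix C for two wavelengths: two single-wavelength
# acquisitions per wavelength plus one combined acquisition
# (rows = acquisitions, columns = wavelengths).
C = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1],
              [1, 1]], dtype=float)

# Optimal decoding weights, Eq. (4): W = C (C^T C)^{-1},
# i.e. the Moore-Penrose pseudoinverse of C^T.
W = C @ np.linalg.inv(C.T @ C)

# Unbiasedness side condition: W^T C must be the identity matrix.
assert np.allclose(W.T @ C, np.eye(2))

# Noise-related MSE of each single-wavelength reconstruction, Eq. (6),
# in units of the per-acquisition noise variance sigma^2.
mse_single = np.diag(np.linalg.inv(C.T @ C))
print(mse_single)  # both wavelengths: 0.375 sigma^2
```

Note that the combined acquisition contributes to both reconstructions, so each single wavelength MSE (0.375 σ²) is below the σ²/2 obtained from its two single-wavelength acquisitions alone.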

It should be noted that the reconstructed single wavelength datasets m(t) are perfectly separated from each other if the photoacoustic signal generation is linear. Thus, any desired image reconstruction method and spectral unmixing method can be applied to the reconstructed data without restrictions. For example, it is possible to include a simple linear unmixing approach directly in the weights W. For this, we assume that our chromophore concentrations χ can be calculated by

$$\boldsymbol{\chi}=R\,\mathbf{m}, \tag{7}$$

where R is determined by the unmixing algorithm and describes the dependence of a chromophore concentration estimate on each of the applied wavelengths. Thus, we assume that spectral unmixing is a linear process which calculates the concentrations directly from the single wavelength signals, such as the unmixing described by Li et al. [22], and that the image reconstruction and the unmixing can be interchanged. To directly calculate χ from the acquisitions a as χ = W̃^T a, we can calculate a new weight matrix W̃:

$$\tilde{W}=C\,(C^{\mathrm{T}}C)^{-1}\,R^{\mathrm{T}}. \tag{8}$$

The covariance matrix for χ becomes:

$$V_{\chi}=R\,(C^{\mathrm{T}}C)^{-1}\,R^{\mathrm{T}}\,\sigma^{2}. \tag{9}$$

2.2. Least squares optimized reconstruction of a fused multispectral image

A similar concept can be applied to the reconstruction of a maximum SNR “fused” image containing all wavelengths:

Let us, again, consider a system of N_λ wavelengths. We assume that the relative differences in the expected signal amplitudes are known and can be represented in a column vector g. It contains one element per wavelength and should be chosen such that s̃_λj = g_j s, where s ≠ 0 is a constant and s̃_λj are the expected signal amplitudes. Then, it is possible to apply up to N_λ wavelengths simultaneously in each acquisition. The combination of wavelengths in each acquisition can be completely arbitrary. For a given set of acquisitions, the SNR of a fused image can be maximized using the same weighted mean approach:

$$\hat{s}(t)=\sum_{i=1}^{M} w_i\left(\sum_{j=1}^{N_\lambda} c_{ij}\,s_{\lambda_j}(t)+n_i(t)\right). \tag{10}$$

Here, ŝ(t) is the resulting fused image data. The weights w_i can be tuned to optimize the SNR in a least squares sense. To do this, it is necessary to minimize the variance of ŝ(t):

$$\operatorname{Var}[\hat{s}(t)]=\sum_{i=1}^{M} w_i^{2}\,\sigma_i^{2}. \tag{11}$$

Again, it is assumed that σ_i² = σ² for all i. In addition to minimizing the variance, it is useful to enforce E[ŝ] = s, where E[ŝ] is the expected value of ŝ(t). This results in a minimum variance unbiased estimator (MVUE) for s and thus a meaningful result, the avoidance of the zero solution, and a constraint for the optimization.

Using a Lagrange multiplier ξ, the above conditions lead to the optimization problem

$$\frac{\partial}{\partial w_l}\left(\sum_{i=1}^{M} w_i^{2}\,\sigma^{2}+\xi\left(1-\sum_{i=1}^{M} w_i\sum_{j=1}^{N_\lambda} c_{ij}\,g_j\right)\right)\stackrel{!}{=}0, \tag{12}$$

$$\frac{\partial}{\partial \xi}\left(\sum_{i=1}^{M} w_i^{2}\,\sigma^{2}+\xi\left(1-\sum_{i=1}^{M} w_i\sum_{j=1}^{N_\lambda} c_{ij}\,g_j\right)\right)\stackrel{!}{=}0, \tag{13}$$

for all l ∈ [1,M]. This is a linear equation system in w_i and ξ, which can be solved explicitly. From this, the optimal weights w result as:

$$\mathbf{w}=\frac{C\,\mathbf{g}}{\mathbf{g}^{\mathrm{T}}C^{\mathrm{T}}C\,\mathbf{g}}. \tag{14}$$

The scaling in the denominator of Eq. (14) ensures that the resulting reconstruction is equal in amplitude to a reconstruction from only a single one-wavelength acquisition if there are no expected differences between the wavelengths (g = (1, …, 1)^T), while the scaling in the numerator ensures larger contributions by acquisitions with larger inherent signal amplitude.
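A short numerical sketch of Eq. (14), again for a hypothetical three-acquisition, two-wavelength code with equal expected amplitudes assumed:

```python
import numpy as np

# Hypothetical code: one acquisition per single wavelength plus one
# combined acquisition (rows = acquisitions, columns = wavelengths).
C = np.array([[1, 0],
              [0, 1],
              [1, 1]], dtype=float)
g = np.ones(2)  # equal expected signal amplitudes assumed

# Optimal fused weights, Eq. (14): w = C g / (g^T C^T C g).
w = (C @ g) / (g @ C.T @ C @ g)

# Unbiasedness: the expected fused amplitude w^T C g equals 1, so E[s_hat] = s.
assert np.isclose(w @ C @ g, 1.0)

# Best-case fused MSE, Eq. (15) below, in units of sigma^2.
mse_fused = 1.0 / (g @ C.T @ C @ g)
print(mse_fused)  # 1/6 of sigma^2 for this code
```

The combined acquisition receives twice the weight of each single-wavelength one, reflecting its doubled expected signal amplitude.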

Assuming an image formation with the introduced weights, the resulting mean squared error of the reconstructed fused image can be calculated. In the best case, all wavelengths truly contribute to the signal in the expected way. The MSE for this best case scenario is:

$$\mathrm{MSE}_{F}=\frac{\sigma^{2}}{\mathbf{g}^{\mathrm{T}}C^{\mathrm{T}}C\,\mathbf{g}}. \tag{15}$$

Here, a signal amplitude of 1 was assumed, and σ² is the noise variance of a single acquisition. Similar to [18], it is possible to consider a worst case scenario as well, in which, instead of the assumed contribution of all wavelengths, only one wavelength is actually absorbed and generates photoacoustic signals. While in this case, due to the perfect separation of the single wavelengths, the performance of the single wavelength reconstructions (Eq. (6)) remains unchanged, the performance of the fused reconstruction (Eq. (15)) is reduced. For the worst case, if only wavelength λ_t contributes, Eq. (15) transforms into

$$\mathrm{MSE}_{F,\mathrm{wc}}=\frac{\mathbf{g}^{\mathrm{T}}C^{\mathrm{T}}C\,\mathbf{g}}{\left(\delta_t^{\mathrm{T}}C^{\mathrm{T}}C\,\mathbf{g}\right)^{2}}\,\sigma^{2}. \tag{16}$$

3. Example

Let us now consider a more specific example, similar to the one demonstrated in [18]. In this example, it is possible to calculate the MSE of Eq. (15) explicitly and compare three basic cases (see Fig. 1): Consider N_λ wavelengths for which the expected signal amplitudes are unknown; we therefore choose g = (1, …, 1)^T, thus assuming equal contributions. Instead of allowing arbitrary combinations of wavelengths in each acquisition, we restrict ourselves to using either (1) one wavelength at a time, N_P times each, or (2) all wavelengths at once (N_C = N_P times). The third case (3) is a mixture of cases (1) and (2), where in general N_C ≤ N_P.

Fig. 1: Graphical representation of the three basic cases 1–3 in (a)–(c), respectively, for N_λ = 4, N_P = 4. When pulses are shown at multiple wavelengths beneath each other, this represents simultaneous emission of these wavelengths and the reception of a combined acquisition. If only one wavelength is shown in a column, only that wavelength is active in the corresponding acquisition.
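The pulsing matrices for the three cases of Fig. 1 can be generated programmatically (an illustrative sketch; the helper name `pulsing_matrix` is ours, not from the paper):

```python
import numpy as np

def pulsing_matrix(n_lambda, n_p, n_c):
    """Case-3 pulsing matrix C: (n_p - n_c) single-wavelength acquisitions
    per wavelength followed by n_c all-wavelength acquisitions.
    n_c = 0 reproduces case (1); n_c = n_p reproduces case (2)."""
    singles = np.kron(np.eye(n_lambda), np.ones((n_p - n_c, 1)))
    combined = np.ones((n_c, n_lambda))
    return np.vstack([singles, combined])

C = pulsing_matrix(n_lambda=4, n_p=4, n_c=1)
# As stated in the text, C^T C then carries N_P on the main diagonal
# and N_C everywhere else.
print(C.T @ C)
```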

If we only use case (1), the best possible SNR is achieved by averaging over the same number of pulses for all wavelengths. For the single wavelengths, this leads to a noise related mean squared error of

$$\mathrm{MSE}_{S1}=\frac{\sigma^{2}}{N_P}. \tag{17}$$

In the best case, when all wavelengths are absorbed equally, for the fused reconstruction this leads to

$$\mathrm{MSE}_{F1}=\frac{\sigma^{2}}{N_\lambda N_P}. \tag{18}$$

If we were only interested in the fused image, we could instead apply case (2). This will ideally increase our signal amplitude by a factor of N_λ, if we really have equal absorption at all wavelengths, meaning the best possible mean squared error becomes:

$$\mathrm{MSE}_{F2}=\frac{\sigma^{2}}{N_\lambda^{2} N_P}. \tag{19}$$

By using both single wavelength illumination and simultaneous illumination in case (3), Eqs. (17)–(19) can be tuned: By reducing the number of acquisitions for each wavelength by one, it is possible to add one combined pulse while maintaining the same average irradiance. Using this approach and allowing only mixtures of the cases described above (i.e. only single wavelength and all-wavelength combined acquisitions) leads to a cyclic N_λ × N_λ matrix C^T C containing N_P on the main diagonal and N_C everywhere else. This matrix can be inverted explicitly and inserted into Eq. (6). This leads to the mean squared error:

$$\mathrm{MSE}_{\mathrm{SC}3}=\left(\frac{1-\frac{1}{N_\lambda}}{N_P-N_C}+\frac{1}{\left(N_P+N_C(N_\lambda-1)\right)N_\lambda}\right)\sigma^{2}, \tag{20}$$

where N_C is the number of simultaneous illuminations and N_P − N_C is the number of single wavelength pulses (without combined pulses), as before. The fused performance in this case becomes:

$$\mathrm{MSE}_{\mathrm{FC}3}=\frac{\sigma^{2}}{N_\lambda(N_P-N_C)+N_\lambda^{2}N_C}. \tag{21}$$

If, instead, one uses only the single wavelength acquisitions and disregards the combined acquisitions completely, the single wavelength performance becomes:

$$\mathrm{MSE}_{S3}=\frac{\sigma^{2}}{N_P-N_C}=\mathrm{MSE}_{\mathrm{SC}3}+\underbrace{\frac{N_C\,\sigma^{2}}{(N_P-N_C)\left(N_P+N_C(N_\lambda-1)\right)}}_{\geq\,0}. \tag{22}$$

This is equivalent to setting N_P equal to N_P − N_C and N_λ equal to 1 in Eq. (20), or setting N_P equal to N_P − N_C in Eq. (17).

Likewise, ignoring the single pulse acquisitions completely for the fused image will result in

$$\mathrm{MSE}_{F3}=\frac{\sigma^{2}}{N_\lambda^{2}N_C}=\mathrm{MSE}_{\mathrm{FC}3}+\underbrace{\frac{(N_P-N_C)\,\sigma^{2}}{N_\lambda^{2}N_C\left(N_P+N_C(N_\lambda-1)\right)}}_{\geq\,0}. \tag{23}$$

This is equivalent to setting N_P equal to N_C in Eq. (21) or Eq. (19).
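The closed forms in Eqs. (20)–(22) can be cross-checked numerically against the general matrix expressions, Eqs. (6) and (15), for an arbitrary parameter set (a sketch; the parameter values below are illustrative, not taken from the experiments):

```python
import numpy as np

n_lambda, n_p, n_c = 3, 6, 2   # illustrative values
sigma2 = 1.0

# Case-3 code: (n_p - n_c) single pulses per wavelength plus n_c combined pulses.
singles = np.kron(np.eye(n_lambda), np.ones((n_p - n_c, 1)))
C = np.vstack([singles, np.ones((n_c, n_lambda))])
g = np.ones(n_lambda)

# Single wavelength MSE: Eq. (6) vs. closed form Eq. (20).
mse_sc3 = np.linalg.inv(C.T @ C)[0, 0] * sigma2
mse_sc3_closed = ((1 - 1 / n_lambda) / (n_p - n_c)
                  + 1 / ((n_p + n_c * (n_lambda - 1)) * n_lambda)) * sigma2
assert np.isclose(mse_sc3, mse_sc3_closed)

# Fused MSE: Eq. (15) vs. closed form Eq. (21).
mse_fc3 = sigma2 / (g @ C.T @ C @ g)
mse_fc3_closed = sigma2 / (n_lambda * (n_p - n_c) + n_lambda ** 2 * n_c)
assert np.isclose(mse_fc3, mse_fc3_closed)

# Disregarding the combined acquisitions costs SNR, Eq. (22).
mse_s3 = sigma2 / (n_p - n_c)
assert mse_s3 >= mse_sc3
```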

It can be noted that the improvement of Eq. (20) over Eq. (22) depends on the total number of wavelengths applied. More wavelengths result in a decreased performance for each wavelength, eventually reducing the MSE to Eq. (22) for an infinite number of wavelengths. But as can be seen from Eq. (22), the MSE will always be smaller for finite N_λ if all acquisitions are considered. However, the performance is reduced compared to Eq. (17). The reason is that, for a perfect reconstruction of the single wavelengths, it is necessary to remove the other wavelengths’ contributions from each combined acquisition. Thus, during the reconstruction, each combined acquisition is accompanied by the subtraction of one single wavelength acquisition for every wavelength that is not currently reconstructed. With regard to the desired wavelength, these only contribute noise; the code therefore reduces the weighting of both the other wavelengths’ acquisitions and the combined acquisition by using small weights in W for these acquisitions and larger weights for the others. For more wavelengths, more noise gets added, and these weights are therefore reduced further, eventually approaching zero for N_λ → ∞. For finite N_λ, some information is still contained in the combined acquisition, so its inclusion at reduced amplitude will still lead to an improvement in SNR.

A similar comparison can be performed for fused images: When comparing Eqs. (21) and (23) it can be observed that the inclusion of all acquired data leads to a smaller MSE. Therefore, it is better to use all available acquisitions for the fused image than to disregard the single wavelength acquisitions.

To choose the desired N_P and N_C for a specific application, it is helpful to visualize the tradeoff between single wavelength SNR and fused SNR in a single graph with N_C/N_P as a parameter (see Fig. 2).

Fig. 2: Fused and single wavelength SNR over N_C/N_P for the coded acquisitions (Eqs. (21) and (20), solid lines) and for the consecutive acquisition of single wavelength and fused data (Eqs. (22) and (23), dashed lines). The plots for N_λ = 2 were normalized by Eqs. (17) and (19). For all plots, the total average irradiance was kept constant, assuming all wavelengths provide equal irradiance. Asterisks (*) mark steps of 0.2 in N_C/N_P, starting at 0 on the right of each curve up to 1 on the left. SNR was calculated as SNR_i = 1/MSE_i.

For the worst case that was reviewed at the end of section 2.2, Eq. (21) transforms into

$$\mathrm{MSE}_{\mathrm{FC}3,\mathrm{wc}}=\frac{N_\lambda\,\sigma^{2}}{(N_\lambda-1)N_C+N_P}. \tag{24}$$

The equivalent worst case scenario can also be listed for Eqs. (18), (19), and (23):

$$\mathrm{MSE}_{F1,\mathrm{wc}}=\frac{N_\lambda\,\sigma^{2}}{N_P}, \tag{25}$$
$$\mathrm{MSE}_{F2,\mathrm{wc}}=\frac{\sigma^{2}}{N_P}, \tag{26}$$
$$\mathrm{MSE}_{F3,\mathrm{wc}}=\frac{\sigma^{2}}{N_C}. \tag{27}$$

By comparing the equations, it can be seen that Eq. (24) ranges between Eqs. (25) and (26) depending on the choice of N_C/N_P, while it is always smaller than or equal to Eq. (27).
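The worst-case closed form can likewise be verified numerically from the general expression, Eq. (16) (illustrative parameters; wavelength t = 0 is assumed to be the only absorber):

```python
import numpy as np

n_lambda, n_p, n_c = 3, 6, 2   # illustrative values
sigma2 = 1.0

# Case-3 code as before.
singles = np.kron(np.eye(n_lambda), np.ones((n_p - n_c, 1)))
C = np.vstack([singles, np.ones((n_c, n_lambda))])
g = np.ones(n_lambda)           # equal contributions assumed for the weights ...
delta_t = np.eye(n_lambda)[0]   # ... but only wavelength t = 0 actually absorbs

# General worst-case expression, Eq. (16), vs. closed form, Eq. (24).
mse_wc = (g @ C.T @ C @ g) / (delta_t @ C.T @ C @ g) ** 2 * sigma2
mse_wc_closed = n_lambda * sigma2 / ((n_lambda - 1) * n_c + n_p)
assert np.isclose(mse_wc, mse_wc_closed)
```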

4. Experimental setup

To verify the predicted improvement in SNR, experiments with two wavelengths were performed: Two laser diodes at λ1 = 650 nm and λ2 = 905 nm were chosen as the light sources for the coding (pulse duration: 100 ns, pulse energies: 5.7 μJ and 4.4 μJ, respectively). The diodes were coupled into an optical fiber and directed onto a sample (see Fig. 3). The sample was a blackened polystyrene foil, which provides high absorption at both wavelengths.

Fig. 3: Experimental setup: The two diodes were coupled into an optical fiber, which was directed onto the sample, a blackened polystyrene foil. A transducer was placed behind the absorber and coupled to an amplifier and an oscilloscope for data acquisition.

A Panametrics single element transducer (C380, 3.5 MHz center frequency, −6 dB bandwidth: 126%, focal distance 75 mm) was used to receive the signal, which was then amplified using a Miteq AU-3A-0110 amplifier and displayed and recorded using a LeCroy WaveRunner 104MXi oscilloscope. The oscilloscope recorded a signal over several microseconds per acquisition, capturing the signal emitted from the absorber as well as a segment for noise estimation. For a detailed offline analysis of the codes, all necessary wavelength combinations were recorded multiple times: Each wavelength was recorded separately 1000 times, and the simultaneous emission of both wavelengths was also recorded 1000 times.

After recording, the signals were filtered to remove high frequency noise, and outliers were removed. Then, the signals were used to construct the coded, averaged, and fused signals. The number of acquisitions in each case was chosen such that the average power of compared experiments was kept constant. Finally, SNRs were estimated by taking the square of the maximum of the signal detected from the absorber and dividing it by a variance estimate of the noise, obtained from the recording segment before the absorber signal reached the detector. At equal average optical power, the SNR results for the fused image were compared to the averaged simultaneous emission images and to fused single wavelength images, while the SNR results for the decoded signals were compared to those of the single wavelength averages. The number of acquisitions included was varied to assess its influence on the results.
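The SNR estimation step described above can be sketched as follows (a simplified illustration with hypothetical function and variable names; the filtering and outlier removal performed in the actual evaluation are omitted):

```python
import numpy as np

def estimate_snr(trace, noise_end, signal_start):
    """SNR estimate as described in the text: square of the signal peak
    divided by the noise variance estimated from the segment recorded
    before the absorber signal reaches the detector."""
    noise_var = np.var(trace[:noise_end])          # pre-signal noise segment
    peak = np.max(np.abs(trace[signal_start:]))    # absorber signal peak
    return peak ** 2 / noise_var

# Synthetic example trace: a noise-only segment followed by a peak of 2.
trace = np.array([0.1, -0.1, 0.1, -0.1, 0.0, 2.0, 0.0])
snr = estimate_snr(trace, noise_end=4, signal_start=4)
```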

5. Experimental results and discussion

The performance of the reconstructed single wavelength signals agrees well with expectations (see Fig. 4). The reconstructions yield an unbiased estimate of the original signal amplitude, but at slightly lower SNR. It can be clearly seen that the performance for S3, which results from reconstructing only the single wavelength acquisitions while a fused image is also formed from 50% combined acquisitions, is only about 66% of the performance of the decoded single wavelength reconstruction that also uses the combined acquisitions.

Fig. 4: SNR for different code lengths of the fused data reconstructions (a) and of the single wavelength reconstructions (b). The names are chosen equivalent to the MSEs in section 3: F1: fused reconstruction from single wavelength data. F2: fused reconstruction from combined pulses. FC3: fused reconstruction resulting from the acquisition scheme with N_C/N_P = 0.5. F3: fused reconstruction from combined pulses, but using only half the number of pulses as in F2. S1λ1/λ2: averaged single wavelength data. S3: averaged single wavelength data using the other half of the average optical power left from the reconstruction of F3. SC3λ1/λ2: reconstructed single wavelength data resulting from choosing the N_C/N_P = 0.5 distribution, for wavelength λ1/λ2. Error bars indicate the standard deviation of the displayed means including error propagation; lines are drawn to guide the eye.

As was expected from the calculations in the previous sections, all types of fused reconstructions outperform the single wavelength reconstructions, but the performance of F1 and F3 is low and only slightly better than the better of the two single wavelengths. If the two wavelengths were absorbed exactly identically and the optical pulse energies were the same, the SNRs for F1 and F3 should be identical and exactly twice the SNR of S1λ1 and S1λ2, which should also be identical. The deviations from this, seen in Fig. 4, can be attributed to slightly different absorption areas in the experiment as well as different amplitudes, both of which can reduce the fused results. The fused reconstruction for N_C/N_P = 0.5 (FC3) should lie exactly in the middle between the fused reconstruction using only all-fused acquisitions (F2) and only all-single acquisitions (F1). This is the case, as can be seen in Fig. 4. The performance of the single wavelength reconstructions (SC3) should again lie in the middle between S1 and S3 for the same wavelength, and S1 should yield twice the SNR of S3.

The curves shown in Fig. 5 closely follow the theoretical expectations shown in Fig. 2. The plot marks correspond to N_C/N_P = {0, 1/8, 2/8, …, 1} from right to left on each curve. The coded acquisition outperforms the consecutive acquisition for both wavelengths, except for the case N_C/N_P = 1, where both are equal. While the improvement of the fused SNR with increasing N_C/N_P is linear for the code, according to Eq. (21), the decrease in single wavelength SNR grows with N_C/N_P.

Fig. 5: Fused and single wavelength SNR over N_C/N_P for (a) wavelength λ1 and (b) wavelength λ2, comparing single wavelength SNR and fused SNR when applying the acquisition scheme (solid line) or using the same ratio for consecutive acquisitions of single wavelength and fused data (F3 and S3, dashed line). The curve was obtained by varying N_C/N_P from 0 to 1 for N_P = 8 using the experimental data. Error bars indicate the standard deviation of the displayed means including error propagation; lines are drawn to guide the eye.

The fused image clearly increases the SNR compared to the single wavelength acquisitions. In the experiments, if only fused acquisitions are used, the SNR is almost four times as large as for single wavelength acquisitions alone. Unfortunately, the single wavelength information cannot be recovered if this type of acquisition is chosen. The introduced acquisition concept, in turn, allows tuning the ratio between single wavelength SNR and fused SNR and outperforms the consecutive acquisition of the two images.

The tuning is straightforward if we restrict ourselves to treating all wavelengths equally and limit ourselves to acquiring only single wavelength acquisitions or combining all wavelengths into one acquisition. In this case, we can simply tune the ratio N_C/N_P until we reach the desired performance ratio and choose C accordingly.

If we wish to apply more than two wavelengths, however, or wish to tune the specific single wavelength SNR individually for each wavelength, the optimization becomes much more complex. Even applying only three wavelengths already introduces additional options: Instead of applying all wavelengths at once, it is also possible to apply groups of two wavelengths. For four wavelengths, additionally groups of three wavelengths become available.

To optimize the choice of C, it is necessary to evaluate the possible pulse sequences, calculate the resulting MSE for the single wavelength and fused reconstructions using Eqs. (5) or (9) and (15), and minimize the MSE according to one's needs. The optimization of C is then similar to a nonlinear multi-objective knapsack problem, which is NP-hard. The optimization of the matrix C for specific applications is therefore a subject of further research. When only single wavelength data or only fused data is desired, this procedure can be skipped: Regardless of the number of wavelengths used, the optimization of C will always result in applying only single wavelengths or all wavelengths simultaneously, respectively.

One potential drawback of the method is that the fused reconstruction requires an assumption about the relative differences in the expected signal amplitudes. The experiments in [18] have shown that in extreme cases, when one wavelength contributes significantly while all others hardly provide any contrast, the SNR performance of the fused data can be fairly low if this assumption is wrong, even below that of the best single wavelength. On the other hand, the same results have shown that even in these extreme cases the fused SNR can be substantially improved compared to the best single wavelength if we tune N_C/N_P to large values, thereby putting additional emphasis on the fused image. The experiments in [18] used N_C/N_P = 0.5, which was sufficient for the fused image to reach at least similar amplitudes as the best wavelength, even though the other wavelength contributed almost negligible signal.

The Golay codes introduced by Mienkina et al. [13] are very similar to the acquisition scheme presented here. However, while both approaches aim at improving the SNR of laser diode based photoacoustic systems, their scopes differ. Mienkina et al. assume very high pulse repetition rates, which would result in aliasing if no code were used. The gain in SNR is therefore essentially obtained by applying more pulses within the same time frame than averaging would allow. In other words, the Golay codes, as well as all other similar codes, obtain increased SNR by applying more average irradiance. This is a viable option if laser safety is not of concern, or if the total applied irradiance of the laser diodes does not exceed the limit even when the laser is operated at its intended maximum PRF. The new acquisition scheme, on the other hand, assumes a constant average irradiance constraint; essentially, it is limited to distributing the available total irradiance between the acquisitions. It therefore covers all cases where high irradiance limits the maximum PRF of the application such that no aliasing will occur anyway. Additionally, Golay codes are intended for the recovery of single wavelengths. Using the acquisition concept presented here, it is possible to recover SNR optimized fused images from Golay code acquisitions only if no aliasing occurred; otherwise, it would be necessary to find a reconstruction code that eliminates the aliasing as well as optimizing the SNR of the fused image.

6. Conclusion

We proposed using fused images to improve SNR and demonstrated the improvement in experiments. Additionally, we introduced a novel acquisition and decoding concept. It is applicable at all pulse repetition frequencies at which no aliasing occurs, and it allows reconstructing single wavelength data from combined acquisitions while simultaneously permitting a higher SNR fused image to be obtained from the same data. The relative performance of the reconstructions can be tuned to match the specific needs of the situation at hand. The behavior of the concept was demonstrated experimentally for two wavelengths. It was shown that the fused data can increase the signal to noise ratio and that the improvement grows with the number of simultaneously available wavelengths.

Acknowledgments

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318067.

References and links

1. M. Eghtedari, A. Oraevsky, J. A. Copland, N. A. Kotov, A. Conjusteau, and M. Motamedi, “High sensitivity of in vivo detection of gold nanorods using a laser optoacoustic imaging system,” Nano Lett. 7, 1914–1918 (2007).

2. S. Mallidi, T. Larson, J. Aaron, K. Sokolov, and S. Emelianov, “Molecular specific optoacoustic imaging with plasmonic nanoparticles,” Opt. Express 15, 6583–6588 (2007).

3. S. Manohar, S. E. Vaartjes, J. C. G. van Hespen, J. M. Klaase, F. M. van den Engh, W. Steenbergen, and T. G. van Leeuwen, “Initial results of in vivo non-invasive cancer imaging in the human breast using near-infrared photoacoustics,” Opt. Express 15, 12277–12285 (2007).

4. R. Kolkman, W. Steenbergen, and T. van Leeuwen, “In vivo photoacoustic imaging of blood vessels with a pulsed laser diode,” Lasers Med. Sci. 21, 134–139 (2006).

5. D. Razansky, C. Vinegoni, and V. Ntziachristos, “Multispectral photoacoustic imaging of fluorochromes in small animals,” Opt. Lett. 32, 2891–2893 (2007).

6. J. Xiao and J. He, “Multispectral quantitative photoacoustic imaging of osteoarthritis in finger joints,” Appl. Opt. 49, 5721–5727 (2010).

7. A. Buehler, M. Kacprowicz, A. Taruttis, and V. Ntziachristos, “Real-time handheld multispectral optoacoustic imaging,” Opt. Lett. 38, 1404–1406 (2013).

8. C.-S. Friedrich, M. P. Mienkina, C. Brenner, N. C. Gerhardt, M. Jörger, A. Strauß, M. F. Beckmann, G. Schmitz, and M. R. Hofmann, “Photoacoustic blood oxygenation imaging based on semiconductor lasers,” J. Photon. Optoelectron. 1, 48–54 (2012).

9. Y. Wang, K. Maslov, and L. V. Wang, “DMD-encoded spectral photoacoustic microscopy,” Proc. SPIE 8223, 822312 (2012).

10. P. Hajireza, A. Forbrich, and R. J. Zemp, “Multifocus optical-resolution photoacoustic microscopy using stimulated Raman scattering and chromatic aberration,” Opt. Lett. 38, 2711–2713 (2013).

11. K. Daoudi, P. van den Berg, O. Rabot, A. Kohl, S. Tisserand, P. Brands, and W. Steenbergen, “Handheld probe for portable high frame photoacoustic/ultrasound imaging system,” Proc. SPIE 8581, 85812l (2013).

12. K. Daoudi, P. van den Berg, O. Rabot, A. Kohl, S. Tisserand, P. Brands, and W. Steenbergen, “Handheld probe combining laser diode and ultrasound transducer array for ultrasound/photoacoustic dual modality imaging,” SPIE Photonics West, San Francisco, CA, 1–6 February 2014.

13. M. P. Mienkina, C.-S. Friedrich, N. C. Gerhardt, M. F. Beckmann, M. F. Schiffner, M. R. Hofmann, and G. Schmitz, “Multispectral photoacoustic coded excitation imaging using unipolar orthogonal Golay codes,” Opt. Express 18, 9076–9087 (2010).

14. M. F. Beckmann, C.-S. Friedrich, M. P. Mienkina, N. C. Gerhardt, M. R. Hofmann, and G. Schmitz, “Multispectral photoacoustic coded excitation using pseudorandom codes,” Proc. SPIE 8223, 82231E (2012).

15. IEC 60825-1:2007, “Safety of laser products – Part 1: Equipment classification and requirements,” (2007).

16. IEC 601-2-22:1995, “Medical electrical equipment – diagnostic and therapeutic laser equipment,” (1995).

17. K. Daoudi, P. J. van den Berg, O. Rabot, A. Kohl, S. Tisserand, P. Brands, and W. Steenbergen, “Handheld probe integrating laser diode and ultrasound transducer array for ultrasound/photoacoustic dual modality imaging,” Opt. Express 22, 26365–26374 (2014).

18. M. F. Beckmann, H.-M. Schwab, and G. Schmitz, “Multispectral photoacoustic coded excitation with low PRF high power laser diodes,” in Proceedings of IEEE International Ultrasonics Symposium, (IEEE, 2014), pp. 1288–1291.

19. M. F. Beckmann, M. P. Mienkina, G. Schmitz, C.-S. Friedrich, N. C. Gerhardt, and M. R. Hofmann, “Monospectral photoacoustic imaging using Legendre sequences,” in Proceedings of IEEE International Ultrasonics Symposium, (IEEE, 2010), pp. 386–389.

20. M. F. Beckmann, M. P. Mienkina, G. Schmitz, C.-S. Friedrich, N. C. Gerhardt, and M. R. Hofmann, “Photoacoustic coded excitation using periodically perfect sequences,” in Proceedings of IEEE International Ultrasonics Symposium, (IEEE, 2011), pp. 1179–1182.

21. M. F. Beckmann and G. Schmitz, “Photoacoustic coded excitation using pulse position modulation,” in Proceedings of the joint UFFC, EFTF and PFM Symposium, (IEEE, 2013), pp. 1853–1856.

22. M.-L. Li, J.-T. Oh, X. Xie, G. Ku, W. Wang, C. Li, G. Lungu, G. Stoica, and L. V. Wang, “Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography,” Proc. IEEE 96, 481–489 (2008).


Figures (5)

Fig. 1: Graphical representation of the three basic cases 1–3 in (a)–(c), respectively, with Nλ = 4 and NP = 4. When pulses are shown at multiple wavelengths beneath each other, this represents simultaneous emission of the wavelengths and receiving a combined acquisition from these. If only one wavelength is shown in a column, only that single wavelength is active in the corresponding acquisition.

Fig. 2: Fused and single wavelength SNR over NC/NP of the coded acquisitions (Eqs. (21) and (20), solid lines) and the consecutive acquisition of single wavelengths and fused data (Eqs. (22) and (23), dashed lines). The plots for Nλ = 2 were normalized by Eqs. (17) and (19). For all plots the total average irradiance was kept constant, assuming all wavelengths provide equal irradiance. Asterisks (*) mark steps of 0.2 for NC/NP, starting at 0 on the right of each curve up to 1 on the left. SNR was calculated as SNRi = 1/MSEi.

Fig. 3: Experimental setup: The two diodes were coupled into an optical fiber, which was directed on the sample, consisting of a blackened polystyrene foil. A transducer was placed behind the absorber and coupled to an amplifier and an oscilloscope for data acquisition.

Fig. 4: SNR for different code lengths of the fused data reconstructions (a) and of the single wavelength reconstructions (b). The names are chosen equivalent to the MSE in section 3. F1: fused reconstruction from single wavelength data. F2: fused reconstruction from combined pulses. FC3: fused reconstruction resulting from the acquisition scheme with NC/NP = 0.5. F3: fused reconstruction from combined pulses, but using only half the number of pulses as in F2. S1,λ1/λ2: averaged single wavelength data. S3: averaged single wavelength data using the other half of the average optical power left from the reconstruction of F3. SC3,λ1/λ2: reconstructed single wavelength data resulting from choosing the NC/NP = 0.5 distribution for wavelength λ1/λ2. Error bars indicate the standard deviation of the displayed means including error propagation; lines are drawn to guide the eye.

Fig. 5: Fused and single wavelength SNR over NC/NP for (a) wavelength λ1 and (b) wavelength λ2, comparing single wavelength SNR and fused SNR for applying the acquisition scheme (solid line) or using the same ratio for consecutive acquisitions of single wavelength and fused data (F3 and S3, dashed line). The curve was obtained by varying NC/NP from 0 to 1 for NP = 8 using the experimental data. Error bars indicate the standard deviation of the displayed means including error propagation; lines are drawn to guide the eye.

Equations (28)

$$m(\lambda_l, t) = \sum_{i=1}^{M} w_{il}\, a_i = \sum_{i=1}^{M} w_{il} \left( \sum_{j=1}^{N_\lambda} c_{ij}\, s_{\lambda_j}(t) + n_i(t) \right).$$

$$\mathbf{m}(t) = W^T \mathbf{a}$$

$$\sum_{i=1}^{M} w_{il}\, c_{ij} = \begin{cases} 1 & \text{for } j = l \\ 0 & \text{for } j \neq l \end{cases}$$

$$\operatorname{Var}\!\left(m(\lambda_l, t)\right) = \sum_{i=1}^{M} w_{il}^2\, \sigma_i^2,$$

$$W = C\, (C^T C)^{-1}.$$

$$V = (C^T C)^{-1} \sigma^2.$$

$$MSE_{\mathrm{SC},l} = \boldsymbol{\delta}_l^T (C^T C)^{-1} \boldsymbol{\delta}_l\, \sigma^2.$$

$$\boldsymbol{\chi} = R\, \mathbf{m},$$

$$\tilde{W} = C\, (C^T C)^{-1} R^T.$$

$$V_\chi = R\, (C^T C)^{-1} R^T \sigma^2.$$

$$\hat{s}(t) = \sum_{i=1}^{M} w_i \left( \sum_{j=1}^{N_\lambda} c_{ij}\, s_{\lambda_j}(t) + n_i(t) \right).$$

$$\operatorname{Var}[\hat{s}(t)] = \sum_{i=1}^{M} w_i^2\, \sigma_i^2.$$

$$\frac{\partial}{\partial w_l} \left( \sum_{i=1}^{M} w_i^2 \sigma^2 + \xi \left( 1 - \sum_{i=1}^{M} w_i \sum_{j=1}^{N_\lambda} c_{ij} g_j \right) \right) \stackrel{!}{=} 0,$$

$$\frac{\partial}{\partial \xi} \left( \sum_{i=1}^{M} w_i^2 \sigma^2 + \xi \left( 1 - \sum_{i=1}^{M} w_i \sum_{j=1}^{N_\lambda} c_{ij} g_j \right) \right) \stackrel{!}{=} 0,$$

$$\mathbf{w} = \frac{C \mathbf{g}}{\mathbf{g}^T C^T C \mathbf{g}}.$$

$$MSE_{\mathrm{F}} = \frac{\sigma^2}{\mathbf{g}^T C^T C \mathbf{g}}.$$

$$MSE_{\mathrm{F,wc}} = \frac{\mathbf{g}^T C^T C \mathbf{g}}{\left( \boldsymbol{\delta}_t^T C^T C \mathbf{g} \right)^2}\, \sigma^2.$$

$$MSE_{\mathrm{S1}} = \frac{\sigma^2}{N_P}.$$

$$MSE_{\mathrm{F1}} = \frac{\sigma^2}{N_\lambda N_P}.$$

$$MSE_{\mathrm{F2}} = \frac{\sigma^2}{N_\lambda^2 N_P}.$$

$$MSE_{\mathrm{SC3}} = \left( \frac{1 - \frac{1}{N_\lambda}}{N_P - N_C} + \frac{1}{\left( N_P + N_C (N_\lambda - 1) \right) N_\lambda} \right) \sigma^2,$$

$$MSE_{\mathrm{FC3}} = \frac{\sigma^2}{N_\lambda (N_P - N_C) + N_\lambda^2 N_C}.$$

$$MSE_{\mathrm{S3}} = \frac{\sigma^2}{N_P - N_C} = MSE_{\mathrm{SC3}} + \underbrace{\frac{N_C\, \sigma^2}{(N_P - N_C)\left( N_P + N_C (N_\lambda - 1) \right)}}_{\geq 0}.$$

$$MSE_{\mathrm{F3}} = \frac{\sigma^2}{N_\lambda^2 N_C} = MSE_{\mathrm{FC3}} + \underbrace{\frac{(N_P - N_C)\, \sigma^2}{N_\lambda^2 N_C \left( N_P + N_C (N_\lambda - 1) \right)}}_{\geq 0}.$$

$$MSE_{\mathrm{FC3,wc}} = \frac{N_\lambda \sigma^2}{(N_\lambda - 1) N_C + N_P}.$$

$$MSE_{\mathrm{F1,wc}} = \frac{N_\lambda \sigma^2}{N_P},$$

$$MSE_{\mathrm{F2,wc}} = \frac{\sigma^2}{N_P},$$

$$MSE_{\mathrm{F3,wc}} = \frac{\sigma^2}{N_C}.$$
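The key decoding relations can be verified numerically. The sketch below is an illustrative check with arbitrary values, not the authors' code: it builds a small code matrix C for two wavelengths and three acquisitions, forms the decoding matrix W = C(CᵀC)⁻¹, and confirms the unbiasedness condition WᵀC = I, the unit gain of the fused weights w = Cg/(gᵀCᵀCg), and that the fused estimator's variance matches MSE_F = σ²/(gᵀCᵀCg):

```python
import numpy as np

# Illustrative code matrix: rows are acquisitions, columns are wavelengths.
C = np.array([[1.0, 0.0],   # acquisition 1: wavelength 1 only
              [0.0, 1.0],   # acquisition 2: wavelength 2 only
              [1.0, 1.0]])  # acquisition 3: both wavelengths combined
g = np.array([1.0, 1.0])    # equal per-wavelength signal amplitudes assumed

# Single wavelength decoding: W = C (C^T C)^{-1} satisfies W^T C = I,
# i.e. each wavelength is recovered without bias.
W = C @ np.linalg.inv(C.T @ C)
print(np.allclose(W.T @ C, np.eye(2)))  # True

# Fused image weights: w = C g / (g^T C^T C g) has unit gain on the fused signal.
w = C @ g / (g @ C.T @ C @ g)
print(np.isclose(w @ C @ g, 1.0))  # True

# The estimator variance sum_i w_i^2 sigma^2 equals MSE_F = sigma^2 / (g^T C^T C g).
sigma2 = 1.0
mse_f = sigma2 / (g @ C.T @ C @ g)
print(np.isclose(np.sum(w**2) * sigma2, mse_f))  # True
```

For this C and g, gᵀCᵀCg = 6, so the fused MSE is σ²/6, below the σ²/2 obtained by averaging the two single wavelength acquisitions alone.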