Optica Publishing Group

Broadening the detection view of 2D photoacoustic tomography using two linear array transducers

Open Access

Abstract

Although commercial linear array transducers are widely used in clinical ultrasound, their application in photoacoustic tomography (PAT) remains limited because the limited-view problem restricts image quality. In this paper, we propose a simple approach to address the limited-view problem in 2D by using two linear array transducers to receive photoacoustic signals from different orientations. The positions of the two transducers can be adjusted to fit the specific geometry of an imaging site. This approach is made possible by a new calibration method, in which the relative position of the two transducers is calibrated by transmitting an ultrasound wave with one transducer while receiving with the other. The calibration results are then applied in the subsequent PAT imaging to combine the detected acoustic signals from both transducers and thereby increase the detection view. No calibration phantom is required, which greatly simplifies and shortens the process. The efficacy of the calibration and the improvement in PAT image quality are demonstrated through phantom studies and in vivo imaging.

© 2016 Optical Society of America

1. Introduction

Photoacoustic tomography (PAT) has emerged as a promising biomedical imaging technology in recent years. It is a cross-sectional or three-dimensional imaging technique based on the photoacoustic effect, in which the energy of incident short-pulse light is converted into acoustic energy through thermo-elastic expansion. In particular, specific chromophores in the irradiated tissue, such as hemoglobin, absorb photons and convert the absorbed energy to heat. The subsequent thermo-elastic expansion generates acoustic pressure waves that can be acquired by a transducer array at the surface of the sample. Compared with ultrasonic echography, PAT provides optical absorption contrast for biomedical imaging while maintaining high-resolution image quality beyond the optical diffusion limit by combining optical excitation with acoustic acquisition [1–3].

After the acoustic waves are detected by the transducer, various algorithms can be utilized to reconstruct the original pressure distribution. Among the reconstructed image details, tissue interfaces are of particular interest for identifying organ boundaries. Because each boundary consists of small flat segments, and each segment emits acoustic waves along the two opposite directions perpendicular to it, a boundary can be well reconstructed only if every local normal of the boundary passes through the detection aperture. Therefore, to exactly reconstruct an arbitrary boundary in 2D, it is intuitive to choose a ring-shaped transducer array that encloses the object. In practice, specialized ring-shaped transducer arrays [4–7] have been designed to achieve exact reconstruction with full-view detection. A sample is placed at the center of the array, and the photoacoustic signal generated from the sample can be detected at receiving angles spanning a complete 360 degree circle for 2D imaging. However, these transducers require special customized designs and are not commonly available.

On the other hand, linear array transducers are widely used in clinical ultrasound (US) applications because of their versatile imaging of the human body, free-hand guidance, and real-time capability. However, linear arrays can only detect photoacoustic signals over a limited range of view angles. This shortcoming causes reconstruction difficulties and is usually referred to as the limited-view problem [8]. For example, a 38 mm wide transducer imaging a sample 30 mm away has a maximum receiving angle of only about 65 degrees. This problem results in loss of sample information and blurring of sharp object details. Hence the detectable tissue structure is highly restricted by the tissue orientation relative to the receiving transducer [9,10].
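The quoted receiving angle follows from simple geometry: a point at depth d directly in front of the array center sees the aperture of width w under a full angle of 2·arctan(w/2d). A minimal check in Python (the function name is ours):

```python
import math

def max_receiving_angle(width_mm, depth_mm):
    """Full angle (degrees) subtended by a linear array of width width_mm
    at a point depth_mm directly in front of the array center."""
    return math.degrees(2 * math.atan((width_mm / 2) / depth_mm))

# The case from the text: a 38 mm wide array, sample 30 mm away.
print(round(max_receiving_angle(38.0, 30.0)))  # prints 65
```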

In order to increase the detection view for PAT using linear array transducers, various methods have been developed. A straightforward solution is to rotate the linear array transducer or the sample to increase the view angle or even achieve full-view PAT [11–13]. When the transducer is rotated, its positions are predetermined and restricted by the rotation center and radius of the rotational stage. This makes the approach difficult to implement in clinical applications, where the body sites or organs usually have irregular shapes and geometries. Another approach is to use acoustic reflectors to redirect otherwise undetected photoacoustic signals to the linear array transducer. Cox et al. [14] utilized two acoustic reflectors perpendicular to the linear array transducer to create an infinitely wide virtual transducer array. However, this method was only demonstrated in numerical simulations. Huang et al. [15] and Li et al. [16] proposed using a single 45-degree acoustic reflector, or two acoustic reflectors set at 120 degrees relative to each other, to double or triple the detection view. The relative position of the linear array and the acoustic reflectors is fixed before data acquisition. The acoustic reflector creates a virtual transducer that acquires acoustic signals from different directions and hence increases the detection view. However, the acoustic reflector usually must be placed close to the imaging area, which is not practical in clinical applications.

In addition to hardware improvements, various image processing algorithms have also been proposed to address the limited-view problem. Wu et al. [17] made use of acoustic speckle noise, which contains photoacoustic signals scattered and reflected from other directions, to expand the detection view. Because US imaging can map the acoustically inhomogeneous properties of tissue, it was employed to estimate the Green's function of the tissue. With the approximated Green's function and the recorded photoacoustic signal, a PAT image can be extracted from the speckle noise to achieve a larger detection view. Ma et al. [18] proposed combining the filtered mean back-projection method with an iterative algorithm to reconstruct the distribution of optical absorption. The filtered mean back-projection produces an initial distribution that shortens the iteration time and enhances the reconstruction speed. Huang et al. [19] developed a full-wave approach to iterative image reconstruction in PAT. Based on the acoustic wave equation, they established a discrete model and implemented an associated pair of discrete forward and back-projection operators to minimize a cost function. These image processing algorithms show promising improvements in image quality and do not require extra hardware for data acquisition. However, they are usually computationally intensive and time consuming [15,16].

In this paper, we present a simple approach to address the limited-view problem for PAT imaging using linear array transducers. We use two linear array transducers placed at different orientations to broaden the detection view. The positions of the two transducers are not predetermined and can be adjusted flexibly to fit the specific geometry of an imaging site. This approach is made possible by a new method of calibrating the relative position of the two transducers. Our calibration uses ultrasound imaging, in which one transducer transmits ultrasound signals over a few selected channels and the other transducer receives them. To the best of our knowledge, this calibration method has not been reported before. The calibrated relative position is then applied to the subsequent PAT imaging to correlate and combine the acoustic signals received by the two transducers and thereby broaden the view angle. Compared with the previously mentioned methods, this approach has several advantages. First, the imaging system does not need any mechanical rotation, which can significantly reduce the imaging time. Second, the calibration process uses ultrasound imaging from the same transducers and does not require a calibration phantom; it therefore has the potential to be performed in seconds and fully automatically. The signal-to-noise ratio of the ultrasound imaging is high, which benefits the calibration. Third, this approach provides flexibility in the positioning of the two transducers, which is convenient for clinical applications. We believe our approach provides a practical and simple solution to the limited-view problem of PAT imaging with linear array transducers in clinical applications.

2. Calibration method

In order to increase the view angle, two linear array transducers can be used simultaneously to acquire photoacoustic signals from the same imaging plane. The challenge of using two linear array transducers is calibrating their relative position. One approach is to image a calibration phantom: the phantom is placed inside the field of view (FOV) of the system and calibration images are acquired. When calibration is finished, the phantom is removed and samples are placed inside the FOV for PAT imaging. This approach can be applied in ex vivo or small animal studies but is difficult to implement on patients in clinical applications. The use of a calibration phantom is also cumbersome and time consuming.

Our method uses the ultrasound imaging of the two transducers for calibration without the need for a calibration phantom, which greatly simplifies the process and makes it suitable for clinical applications. Figure 1 illustrates the top view of the imaging configuration. Two identical linear array transducers (Transducers A and B) are positioned in the same imaging plane. The relative angle α between the two transducers can be arbitrary, ranging from 90 degrees (perpendicular position) to 180 degrees (face-to-face position), as long as the ultrasound signal transmitted from one transducer can be received by the other.


Fig. 1 Top view of the imaging configuration using two identical linear array transducers. (a) shows the calibration process using ultrasound imaging, in which several channels of Transducer B (highlighted in red) are transmitting and Transducer A is receiving acoustic waves. (b) shows the photoacoustic imaging where both transducers are receiving photoacoustic waves.


At first, the system operates in the ultrasound modality for calibration, as shown in Fig. 1(a). Several discrete channels on Transducer B are enabled to transmit ultrasound signals as acoustic source points (SPs). The distances between adjacent SPs are chosen to be different so that they can be differentiated. A virtual image representing the locations of those SPs in the FOV of Transducer B can then be obtained. The location of Transducer B determines the middle top position of the virtual image, so the SPs are located on the very first row of the virtual image. Since the sequence of the enabled channels is already known, the column locations of the SPs can also be determined in the virtual image. Therefore, based on the geometry and the sequence of the enabled channels, the virtual image containing the physical positions of the SPs can be obtained. Meanwhile, Transducer A works in receiving mode to detect the ultrasound signals and reconstructs an image of the same SPs in the FOV of Transducer A. Since the virtual image and the reconstructed image capture the same set of SPs, the FOVs of Transducers A and B can be registered by matching the relative positions of the corresponding SP pairs in the two images. Currently, the SP pairs are matched manually between the two images. Since the distances between adjacent SPs are chosen to be different, the order of the SPs can be clearly identified in both the virtual and reconstructed images, so each SP in the virtual image can be matched with the corresponding SP in the reconstructed image. Alternatively, pattern matching algorithms can be used to match the corresponding SP pairs automatically.
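The spacing-based ordering of the SPs can be sketched as follows. Because a rigid transform preserves distances, the sequence of distinct adjacent spacings is the same in both images (up to reversal), so it identifies the order of the SPs. This is an illustrative sketch; the function name and tolerance are ours.

```python
import numpy as np

def order_by_spacing(virtual_pts, recon_pts, atol=0.5):
    """Match two orderings of the same SPs via their distinct adjacent
    spacings, which a rigid transform leaves unchanged. Both inputs are
    (N, 2) point arrays sorted along the respective array directions;
    returns recon_pts reordered to correspond row-by-row with virtual_pts."""
    dv = np.linalg.norm(np.diff(virtual_pts, axis=0), axis=1)
    dr = np.linalg.norm(np.diff(recon_pts, axis=0), axis=1)
    if np.allclose(dv, dr, atol=atol):        # same order
        return recon_pts
    if np.allclose(dv, dr[::-1], atol=atol):  # reversed order
        return recon_pts[::-1]
    raise ValueError("adjacent spacings do not match")
```

For instance, enabling channels 64, 96, 110, and 128 on a 0.3 mm pitch array gives adjacent spacings of 9.6 mm, 4.2 mm, and 5.4 mm, which are all distinct.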

The next step of the calibration process is to solve for the transformation parameters between the two FOVs based on the correspondence of the SP pairs in the virtual and reconstructed images. In our experimental setup, where two identical linear array transducers are positioned in the same plane, only image rotation and translation need to be considered. Therefore, a global rigid transformation, or Euclidean transformation [20], is used as the transformation function to translate and rotate the reconstructed image with respect to the virtual image, keeping distances between points and angles between lines unchanged after transformation.

The transformation function can be written as [20]

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & h \\ 0 & 1 & k \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$
where (X, Y) and (x, y) are the corresponding coordinates of an SP in the virtual and reconstructed images, respectively. The translation parameters (h, k) determine the image shift along the two coordinate axes, and the rotation parameter θ gives the orientation of the virtual image with respect to the reconstructed image in the counter-clockwise direction. In order to solve for the three transformation parameters, at least two pairs of corresponding SPs, which provide four simultaneous equations, are required. The coordinates (X, Y) of all the SPs in the virtual image are determined by the sequence of the transmitting channels. The coordinates (x, y) of all the SPs in the reconstructed image are determined from the reconstructed ultrasound image. Plugging (X, Y) and (x, y) of all the SP pairs into Eq. (1) and applying a regression algorithm, the transformation parameters (θ, h, k) can be solved. Once the transformation parameters are solved, the relative position of the two linear array transducers is calibrated and their FOVs can be overlapped: the FOV of Transducer A is transformed with respect to the FOV of Transducer B by the parameters (θ, h, k).
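For any number of SP pairs, the least-squares solution of Eq. (1) has the standard closed form of a 2D rigid (Procrustes) fit. The sketch below is our illustration of such a regression, not necessarily the algorithm used by the authors.

```python
import numpy as np

def solve_rigid_2d(src, dst):
    """Least-squares estimate of (theta, h, k) in Eq. (1), i.e.
    dst ~= R(theta) @ src + (h, k), from N >= 2 corresponding points.
    src: (N, 2) SP coordinates in the reconstructed image;
    dst: (N, 2) SP coordinates in the virtual image."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # 2D Procrustes: the optimal angle comes from the cross-covariance
    # of the centered point sets.
    num = np.sum(src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0])
    den = np.sum(src_c[:, 0] * dst_c[:, 0] + src_c[:, 1] * dst_c[:, 1])
    theta = np.arctan2(num, den)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Translation aligns the centroids after rotation.
    h, k = dst.mean(axis=0) - R @ src.mean(axis=0)
    return theta, h, k
```

With noise-free correspondences, the fit recovers the exact transform; with noisy point locations, it averages the error over all SP pairs, which is why more than the minimum two pairs is useful.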

Next, the imaging system is switched from the ultrasound to the photoacoustic modality. Both linear array transducers work in receiving mode to acquire photoacoustic signals, as shown in Fig. 1(b). A PAT image can be reconstructed from the signal received by each transducer. Because each PAT image shares the same FOV as the corresponding ultrasound image, the two reconstructed PAT images can be aligned using the same transformation parameters resolved from the ultrasound modality.

As discussed above, when two linear array transducers are used to increase the view angle in PAT, they are assumed to be located in the same imaging plane. In experiments, the two transducers can be aligned to the same plane by simple visual guidance, and we will show later that this provides reasonable accuracy. In cases where the two transducers cannot be seen by eye or higher accuracy is needed, the alignment can be guided by monitoring the intensity of the received ultrasound signal: the two transducers are considered aligned in the same plane when the image reconstructed by the receiving transducer shows the highest intensity of the reconstructed SPs.

To validate the efficacy of aligning the two transducers using this intensity monitoring approach, simulations were carried out to study the signal intensity when one transducer is rotated or translated out of the imaging plane, as shown in Fig. 2. Several channels on Transducer A are enabled to transmit ultrasound signals, and Transducer B receives the signals for reconstruction at different off-plane rotations and translations. The forward ultrasound wave propagation is calculated with a k-space model [21–23] on a 160 × 160 × 130 voxel grid corresponding to a 48 mm × 48 mm × 39 mm volume. The spatial step is 0.3 mm and the time step is 58.44 ns. Each transducer is discretized into 128 pixels, corresponding to 39 mm in width. From the reconstructed image, the maximum pixel intensity is taken to indicate the quality of the reconstruction. In Fig. 2(a), Transducer B is initially set in the same imaging plane as Transducer A at a 90 degree relative angle and is then rotated around the x axis to receive ultrasound signals at different orientations. The change of the maximum pixel intensity with rotation angle is shown in Fig. 2(b): the maximum pixel intensity drops quickly as the rotation angle around the x axis increases. In Fig. 2(c), Transducer B is first positioned in the same plane as Transducer A at a 90 degree relative angle and is then translated along the z axis. Figure 2(d) shows the maximum pixel intensity as a function of the translation distance; it decreases rapidly as the translation along the z axis increases. These simulation results show that the maximum pixel intensity of the reconstructed SPs can be used to guide the alignment of the transducers into the same imaging plane with high accuracy.


Fig. 2 Simulation results showing the variation of the maximum pixel intensity of the reconstructed SPs by Transducer B when it rotates or translates out of the image plane of Transducer A. (a)-(b) Transducer B receives ultrasound signal at different rotation angles around x axis. (c)-(d) Transducer B receives signal at different translational location along z axis.


3. Experimental setup

Figure 3 shows the schematic of the experimental setup for PAT imaging with two transducers. The laser source is a Q-switched Nd:YAG laser (Surelite II, Continuum, Inc., Santa Clara, CA, USA) at 532 nm wavelength. The 532 nm light pumps an optical parametric oscillator (OPO) from the same manufacturer to generate tunable wavelengths. The OPO output is tunable from 680 nm to 2500 nm, with a pulse width of 5 ns and a repetition rate of 10 Hz. The imaging system uses two identical conventional ultrasound linear array transducers controlled by a SonixMDP ultrasound imaging system (Ultrasonix Medical Corporation, Richmond, BC, Canada). The ultrasound imaging system is capable of enabling selected transducer channels for acoustic wave transmission. The two linear array transducers (L14-5/38 Linear) are positioned in the same plane using visual guidance. Each linear array transducer has 128 channels with a 7.2 MHz center frequency, a minimum 70% fractional bandwidth (at −6 dB), and a 0.3 mm element pitch. The acoustic signal received by the transducer array is sent to a specialized research module, SonixDAQ (Ultrasonix Medical Corporation, Richmond, BC, Canada), for data acquisition. This module acquires and digitizes pre-beamformed radio-frequency signals from all 128 channels individually at a sampling rate of 40 MHz with 12-bit resolution. Both photoacoustic and ultrasound images are reconstructed using a delay-and-sum beamforming algorithm. The lateral and axial resolutions of the system are 0.52 mm and 0.44 mm [24], respectively.
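A minimal sketch of delay-and-sum reconstruction for photoacoustic data (one-way delays from each pixel to each element) is given below. The pitch and sampling rate match the system described above; the image grid, speed of sound, and all names are our assumptions, not the authors' implementation.

```python
import numpy as np

def das_pat(rf, pitch_mm=0.3, fs_hz=40e6, c=1540.0,
            nx=128, nz=256, dz_mm=0.15):
    """Delay-and-sum reconstruction of a 2D photoacoustic image from
    pre-beamformed RF data. rf is (n_channels, n_samples), one trace
    per element, with t = 0 at the laser pulse. Returns an (nz, nx)
    image on a grid centered on and aligned with the array."""
    n_ch, n_samp = rf.shape
    elem_x = (np.arange(n_ch) - (n_ch - 1) / 2) * pitch_mm * 1e-3
    px = (np.arange(nx) - (nx - 1) / 2) * pitch_mm * 1e-3
    pz = (np.arange(nz) + 1) * dz_mm * 1e-3
    img = np.zeros((nz, nx))
    for iz, z in enumerate(pz):
        # One-way time of flight from each pixel in this row to every element.
        dist = np.sqrt((px[None, :] - elem_x[:, None]) ** 2 + z ** 2)
        idx = np.rint(dist / c * fs_hz).astype(int)
        valid = idx < n_samp
        idx = np.clip(idx, 0, n_samp - 1)
        # Gather the delayed sample from every channel and sum coherently.
        img[iz] = (rf[np.arange(n_ch)[:, None], idx] * valid).sum(axis=0)
    return img
```

Note the one-way delay: unlike pulse-echo US beamforming, the photoacoustic source emits at the laser pulse, so no transmit delay is added.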


Fig. 3 Schematic of the imaging system setup. Nd:YAG: Surelite II pulsed laser. OPO: optical parametric oscillator.


4. Experimental results

Figure 4 demonstrates the experimental result of the calibration process using ultrasound. The relative angle between the two transducers is ~100 degrees, and four channels of Transducer A are enabled to transmit ultrasound signals, as shown in Fig. 4(a). The enabled channels are No. 64, No. 96, No. 110, and No. 128. Figure 4(b) shows the sketch of Transducer A and the virtual image, which indicates the FOV of Transducer A. The four corresponding SPs representing the enabled channels appear at the top of the image as SP1, SP2, SP3, and SP4, respectively. The exact positions of the SPs in the virtual image are determined by the sequence of the enabled channels on Transducer A. Figure 4(c) shows the sketch of Transducer B and the ultrasound image reconstructed from the signal received by Transducer B. When the enabled channels of Transducer A are within the receiving angle (~45 degrees), the corresponding SPs can be observed in the reconstructed image as SP1', SP2', SP3', and SP4'. Because the distances between adjacent SPs are different, the matching correspondence can be found by comparing the two images: SP1 to SP1', SP2 to SP2', SP3 to SP3', and SP4 to SP4'. Hence, the correspondence between the SP pairs from the two images can be established, and the transformation parameters can be solved by applying the coordinates of the corresponding SP pairs to Eq. (1). For the example shown in Fig. 4, the transformation parameters are obtained as h = 51.9 mm, k = 101.2 mm, and θ = 100.8 degrees. The reconstructed image after applying the transformation is shown in Fig. 4(d), which matches Fig. 4(b). These transformation parameters can be applied to the subsequent PAT image reconstruction as long as the transducers do not move.


Fig. 4 (a) Top view of the imaging platform with ~100 degree relative angle between the two linear array transducers. (b) Virtual ultrasound image and sketch diagram of Transducer A. SP1, SP2, SP3, and SP4 represent the enabled channels No. 64, No. 96, No. 110, and No. 128, respectively. (c) Reconstructed image and sketch diagram of Transducer B. SP1', SP2', SP3', and SP4' represent the reconstructed channels No. 64, No. 96, No. 110, and No. 128, respectively. (d) The reconstructed image of Transducer B after applying the transformation.


After the transformation parameters are obtained from the ultrasound modality, the system is switched to the PAT modality, in which a laser illuminates the sample and both Transducers A and B receive photoacoustic signals. Figure 5(a) shows the PAT imaging of a phantom containing a piece of paper with 5 printed points. The illumination wavelength is 700 nm and the incident local light fluence rate on the surface of the phantom is ~75 mJ/cm2; a relatively high local fluence rate is used here to achieve a high signal-to-noise ratio. Figure 5(b) shows the PAT image acquired by Transducer B. Figure 5(c) shows the PAT image reconstructed by Transducer A and transformed using the calibration parameters obtained from the ultrasound modality. Figure 5(d) shows the combined image obtained by summing the previous two images pixel by pixel. In Fig. 5(d), the points overlap well and their intensity is increased. This shows that the calibration parameters obtained from ultrasound imaging can be applied to PAT imaging, and that the images from the two transducers can be combined to increase the view angle and the signal-to-noise ratio.
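The pixel-by-pixel combination can be sketched as follows: the image from one transducer is warped into the other's frame with the calibrated (θ, h, k) of Eq. (1) and the two images are then summed. The nearest-neighbor inverse mapping and all names here are our choices for illustration.

```python
import numpy as np

def combine_images(img_b, img_a, theta, h, k, pix_mm=0.3):
    """Warp img_a into img_b's frame with the rigid transform of Eq. (1),
    (X, Y) = R(theta) (x, y) + (h, k), then sum the images pixel by pixel.
    Inverse nearest-neighbor mapping; pix_mm is the (common) pixel size,
    h and k are in mm, theta in radians."""
    nz, nx = img_b.shape
    c, s = np.cos(theta), np.sin(theta)
    # Physical coordinates (mm) of every target pixel:
    # x runs across columns, y down rows.
    ys, xs = np.meshgrid(np.arange(nz), np.arange(nx), indexing="ij")
    X, Y = xs * pix_mm, ys * pix_mm
    # Invert the transform: rotate (X - h, Y - k) back by -theta.
    x = c * (X - h) + s * (Y - k)
    y = -s * (X - h) + c * (Y - k)
    ix = np.rint(x / pix_mm).astype(int)
    iy = np.rint(y / pix_mm).astype(int)
    inside = (ix >= 0) & (ix < img_a.shape[1]) & (iy >= 0) & (iy < img_a.shape[0])
    warped = np.zeros_like(img_b)
    warped[inside] = img_a[iy[inside], ix[inside]]
    return img_b + warped
```

Inverse mapping (looping over target pixels rather than source pixels) avoids holes in the warped image; for smoother results, bilinear interpolation could replace the nearest-neighbor rounding.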


Fig. 5 PAT imaging of a phantom containing a piece of paper printed with 5 points. (a) Photograph of the printed paper phantom and sketch diagram of the transducers. (b) Reconstructed image of the phantom acquired by Transducer B. (c) Reconstructed image of the phantom acquired by Transducer A after transformation. (d) Overlapped image which combines acquired data from Transducer A and Transducer B.


In the ultrasound-based calibration approach above, Transducer A transmits while B receives ultrasound (Calibration 1). The calibration can also be performed with Transducer B transmitting and A receiving (Calibration 2). In the alternative approach using a calibration phantom, the two transducers image the same phantom, and the images are then registered to find the calibration parameters (Calibration 3); here the printed points serve as the calibration phantom, and the two photoacoustic images of the points are registered to obtain the calibration parameters. Table 1 shows the transformation parameters obtained using the three calibration approaches. The transformation parameters match closely among the three approaches, indicating that the ultrasound-based calibration has similar accuracy to the phantom-based calibration. In Table 1, the standard deviations of the two translations are 0.3 mm and 0.5 mm, respectively, and that of the rotation angle is 0.2 degrees. Compared with the PAT system resolutions of 0.44 mm and 0.52 mm in the axial and lateral directions respectively [24], the deviation is within the resolution of the system. The accuracy of the proposed ultrasound calibration method is affected by the accuracy of the assumed acoustic speed, which is a common issue in PAT imaging. Performing the calibration with ultrasound, without the need for a phantom, is advantageous in clinical applications, where positioning a phantom can be cumbersome and sometimes impractical.


Table 1. Transformation parameters obtained under different calibration approaches

In order to experimentally demonstrate the validity of the proposed method, we carried out a phantom study. In Fig. 6, a piece of paper printed with a regular octagon and 3 points is imaged. Figure 6(a) shows the phantom and the transducer positions. The excitation wavelength is 700 nm, the incident local light fluence rate on the phantom surface is ~75 mJ/cm2, and the relative angle between the two transducers is ~100 degrees. Figure 6(b) shows the photoacoustic image reconstructed by Transducer B, and Fig. 6(c) shows that by Transducer A after transformation. From Fig. 6(c), we can see that Transducer A is unable to detect the horizontal borders of the octagon, because the photoacoustic wave-fronts from those edges propagate along the vertical direction, which is outside the receiving angle of Transducer A. These signals are, however, well acquired by Transducer B. A similar observation applies to the vertical borders, which are clearly detected by Transducer A but fall outside the receiving angle of Transducer B. In contrast, the combined image shown in Fig. 6(d) overcomes this problem by incorporating the photoacoustic signals received by both linear array transducers: both the vertical and horizontal borders are visualized.


Fig. 6 PAT images of a sheet of paper printed with regular octagon and 3 points. (a) Diagram of the printed paper phantom and sketch diagram of the transducers. (b) Image of the printed paper reconstructed by Transducer B. (c) Image of the printed paper reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.


In order to demonstrate that the transducers can be positioned flexibly to fit the geometry of an imaging site, the two transducers were tested at three different orientations. The phantom is a piece of paper printed with a regular octagon and 3 points. Figures 7(a)-7(c) show the combined PAT images in gray scale when the relative angle between the two transducers is 180, 120, and 90 degrees, respectively. Figures 7(d)-7(f) show the same combined PAT images, but with the image acquired by one transducer shown in gray scale and the other in red so that the signals from the two transducers can be differentiated. Figures 7(a) and 7(d) show the result at a 180 degree relative angle, where the two linear array transducers face each other in a parallel position. The two transducers detect almost the same tissue structure, and combining the images increases the signal level. However, structures perpendicular to the transducer surfaces cannot be captured by either transducer, as shown by the missing edges of the octagon. Figures 7(b) and 7(e) show the result at a 120 degree relative angle. Since the two transducers are not parallel to each other, they capture some different structures, and the combined image shows a more complete octagon. Figures 7(c) and 7(f) show the result at a 90 degree relative angle, where the two linear array transducers are perpendicular to each other. The structures captured by the two transducers are more complementary, and the combined image shows most structures of the octagon.


Fig. 7 Combined PAT images of a sheet of paper printed with regular octagon and 3 points using two linear array transducers at (a) 180 degree (b) 120 degree (c) 90 degree. (d)-(f) The corresponding combined PAT images as the top row, except that the images obtained from different transducers are shown in different colors, gray scale and red, respectively.


This result shows that the calibration method works for different orientation angles between the two transducers, as long as each transducer falls within the FOV of the other's ultrasound imaging, which is not difficult to achieve considering the relatively large FOV of ultrasound imaging. How much improvement in the PAT image can be achieved depends on the orientation of the transducers and the properties of the tissue. For structures with sharp edges, the 90 degree orientation of the two transducers should be optimal; for structures without sharp edges, a wide range of orientation angles should all improve the PAT imaging.

A leaf skeleton embedded in bovine gelatin, mimicking blood vessel branches in tissue, was also imaged. A photograph of the sample and the transducer positions is shown in Fig. 8(a). The relative angle between the two linear array transducers is ~90 degrees. The excitation wavelength is 532 nm (output directly from the Nd:YAG laser) and the local light fluence rate on the sample surface is kept below 20 mJ/cm2, under the ANSI safety limit [25]. Figure 8(b) shows the photoacoustic image reconstructed by Transducer B, and Fig. 8(c) shows the image reconstructed by Transducer A after transformation. In both images, each linear transducer captures only the part of the leaf skeleton structure that is nearly parallel to the lateral axis of that transducer: Fig. 8(c) displays the central leaf stem captured by Transducer A, while in Fig. 8(b) Transducer B detects the acoustic signals from the leaf branches. By incorporating the acoustic signals acquired by both linear transducers, a more complete leaf skeleton structure is obtained, as shown in Fig. 8(d).


Fig. 8 PAT images of a leaf skeleton phantom. (a) Photograph of the leaf skeleton phantom and sketch diagram of the transducers. (b) Image of the leaf skeleton phantom reconstructed by Transducer B. (c) Image of the leaf skeleton phantom reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.


Another phantom, a soft tube filled with fresh blood, was also imaged to demonstrate PAT imaging of blood. A photograph of the sample is shown in Fig. 9(a). The inverted U-shaped transparent soft tube is filled with fresh rabbit blood and embedded in bovine gelatin. The two linear transducers are positioned at a ~90 degree relative angle. The excitation wavelength is 532 nm and the local light fluence rate on the sample surface is kept below 20 mJ/cm2. Transducers A and B each capture only part of the vessel structure, as shown in Fig. 9(c) and Fig. 9(b), respectively. The combined image in Fig. 9(d) shows a more complete structure of the vessel. Thus, combining two linear array transducers increases the detection view angle and provides more structural information about the tissue.


Fig. 9 PAT imaging of a plastic tube filled with rabbit blood. (a) Photograph of rabbit blood phantom and sketch diagram of the transducers. (b) Image of the phantom acquired by Transducer B. (c) Image of the phantom acquired by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.

Download Full Size | PDF

The potential of the proposed method for clinical applications is demonstrated by in vivo imaging of the human hand. The hand is submerged in a water bath, and laser light illuminates the skin surface from the top. One transducer is positioned on the same side as the incoming light, and the other is positioned approximately perpendicular to it, so the relative angle between the two transducers is ~90 degrees. The incident laser wavelength is 850 nm, and the local light fluence on the skin surface is kept below 20 mJ/cm2, under the ANSI safety limit. Figure 10 shows the result of the in vivo imaging, where the cross-sections of blood vessels and the skin surface are observed. Figures 10(a) and 10(b) are the PAT images from the two transducers separately. Figure 10(c) is the combined image in gray scale. Figure 10(d) is the same combined image, except that the images from the two transducers are shown in different colors, gray scale and red, respectively. In Figs. 10(a) and 10(b), different parts of the skin surface are visible, as indicated by the solid and dashed circles. Several blood vessels are also visible as dots in each image. By combining the data from the two transducers, a more comprehensive cross-sectional image of the skin surface and blood vessels is obtained, as shown in Figs. 10(c) and 10(d). Several blood vessels are captured by both transducers, as indicated by the arrows. This result shows that the two-transducer approach has great potential to be implemented in clinical applications.
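The two-color display described above (one image in gray scale, the other in red) can be composited as in the following sketch. The function name and the channel assignment are our own illustrative choices, assuming both images are nonnegative and co-registered:

```python
import numpy as np

def overlay_gray_red(img_gray, img_red):
    """Composite two co-registered PAT images into one RGB frame:
    the first rendered in gray scale, the second in red, so structures
    detected by each transducer remain distinguishable."""
    g = img_gray / max(float(img_gray.max()), 1e-12)  # normalize each
    r = img_red / max(float(img_red.max()), 1e-12)    # image to [0, 1]
    rgb = np.zeros(img_gray.shape + (3,))
    rgb[..., 0] = np.maximum(g, r)  # red channel carries both images
    rgb[..., 1] = g                 # green and blue carry only the
    rgb[..., 2] = g                 # gray-scale image
    return np.clip(rgb, 0.0, 1.0)
```

Pixels seen only by the first transducer appear gray/white, pixels seen only by the second appear pure red, and overlapping structures (such as the common blood vessels marked by arrows in Fig. 10) appear as a mix of the two.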

Fig. 10 In vivo PAT imaging of the cross-section of blood vessels and skin surface in a human hand. (a) PAT image from one transducer. (b) PAT image from the other transducer at ~90-degree orientation after transformation. (c) Reconstructed image using data acquired by both transducers. (d) The same combined PAT image as in (c), except that the images obtained from the two transducers are shown in different colors, (a) in gray scale and (b) in red. The circles indicate the skin surface and the arrows indicate the overlapping blood vessels.

In the current experiments, the alignment of the two transducers to the same imaging plane is performed under visual guidance. The good overlap of common structures in the combined images shows that the two transducers can be aligned to the same plane with good accuracy using simple visual guidance. In some clinical applications, such as prostate imaging with a transrectal ultrasound transducer and an abdominal ultrasound transducer, the two transducers may not be simultaneously visible. In that case, aligning the transducers by monitoring the received ultrasound intensity during the calibration process will be necessary.
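The intensity-based alignment suggested above could be implemented as a simple search: per the simulations (Fig. 2), the received ultrasound intensity peaks when the two imaging planes coincide, so one can sweep candidate poses and keep the one with the maximal measured intensity. The following sketch assumes measurements have already been collected into a dictionary; the function name and data layout are our own:

```python
def best_in_plane_position(intensity_by_position):
    """Pick the transducer pose whose received ultrasound intensity is
    maximal. Per the simulations (Fig. 2), the received intensity drops
    as one transducer rotates or translates out of the other's image
    plane, so the maximum marks the co-planar alignment.

    intensity_by_position: dict mapping a candidate pose (e.g. an
    elevational offset in mm) to the measured peak intensity.
    """
    return max(intensity_by_position, key=intensity_by_position.get)
```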

5. Conclusion

In this paper, we present a simple method to increase the detection view of PAT by using two linear array transducers, thereby enhancing the PAT image quality. The positions of the two transducers can be adjusted to fit the specific geometry of the imaging site. The relative position of the two transducers is then calibrated in the ultrasound modality by transmitting an ultrasound signal with one transducer and receiving with the other. During PAT imaging, both transducers receive signals, and the calibrated position information is used to transform the FOV of one transducer to match that of the other. By combining the PAT images from the two transducers, a significant improvement in image quality is achieved through complementary structural information. Our approach does not require a calibration phantom, which largely simplifies the calibration process and shortens the acquisition time. The use of linear array transducers and the flexibility of positioning them to fit the tissue geometry make this approach promising for clinical applications.

Acknowledgments

This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes of Health Research (CIHR).

References and links

1. L. V. Wang, “Multiscale photoacoustic microscopy and computed tomography,” Nat. Photonics 3(9), 503–509 (2009).

2. P. Beard, “Biomedical photoacoustic imaging,” Interface Focus 1(4), 602–631 (2011).

3. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012).

4. J. Xia, Z. Guo, K. Maslov, A. Aguirre, Q. Zhu, C. Percival, and L. V. Wang, “Three-dimensional photoacoustic tomography based on the focal-line concept,” J. Biomed. Opt. 16(9), 090505 (2011).

5. J. Xia, M. R. Chatni, K. Maslov, Z. Guo, K. Wang, M. Anastasio, and L. V. Wang, “Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo,” J. Biomed. Opt. 17(5), 050506 (2012).

6. J. Gamelin, A. Maurudis, A. Aguirre, F. Huang, P. Guo, L. V. Wang, and Q. Zhu, “A real-time photoacoustic tomography system for small animals,” Opt. Express 17(13), 10489–10498 (2009).

7. C. Li, A. Aguirre, J. Gamelin, A. Maurudis, Q. Zhu, and L. V. Wang, “Real-time photoacoustic tomography of cortical hemodynamics in small animals,” J. Biomed. Opt. 15(1), 010509 (2010).

8. Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004).

9. S. Preisser, N. L. Bush, A. G. Gertsch-Grover, S. Peeters, A. E. Bailey, J. C. Bamber, M. Frenz, and M. Jaeger, “Vessel orientation-dependent sensitivity of optoacoustic imaging using a linear array transducer,” J. Biomed. Opt. 18(2), 026011 (2013).

10. J. L. Su, R. R. Bouchard, A. B. Karpiouk, J. D. Hazle, and S. Y. Emelianov, “Photoacoustic imaging of prostate brachytherapy seeds,” Biomed. Opt. Express 2(8), 2243–2254 (2011).

11. D. W. Yang, D. Xing, S. H. Yang, and L. Z. Xiang, “Fast full-view photoacoustic imaging by combined scanning with a linear transducer array,” Opt. Express 15(23), 15566–15575 (2007).

12. J. Gateau, M. Á. A. Caballero, A. Dima, and V. Ntziachristos, “Three-dimensional optoacoustic tomography using a conventional ultrasound linear detector array: whole-body tomographic system for small animals,” Med. Phys. 40(1), 013302 (2013).

13. L. Nie, D. Xing, Q. Zhou, D. Yang, and H. Guo, “Microwave-induced thermoacoustic scanning CT for high-contrast and noninvasive breast cancer imaging,” Med. Phys. 35(9), 4026–4032 (2008).

14. B. T. Cox, S. R. Arridge, and P. C. Beard, “Photoacoustic tomography with a limited-aperture planar sensor and a reverberant cavity,” Inverse Probl. 23(6), S95–S112 (2007).

15. B. Huang, J. Xia, K. Maslov, and L. V. Wang, “Improving limited-view photoacoustic tomography with an acoustic reflector,” J. Biomed. Opt. 18(11), 110505 (2013).

16. G. Li, J. Xia, K. I. Maslov, and L. V. Wang, “Broadening the detection view of high-frequency linear-array-based photoacoustic computed tomography by using planar acoustic reflectors,” Proc. SPIE 8943, 89430H (2014).

17. D. Wu, C. Tao, and X. Liu, “Photoacoustic tomography extracted from speckle noise in acoustically inhomogeneous tissue,” Opt. Express 21(15), 18061–18067 (2013).

18. S. Ma, S. Yang, and H. Guo, “Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction,” J. Appl. Phys. 106(12), 123104 (2009).

19. C. Huang, K. Wang, L. Nie, L. V. Wang, and M. A. Anastasio, “Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media,” IEEE Trans. Med. Imaging 32(6), 1097–1110 (2013).

20. A. A. Goshtasby, Image Registration: Principles, Tools and Methods (Springer, 2012).

21. M. Tabei, T. D. Mast, and R. C. Waag, “A k-space method for coupled first-order acoustic propagation equations,” J. Acoust. Soc. Am. 111(1), 53–63 (2002).

22. B. T. Cox, S. Kara, S. R. Arridge, and P. C. Beard, “K-space propagation models for acoustically heterogeneous media: application to biomedical photoacoustics,” J. Acoust. Soc. Am. 121(6), 3453–3464 (2007).

23. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010).

24. L. L. Pan, Photoacoustic Imaging for Prostate Brachytherapy (University of British Columbia, 2014).

25. Laser Institute of America, “American National Standard for Safe Use of Lasers ANSI Z136.1-2000” (2000).


Figures (10)

Fig. 1
Fig. 1 Top view of the imaging configuration using two identical linear array transducers. (a) shows the calibration process using ultrasound imaging, in which several channels of Transducer B (highlighted in red) are transmitting and Transducer A is receiving acoustic waves. (b) shows the photoacoustic imaging where both transducers are receiving photoacoustic waves.
Fig. 2
Fig. 2 Simulation results showing the variation of the maximum pixel intensity of the SPs reconstructed by Transducer B when it rotates or translates out of the image plane of Transducer A. (a)-(b) Transducer B receives ultrasound signal at different rotation angles around the x axis. (c)-(d) Transducer B receives signal at different translational locations along the z axis.
Fig. 3
Fig. 3 Schematic of the imaging system setup. Nd:YAG: Surelite II pulsed laser. OPO: optical parametric oscillator.
Fig. 4
Fig. 4 (a) Top view of the imaging platform with ~100 degree relative angle between the two linear array transducers. (b) Virtual ultrasound image and the sketch diagram of Transducer A. SP1, SP2, SP3, SP4 represent the enabled channels No. 64, No. 96, No. 110 and No. 128, respectively. (c) Reconstructed image and the sketch diagram of Transducer B. SP1', SP2', SP3', SP4' represent the reconstructed channels No. 64, No. 96, No. 110 and No. 128, respectively. (d) The reconstructed image of Transducer B after applying the transformation.
Fig. 5
Fig. 5 PAT imaging of a phantom containing a piece of paper printed with 5 points. (a) Photograph of the printed paper phantom and sketch diagram of the transducers. (b) Reconstructed image of the phantom acquired by Transducer B. (c) Reconstructed image of the phantom acquired by Transducer A after transformation. (d) Overlapped image which combines acquired data from Transducer A and Transducer B.
Fig. 6
Fig. 6 PAT images of a sheet of paper printed with regular octagon and 3 points. (a) Diagram of the printed paper phantom and sketch diagram of the transducers. (b) Image of the printed paper reconstructed by Transducer B. (c) Image of the printed paper reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.
Fig. 7
Fig. 7 Combined PAT images of a sheet of paper printed with regular octagon and 3 points using two linear array transducers at (a) 180 degree (b) 120 degree (c) 90 degree. (d)-(f) The corresponding combined PAT images as the top row, except that the images obtained from different transducers are shown in different colors, gray scale and red, respectively.

Tables (1)

Table 1 Transformation parameters obtained under different calibration approaches

Equations (1)

\[
\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & h \\ 0 & 1 & k \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
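The transformation equation above maps a point (x, y) in one transducer's image frame into the other's: a rotation by θ followed by a translation by (h, k), in homogeneous coordinates. The following sketch applies it to an array of points; the function name, the Nx2 array layout, and the standard counter-clockwise rotation sign convention are our own assumptions:

```python
import numpy as np

def apply_rigid_transform(points, theta, h, k):
    """Map Nx2 points from one transducer's image frame into the
    other's: rotate by theta, then translate by (h, k), using the
    homogeneous-coordinate formulation."""
    c, s = np.cos(theta), np.sin(theta)
    translate = np.array([[1.0, 0.0,  h ],
                          [0.0, 1.0,  k ],
                          [0.0, 0.0, 1.0]])
    rotate = np.array([[ c,  -s,  0.0],
                       [ s,   c,  0.0],
                       [0.0, 0.0, 1.0]])
    # append a 1 to each point, apply translation * rotation, drop the 1
    homog = np.hstack([points, np.ones((len(points), 1))])
    out = (translate @ rotate @ homog.T).T
    return out[:, :2]
```

For example, rotating the point (1, 0) by 90 degrees and translating by (h, k) = (2, 3) yields (2, 4).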