Optica Publishing Group

Spatially-variant image deconvolution for photoacoustic tomography

Open Access

Abstract

Photoacoustic tomography (PAT) systems can reconstruct images of biological tissues with high resolution and contrast. In practice, however, PAT images are usually degraded by spatially variant blur and streak artifacts due to non-ideal imaging conditions and the chosen reconstruction algorithms. Therefore, in this paper, we propose a two-phase restoration method to progressively improve the image quality. In the first phase, we design a precise device and measuring method to obtain spatially variant point spread function samples at preset positions of the PAT system in the image domain; we then adopt principal component analysis and radial basis function interpolation to model the entire spatially variant point spread function. Afterwards, we propose a sparse logarithmic gradient regularized Richardson-Lucy (SLG-RL) algorithm to deblur the reconstructed PAT images. In the second phase, we present a novel method called deringing, also based on SLG-RL, to remove the streak artifacts. Finally, we evaluate our method with simulation, phantom, and in vivo experiments. All the results show that our method significantly improves the quality of PAT images.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Photoacoustic tomography (PAT) is a novel, noninvasive biomedical imaging modality that has developed rapidly over the past two decades [1]. Based on the photoacoustic effect, a PAT system uses nanosecond laser pulses to illuminate biological tissues; the tissues absorb the light energy, inducing a temperature rise that leads to ultrasound emission via the thermoelastic effect. The acoustic signal is recorded by ultrasound detectors and used to map the optical absorption distribution in the tissues with reconstruction algorithms. Results show that PAT combines high ultrasonic resolution with optical contrast [2,3]. Thus, PAT holds great promise to become a preferred method in animal [4] and clinical imaging [5–7].

Classic photoacoustic tomography image reconstruction algorithms are mostly derived from an idealized PAT system with the following assumptions [8]: the acoustic detector should have a point-like aperture, an infinite bandwidth, and a full view angle. However, a practical PAT system cannot satisfy these requirements, resulting in inevitable resolution degradation, i.e., blur in the reconstructed image. Taking a PAT system with a ring-shaped ultrasound transducer array as an example, studies [8–10] have shown that the reconstructed images suffer from spatially variant blur: as the distance from the center of the PAT system increases, the blur becomes progressively more severe. Mathematically, this degradation can be modeled as the convolution of the original clear image with a spatially variant point spread function (SVPSF).

Several kinds of methods have been developed to correct the blur degradation. The first kind optimizes model-based image reconstruction algorithms; e.g., the methods in Refs. [11–17] incorporate the impulse response characteristics of the transducer into the imaging model and compensate for blur during image reconstruction. However, these schemes improve image quality at the cost of a large computational workload and memory demand. The second kind applies deconvolution in the data domain to mitigate the blur. Some deconvolve the recorded PA signals with either the spatial impulse response (SIR) [18–20] or the electrical impulse response (EIR) [21–23], then use the processed PA signals for image reconstruction, while others develop new reconstruction methods based on deconvolution with the total system impulse response, e.g., the deconvolution reconstruction (DR) algorithm [24,25]. In practice, however, it is difficult to measure a transducer's impulse response accurately. The third kind directly performs deconvolution in the image domain [26–29]; e.g., the deconvolution methods in Refs. [26–28] show resolution enhancement, but they assume the PSF is spatially invariant, which is not consistent with reality. Reference [29] uses a measured SVPSF for deconvolution and significantly improves the resolution of PAT images.

Besides image blur, the PAT images reconstructed by classic algorithms are usually contaminated by streak artifacts [30]. Model-based reconstruction methods with sparse regularizations such as the L1-norm and total variation (TV) can suppress them [31–34], but some useful image details may also be removed. Meanwhile, some studies modify the direct, non-iterative back projection (BP) algorithms [35,36] to suppress the streak artifacts. For example, in Ref. [37], a specially designed weighting function is incorporated into the BP algorithm, while in Ref. [38], a modified BP algorithm named Contamination-Tracing Back-Projection is proposed. These methods show that achieving a trade-off between detail preservation and streak artifact suppression remains a challenging task. Similarly, since deconvolution can only enhance image resolution while having little effect on the streak artifacts, removing the blur and streak artifacts simultaneously from PAT images is also very difficult.

To effectively correct the blur degradation and remove the streak artifacts in PAT images, we present a two-phase restoration method based on spatially variant deconvolution. The first phase aims to improve the image resolution. Firstly, we design a precise device and measuring method to obtain the SVPSF samples at preset positions of the PAT system in the image domain. Then we use principal component analysis (PCA) to transform the obtained SVPSF samples into a linear combination of spatially invariant eigen-PSFs [39–42], and adopt radial basis function (RBF) interpolation on the weighting coefficient matrices of the eigen-PSFs to achieve a representation of the entire SVPSF of the PAT system. With PCA and RBF, a limited number of eigen-PSFs is sufficient to represent most of the spatial variations, which reduces computing requirements and enables deconvolution with the fast Fourier transform (FFT) over the entire image [43]. Afterwards, we design a new regularization method named sparse logarithmic gradient (SLG) regularization and integrate it with the classic Richardson-Lucy (RL) algorithm [44] to restore the blurred PAT images; for simplicity, we call it SLG-RL in the following parts. The SLG regularization outperforms traditional regularization methods such as Tikhonov regularization and ensures a more stable restoration result. In the second phase, we propose a novel method called deringing to suppress the streak artifacts. We find that the streak artifacts in the PAT images obtained from a ring-shaped transducer array oscillate approximately periodically. We therefore rotate the deconvolved PAT image of the first phase clockwise and counterclockwise around the image center by a certain angle and calculate the average image to counteract the peaks and valleys of the streak artifacts; the SLG-RL is then applied once more to remove the blur resulting from the rotation and averaging operations.
The streak artifacts are thus significantly suppressed without damaging image details. Finally, we carry out experiments on simulated and real reconstructed PAT images to verify the effectiveness of the proposed method; the results show that it is highly effective in improving the quality of PAT images.

This paper is organized as follows: Section 2 gives a detailed description of the problem. Section 3 presents the algorithmic steps of the proposed method. Sections 4 and 5 show the configuration and implementation details of the simulation and real experiments. Finally, the discussion and conclusion are presented in Sections 6 and 7, respectively.

2. Problem formulation

According to Refs. [8,10,29], the finite size of the detector aperture causes spatially variant blur in PAT images for all but the planar detection geometry. Based on the theory of linear system response, the formation of spatially variant blur can be described by a Fredholm integral of the first kind [45], i.e.,

$$I(x,y) = \int_{ - \infty }^\infty {\int_{ - \infty }^\infty {O(u,v)\textrm{ }P(u,v,x,y)dudv} } + \eta (x,y),$$
where O and I denote the original clear image and blurred image, respectively. P and η represent the spatially variant PSF and measurement error, respectively. The coordinate (x, y) denotes the pixel location in the blurred image, while the coordinate (u, v) is used to indicate which pixels in O and P are integrated in the Fredholm integral.

If the PSF is considered spatially invariant, the right-hand side of Eq. (1) becomes a standard convolution, and FFT-based image deconvolution can be carried out rapidly over the entire image for restoration [45]. However, in a practical PAT system with a ring-shaped transducer array, each detector has a certain aperture size, making the PSF spatially variant [8,10,46]. To illustrate the impact intuitively, Fig. 1 shows a simple example: in the degraded image, the center is less blurred than the regions near the image boundaries.
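To make the spatially invariant baseline concrete, the following is a minimal sketch of FFT-based deconvolution for a single, shift-invariant PSF, written as a simple Wiener-type inverse filter in Python/NumPy. The function name `wiener_deconv`, the kernel-centering step, and the noise constant `k` are our illustrative choices, not part of the paper's method.

```python
import numpy as np

def wiener_deconv(I, psf, k=1e-3):
    """Spatially invariant FFT deconvolution (Wiener-type inverse filter).
    Assumes circular boundary conditions; k is a noise-to-signal constant."""
    pad = np.zeros_like(I, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    # shift the kernel centre to the origin so the filter is zero-phase
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(I)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))
```

The spatially variant case discussed next cannot be handled this way, since no single transfer function H exists for the whole image.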


Fig. 1. Image degradation model with SVPSF


Compared with the spatially invariant image deconvolution problem, spatially variant deconvolution is much more difficult to deal with. Firstly, for a spatially invariant system, knowing the PSF at a single position suffices to deconvolve the entire image, whereas for a spatially variant system, the PSF corresponding to every pixel must be known. Secondly, the SVPSF limits the application of the FFT, which makes the deconvolution computations prohibitively expensive [43,47]. To avoid such prohibitive computations, we propose to recast the SVPSF as a sum of orthogonal functions, i.e.,

$$P(u,v,x,y) = \sum\limits_{i = 1}^N {{a_i}(u,v)} {p_i}(x,y)\textrm{ ,}$$
where ${p_i}$ are the orthogonal functions called eigen-PSFs, ${a_i}$ are the weighting coefficient matrices (each of the same size as O), and N is the number of eigen-PSFs. With the decomposition in Eq. (2), the degradation model in Eq. (1) becomes
$$I(x,y) = \sum\limits_{i = 1}^N {[{{a_i}(u,v)O(u,v)} ]\ast } {p_i}(x,y)\textrm{ + }\eta (x,y)\textrm{ ,}$$
where * denotes the convolution operator.

According to Eq. (3), if we denote the term ${a_i}O$ as a weighted image, the degradation process decomposes as follows: convolve each eigen-PSF with its weighted image, sum the convolution results, and add the measurement error. This transforms a point-by-point convolution using a unique PSF at every pixel location into a collection of standard convolutions over the entire image. As a result, the FFT can be used to accelerate the deconvolution process, making efficient estimation of O from Eq. (3) possible.
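This decomposition can be sketched directly in code. Below is a minimal NumPy/SciPy implementation of the forward model of Eq. (3); the function name and interfaces are ours for illustration (the paper's implementation is in MATLAB).

```python
import numpy as np
from scipy.signal import fftconvolve

def svpsf_blur(O, weights, eigen_psfs, noise=None):
    """Spatially variant blur of Eq. (3):
    I = sum_i (a_i . O) * p_i + eta,
    with '.' element-wise multiplication and '*' 2-D convolution via FFT."""
    I = np.zeros_like(O, dtype=float)
    for a_i, p_i in zip(weights, eigen_psfs):
        I += fftconvolve(a_i * O, p_i, mode="same")
    if noise is not None:
        I += noise
    return I
```

Each term is a standard FFT convolution, so the cost grows linearly with the number of eigen-PSFs rather than with the number of pixels.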

Based on the above analysis, the goal of this paper is to design an efficient spatially variant deconvolution method to estimate O accurately from Eq. (3) given I. To achieve this goal, we must solve several related problems: (1) How to obtain the eigen-PSFs and the entire weighting coefficient matrices over the image domain. (2) Spatially variant deconvolution is inherently ill-posed, which means the measurement error tends to be amplified as noise and back-propagated into the restored image; therefore, appropriate regularization must be designed to suppress it. (3) The reconstructed PAT images usually suffer from streak artifacts, which must also be suppressed to enhance the image quality.

3. Our approach

To improve the quality of the PAT images, we present a restoration method consisting of two phases: deblurring and deringing (Fig. 2(a)). The first phase focuses on improving image resolution while the second phase aims to suppress streak artifacts. Both of them are based on the proposed SLG-RL algorithm.


Fig. 2. Graphical description of the algorithm flow. (a) Flowchart of the whole algorithm. (b) Flowchart of SVPSF modeling.


As shown in Fig. 2(a), the deblurring phase consists of two parts: (1) SVPSF modeling (Fig. 2(b)), i.e., acquire the SVPSF samples at preset positions of the PAT system in the image domain, then use PCA to obtain the eigen-PSFs and partial weighting coefficient matrices from the samples, and finally construct the entire weighting coefficient matrices for all pixels over the image domain with RBF interpolation (for simplicity, we use partial and entire weighting coefficient matrices to denote those before and after interpolation, respectively). (2) Restore the image via the proposed SLG-RL algorithm.

Also as shown in Fig. 2(a), the procedures of the deringing phase can be summarized as follows: (1) Rotate the deconvolved image clockwise and counterclockwise around the image center by a certain angle and compute the average image. (2) Calculate the SVPSF caused by the rotation and averaging operations. (3) Restore the average image with the SLG-RL algorithm.

In the following sections, we will address the main issues one by one.

3.1 Deblurring

3.1.1 SVPSF modeling

Assuming that we have obtained N SVPSF samples (the method for obtaining them is given in Sections 4 and 5), according to the analysis of Eq. (2), we adopt PCA to generate N eigen-PSFs and N partial weighting coefficient matrices. Because the elements in each partial weighting coefficient matrix only correspond to the pixels where the centers of the SVPSF samples are located in the image domain, the elements for the other pixels are missing; the next step is therefore to obtain the entire weighting coefficient matrices over the whole image domain. We first rearrange the elements of each partial weighting coefficient matrix so that the spatial position of each element aligns with the pixel coordinates of the corresponding SVPSF sample. Then we perform RBF interpolation on the partial weighting coefficient matrix to obtain the elements corresponding to the other pixel coordinates. We could use all N eigen-PSFs to represent the SVPSF, but from the theory of PCA we know that a subset of K (K < N) eigen-PSFs is sufficient, because the eigen-PSFs corresponding to small eigenvalues are dominated by noise and can be safely discarded to reduce the computational complexity [41,48]. The analysis of how many eigen-PSFs should be used is given in Supplement 1.
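A compact sketch of this modeling step (mean-centered PCA via SVD plus SciPy's `RBFInterpolator`) is given below. The paper does not specify these implementation details, so the mean-centering, the default RBF kernel, and all function names here should be read as our assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def model_svpsf(psf_samples, sample_xy, grid_xy, K):
    """PCA of measured PSF samples + RBF interpolation of coefficients.
    psf_samples: (N, h, w) PSF patches measured at N preset positions
    sample_xy:   (N, 2) centre coordinates of those patches
    grid_xy:     (M, 2) coordinates of all image pixels
    K:           number of eigen-PSFs kept (K < N)"""
    N, h, w = psf_samples.shape
    X = psf_samples.reshape(N, -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigen_psfs = Vt[:K].reshape(K, h, w)      # spatially invariant eigen-PSFs
    coeffs = (X - mean) @ Vt[:K].T            # (N, K) partial coefficients
    # interpolate each coefficient map from the N samples to all M pixels
    maps = np.stack([RBFInterpolator(sample_xy, coeffs[:, k])(grid_xy)
                     for k in range(K)])
    return eigen_psfs, maps                   # (K, h, w) and (K, M)
```

The returned coefficient maps play the role of the entire weighting coefficient matrices after reshaping to the image grid.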

3.1.2 SLG-RL algorithm

Considering the stochastic characteristics of the measurement error, the problem of image deconvolution can be modeled by the Bayesian maximum a posterior estimation framework, i.e., find an optimal O that maximizes the following conditional probability $p(O|I)$:

$$p(O|I) \propto p(I|O)p(O)\textrm{ ,}$$
where p(O) is the prior probabilistic distribution of the original clear image. This is equivalent to solving the following minimization problem:
$$O = \arg {\min _O}[{ - \ln p(I|O) - \ln p(O)} ]\textrm{ }.$$

If we assume that each pixel in O is independent and identically follows the same Poisson distribution [49,50], then $- \ln p(I|O)$ can be expressed as:

$${J_1}(O )={-} \ln p(I|O) \propto \sum\limits_{(x,y)} {\left\{ {\left[ {\sum\limits_{i = 1}^N {({a_i} \cdot O) \ast {p_i}} } \right] - I\ln \left[ {\sum\limits_{i = 1}^N {({a_i} \cdot O) \ast {p_i}} } \right]} \right\}} \textrm{ }.$$
where ${\cdot} $ represents element-wise multiplication. If we directly minimize Eq. (6) to find the maximum likelihood estimate of O, we obtain the well-known RL algorithm [44]:
$${O_{k + 1}} = \left\{ {\sum\limits_{i = 1}^N {{a_i} \cdot \left[ {\frac{I}{{\sum\limits_{i = 1}^N {({a_i} \cdot {O_k}) \ast {p_i}} }}} \right] \ast p_i^\ast } } \right\}{O_{k}}\textrm{ ,}$$
where k denotes the iteration index and $p_i^\ast $ denotes the result of rotating ${p_i}$ by 180 degrees. However, because this iteration is non-regularized, the noise is amplified as the iterations proceed and damages the details of the restored image. Therefore, we have to introduce a regularization term to suppress the noise while preserving details as much as possible, i.e., design a reasonable model of p(O) that accurately fits the probabilistic distribution of the original image O.

Although various regularization methods exist, such as the classic Tikhonov regularization [51], the design of robust regularization methods still plays a very important role in the field of image deconvolution, especially for the challenging spatially variant case. Recent studies show that the probabilistic distribution of natural image gradients is sparse, i.e., the gradient values concentrate around zero [52,53]. Therefore, according to Markov random field theory and the Hammersley–Clifford theorem [54], we use a heavy-tailed function to model p(O):

$$p(O) \propto {({1 + \mathrm{\mu }{{|{\nabla O} |}^2}} )^{ - \gamma }}\textrm{ ,}$$
where μ and γ are two constants. Therefore, the proposed SLG regularization can be expressed as
$$\textrm{SLG}(O) \propto \ln [{1 + \mu {{|{\nabla O} |}^2}} ]\textrm{ }\textrm{.}$$
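As a sanity check linking Eq. (9) to the update rule of Eq. (11), the first variation of the SLG term produces exactly the divergence expression that appears in the denominator there:

```latex
\frac{\delta}{\delta O}\,\mathrm{SLG}(O)
  = \frac{\delta}{\delta O} \ln\!\left( 1 + \mu |\nabla O|^{2} \right)
  = -\,2\mu \,\operatorname{div}\!\left( \frac{\nabla O}{1 + \mu |\nabla O|^{2}} \right),
```

so that $1 + \lambda \,\delta \mathrm{SLG}/\delta O = 1 - 2\mu \lambda \,\operatorname{div}(\,\cdot\,)$, which is the denominator of the multiplicative update in Eq. (11).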

Then, combining Eq. (5), Eq. (6) and Eq. (9), the final optimization problem is given by

$$O = \arg {\min _O}\textrm{ }{J_1}(O)\textrm{ + }\lambda \textrm{ }\textrm{SLG}(O)\textrm{ ,}$$
where λ is the regularization parameter. We adopt the Expectation-Maximization (EM) method [55,56] to solve it; the iteration is given by the following equation:
$${O_{k + 1}}\textrm{ = }\frac{{{O_k}}}{{1 - 2\mu \lambda \textrm{ }div \left( {\frac{{\nabla {O_k}}}{{1\textrm{ + }\mu {{|{\nabla {O_k}} |}^2}}}} \right)}} \cdot \left\{ {\sum\limits_{i = 1}^N {{a_i} \cdot \left[ {\frac{I}{{\sum\limits_{i = 1}^N {({a_i} \cdot {O_k}) \ast {p_i}} }}} \right]} \ast {p_i}^\ast } \right\}\textrm{ }\textrm{.}$$
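The full iteration of Eq. (11) can be sketched as follows in NumPy. The initialization with the mean of I, the small constant `eps` guarding divisions, and the function name are our choices, not specified in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def slg_rl(I, weights, eigen_psfs, lam, mu, n_iter=20, eps=1e-8):
    """SLG-regularized Richardson-Lucy iteration of Eq. (11).
    weights: list of coefficient maps a_i; eigen_psfs: list of p_i."""
    O = np.full_like(I, I.mean(), dtype=float)
    for _ in range(n_iter):
        # re-blur the current estimate: sum_i (a_i . O) * p_i
        blurred = sum(fftconvolve(a * O, p, mode="same")
                      for a, p in zip(weights, eigen_psfs))
        ratio = I / (blurred + eps)
        # adjoint step: sum_i a_i . (ratio * p_i rotated by 180 degrees)
        corr = sum(a * fftconvolve(ratio, p[::-1, ::-1], mode="same")
                   for a, p in zip(weights, eigen_psfs))
        # SLG denominator: 1 - 2*mu*lam * div( grad(O) / (1 + mu*|grad(O)|^2) )
        gy, gx = np.gradient(O)
        w = 1.0 + mu * (gx ** 2 + gy ** 2)
        div = np.gradient(gy / w, axis=0) + np.gradient(gx / w, axis=1)
        O = O / np.maximum(1.0 - 2.0 * mu * lam * div, eps) * corr
    return O
```

With lam = 0 the denominator reduces to 1 and the update collapses to the classic spatially variant RL iteration of Eq. (7).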

3.2 Deringing

Due to non-ideal imaging conditions, streak artifacts always occur in reconstructed PAT images, especially in in vivo experiments where PA absorbers have much stronger optical absorption than their background; they are more severe in BP-based image reconstruction algorithms [8,30,38]. Although SLG-RL can correct image blur and suppress noise, it cannot remove streak artifacts effectively, and if we strengthen the regularization to suppress them, the image details are damaged as well. Therefore, we propose an auxiliary method to remove the streak artifacts from the deconvolved images.

Our method is mainly applicable to PAT systems with a ring-shaped ultrasound transducer array. We find that the streak artifacts are approximately periodic: if we rotate the deconvolution result around its center by a certain angle clockwise and counterclockwise and take the average, the peaks and valleys of the streak artifacts superimpose and cancel out. In practice, the procedure is as follows. Firstly, rotate the deconvolved image by 1 degree clockwise and counterclockwise, respectively (the reason for choosing 1 degree is discussed in Supplement 1, Fig. S1). Then the two rotated images are superimposed with the deconvolved image and averaged, as shown in Eq. (12):

$${O_{ave}} = {{({O_{deconv}^{ + 1}\textrm{ + }{O_{deconv}} + O_{deconv}^{ - 1} + {O_{deconv}}} )} / 4}\textrm{ ,}$$
where ${O_{deconv}}$ denotes the deconvolved image of SLG-RL algorithm, $O_{deconv}^{ + 1}$ and $O_{deconv}^{ - 1}$ represent rotating ${O_{deconv}}$ by 1 degree clockwise and counterclockwise, respectively. ${O_{ave}}$ is the average image.

In fact, Eq. (12) is equivalent to a rotational blur process, so ${O_{ave}}$ appears slightly blurred. Fortunately, we can construct the SVPSF corresponding to the rotation and averaging operations at every pixel according to Eq. (12) and use the SLG-RL algorithm to remove the rotational blur (the method for constructing this SVPSF is described in Supplement 1). Since most of the peaks and valleys of the streak artifacts cancel out and their energy disappears, they do not reappear after the deconvolution. Finally, we obtain a high-quality PAT image with few streak artifacts.
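The rotation-and-average step of Eq. (12) is only a few lines with SciPy. The use of `scipy.ndimage.rotate` and bilinear interpolation (order=1) are our implementation choices; note that the unrotated image is counted twice, as in Eq. (12).

```python
import numpy as np
from scipy.ndimage import rotate

def dering_average(O_deconv, angle=1.0):
    """Average of the +/- `angle`-degree rotations of the deconvolved image
    with the image itself (counted twice), as in Eq. (12)."""
    O_cw = rotate(O_deconv, -angle, reshape=False, order=1)
    O_ccw = rotate(O_deconv, angle, reshape=False, order=1)
    return (O_cw + 2.0 * O_deconv + O_ccw) / 4.0
```

The resulting average image is then deblurred once more with SLG-RL using the SVPSF induced by this rotational averaging.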

4. Simulation results

4.1 Simulation settings

All the simulations mentioned in this paper, including signal generation, propagation, and reception, are performed using the k-Wave toolbox [57] in MATLAB. The parameters of the transducer array used in the simulation are consistent with those in the real experiment (see details in Section 5). The images are reconstructed with the time-reversal algorithm [58].

Acquiring the SVPSF is a prerequisite for deconvolution. In simulation, we assume that numerical dots with a diameter of 100 µm (see Supplement 1 for the reason) are placed at the preset positions shown as black dots in Fig. 3(a). The blue dotted line represents the ring-shaped ultrasound transducer array. The interval between two adjacent dots is 2.5 mm. The size of the field of view is 51.2 × 51.2 mm. We use the time-reversal algorithm to reconstruct the SVPSF samples at 169 locations, which are shown in Fig. 3(b). We can see that the farther from the center, the more severe the deformation of the SVPSF samples, which corresponds to rotational blur.


Fig. 3. Acquisition of the SVPSF samples in simulation. (a) Location map of the dots for SVPSF simulation. (b) The simulated SVPSF samples.


After obtaining the SVPSF samples at 169 locations, we use PCA to get 169 eigen-PSFs and their corresponding partial weighting coefficient matrices. Then we obtain the entire weighting coefficient matrices for all pixels with RBF interpolation; see Supplement 1, Fig. S2(a) and (b) for the results. As mentioned in Section 3.1, we do not need all the eigen-PSFs to represent the SVPSF samples; the analysis of how many eigen-PSFs should be used is given in Supplement 1, Fig. S3.

4.2 Restoration results

In this section, we carry out two simulations on blood vessels and mouse embryo to evaluate the proposed method.

4.2.1 Blood vessels

Figure 4 presents the restoration result of blood vessels. Figures 4(a)–(c) are the original image, the simulated PAT image (i.e., the degraded image) and its deconvolution result, respectively. Figures 4(d)–(f) are the zooms of Figs. 4(a)–(c) at the same regions indicated by the blue box, respectively. As we can see, the blood vessels in the simulated PAT image become blurred and widened, and the contrast is also lowered. After deconvolution with SLG-RL, the resolution and contrast are successfully enhanced. At the same time, the edges of the blood vessels become sharper. Also, the normalized intensity profile along the blue solid line in Fig. 4(d) is plotted in Fig. 4(g). The profiles of the restored and original images fit very well, which shows that the proposed method can effectively improve the resolution of PAT images.


Fig. 4. Image restoration result of blood vessels. (a) The original image. (b) The simulated PAT image, i.e., the degraded image. (c) Restored image with λ = 0.003, µ = 0.5. (d)–(f) Zooms of (a)-(c), respectively, at the same regions indicated by the blue box in (a). (g) Normalized intensity profiles along the blue solid line in (d).


4.2.2 Mouse embryo

Figure 5 shows the restoration result of a mouse embryo. Figures 5(a)–(c) are the original image, the simulated PAT image and its deconvolution result. Zooms of Figs. 5(a)–(c) indicated by the blue box are shown in Figs. 5(d)–(f). The original image contains abundant organism structure, but the degradation almost hides these details so that the organs are blurred and indistinguishable. After deconvolution, the shapes of organs become sharp and the boundaries between the organs become clear. Figure 5(g) shows the normalized intensity profile along the blue solid line in Fig. 5(d). The profile of the restored image is similar to that of the original image. It shows that the proposed deconvolution method has a strong ability to improve the resolution of complex images, which further verifies the effectiveness of the algorithm.


Fig. 5. Image restoration result of a mouse embryo. (a) The original mouse embryo image. (b) The simulated PAT image, i.e., the degraded image. (c) Restored image with λ = 0.004, µ = 0.6. (d)–(f) Zooms of (a)-(c), respectively, at the same regions indicated by the blue box in (a). (g) Normalized intensity profiles along the blue solid line in (d).


5. Experimental results

5.1 Experimental setup

In the experiment, a tunable laser system is used for PA signal excitation. The wavelength is fixed at 750 nm. A ring-shaped ultrasound transducer array with a center frequency of 7 MHz and a bandwidth of 73% is used for receiving signals. The transducer array is connected to a data acquisition system for data processing [59]. The ring-shaped transducer array has 256 unfocused elements with small flat rectangular apertures. The aperture size of each element is 0.51 mm, the spacing between elements is 0.1 mm, and the radius of the transducer array is 25 mm. The images are reconstructed via the filtered BP algorithm [35,36]. We design a precise method that adopts a single black microsphere (Cospheric, LLC, Santa Barbara, California) with a diameter of 100 µm to obtain SVPSF samples at preset positions of the PAT system. Since the single microsphere is difficult to control, we prepare a gel phantom to wrap it. Also, because the density of the phantom is lower than that of the medium, we design a sample holder to fix it. The placement of the phantom and sample holder is shown in Fig. 6(a).


Fig. 6. Measuring and reconstructing SVPSF samples for the PAT system. (a) Placement of single microsphere phantom and transducer array. (b) Two motorized translation stages are combined to realize the X-Y scanning of the transducer array. (c) The reconstructed SVPSF samples for the PAT system.


Since the microsphere phantom is difficult to move precisely, we carry out the SVPSF measurement by moving the transducer array instead. The schematic diagram is shown in Fig. 6(b). Two motorized translation stages are combined to realize X-Y scanning. With the help of the translation stages, we can acquire the SVPSF sample at any position.

The procedure is as follows. Firstly, we adjust the position of the transducer to ensure that the microsphere is at the center of the transducer array. Next, we choose the appropriate speed of sound to ensure that the reconstructed image of the microsphere is a very small point. Finally, we use the motorized translation stages to move the transducer array in steps of 2.5 mm and obtain the SVPSF samples at the different preset positions. SVPSF samples at 99 positions are reconstructed in the experiment; we capture 5 frames at each position and average them to reduce the noise power. Figure 6(c) shows the obtained SVPSF samples, which are consistent with those in the simulation. Similarly, we use PCA to obtain the eigen-PSFs and partial weighting coefficient matrices, then get the entire weighting coefficient matrices for all pixels by RBF interpolation; see Supplement 1, Fig. S2(c) and (d) for the results.

5.2 Phantom experiments

First of all, we apply our restoration method to phantom experiments. We prepare three types of phantoms, i.e., a microspheres phantom, a hair phantom and a vessel phantom. The different phantoms verify the stability and robustness of the proposed method.

5.2.1 Microspheres phantom

Figures 7(a)–(c) show the microspheres phantom image, the reconstructed PAT image and its deconvolution result, respectively. In Fig. 7(d), we provide the zooms of the reconstructed image and the restored image at different labeled locations. As we can see, the microspheres near the boundaries of the reconstructed image are stretched. Also, some of the microspheres are clustered and hard to distinguish. After deconvolution, the microspheres are successfully separated. The shape of the microspheres near the boundaries almost becomes a point. Overall, the resolution of the image is evidently improved. The normalized intensity profiles along the yellow solid line in Fig. 7(d) are plotted in Fig. 7(e). After deconvolution, the individual peaks within a cluster can be separated.


Fig. 7. Image restoration result of the microspheres phantom. (a) The microspheres phantom. (b) The reconstructed PAT image. (c) Restored image with λ = 0.001, µ = 0.1. (d) Zooms of the boxes in (b)-(c). (e) Normalized intensity profiles along the yellow solid line in (d).


5.2.2 Hair phantom

Figure 8 illustrates the restoration result of the hair phantom. Figures 8(a)–(c) are the hair phantom, the reconstructed PAT image and its deconvolution result, respectively. Figures 8(d) and (e) correspond to the zooms of Figs. 8(b) and (c), respectively, at the same regions indicated by the yellow box in Fig. 8(b). After deconvolution, the hairs appear thinner than in the PAT image. In addition, adjacent hairs are better separated. This can also be seen in the normalized intensity profiles in Fig. 8(f). In general, the proposed deconvolution algorithm is effective in improving resolution.


Fig. 8. Image restoration result of the hair phantom. (a) The hair phantom. (b) The reconstructed PAT image. (c) Restored image with λ = 0.001, µ = 0.1. (d)–(e) Zooms of (b)-(c), respectively, at the same regions indicated by the yellow box in (b). (f) Normalized intensity profiles along the yellow solid line in (d).


5.2.3 Blood vessel phantom

Moreover, we also use the deconvolution algorithm to restore a complex phantom image, namely the blood vessel phantom. Figures 9(a)–(c) show the blood vessel phantom, the reconstructed PAT image and its deconvolution result, respectively. Figures 9(d) and (e) correspond to the zooms of Figs. 9(b) and (c) at the same regions indicated by the yellow box, respectively. As we can see, the vessels are blurred and widened in the reconstructed PAT image. After deconvolution, the shapes of the vessels become sharper and the boundaries between adjacent vessels become clearer, as seen in Fig. 9(f). The resolution of the image is noticeably improved.


Fig. 9. Image restoration result of the vessel phantom. (a) The vessel phantom image. (b) The reconstructed PAT image. (c) Restored image with λ = 0.003, µ = 0.2. (d)–(e) Zooms of (b)-(c), respectively, at the same regions indicated by the yellow box in (b). (f) Normalized intensity profiles along the yellow solid line in (d).


5.3 In vivo experiments

To further verify the effectiveness of the algorithm, we also use it to restore reconstructed PAT images obtained from in vivo experiments on the blood vessels of two mice and a human finger. The results are as follows.

5.3.1 Abdominal blood vessels of a normal mouse

Figure 10 shows the restoration result of the abdominal blood vessels of a normal mouse. Figures 10(a) and (b) are the reconstructed PAT image and its deconvolution result. As we can see, the shape of the vessels becomes sharper after deconvolution; however, the streak artifacts become obvious. Therefore, we use the proposed deringing method to remove the streak artifacts; the result is presented in Fig. 10(c), which is also the final restored image. Zooms of Figs. 10(a)–(c) at the same regions indicated by the yellow box are shown in Figs. 10(d)–(f). With the help of deringing, most of the streak artifacts are removed, as labeled by the yellow arrows in Figs. 10(d)–(f). Meanwhile, the blood vessels become sharper, as pointed out by the white arrows. Besides, the profile of the deconvolved image is similar to that of the deringed image in Fig. 10(g), which verifies that the deringing method can eliminate streak artifacts without damaging image details.

Fig. 10. Image restoration result of abdominal blood vessels of a normal mouse. (a) The reconstructed PAT image. (b) Deconvolved image with λ = 0.006, µ = 0.4. (c) Deringing result (the final restored image) with λ = 0.026, µ = 0.4. (d)–(f) Zooms of (a)-(c), respectively, at the same regions indicated by the yellow box in (a). (g) Normalized intensity profiles along the yellow solid line in (d).

5.3.2 Blood vessels of a tumor mouse

Figure 11 presents the restoration result for the blood vessels of a tumor-bearing mouse. A 4T1 mammary carcinoma is implanted in the right hind leg of the mouse and allowed to develop for 13 days, and we image the blood vessels on the surface of the tumor. Figure 11(a) is the reconstructed PAT image; compared with the normal mouse, the blood supply around the tumor is richer. Figure 11(b) is the deconvolution result: the blood vessels are sharper, but streak artifacts remain, so we apply the deringing method to remove them. Figure 11(c) is the final restoration result, and zooms of Figs. 11(a)–(c) at the same regions indicated by the yellow box are shown in Figs. 11(d)–(f), respectively. As the yellow arrows point out, the restoration makes the blood vessels clear and easier to distinguish, which can also be seen in the normalized intensity profiles in Fig. 11(g).

Fig. 11. Image restoration result of blood vessels of a tumor mouse. (a) The reconstructed PAT image. (b) Deconvolved image with λ = 0.006, µ = 0.4. (c) Deringing result (the final restored image) with λ = 0.02, µ = 0.4. (d)–(f) Zooms of (a)-(c), respectively, at the same regions indicated by the yellow box in (a). (g) Normalized intensity profiles along the yellow solid line in (d).

5.3.3 Human finger

Figure 12 shows the restoration result for a human finger. Figures 12(a)–(c) are the reconstructed PAT image, its deconvolution result and its deringing result, respectively. Figures 12(d)–(f) are close-ups of Figs. 12(a)–(c) at the same regions indicated by the yellow box. Compared with the PAT image, the streak artifacts around the finger are removed in the final restored image, and the contours of the vessels become clearer, as pointed out by the yellow arrows. The same conclusion can be drawn from Fig. 12(g).

Fig. 12. Image restoration result of a human finger. (a) The reconstructed PAT image. (b) Deconvolved image with λ = 0.003, µ = 0.55. (c) Deringing result (the final restored image) with λ = 0.026, µ = 0.4. (d)–(f) Zooms of (a)-(c), respectively, at the same regions indicated by the yellow box in (a). (g) Normalized intensity profiles along the yellow solid line in (d).

6. Discussion

In this paper, we design a restoration method based on spatially variant deconvolution to enhance the quality of PAT images. We first verify its feasibility in simulation and then demonstrate its robustness in phantom and in vivo experiments. All the results show that our restoration method can effectively improve image resolution and remove streak artifacts.

The advantages of our method can be summarized as follows. First, compared with deconvolution using a spatially invariant PSF, using the SVPSF improves the quality of the restored images more effectively. Second, the SVPSF of a given transducer array is fixed, so it only needs to be measured once; all images collected by the system can then be restored with the same SVPSF. Third, we adopt PCA and RBF interpolation to model the SVPSF, which reduces the computational complexity. In addition, the proposed SLG regularization performs better than classic Tikhonov regularization in image restoration, see Supplement 1, Figs. S4–S6.
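As a concrete illustration of the SVPSF modeling step (PCA of measured PSF samples followed by RBF interpolation of the weighting coefficients), the procedure can be sketched as follows. This is a minimal sketch with synthetic data; the array sizes, variable names, and the choice of SciPy's `RBFInterpolator` are our own assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical inputs: M measured PSF samples of size K×K at known (x, y)
# positions in a 128×128 image (random stand-ins here).
M, K = 25, 11
positions = rng.uniform(0, 128, size=(M, 2))   # sample locations
psfs = rng.random((M, K * K))                  # each row: one flattened PSF

# PCA via SVD of the mean-centred samples -> eigen-PSFs p_i and, at each
# sample position, the weighting coefficients a_i.
mean_psf = psfs.mean(axis=0)
U, S, Vt = np.linalg.svd(psfs - mean_psf, full_matrices=False)
N = 5                                          # number of eigen-PSFs kept
eigen_psfs = Vt[:N]                            # (N, K*K)
coeffs = (psfs - mean_psf) @ eigen_psfs.T      # (M, N)

# RBF interpolation extends the coefficients a_i(u, v) to every pixel.
grid = np.stack(
    np.meshgrid(np.arange(128), np.arange(128), indexing="ij"), -1
).reshape(-1, 2)
weight_maps = RBFInterpolator(positions, coeffs)(grid).reshape(128, 128, N)

# PSF at an arbitrary pixel: mean PSF plus weighted sum of eigen-PSFs.
psf_at = (mean_psf + weight_maps[40, 60] @ eigen_psfs).reshape(K, K)
```

Keeping only N eigen-PSFs is what makes the spatially variant convolution tractable: the degradation becomes a sum of N ordinary convolutions, each masked by a smooth weight map.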

Our restoration method also has some limitations. First, several factors affect the acquisition of the SVPSF, such as acoustic attenuation and acoustic heterogeneity; for simplicity, these are not taken into account in our measurement. To further improve the quality of the restored images, these factors should be considered in subsequent studies. Second, a dedicated phantom for measuring the SVPSF must be prepared for each transducer array, which is tedious and time consuming; blind deconvolution methods may be developed in the future to estimate the SVPSF and restore the image simultaneously. Third, the proposed restoration method is a 2D deconvolution method, which may be extended to 3D to improve the elevational resolution.

7. Conclusion

In summary, we design a two-phase progressive restoration method to enhance the quality of reconstructed PAT images. In the first phase, we use numerical dots and black microspheres to acquire SVPSF samples at preset positions of the PAT system. PCA and RBF interpolation are then adopted to obtain the eigen-PSFs and the weighting coefficient matrices for all pixels in the image domain, which reduces the computational complexity. Next, we design a novel SLG regularization and incorporate it into the RL algorithm to suppress the noise amplification caused by the ill-posedness of deconvolution. In the second phase, we propose the deringing method, also based on the SLG-RL algorithm, to further remove the streak artifacts in the PAT images. Finally, we evaluate our approach with simulation, phantom and in vivo experiments. All the results show that our restoration method can significantly improve image resolution and contrast as well as effectively remove streak artifacts.

Funding

National Natural Science Foundation of China (12174368, 61705216, 61905112, 62122072); National Key Research and Development Program of China (2022YFA1404400); Anhui Provincial Department of Science and Technology (18030801138, 202203a07020020); University of Science and Technology of China (YD2090002015).

Acknowledgments

We would like to thank Heren Li and Chenxi Zhang for their assistance with the experiments.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012).

2. L. V. Wang and J. Yao, “A practical guide to photoacoustic tomography in the life sciences,” Nat. Methods 13(8), 627–638 (2016).

3. M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006).

4. C. Tian, W. Zhang, A. Mordovanakis, X. Wang, and Y. M. Paulus, “Noninvasive chorioretinal imaging in living rabbits using integrated photoacoustic microscopy and optical coherence tomography,” Opt. Express 25(14), 15947–15955 (2017).

5. S. Yang, D. Xing, Y. Lao, D. Yang, L. Zeng, L. Xiang, and W. R. Chen, “Noninvasive monitoring of traumatic brain injury and post-traumatic rehabilitation with laser-induced photoacoustic imaging,” Appl. Phys. Lett. 90(24), 243902 (2007).

6. T. Feng, Y. Zhu, R. Morris, K. M. Kozloff, and X. Wang, “Functional photoacoustic and ultrasonic assessment of osteoporosis: a clinical feasibility study,” BME Front. 2020, 1–15 (2020).

7. Z. Cheng, H. Ma, Z. Wang, and S. Yang, “In vivo volumetric monitoring of revascularization of traumatized skin using extended depth-of-field photoacoustic microscopy,” Front. Optoelectron. 13(4), 307–317 (2020).

8. C. Tian, C. Zhang, H. Zhang, D. Xie, and Y. Jin, “Spatial resolution in photoacoustic computed tomography,” Rep. Prog. Phys. 84(3), 036701 (2021).

9. C. Tian, M. Pei, K. Shen, S. Liu, Z. Hu, and T. Feng, “Impact of system factors on the performance of photoacoustic tomography scanners,” Phys. Rev. Appl. 13(1), 014001 (2020).

10. M. Xu and L. V. Wang, “Analytic explanation of spatial resolution related to bandwidth and detector aperture size in thermoacoustic or photoacoustic reconstruction,” Phys. Rev. E 67(5), 056605 (2003).

11. U. A. T. Hofmann, W. Li, X. L. Dean-Ben, P. Subochev, H. Estrada, and D. Razansky, “Enhancing optoacoustic mesoscopy through calibration-based iterative reconstruction,” Photoacoustics 28, 100405 (2022).

12. K. Wang, S. A. Ermilov, R. Su, H. P. Brecht, A. A. Oraevsky, and M. A. Anastasio, “An imaging model incorporating ultrasonic transducer properties for three-dimensional optoacoustic tomography,” IEEE Trans. Med. Imaging 30(2), 203–214 (2011).

13. K. Wang, R. Su, A. A. Oraevsky, and M. A. Anastasio, “Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography,” Phys. Med. Biol. 57(17), 5399–5423 (2012).

14. K. Mitsuhashi, K. Wang, and M. A. Anastasio, “Investigation of the far-field approximation for modeling a transducer's spatial impulse response in photoacoustic computed tomography,” Photoacoustics 2(1), 21–32 (2014).

15. A. Rosenthal, V. Ntziachristos, and D. Razansky, “Model-based optoacoustic inversion with arbitrary-shape detectors,” Med. Phys. 38(7), 4285–4295 (2011).

16. D. R. Sanny, J. Prakash, S. K. Kalva, M. Pramanik, and P. K. Yalavarthy, “Spatially variant regularization based on model resolution and fidelity embedding characteristics improves photoacoustic tomography,” J. Biomed. Opt. 23(10), 1–4 (2018).

17. K. B. Chowdhury, J. Prakash, A. Karlas, D. Justel, and V. Ntziachristos, “A synthetic total impulse response characterization method for correction of hand-held optoacoustic images,” IEEE Trans. Med. Imaging 39(10), 3218–3230 (2020).

18. Y. Xu, D. Feng, and L. V. Wang, “Exact frequency-domain reconstruction for thermoacoustic tomography–I: Planar geometry,” IEEE Trans. Med. Imaging 21(7), 823–828 (2002).

19. T. Lu, Y. Wang, J. Li, J. Prakash, F. Gao, and V. Ntziachristos, “Full-frequency correction of spatial impulse response in back-projection scheme using space-variant filtering for optoacoustic mesoscopy,” Photoacoustics 19, 100193 (2020).

20. A. A. Oraevsky, M.-L. Li, L. V. Wang, and C.-C. Cheng, “Reconstruction of photoacoustic tomography with finite-aperture detectors: deconvolution of the spatial impulse response,” Proc. SPIE 7564, 75642S (2010).

21. D. Van de Sompel, L. S. Sasportas, J. V. Jokerst, and S. S. Gambhir, “Comparison of deconvolution filters for photoacoustic tomography,” PLoS One 11(3), e0152597 (2016).

22. N. A. Rejesh, H. Pullagurla, and M. Pramanik, “Deconvolution-based deblurring of reconstructed images in photoacoustic/thermoacoustic tomography,” J. Opt. Soc. Am. A 30(10), 1994–2001 (2013).

23. Y. Wang, D. Xing, Y. Zeng, and Q. Chen, “Photoacoustic imaging with deconvolution algorithm,” Phys. Med. Biol. 49(14), 3117–3124 (2004).

24. C. Zhang and Y. Wang, “Deconvolution reconstruction of full-view and limited-view photoacoustic tomography: a simulation study,” J. Opt. Soc. Am. A 25(10), 2436–2443 (2008).

25. C. Zhang, C. Li, and L. V. Wang, “Fast and robust deconvolution-based image reconstruction for photoacoustic tomography in circular geometry: experimental validation,” IEEE Photonics J. 2(1), 57–66 (2010).

26. J. Chen, R. Lin, H. Wang, J. Meng, H. Zheng, and L. Song, “Blind-deconvolution optical-resolution photoacoustic microscopy in vivo,” Opt. Express 21(6), 7316–7327 (2013).

27. X. Song, L. Song, A. Chen, J. Wei, C. R. Valenta, J. A. Shaw, and M. Kimata, “Deconvolution optical-resolution photoacoustic microscope for high-resolution imaging of brain,” Proc. SPIE 11525, 89 (2020).

28. T. Jetzfellner and V. Ntziachristos, “Performance of blind deconvolution in optoacoustic tomography,” J. Innovative Opt. Health Sci. 04(04), 385–393 (2011).

29. L. Qi, J. Wu, X. Li, S. Zhang, S. Huang, Q. Feng, and W. Chen, “Photoacoustic tomography image restoration with measured spatially variant point spread functions,” IEEE Trans. Med. Imaging 40(9), 2318–2328 (2021).

30. X. L. Dean-Ben, A. Buehler, V. Ntziachristos, and D. Razansky, “Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography,” IEEE Trans. Med. Imaging 31(10), 1922–1928 (2012).

31. J. Meng, L. V. Wang, L. Ying, D. Liang, and L. Song, “Compressed-sensing photoacoustic computed tomography in vivo with partially known support,” Opt. Express 20(15), 16510 (2012).

32. Y. Han, S. Tzoumas, A. Nunes, V. Ntziachristos, and A. Rosenthal, “Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging,” Med. Phys. 42(9), 5444–5452 (2015).

33. Y. Han, L. Ding, X. L. Ben, D. Razansky, J. Prakash, and V. Ntziachristos, “Three-dimensional optoacoustic reconstruction using fast sparse representation,” Opt. Lett. 42(5), 979–982 (2017).

34. Y. Zhang, Y. Wang, and C. Zhang, “Total variation based gradient descent algorithm for sparse-view photoacoustic image reconstruction,” Ultrasonics 52(8), 1046–1055 (2012).

35. M. H. Xu and L. H. V. Wang, “Pulsed-microwave-induced thermoacoustic tomography: Filtered backprojection in a circular measurement configuration,” Med. Phys. 29(8), 1661–1669 (2002).

36. M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E 71(1), 016706 (2005).

37. G. Paltauf, R. Nuster, and P. Burgholzer, “Weight factors for limited angle photoacoustic tomography,” Phys. Med. Biol. 54(11), 3303–3314 (2009).

38. C. Cai, X. Wang, K. Si, J. Qian, J. Luo, and C. Ma, “Streak artifact suppression in photoacoustic computed tomography using adaptive back projection,” Biomed. Opt. Express 10(9), 4803–4814 (2019).

39. E. Marchetti, L. M. Close, J.-P. Véran, É. Thiébaut, L. Denis, F. Soulez, and R. Mourya, “Spatially variant PSF modeling and image deblurring,” Proc. SPIE Int. Soc. Opt. Eng. (2016).

40. S. Ben Hadj and L. Blanc-Feraud, “Modeling and removing depth variant blur in 3D fluorescence microscopy,” IEEE Int. Conf. Acoust. Speech Signal Proc., 689–692 (2012).

41. M. J. Jee, J. P. Blakeslee, M. Sirianni, A. R. Martel, R. L. White, and H. C. Ford, “Principal component analysis of the time- and position-dependent point-spread function of the advanced camera for surveys,” Publ. Astron. Soc. Pac. 119(862), 1403–1419 (2007).

42. L. Denis, E. Thiébaut, F. Soulez, J.-M. Becker, and R. Mourya, “Fast approximations of shift-variant blur,” Int. J. Comput. Vision 115(3), 253–278 (2015).

43. T. R. Lauer, “Deconvolution with a spatially-variant PSF,” Proc. SPIE Int. Soc. Opt. Eng., 167–173 (2002).

44. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745 (1974).

45. R. C. Gonzalez and R. E. Woods, Digital Image Processing, chap. Image Restoration and Reconstruction (Pearson).

46. M. Haltmeier and G. Zangerl, “Spatial resolution in photoacoustic tomography: effects of detector size and detector bandwidth,” Inverse Probl. 26(12), 125002 (2010).

47. R. Turcotte, E. Sutu, C. C. Schmidt, N. J. Emptage, and M. J. Booth, “Deconvolution for multimode fiber imaging: modeling of spatially variant PSF,” Biomed. Opt. Express 11(8), 4759–4771 (2020).

48. P. Jia, R. Sun, W. Wang, D. Cai, and H. Liu, “Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes,” Mon. Not. R. Astron. Soc. 470(2), 1950–1959 (2017).

49. N. Dey, L. Blanc-Feraud, C. Zimmer, P. Roux, Z. Kam, J. C. Olivo-Marin, and J. Zerubia, “Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microsc. Res. Tech. 69(4), 260–266 (2006).

50. J. L. Starck, E. Pantin, and F. Murtagh, “Deconvolution in astronomy: A review,” Publ. Astron. Soc. Pac. 114(800), 1051–1069 (2002).

51. A. N. Tikhonov, “On the stability of inverse problems,” Dok. Acad. Sci. URSS 39, 195–198 (1943).

52. A. Levin and Y. Weiss, “User assisted separation of reflections from a single image using a sparsity prior,” IEEE Trans. Pattern Anal. Mach. Intell. 29(9), 1647–1654 (2007).

53. B. A. Olshausen and D. J. Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature 381(6583), 607–609 (1996).

54. J. Besag, “Spatial interaction and the statistical analysis of lattice systems,” J. Roy. Stat. Soc. B Met. 36, 192–225 (1974).

55. P. J. Green, “Bayesian reconstructions from emission tomography data using a modified EM algorithm,” IEEE Trans. Med. Imaging 9(1), 84–93 (1990).

56. P. J. Green, “On use of the EM algorithm for penalized likelihood estimation,” J. Roy. Stat. Soc. B Met. 52, 443–452 (1990).

57. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010).

58. B. E. Treeby, E. Z. Zhang, and B. T. Cox, “Photoacoustic tomography in absorbing acoustic media using time reversal,” Inverse Probl. 26(11), 115003 (2010).

59. S. Liu, H. Wang, C. Zhang, J. Dong, S. Liu, R. Xu, and C. Tian, “In vivo photoacoustic sentinel lymph node imaging using clinically-approved carbon nanoparticles,” IEEE Trans. Biomed. Eng. 67, 2033–2042 (2020).




Equations (12)

\[ I(x,y)=\iint O(u,v)\,P(u,v,x,y)\,\mathrm{d}u\,\mathrm{d}v+\eta(x,y), \tag{1} \]
\[ P(u,v,x,y)=\sum_{i=1}^{N} a_i(u,v)\,p_i(x,y), \tag{2} \]
\[ I(x,y)=\sum_{i=1}^{N}\left[a_i(u,v)\,O(u,v)\right]\otimes p_i(x,y)+\eta(x,y), \tag{3} \]
\[ p(O\mid I)\propto p(I\mid O)\,p(O), \tag{4} \]
\[ O=\arg\min_{O}\left[-\ln p(I\mid O)-\ln p(O)\right], \tag{5} \]
\[ J_1(O)=-\ln p(I\mid O)\propto\sum_{(x,y)}\left\{\left[\sum_{i=1}^{N}(a_i O)\otimes p_i\right]-I\ln\left[\sum_{i=1}^{N}(a_i O)\otimes p_i\right]\right\}, \tag{6} \]
\[ O^{k+1}=\left\{\sum_{i=1}^{N}a_i\left[\frac{I}{\sum_{i=1}^{N}(a_i O^{k})\otimes p_i}\right]\otimes p_i\right\}O^{k}, \tag{7} \]
\[ p(O)\propto\left(1+\mu|\nabla O|^{2}\right)^{-\gamma}, \tag{8} \]
\[ \mathrm{SLG}(O)\propto\ln\left[1+\mu|\nabla O|^{2}\right], \tag{9} \]
\[ O=\arg\min_{O}\; J_1(O)+\lambda\,\mathrm{SLG}(O), \tag{10} \]
\[ O^{k+1}=\frac{O^{k}}{1-2\mu\lambda\,\mathrm{div}\!\left(\dfrac{\nabla O^{k}}{1+\mu|\nabla O^{k}|^{2}}\right)}\left\{\sum_{i=1}^{N}a_i\left[\frac{I}{\sum_{i=1}^{N}(a_i O^{k})\otimes p_i}\right]\otimes p_i\right\}, \tag{11} \]
\[ O_{\mathrm{ave}}=\left(O_{\mathrm{deconv}+1}+O_{\mathrm{deconv}}+O_{\mathrm{deconv}-1}+O_{\mathrm{deconv}}\right)/4. \tag{12} \]
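For reference, one iteration of the SLG-regularized RL update can be sketched in the spatially invariant special case (a single PSF, i.e., N = 1 and a_1 ≡ 1). The function name, the parameter values, and the use of `scipy.ndimage.convolve` are illustrative assumptions, not the authors' code; the full method additionally sums over the eigen-PSFs p_i weighted by the coefficient maps a_i.

```python
import numpy as np
from scipy.ndimage import convolve

def slg_rl_step(O, I, psf, lam=0.003, mu=0.5, eps=1e-8):
    """One SLG-regularized Richardson-Lucy update, written for a single
    spatially invariant PSF (hypothetical simplification of Eq. (11))."""
    blurred = convolve(O, psf, mode="nearest")       # current estimate O ⊗ p
    # RL multiplicative correction: (I / (O ⊗ p)) ⊗ p(flipped)
    ratio = convolve(I / (blurred + eps), psf[::-1, ::-1], mode="nearest")
    # SLG prior term: div( ∇O / (1 + μ|∇O|²) )
    gy, gx = np.gradient(O)
    w = 1.0 + mu * (gx**2 + gy**2)
    div = np.gradient(gy / w, axis=0) + np.gradient(gx / w, axis=1)
    return O / (1.0 - 2.0 * mu * lam * div) * ratio

# Toy usage: deblur a blurred point source with a small Gaussian PSF.
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 2.0)
psf /= psf.sum()
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
I = convolve(truth, psf, mode="nearest")             # degraded image
O = np.full_like(I, I.mean())                        # flat initial estimate
for _ in range(30):
    O = slg_rl_step(O, I, psf)
```

With λ = 0 the denominator reduces to 1 and the step collapses to the classic Richardson-Lucy iteration; the logarithmic-gradient term only damps the update where the estimate's gradients grow, which is what keeps the amplified noise in check.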