
Super-resolution reconstruction algorithm for optical-resolution photoacoustic microscopy images based on sparsity and deconvolution

Open Access

Abstract

The lateral resolution of an optical-resolution photoacoustic microscopy (OR-PAM) system depends on the focal diameter of the probe beam. Increasing the numerical aperture (NA) of the optical focusing improves the lateral resolution of OR-PAM. However, a higher NA results in a smaller working distance, and the entire imaging system becomes very sensitive to small optical imperfections. Existing deconvolution-based algorithms are limited by the image signal-to-noise ratio when improving the resolution of OR-PAM images. In this paper, a super-resolution reconstruction algorithm for OR-PAM images based on sparsity and deconvolution is proposed. The OR-PAM image is sparsely reconstructed according to a constructed loss function, which exploits the sparsity of the image to counteract the loss of resolution. A gradient-accelerated Landweber iterative deconvolution is then applied to obtain high-resolution OR-PAM images. Experimental results show that the proposed algorithm can improve the resolution of mouse retinal images by approximately 1.7 times without increasing the NA of the imaging system. In addition, compared to the Richardson–Lucy algorithm, the proposed algorithm further improves the image resolution while maintaining better imaging quality, which provides a foundation for the development of OR-PAM in clinical research.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Photoacoustic imaging (PAI) is a rapidly developing nondestructive biomedical imaging modality that combines the advantages of high optical contrast and low acoustic scattering [1–4]. In PAI, a short-pulsed laser beam illuminates the region of interest in the biological tissue. The tissue’s absorption of this pulsed light energy induces an instantaneous temperature rise and transient thermoelastic expansion, leading to ultrasonic emissions (also termed photoacoustic waves) from the tissue. By detecting the photoacoustic waves, an image of the light absorption distribution of the sample is reconstructed. PAI has been applied in a wide range of biomedical scenarios, including clinical and preclinical tumor imaging [5,6], functional microvascular angiography [7–9], identification of vulnerable atherosclerotic plaques [10–12], ophthalmic imaging [13–15], and cerebral imaging [16,17].

Optical-resolution photoacoustic microscopy (OR-PAM) is one of the major implementations of PAI [1,18]. OR-PAM uses tight optical focusing for excitation, detects the laser-induced ultrasonic signals, and maps the locations of light absorption in biological tissue to obtain high-spatial-resolution images. The ability of OR-PAM to provide submicron-scale resolution enables imaging of biological tissue at microscopic scales [19,20], which is essential for studies in leading biomedical fields [21–23]. Compared to other medical imaging methods, OR-PAM offers several advantages. From a physical perspective, it can provide information on the chemical composition of biological tissues. From an optical perspective, it can provide optical parameters related to light absorption in biological tissues and can achieve functional imaging without contrast agents. From a safety perspective, the excitation source used in the system is nondamaging to the human body.

In traditional OR-PAM, the ultrasonic and optical components scan the field of view point by point through a mechanical scanning mechanism to obtain high-resolution images of deep tissue. Obtaining a higher resolution requires more sampling points, which increases the sampling time and is not conducive to real-time imaging. In addition, the significant increase in data volume poses a challenge for acquisition and storage in the imaging system. As the lateral resolution of OR-PAM is defined by the optical focusing, another approach to improving the lateral resolution is to increase the optical numerical aperture (NA) of the system [24]. However, optical focusing with a high NA has disadvantages. First, the depth of focus is inversely proportional to the square of the NA, so the in-focus depth decreases sharply as the NA increases. Second, the working distance of the objective also decreases as the NA increases, reducing the flexibility of the system for in-vivo imaging. Third, a high-NA OR-PAM system is very sensitive to tiny optical imperfections, so additional correction optics are required to compensate for the induced aberrations [21,23]. Therefore, it is of practical significance to improve the image resolution without increasing the NA of the system under limited experimental conditions.

In the development of PAI, improvements in imaging algorithms have played an integral role in improving image quality. OR-PAM images are usually obtained using the maximum amplitude projection (MAP) algorithm [25]. The resolution limit of OR-PAM approaches the diffraction limit, but this limit is difficult to reach in practical applications. Several imaging algorithms have been proposed to improve the image quality of OR-PAM. For example, Tang et al. proposed the concept of differential photoacoustic microscopy [26]. Chen et al. proposed a blind deconvolution algorithm to improve the lateral resolution of OR-PAM; the deconvolution used is based on the Richardson–Lucy (R–L) algorithm [27]. Song et al. measured the focus of the OR-PAM system to obtain its transverse point spread function (PSF) and used the measured PSF as the initial system PSF for R–L deconvolution to improve the lateral resolution [28]. Although deconvolution-based algorithms have achieved good results in improving the resolution of OR-PAM images, some problems remain. Deconvolution is based on the fact that the acquired blurred image is the convolution of the original image with the PSF of the imaging system. Notably, when deconvolution is used to recover high-frequency components of the object outside the passband of the imager PSF, the achievable resolution improvement is limited by the dynamic range and noise level of the imager. Signal-to-noise ratio (SNR) degradation and the loss of some small structures have been observed in deconvolution-based PAM results for in-vivo images [29].

To address the problem that deconvolution algorithms are limited by the image SNR when improving the resolution of OR-PAM images, a super-resolution reconstruction algorithm for OR-PAM images based on sparsity and deconvolution is proposed. In this study, to reduce the influence of noise in the image and obtain a higher resolution, the sparsity of the PAM image is used to construct a loss function with sparse constraints, and the OR-PAM image is sparsely reconstructed based on this loss function. The reconstructed image exhibits greatly reduced noise and enhanced high-frequency content. The final OR-PAM image is obtained by gradient-accelerated Landweber (LW) iterative deconvolution. Experimental results of mouse retinal imaging show that the proposed algorithm can break the optical diffraction limit without modifying the system hardware and can effectively improve the lateral resolution and image quality of OR-PAM. In addition, a comparison with the R–L algorithm demonstrates the advantages of the proposed algorithm in the super-resolution reconstruction of OR-PAM images.

2. Methodology

2.1 Overview of the super-resolution reconstruction algorithm for OR-PAM images

Generally, the OR-PAM imaging system is considered a linear shift-invariant system. Considering the existence of additive noise in the system, the imaging process can be expressed by

$$g(x,y) = h(x,y) \ast o(x,y) + n(x,y),$$
where $g(x,y)$ is the system output image, $h(x,y)$ denotes the PSF of the OR-PAM system, $o(x,y)$ is the original image, $n(x,y)$ is the noise term, and $\ast$ denotes convolution. In practice, an image obtained by an imaging system can be degraded for many reasons; two important causes of degradation are blurring by the point-spread function and the presence of statistical noise.
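
To make the degradation model of Eq. (1) concrete, the following minimal sketch simulates a blurred, noisy OR-PAM measurement from a known object using a Gaussian PSF and additive Gaussian noise; the PSF width, noise level, and test object are illustrative assumptions rather than measured system values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_or_pam_image(o, fwhm_px=5.0, noise_std=0.01, seed=0):
    """Simulate g = h * o + n (Eq. (1)) with a Gaussian PSF of the given
    FWHM (in pixels) and additive Gaussian noise (illustrative assumptions)."""
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    blurred = gaussian_filter(o, sigma=sigma)              # h * o
    rng = np.random.default_rng(seed)
    return blurred + noise_std * rng.standard_normal(o.shape)  # + n

# Example: two point-like absorbers degraded by the imaging model.
o = np.zeros((128, 128))
o[64, 60] = o[64, 70] = 1.0
g = simulate_or_pam_image(o)
```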

The available deconvolution methods iterate directly on $g(x,y)$ to find optimal estimates of $h(x,y)$ and $o(x,y)$. However, in the presence of noise, the deconvolution operation converges to a noise-dominated solution after a small number of iterations. When deconvolution-based methods are used to improve the image resolution, the dynamic range and noise level of the imaging system therefore limit the final result. Because the super-resolution reconstruction of OR-PAM images is an ill-posed problem whose solution is highly sensitive to noise in the input data, it is usually unstable. For image super-resolution reconstruction, prior information or constraints should be imposed on the solution to obtain better results. In this regard, the sparsity of PAM images is introduced as a priori knowledge to counteract the loss of resolution, extract high-frequency information, and suppress noise in the images. For a given OR-PAM system, an increase in spatial resolution always corresponds to a smaller PSF. Compared to conventional microscopy, the convolution of the object with a smaller PSF in super-resolution imaging always leads to a relative increase in sparsity; conversely, increased image sparsity implies that the OR-PAM system has an effectively smaller PSF and thus a higher spatial resolution. When the background noise intensity is large, the background of the image is first estimated and removed, and sparse reconstruction is then performed. Next, the final high-resolution image is obtained by iterative deconvolution, for which the gradient-accelerated LW algorithm is used to improve the convergence speed. Overall, the proposed algorithm is divided into two main parts: sparse reconstruction and LW deconvolution (described in detail in Sections 2.2 and 2.3). The specific algorithm steps are shown in the flowchart in Fig. 1.
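
The paper does not specify how the background is estimated; a common, simple choice is to take a heavily smoothed copy of the image as the background estimate and subtract it, clipping negative values to zero. The sketch below assumes that approach, with an arbitrary smoothing scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_background(g, bg_sigma=25.0):
    """Estimate a slowly varying background as a heavily blurred copy of the
    image and subtract it, clipping negative values to zero.

    bg_sigma is an illustrative assumption; it should be much larger (in
    pixels) than the structures of interest, e.g. vessels."""
    background = gaussian_filter(g, sigma=bg_sigma)
    return np.clip(g - background, 0.0, None)
```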

Fig. 1. Flowchart of the super-resolution reconstruction algorithm for OR-PAM images based on sparsity and deconvolution.

2.2 Sparse reconstruction

According to the analysis in Section 2.1, prior information or constraints should be added to the objective function for the super-resolution reconstruction of photoacoustic images, and sparse constraints are beneficial for improving the spatial resolution of OR-PAM images. Numerous studies have shown that most medical images are sparse after a suitable sparsifying transform and that photoacoustic images also have spatial sparsity [30,31]. Therefore, a loss function with a sparse constraint is proposed. Unlike algorithms that directly deconvolve the imaging model (Eq. (1)), the proposed algorithm first uses this loss function to suppress noise in the image and enhance the high-frequency information:

$$\hat{o}(x,y) = \mathop{\arg\min}\limits_{o(x,y)} \left\{ \frac{\lambda}{2}\left\| g(x,y) - h(x,y) \ast o(x,y) - n(x,y) \right\|_2^2 + \lambda_{L1}\left\| o(x,y) \right\|_1 \right\}.$$

In this expression, the first term on the right-hand side is the fidelity term, weighted by the fidelity factor $\lambda$; it measures the similarity between the recovered image $o(x,y)$ and the input image $g(x,y)$. The second term is the sparse constraint, and $\lambda_{L1}$ is the weight factor balancing the image’s sparsity. Instead of the $l_0$ norm (absolute sparsity), which is the starting point of compressive sensing theory [32], the $l_1$ norm, i.e., the sum of the absolute values of the elements, is used directly; it can handle both absolutely and relatively sparse structures and constrains the extraction of high-spatial-frequency information.

Bregman iteration has become one of the most effective methods for solving $l_1$-norm optimization problems. In particular, the split Bregman algorithm [33,34] not only improves the quality of image processing but also ensures computational efficiency. To optimize the objective function (Eq. (2)), the problem is solved using the split Bregman algorithm. By introducing the auxiliary variable $z = o$, the objective function can be re-expressed as follows:

$$\mathop{\arg\min}\limits_{o,z} \left\{ \frac{\lambda}{2}\left\| g - h \ast o - n \right\|_2^2 + \lambda_{L1}\left\| z \right\|_1 \right\} \quad \mathrm{s.t.} \quad z = o.$$

The main steps of the split Bregman algorithm are as follows:

$$\left\{ \begin{array}{l} (o^{k+1}, z^{k+1}) = \mathop{\arg\min}\limits_{o,z} J(o,z) + \dfrac{\mu}{2}\left\| z - o - b^k \right\|_2^2 \\[2mm] b^{k+1} = b^k + (z^{k+1} - o^{k+1}) \end{array} \right.$$
where $k$ is the current iteration index, $J$ denotes the convex objective of Eq. (3), $\mu$ is the constraint coefficient ($\mu > 0$), and $b^{k+1}$ is an intermediate variable in the iterative optimization. The first minimization problem in Eq. (4) can be split into two subproblems for $o^{k+1}$ and $z^{k+1}$:
$$\left\{ \begin{array}{l} o^{k+1} = \mathop{\arg\min}\limits_{o} \dfrac{\lambda}{2}\left\| g - h \ast o - n \right\|_2^2 + \dfrac{\mu}{2}\left\| z^k - o - b^k \right\|_2^2 \\[2mm] z^{k+1} = \mathop{\arg\min}\limits_{z} \left\| z \right\|_1 + \dfrac{\mu}{2}\left\| z - o^{k+1} - b^k \right\|_2^2 \end{array} \right.$$
Solving for $z^{k+1}$ is a nontrivial problem, but its optimal solution can be obtained directly using the shrinkage (soft-threshold) operator [35]:
$$\left\{ \begin{array}{l} z_i^{k+1} = \mathrm{soft}\!\left( (o^{k+1} + b^k)_i, \dfrac{1}{\mu} \right), \quad i = 1, 2, \ldots, n \\[2mm] \mathrm{soft}(o, \gamma) = \mathrm{sign}(o) \cdot \max(|o| - \gamma, 0) \end{array} \right.$$
where $\gamma$ is a constant parameter that determines the width of the zero-setting region of the soft-threshold function. Based on the above split Bregman steps, the proposed loss function (Eq. (2)) is solved to obtain the sparsely reconstructed image $o^{k+1}$.
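
As a concrete illustration of the split Bregman steps of Eqs. (4)–(6), the following Python sketch solves the sparsity-constrained loss under stated assumptions: the noise term n is unknown in practice and treated as zero in the data-fidelity solve, the o-subproblem is solved in closed form in the Fourier domain (assuming circular convolution), the soft-threshold level is taken as λ_L1/µ (Eq. (6) shows the special case 1/µ), and the value of µ is an illustrative choice that the paper does not report.

```python
import numpy as np

def soft_threshold(x, gamma):
    """Shrinkage operator of Eq. (6): sign(x) * max(|x| - gamma, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def psf_to_otf(psf, shape):
    """Zero-pad the PSF to the image shape and circularly shift its centre to
    the origin so that FFT-domain multiplication implements the convolution."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        otf = np.roll(otf, -(size // 2), axis=axis)
    return np.fft.fft2(otf)

def sparse_reconstruct(g, psf, lam=1000.0, lam_l1=10.0, mu=1.0, n_iter=50):
    """Split Bregman solver for the sparsity-constrained loss of Eq. (2).

    lam and lam_l1 default to the fidelity and sparsity weights reported in
    the experiments (1000 and 10); mu and the zero initialisation of z and b
    are illustrative assumptions."""
    H = psf_to_otf(psf, g.shape)
    G = np.fft.fft2(g)
    denom = lam * np.abs(H) ** 2 + mu
    o = g.astype(float)
    z = np.zeros_like(o)
    b = np.zeros_like(o)
    for _ in range(n_iter):
        # o-subproblem (first line of Eq. (5)): quadratic, closed form in the Fourier domain.
        rhs = lam * np.conj(H) * G + mu * np.fft.fft2(z - b)
        o = np.real(np.fft.ifft2(rhs / denom))
        # z-subproblem (second line of Eq. (5)) via the shrinkage of Eq. (6).
        z = soft_threshold(o + b, lam_l1 / mu)
        # Bregman variable update (second line of Eq. (4)).
        b = b + (z - o)
    return o
```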

2.3 Gradient accelerated Landweber algorithm

Sparse reconstruction suppresses the noise level in the image. The residual noise in the reconstructed image is negligible, so the imaging model can be expressed as $g(x,y) = h(x,y) \ast o(x,y)$. The light-source wavelength and NA of the OR-PAM system are measured to obtain the theoretical resolution, and this resolution is used as the full width at half maximum (FWHM) of a Gaussian function to construct the system PSF. If the theoretical resolution differs greatly from the actual resolution, the actual resolution is used as the FWHM instead. Using the sparsely reconstructed $o(x,y)$ as $g(x,y)$ and the constructed PSF as the initial estimate of the system PSF, the deconvolution operation is performed to find the optimal estimate of the high-resolution image. The LW algorithm is an iterative algorithm for unconstrained minimization that can efficiently handle inverse problems. Its basic idea is to transform the OR-PAM image deconvolution problem into the minimization of an objective function. Equation (1) is transformed into the following objective function:

$$\min f(o) = \frac{1}{2}\left\| h \ast o - g \right\|^2,$$
where the gradient of $f(o)$ is
$$\nabla f(o) = h^T h \ast o - h^T g = h^T (h \ast o - g).$$

Although the LW algorithm achieves a good compromise between the quality and speed of image reconstruction, it still requires many iterations and its convergence is not stable. To improve the convergence speed, the gradient-accelerated LW algorithm is used for image deconvolution. Following Ref. [36], a gradient acceleration step is added to the LW algorithm to reduce the number of iterations and reach stable convergence. The iteration can be written as follows:

$$w_{k+1} = o_k + \partial_k h^T (g - h \ast o_k),$$
$$t_{k+1} = \frac{1}{2}\left( 1 + \sqrt{1 + 4 t_k^2} \right),$$
$$o_{k+1} = w_{k+1} + \frac{t_k - 1}{t_{k+1}} \left( w_{k+1} - w_k \right).$$

Equations (9)–(11) constitute a variant of the classical LW algorithm, where $w$ is an approximation of $o$, $w_0 = o_0$, $\partial_k$ is the iteration step length, and $t_k$ is the parameter used for the acceleration step ($t_0 = 1$). Because convolution in the spatial domain is equivalent to multiplication in the frequency domain, the convolution $h \ast o_k$ can be computed by Fourier transforming $h$ and $o_k$, multiplying point-wise in the frequency domain, and applying the inverse Fourier transform. By setting the optimal number of iterations, high-resolution OR-PAM images can be reconstructed.
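
The sketch below illustrates Eqs. (9)–(11) under stated assumptions: the PSF is a Gaussian whose FWHM matches the system lateral resolution (Section 2.3), the convolutions are evaluated in the Fourier domain via scipy.signal.fftconvolve, the adjoint h^T is implemented as correlation (convolution with the flipped PSF), and the constant step length is an illustrative choice not reported in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(fwhm_px, size=31):
    """Gaussian PSF whose FWHM (in pixels) matches the system lateral
    resolution, as described in Section 2.3."""
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def lw_deconvolve(g, psf, n_iter=12, step=1.0):
    """Gradient-accelerated Landweber iteration of Eqs. (9)-(11).

    The convolutions are computed in the Fourier domain by fftconvolve; the
    constant step length is an illustrative assumption and may need to be
    reduced if the iteration diverges."""
    psf_T = psf[::-1, ::-1]              # adjoint of convolution = correlation
    o = g.astype(float)
    w_prev = o.copy()
    t = 1.0                              # t_0 = 1
    for _ in range(n_iter):
        residual = g - fftconvolve(o, psf, mode="same")
        w = o + step * fftconvolve(residual, psf_T, mode="same")   # Eq. (9)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))         # Eq. (10)
        o = w + ((t - 1.0) / t_next) * (w - w_prev)                # Eq. (11)
        w_prev, t = w, t_next
    return o
```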

2.4 Evaluation metrics for the algorithm

The FWHM of structures in the image is used as an evaluation function to assess the improvement in the final OR-PAM image resolution: a smaller FWHM indicates a better super-resolution reconstruction. However, using the FWHM of edge structures as an evaluation function is only applicable to images with clear, bright edges. In the OR-PAM system, the sample is irradiated by a pulsed laser, which generates a photoacoustic signal that is received by the ultrasonic transducer. The probe of the ultrasonic transducer is needle-shaped; during the experiment, it points toward the center of the sample and remains stationary, so it can only receive signals from one sector in a specific direction. As the signal source gradually deviates from this direction, the intensity of the received signal weakens. In addition, the image is affected by system noise, so the edges of sample structures in OR-PAM images are not particularly clear or bright. Consequently, the FWHM alone is not sufficient to evaluate the final reconstruction performance of the algorithm, and other evaluation indices are needed to complement it.
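
In the experiments below, FWHM values are obtained by Gaussian fitting of cross-sectional profiles (Fig. 3). A minimal sketch of that measurement, assuming a 1-D line profile extracted from the image and a placeholder pixel size for the conversion to micrometres:

```python
import numpy as np
from scipy.optimize import curve_fit

def fwhm_from_profile(profile, pixel_size_um=1.0):
    """Fit a Gaussian to a 1-D cross-sectional profile and return its FWHM.

    pixel_size_um converts the fitted width from pixels to micrometres and is
    a placeholder for the system-specific scan step."""
    x = np.arange(profile.size, dtype=float)

    def gauss(x, a, x0, sigma, offset):
        return a * np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2)) + offset

    p0 = [profile.max() - profile.min(), float(np.argmax(profile)),
          profile.size / 10.0, float(profile.min())]
    popt, _ = curve_fit(gauss, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2]) * pixel_size_um
```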

Sharpness is another parameter used to evaluate image quality, and it is utilized here as a second metric for evaluating the proposed algorithm. For OR-PAM images, sharpness is determined by the high-frequency components of the image: more high-frequency content leads to a sharper image [37]. The main sharpness evaluation functions include methods based on the Fourier transform, wavelet transform, discrete cosine transform, image histogram, and image gray-value gradient [38]. In this paper, the coefficient of variation is used as the sharpness evaluation function for OR-PAM images. The gray values of pixels in an image are discrete, and when the PSF of the OR-PAM system is smaller, the dispersion of the pixel gray values is larger. The coefficient of variation, the ratio of the standard deviation to the mean, is well suited to measuring this dispersion in an OR-PAM image; a larger coefficient of variation corresponds to a sharper image. It can be expressed as follows:

$$\mathrm{C.V.} = \frac{\sqrt{\dfrac{1}{N}\sum\limits_x \sum\limits_y \left[ f(x,y) - \mu \right]^2}}{\mu},$$
where $N$ is the number of image pixels, $f(x,y)$ is the gray value of the pixel at $(x,y)$, and $\mu$ is the mean of the pixel gray values.
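
A direct translation of Eq. (12), shown only to make the metric unambiguous:

```python
import numpy as np

def coefficient_of_variation(img):
    """Sharpness metric of Eq. (12): standard deviation of the pixel gray
    values divided by their mean (larger means sharper)."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()
```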

3. Experiment

To verify the effectiveness of the proposed algorithm, the OR-PAM system is used to image a target sample (black tape) and the MAP along the depth direction is obtained. The theoretical lateral resolution of the system is 2.82 µm, while the actual measured lateral resolution is approximately 5.02 µm; the actual resolution is usually lower than the theoretical resolution because of factors such as system aberrations and the scan step size. Figure 2(a) shows the original image of the black tape with dimensions of 706 pixels × 706 pixels. The image fidelity weight is set to 1000, the sparsity weight to 10, and the number of split Bregman iterations to 50. The LW deconvolution operation is then performed on the reconstructed image, with a Gaussian function with a FWHM of 5.02 µm used as the initial estimate of the system PSF and the number of iterations set to 12. The reconstructed image obtained by the proposed algorithm is shown in Fig. 2(b). As shown in Fig. 3, the quantitative comparison results are obtained by Gaussian fitting of the cross-sectional profiles at the positions of the horizontal and vertical lines in Fig. 2. The FWHM at the position of the horizontal line is reduced from 6.12 µm to 3.80 µm, and the FWHM at the position of the vertical line is reduced from 6.20 µm to 3.82 µm. The proposed algorithm thus increases the resolution of the original image by about 1.7 times.
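
For reference, these settings map onto the earlier sketches as follows (reusing gaussian_psf, sparse_reconstruct, and lw_deconvolve from Section 2). The scan step used to convert the 5.02 µm FWHM into pixels is not reported, so step_um is a hypothetical value, and tape_image is a random placeholder for the measured 706 × 706 MAP.

```python
import numpy as np

step_um = 1.0                                   # hypothetical scan step (um/pixel)
psf = gaussian_psf(fwhm_px=5.02 / step_um)      # FWHM of 5.02 um as initial PSF
tape_image = np.random.rand(706, 706)           # placeholder for the measured MAP

sparse = sparse_reconstruct(tape_image, psf, lam=1000.0, lam_l1=10.0, n_iter=50)
high_res = lw_deconvolve(sparse, psf, n_iter=12)
```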

Fig. 2. High-resolution reconstruction of black tape image based on the proposed algorithm. (a) Original black tape image, (b) high-resolution black tape image based on the proposed algorithm.

Fig. 3. Quantitative comparison of two black tape images before and after the use of the proposed algorithm. (a) Cross-sectional profiles and their corresponding Gaussian fits at the position of the horizontal line in Fig. 2(a) and Fig. 2(b), (b) cross-sectional profiles and their corresponding Gaussian fits at the position of the vertical line in Fig. 2(a) and Fig. 2(b).

Moreover, the OR-PAM system is used to image the mouse retina, and the MAP along the depth direction is obtained. A retinal image with dimensions of 256 pixels × 253 pixels is selected for processing, as shown in Fig. 4(a). The background of this image is estimated and removed, and sparse reconstruction is then performed. The image fidelity weight is set to 1000, the sparsity weight to 10, and the number of split Bregman iterations to 50. The LW deconvolution operation is then performed on the reconstructed image with the number of iterations set to 10, yielding the high-resolution image shown in Fig. 4(b). A comparison of the two images shows that the mouse retinal structure in Fig. 4(b) has a higher resolution and better SNR than in the original image. Retinal structures at the same locations in both images (white dashed boxes in the figure) are selected for local magnification. Figure 4(c) shows that the mouse retinal structure in the original image is blurred, making it difficult to distinguish fine structure, whereas more biological structure information can be obtained from the reconstructed image with its increased resolution, as shown in Fig. 4(d). The cross-sectional profiles of the vessels at the positions of the horizontal and vertical lines in Fig. 4 are plotted in Fig. 5(a) and Fig. 5(c), and the FWHMs at positions L1, L2, L3, L4, and L5 are quantified and compared. As shown in Fig. 5(b) and Fig. 5(d), the FWHMs at the positions on the horizontal line are reduced from 6.25, 6.41, and 10.02 µm to 3.71, 3.95, and 7.50 µm, respectively, and the FWHMs at the positions on the vertical line are reduced from 6.31 and 5.22 µm to 4.50 and 3.29 µm, respectively, corresponding to a resolution improvement of approximately 1.7 times. In addition, the final imaging quality is evaluated by the evaluation function: the coefficients of variation of Fig. 4(a) and Fig. 4(b) are 0.60 and 2.85, respectively, indicating that the retinal image obtained with the proposed algorithm has a larger gray-value dispersion and more high-frequency components and can thus display more detailed information. Overall, the final imaging performance of Fig. 4(b) is better.

Fig. 4. High-resolution reconstruction of mouse retinal images based on the proposed algorithm. (a) Original retinal image, (b) high-resolution retinal image based on the proposed algorithm, (c) enlarged details of retinal structures in the white dashed box in Fig. 4(a), (d) enlarged details of retinal structures in the white dashed box in Fig. 4(b).

Fig. 5. Quantitative comparison of two mouse retinal images before and after the use of the proposed algorithm. (a) Cross-sectional profiles at the position of the horizontal line in Fig. 4(a) and Fig. 4(b), (b) FWHMs at the positions of L1, L2, and L3 in Fig. 5(a), (c) cross-sectional profiles at the position of the vertical line in Fig. 4(a) and Fig. 4(b), (d) FWHMs at the positions of L4 and L5 in Fig. 5(c).

In addition, the effectiveness of the proposed algorithm in improving image resolution depends on an appropriate choice of parameters. According to the experiments, the number of LW deconvolution iterations has a large impact on the resolution of the final OR-PAM image. LW deconvolution is performed on Fig. 4(a) using different numbers of iterations while keeping the other parameters constant, and the cross-sectional profile of the vessel at the same position as in Fig. 4(a) is obtained. The change in FWHM at position L1 with the number of iterations is shown in Fig. 6. In the initial stage, the FWHM of the mouse retinal structure decreases and the image resolution increases significantly as the number of iterations increases. After the number of iterations exceeds 11, the resolution of the mouse retinal image plateaus. Therefore, when super-resolution reconstruction is performed for different samples in practice, a suitable number of iterations needs to be chosen to achieve good image reconstruction while avoiding redundant computation. In general, setting the number of LW deconvolution iterations to approximately 10 or slightly more gives the best results.
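
The curve in Fig. 6 can be reproduced in outline with the earlier sketches by sweeping the iteration count and measuring the FWHM of a fixed line profile; here a synthetic point absorber stands in for the vessel cross-section at L1, since the measured data are not available.

```python
# Sweep of the LW iteration count (cf. Fig. 6), reusing simulate_or_pam_image,
# gaussian_psf, lw_deconvolve, and fwhm_from_profile from the sketches above.
import numpy as np

obj = np.zeros((128, 128))
obj[64, 64] = 1.0                                # synthetic point absorber
blurred = simulate_or_pam_image(obj, fwhm_px=5.0, noise_std=0.001)
psf = gaussian_psf(fwhm_px=5.0)

fwhm_vs_iter = []
for n_iter in range(1, 21):
    img = lw_deconvolve(blurred, psf, n_iter=n_iter)
    fwhm_vs_iter.append(fwhm_from_profile(img[64, :]))
```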

Fig. 6. Effect of the number of deconvolution iterations on the image resolution.

To further evaluate the performance of the proposed algorithm, high-resolution reconstructions of mouse retinal images were performed with both the proposed algorithm and the existing R–L algorithm [27]. Three original images, denoted OR-PAM-A, OR-PAM-B, and OR-PAM-C, are shown in Fig. 7. The first column in Fig. 7 shows the original mouse retinal images, the second column shows the results obtained by the R–L algorithm, the third column shows the results obtained by the proposed algorithm, and the fourth column compares the cross-sectional profiles of blood vessels at the same locations in the three images. As shown in Fig. 7, the resolution of the mouse retinal images obtained by both algorithms is improved compared to the original images, and both algorithms reveal finer features of the vascular tissue. Table 1 lists the coefficients of variation of the mouse retinal images in Fig. 7 and the FWHMs at the positions indicated by the black arrows in the fourth-column plots. The comparison of the cross-sectional profiles shows that the red curves always have the smallest FWHM, indicating that the proposed algorithm improves the resolution more than the R–L algorithm. The comparison of the coefficients of variation of the three original images in Table 1 shows that the SNR of OR-PAM-B is lower. The high-resolution image obtained from OR-PAM-B by the R–L algorithm shows many speckles that do not exist in the original image, as shown in the rectangular box in Fig. 7(b2), which is consistent with the findings in Ref. [25]. This occurs because the R–L algorithm seeks the optimal estimate of the original target through the maximum-likelihood function, and amplification of the noise in the image leads to the appearance of speckles. In contrast, Fig. 7(b3), obtained with the proposed algorithm, does not show such speckles even at this high noise level and retains the detailed structure of the original image, because the algorithm introduces a constraint term in the sparse reconstruction that better identifies the vascular patterns while mitigating the noise. As a result, the images maintain good reconstruction quality while the resolution is increased. The coefficients of variation in Table 1 also show that the high-resolution images obtained by the proposed algorithm are sharper and of better quality. The computational times for the three mouse retinal images processed by the R–L algorithm are 14 s, 12 s, and 15 s, respectively, and those for the proposed algorithm are 21 s, 20 s, and 22 s, respectively. In summary, both algorithms can effectively improve image resolution; the R–L algorithm is computationally more efficient but is only suitable for original images with low noise, whereas the proposed algorithm achieves good reconstruction quality for both high-noise and low-noise images.
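
For the R–L baseline used in this comparison, an off-the-shelf implementation such as scikit-image's Richardson–Lucy deconvolution can be used; the sketch below is a generic usage example, not the authors' implementation. retina_image and the PSF are placeholders, the iteration count is an illustrative choice, and the keyword num_iter assumes a recent scikit-image release (older versions use iterations).

```python
import numpy as np
from skimage.restoration import richardson_lucy

retina_image = np.random.rand(256, 253)          # placeholder for a measured MAP
psf = np.ones((5, 5)) / 25.0                     # placeholder PSF
rl_result = richardson_lucy(retina_image, psf, num_iter=30)
```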

Fig. 7. Super-resolution reconstruction of mouse retinal images based on the R–L algorithm and proposed algorithm. (a1)–(c1) Original retinal images, (a2)–(c2) reconstruction results of the R–L algorithm, (a3)–(c3) reconstruction results of the proposed algorithm, (a4)–(c4) cross-sectional profiles at the position of the blue line in the corresponding left three images.

Table 1. Coefficients of variation (CVs) of mouse retinal images and FWHM/µm at the position of the blue line.

4. Conclusion

In this paper, an OR-PAM image super-resolution reconstruction algorithm based on sparsity and deconvolution is proposed. The algorithm constructs a loss function using the sparsity of OR-PAM images as a priori knowledge and solves it with the split Bregman algorithm; the high-resolution image is then obtained by deconvolution based on the gradient-accelerated LW algorithm. The proposed algorithm was used to process mouse retinal images, and the reconstructed images show that it can effectively improve the resolution of OR-PAM images without increasing the system NA. The comparison results show that the proposed algorithm not only outperforms the R–L algorithm in improving the resolution but also suppresses noise to a certain extent, so that high reconstruction quality is maintained even for images with high noise.

Funding

National Natural Science Foundation of China (61971406, 81927801); Science and Technology Commission of Shanghai Municipality (19441910000); Youth Innovation Promotion Association of the Chinese Academy of Sciences.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. V. Wang, “Multiscale photoacoustic microscopy and computed tomography,” Nat. Photonics 3(9), 503–509 (2009). [CrossRef]  

2. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012). [CrossRef]  

3. X. Wang, K. Xiong, X. Jin, and S. Yang, “Tomography-assisted Doppler photoacoustic microscopy: proof of concept,” Chin. Opt. Lett. 18(10), 101702 (2020). [CrossRef]  

4. D. Li, Y. Zhang, C. Liu, J. Chen, D. Sun, and L. Wang, “Review of photoacoustic imaging for microrobots tracking in vivo,” Chin. Opt. Lett. 19(11), 111701 (2021). [CrossRef]  

5. S. A. Ermilov, T. Khamapirad, A. Conjusteau, M. H. Leonard, R. Lacewell, K. Mehta, T. Miller, and A. A. Oraevsky, “Laser optoacoustic imaging system for detection of breast cancer,” J. Biomed. Opt. 14(2), 024007 (2009). [CrossRef]  

6. J. Laufer, P. Johnson, E. Zhang, B. Treeby, B. Cox, B. Pedley, and P. Beard, “In vivo preclinical photoacoustic imaging of tumor vasculature development and therapy,” J. Biomed. Opt. 17(5), 1 (2012). [CrossRef]  

7. S. Hu and L. V. Wang, “Photoacoustic imaging and characterization of the microvasculature,” J. Biomed. Opt. 15(1), 011101 (2010). [CrossRef]  

8. J. Yao, K. I. Maslov, Y. Zhang, Y. Xia, and L. V. Wang, “Label-free oxygen-metabolic photoacoustic microscopy in vivo,” J. Biomed. Opt. 16(7), 076003 (2011). [CrossRef]  

9. Z. Xie, W. Roberts, P. Carson, X. Liu, C. Tao, and X. Wang, “Evaluation of bladder microvasculature with high-resolution photoacoustic imaging,” Opt. Lett. 36(24), 4815–4817 (2011). [CrossRef]  

10. T. J. Allen, A. Hall, A. P. Dhillon, J. S. Owen, and P. C. Beard, “Spectroscopic photoacoustic imaging of lipid-rich plaques in the human aorta in the 740 to 1400 nm wavelength range,” J. Biomed. Opt. 17(6), 061209 (2012). [CrossRef]  

11. K. Jansen, A. F. van der Steen, H. M. van Beusekom, J. W. Oosterhuis, and G. van Soest, “Intravascular photoacoustic imaging of human coronary atherosclerosis,” Opt. Lett. 36(5), 597–599 (2011). [CrossRef]  

12. S. Sethuraman, J. H. Amirian, S. H. Litovsky, R. W. Smalling, and S. Y. Emelianov, “Spectroscopic intravascular photoacoustic imaging to differentiate atherosclerotic plaques,” Opt. Express 16(5), 3362–3367 (2008). [CrossRef]  

13. S. Hu, B. Rao, K. Maslov, and L. V. Wang, “Label-free photoacoustic ophthalmic angiography,” Opt. Lett. 35(1), 1–3 (2010). [CrossRef]  

14. A. de la Zerda, Y. M. Paulus, R. Teed, S. Bodapati, Y. Dollberg, B. T. Khuri-Yakub, M. S. Blumenkranz, D. M. Moshfeghi, and S. S. Gambhir, “Photoacoustic ocular imaging,” Opt. Lett. 35(3), 270–272 (2010). [CrossRef]  

15. S. Jiao, M. Jiang, J. Hu, A. Fawzi, Q. Zhou, K. K. Shung, C. A. Puliafito, and H. F. Zhang, “Photoacoustic ophthalmoscopy for in vivo retinal imaging,” Opt. Express 18(4), 3967–3972 (2010). [CrossRef]  

16. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003). [CrossRef]  

17. J. Laufer, E. Zhang, G. Raivich, and P. Beard, “Three-dimensional noninvasive imaging of the vasculature in the mouse brain using a high resolution photoacoustic scanner,” Appl. Opt. 48(10), D299–D306 (2009). [CrossRef]  

18. Y. Liu, C. Zhang, and L. V. Wang, “Effects of light scattering on optical-resolution photoacoustic microscopy,” J. Biomed. Opt. 17(12), 126014 (2012). [CrossRef]  

19. F. Duan, H. Ma, J. Zhang, S. Li, H. Li, Z. Wu, and L. Nie, “Optical-resolution photoacoustic microscopy continually monitors macrophages activities of acute inflammation in vivo,” Chin. Opt. Lett. 18(12), 121701 (2020). [CrossRef]  

20. L. Deng, Q. Chen, Y. Bai, G. Liu, L. Zeng, and X. Ji, “Compact long-working-distance laser-diode-based photoacoustic microscopy with a reflective objective,” Chin. Opt. Lett. 19(7), 071701 (2021). [CrossRef]  

21. K. Maslov, H. F. Zhang, S. Hu, and L. V. Wang, “Optical-resolution photoacoustic microscopy for in vivo imaging of single capillaries,” Opt. Lett. 33(9), 929–931 (2008). [CrossRef]  

22. L. Song, K. Maslov, and L. V. Wang, “Multifocal optical-resolution photoacoustic microscopy in vivo,” Opt. Lett. 36(7), 1236–1238 (2011). [CrossRef]  

23. R. Ma, S. Söntges, S. Shoham, V. Ntziachristos, and D. Razansky, “Fast scanning coaxial optoacoustic microscopy,” Biomed. Opt. Express 3(7), 1724–1731 (2012). [CrossRef]  

24. C. Zhang, K. Maslov, S. Hu, R. Chen, Q. Zhou, K. K. Shung, and L. V. Wang, “Reflection-mode submicron-resolution in vivo photoacoustic microscopy,” J. Biomed. Opt. 17(2), 020501 (2012). [CrossRef]  

25. R. L. White, “Image restoration using the damped Richardson–Lucy method,” in Instrumentation in Astronomy VIII, Proc. SPIE 2198, 1342–1348 (1994).

26. H. Tang, Z. Tang, Y. Wu, Q. Cai, L. Wu, and Y. Chi, “Differential photoacoustic microscopy technique,” Opt. Lett. 38(9), 1503–1505 (2013). [CrossRef]  

27. J. Chen, R. Lin, H. Wang, J. Meng, H. Zheng, and L. Song, “Blind-deconvolution optical-resolution photoacoustic microscopy in vivo,” Opt. Express 21(6), 7316–7327 (2013). [CrossRef]  

28. X. Song, L. Song, A. Chen, and J. Wei, “Deconvolution optical-resolution photoacoustic microscope for high-resolution imaging of brain,” SPIE Future Sensing Technologies 11525, 601–605 (2020). [CrossRef]  

29. D. Cai, Z. Li, Y. Li, Z. Guo, and S. L. Chen, “Photoacoustic microscopy in vivo using synthetic-aperture focusing technique combined with three-dimensional deconvolution,” Opt. Express 25(2), 1421–1434 (2017). [CrossRef]  

30. Z. Guo, C. Li, L. Song, and L. V. Wang, “Compressed sensing in photoacoustic tomography in vivo,” J. Biomed. Opt. 15(2), 021311 (2010). [CrossRef]  

31. M. Sun, N. Feng, Y. Shen, X. Shen, L. Ma, J. Li, and Z. Wu, “Photoacoustic imaging method based on arc-direction compressed sensing and multi-angle observation,” Opt. Express 19(16), 14801–14806 (2011). [CrossRef]  

32. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006). [CrossRef]  

33. J. F. Cai, S. Osher, and Z. Shen, “Split Bregman methods and frame based image restoration,” Multiscale Model. Simul. 8(2), 337–369 (2010). [CrossRef]  

34. T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM J. Imaging Sci. 2(2), 323–343 (2009). [CrossRef]  

35. W. Deng, W. Yin, and Y. Zhang, “Group sparse optimization by alternating direction method,” in Wavelets and Sparsity XV, Proc. SPIE 8858, 242–256 (2013).

36. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009). [CrossRef]  

37. N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9(4), 636–650 (2000). [CrossRef]  

38. R. Ferzli and L. J. Karam, “A no-reference objective image sharpness metric based on the notion of just noticeable blur,” IEEE Trans. Image Process. 18(4), 717–728 (2009). [CrossRef]  
