Abstract
In order to reduce the radiation exposure caused by Computed Tomography (CT) scanning, low-dose CT has gained much interest in research as well as in industry. One fundamental difficulty for low-dose CT lies in the heavy noise pollution of the raw data, which degrades the quality of the reconstructed images. In this paper, we propose a modified ROF model to denoise low-dose CT measurement data in light of the Poisson noise model. Experimental results indicate that CT images reconstructed from measurement data processed by our model are of better quality than those obtained with the original ROF model or bilateral filtering.
© 2012 Optical Society of America
1. Introduction
Computed Tomography (CT) is widely used for clinical diagnosis and industrial nondestructive testing. However, exposure to high amounts of radiation can cause problems. For medical examinations, a high radiation dose probably increases the examinee's lifetime risk of cancer [1]. In order to reduce the dose, a common approach is to decrease the intensity of the X-ray source, which leads to a high noise level in the raw data collected by the detectors. On the other hand, the detectors used in industrial CT systems usually suffer from an upper count limit of 12 or 14 bits. Therefore, in order to get correct projection data, the dose is limited to ensure that the raw data scanned without the object stay below 2^12 (or 2^14). In this case, when scanning large or high-density objects, the counts in the raw data will be low.
To produce “good” images, some denoising procedure needs to be employed. This can happen in the projection domain or in the CT image domain, by applying filtering algorithms to the projection data [2, 3] or by using special reconstruction methods [4]. Traditional denoising methods involve spatial filters (Gaussian, bilateral, etc.) or filtering in a transformed domain via the FFT or wavelets, which usually introduce undesired blurring. Processing low-dose data in the projection domain is very popular in current research; examples include penalized-likelihood sinogram smoothing on calibrated projection data [5, 6] and filtering with maximum-likelihood estimation on raw data (PWLS) [7, 8].
In this paper, we propose a modified Rudin-Osher-Fatemi (ROF) model to filter the raw data (measurement data). The original ROF model [9] has proven effective for denoising images corrupted by additive white Gaussian noise. However, the raw data come from a photon-counting process, whose stochastic nature is a Poisson distribution [10, 11]. For our CT raw data, the noise is not exactly Poisson but only Poisson-like: if we model the projection raw data by a random field, its mean and variance are related to each other but not equal. There are several approaches for denoising Poisson noise in the literature. Le et al. proposed a model in the framework of Bayesian inference [12], which fits Poisson noise well. Sawatzky et al. [13] used a dual approach for the numerical solution, which leads to fast and robust convergence. Other methods, such as PURE-LET [14] and MS-VST [15], are based on wavelet transforms. However, it is not apparent how to adapt these approaches to Poisson-like noise. The modified ROF model we propose here is designed in view of the Poisson-like nature of the projection raw data. A relaxed form of Chambolle's algorithm [18] is provided to solve the modified ROF model efficiently. Experimental results verify the effectiveness of the proposed model as well as the algorithm.
The rest of the paper is organized as follows. Section 2 gives background on the noise model in real CT systems as well as an introduction to the ROF model and Chambolle's algorithm. In Section 3, the modified ROF model is introduced, together with a statistical analysis providing some theoretical foundation. Experiments with both simulated and real data are shown in Section 4. Section 5 is dedicated to the conclusion and acknowledgment.
2. Noise model and the ROF model
2.1. Noise model
For a monoenergetic X-ray source and non-scattering conditions, the number of photons collected by a detector obeys the Poisson distribution [10]

    I_d \sim \mathrm{Poisson}\left( I_0 \exp\left( -\int_L f(x)\,dx \right) \right),    (1)

where I_0 indicates the number of emitted photons, I_d the number of photons collected, f(x) the attenuation distribution of the object under examination, and L the X-ray trajectory. CT image reconstruction is an inverse problem: solve for f(x) from a series of measurement data. If the measurement data are noise free, formula (1) can be linearized by a logarithmic transformation,

    p = \ln\frac{I_0}{I_d} = \int_L f(x)\,dx,    (2)

and we call p_i the projection data. For formula (2), the Filtered Back-Projection (FBP) algorithm [16] is usually used to compute f(x). Previous work [16, 17] has shown that low-dose calibrated data can be approximated as the ideal data polluted by non-stationary Gaussian noise n, whose variance and mean are related by

    \sigma_i^2 = f_i \exp\left( \frac{\mu_i}{T} \right),    (3)

where σ_i^2 and μ_i denote the variance and mean, respectively, while f_i and T are parameters determined by the properties of the detectors. We do not intend to deal with p_i. Instead, we deal with the raw data I_d, which we call projection raw data in this paper. The reason is that I_d is less “polluted” than p_i, since fewer mathematical computations are involved; moreover, it is not obvious how to design a denoising method for the noise model of p_i. In real CT systems there are also two kinds of background noise, electrical thermal noise (i.i.d. Gaussian) and round-off errors, and the monoenergetic source is only an ideal assumption. So the Poisson distribution is just an approximate model; strictly speaking, we can only say that the noise is Poisson-like. In real CT systems, I_0 in formulas (1) and (2) is usually sampled several hundred times and averaged, so I_0 can be regarded as ideal. What we should do is to find the ideal value of I_d and then compute p_i.
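As a quick numerical illustration of formulas (1) and (2), the sketch below (with assumed values for I_0 and the line integral, which are not taken from the paper) simulates Poisson photon counts and the log-linearization; the empirical mean and variance of the counts nearly coincide, as expected for Poisson statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

I0 = 5000.0              # number of emitted photons (assumed value)
line_integral = 1.2      # value of the line integral of f along one ray (assumed)

# Formula (1): the detected count I_d is Poisson with mean I0 * exp(-integral).
mean_Id = I0 * np.exp(-line_integral)
Id = rng.poisson(mean_Id, size=200_000)

# Poisson statistics: the empirical variance is close to the empirical mean.
print(Id.mean(), Id.var())

# Formula (2): log-linearization recovers the projection value p.
p = np.log(I0 / np.maximum(Id, 1))   # guard against zero counts
print(p.mean())                      # close to the true line integral 1.2
```

Note that the variance of the counts grows with their mean, which is exactly the signal-dependent behavior that motivates the modified data term in Section 3.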
2.2. ROF model and Chambolle’s method
The ROF model was proposed by Rudin, Osher and Fatemi [9] in 1992. It seeks the denoised image u as the solution of the constrained problem

    \min_u \int_\Omega |\nabla u|\,dx \quad \text{subject to} \quad \int_\Omega (u - f)^2\,dx = \sigma^2 |\Omega|,    (4)

where the total variation is understood (for u regular enough) as

    \mathrm{TV}(u) = \int_\Omega |\nabla u|\,dx.    (5)

It can be turned into an unconstrained minimization problem with a suitable parameter λ:

    \min_u \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2} \int_\Omega (u - f)^2\,dx.    (6)

In formula (6), Ω is the image domain and f is the input image, which is assumed to contain additive white Gaussian noise. In order to solve (6), Chambolle gave a fast algorithm based on a dual formulation in [18]:

    p^{n+1} = \frac{p^n + \tau \nabla\left( \operatorname{div} p^n - \lambda f \right)}{1 + \tau \left| \nabla\left( \operatorname{div} p^n - \lambda f \right) \right|}, \qquad u^{n+1} = f - \frac{1}{\lambda} \operatorname{div} p^{n+1},    (7)

where u^{n+1} denotes the denoised image after the (n + 1)-th iteration, div(·) the divergence operator, and τ the pseudo time step, which should be less than 1/4 for 2D images to guarantee convergence.

3. The modified ROF model - Poisson-ROF
In this section, we will first derive the Poisson-ROF model which is designed for Poisson-like noise, followed by a statistical analysis to justify the model. Then we will describe how to solve the minimization problem.
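For reference, the baseline dual iteration (7) of Section 2.2 can be sketched in a few lines of NumPy. Boundary handling, the step size, and the iteration count below are simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_rof(f, lam=5.0, tau=0.125, n_iter=100):
    """Solve (6): min_u TV(u) + (lam/2)||u - f||^2 via the dual iteration (7).

    tau = 1/8 is the step for which Chambolle's proof guarantees convergence;
    the paper quotes the weaker practical bound tau < 1/4.
    """
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - lam * f)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - div(px, py) / lam
```

Applied to a noisy piecewise-constant image, this reduces the mean squared error while keeping straight edges, which is the behavior the modified model of this section inherits.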
3.1. Derivation of the Poisson-ROF model
In the ROF model, white Gaussian noise is assumed, i.e.

    f(x) = u(x) + n,    (8)

where n is a random variable that obeys N(0, σ²). Note that n does not depend on x. For digital images, this means that if we regard each pixel as corresponding to a random variable, then all the random variables are independent of each other and share the same distribution. In this situation, the image pixels can be thought of as samples of one single random variable, so the variance σ² can be estimated as

    \sigma^2 \approx \frac{1}{|\Omega|} \int_\Omega (f - u)^2\,dx.

For the projection raw data, each “pixel” should instead be regarded as corresponding to a Poisson random variable. Suppose u(x) is the mean of f(x); then the noise standard deviation is √(u(x)). So if we would like to extend the ROF model to deal with Poisson noise, a heuristic choice for the data term is

    \int_\Omega \frac{(f - u)^2}{u}\,dx.    (9)

Based on this data term, we propose the following denoising model, named Poisson-ROF in this paper:

    \min_u \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2} \int_\Omega \frac{(f - u)^2}{u}\,dx.    (10)

Remark 3.1 Mathematically, the data term in (10) suffers from nonlinearity and from degeneration at zeros. These two difficulties could be overcome by changing the term slightly, e.g. to (f − u)²/(u + ε) for a small ε > 0, and then applying a linearization technique such as the “lagged diffusivity” method [19]. For our CT projection raw data, u(x) = 0 means that the X-ray cannot penetrate the object along some trajectory, and the associated information is lost. Since insufficient information deteriorates the reconstructed images, in practice the intensity of the X-ray is increased to prevent this from happening. Therefore, for CT projection raw data, one can assume that u(x) > 0 holds everywhere.
Remark 3.2 It is easy to verify that if u is strictly greater than zero, then the minimization problem (10) is strictly convex, so it admits a unique solution.
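Remark 3.2 can be checked numerically for the pointwise data term φ(u) = (f − u)²/u: since φ(u) = f²/u − 2f + u, its second derivative is 2f²/u³ > 0 for u > 0, so φ satisfies the midpoint-convexity inequality. A small sketch (the test count f = 3 is an arbitrary assumed value):

```python
import numpy as np

f = 3.0  # an arbitrary positive "observed count" for the test

def phi(u):
    # Pointwise data term of the Poisson-ROF model (10).
    return (f - u) ** 2 / u

# phi(u) = f^2/u - 2f + u, hence phi''(u) = 2 f^2 / u^3 > 0 on u > 0.
# Check midpoint convexity on random pairs drawn from (0, 10]:
rng = np.random.default_rng(0)
a, b = rng.uniform(0.1, 10.0, size=(2, 1000))
mid_ok = phi((a + b) / 2) <= 0.5 * (phi(a) + phi(b)) + 1e-12
print(mid_ok.all())  # True
```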
3.2. Justification of the Poisson-ROF model – statistical analysis
Previously, we extended the data term of the ROF model to (9). However, this extension is only formal. In the Gaussian case, the random variable

    \frac{f(x) - u(x)}{\sigma}

obeys the same distribution N(0, 1) for every pixel x. For Poisson distributions, the corresponding random variable obeys a different distribution at each pixel, although, from basic statistics, the random variables share the same mean and variance. To justify (9), we must verify that, for our CT projection raw data, this random variable obeys almost the same distribution independently of x. Suppose f(x) obeys a Poisson distribution with mean u(x), i.e.

    P\{f(x) = k\} = \frac{u(x)^k e^{-u(x)}}{k!}, \quad k = 0, 1, 2, \ldots

Define

    w(x) = \frac{f(x) - u(x)}{\sqrt{u(x)}};

then we need to compare the density functions of w(x) for each x. There is a basic difficulty here. The Poisson distribution of f(x) is defined on the integers, so w(x) is discrete, i.e. defined on certain real numbers, and we can only talk about P{w(x) = θ} for θ ∈ D_x, where D_x is some countable set. The difficulty lies in the fact that D_x depends on x, so we cannot compare the discrete density functions directly. To overcome the difficulty, we extend the Poisson distribution to a continuous random variable with the density function

    p_u(t) = \frac{u^{[t]} e^{-u}}{[t]!}, \quad t \ge 0,

where [·] denotes the rounding operation. In this view of the Poisson distribution, the probability density function of w(x) can be expressed as

    p_{w(x)}(\theta) = \sqrt{u}\,\frac{u^{[\,u + \theta\sqrt{u}\,]} e^{-u}}{[\,u + \theta\sqrt{u}\,]!}.

For real CT systems, the counts of the raw data mostly lie in the range [1000, 5000]. We compute the density function for u(x) = 100, 1000, 3000 and 5000 and plot them in Fig. 1. From Fig. 1, we see that in real CT systems, even though the distributions of w(x) are not exactly the same (they are pixel dependent), they are very similar to each other. We can therefore consider that w(x) obeys almost the same distribution for different x, which justifies the data term (9) and the Poisson-ROF model. For Poisson-like noise, we believe that the above argument about w(x) still holds, so the Poisson-ROF model should remain valid. It should be pointed out that the density function for the very low count case u(x) = 100 (which is unlikely to occur in real CT scanning) is also plotted in Fig. 1. This is to show that the above observation about w(x) remains valid over a much larger range, i.e. our model should be valid even for “very low” dose CT.

3.3. Algorithm to solve the Poisson-ROF model
In the Poisson-ROF model, the data term is nonlinear with respect to u. To ease the problem, we perform an iterative linearization of the data term. Let us first suppose that the variance is known and equals f_NF, i.e.

    \mathrm{Var}(f(x)) = f_{NF}(x),

where f_NF denotes the noise-free version of f. Then the Poisson-ROF model becomes

    \min_u \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2} \int_\Omega \frac{(f - u)^2}{f_{NF}}\,dx.    (13)

The corresponding Euler-Lagrange equation is

    -\operatorname{div}\left( \frac{\nabla u}{|\nabla u|} \right) + \lambda\,\frac{u - f}{f_{NF}} = 0.    (14)

Obviously, f_NF is not available, so during the iterations of the proposed algorithm we approximate it by ū, calculated by bilateral filtering:

    \bar{u}(x) = \frac{1}{W(x)} \sum_{y \in \mathcal{N}(x)} G_s(\|x - y\|)\, G_r(|f(x) - f(y)|)\, f(y),

where W(x) is the normalizing factor and G_s, G_r are Gaussian kernels acting on the spatial and intensity distances, respectively. Chambolle's algorithm cannot be applied to the Poisson-ROF model directly, so we introduce an auxiliary variable v to relax the problem, as in [20]:

    \min_{u, v} \int_\Omega |\nabla u|\,dx + \frac{1}{2\theta} \int_\Omega (u - v)^2\,dx + \frac{\lambda}{2} \int_\Omega \frac{(f - v)^2}{\bar{u}}\,dx.    (15)

Then we solve (15) by alternating between the following two subproblems:
- Minimization with respect to u, for which Chambolle's method can be used, or one of the more recently proposed fast algorithms such as Split-Bregman [21];
- Minimization with respect to v, which is pointwise quadratic and admits the closed-form solution v = (u/θ + λ f/ū) / (1/θ + λ/ū).
In our numerical simulations, θ is set to 0.1, and v is updated every 5 iterations of the u subproblem.
The weighting parameter λ in (13) can be estimated in several ways. If λ is viewed as a Lagrange multiplier, its value should always ensure that the unconstrained problem (13) remains equivalent to the constrained one [9, 22]. In [23], Gilboa et al. suggested updating λ according to the current SNR, while in [24, 25] the authors suggested computing an optimal λ once and keeping it fixed during the iterations. However, a fixed λ tends to over-smooth the image as the iteration number increases. Here we simply update λ so as to keep the Euler-Lagrange equation in balance; multiplying it by (u − f)/ū and integrating gives

    \lambda = \frac{\displaystyle \int_\Omega \operatorname{div}\left( \frac{\nabla u}{|\nabla u|} \right) \frac{u - f}{\bar{u}}\,dx}{\displaystyle \int_\Omega \frac{(u - f)^2}{\bar{u}^2}\,dx}.
4. Experiments
We compare our model with the original ROF model and bilateral filtering on simulated data as well as real CT data. The real CT data sets were produced at different doses. The CT images reconstructed from the raw data processed by our model and the other methods are also shown and compared.
4.1. Simulation data
The simulated data are produced by projecting the Shepp-Logan phantom [26] with a ray-tracing algorithm; 10^4 photons are transmitted for each measurement in this way. The ideal CT image is computed by an analytic method, i.e. FBP. We then pollute the raw data with simulated Poisson noise. Three denoising methods, bilateral filtering, the ROF model and the Poisson-ROF model, are applied to the simulated projection raw data. The results are shown in Fig. 2. In the first column, (a)-(b) show the ideal data and the noisy data, respectively, and (c)-(e) show the data denoised by bilateral filtering, the ROF model and the Poisson-ROF model. The second column shows the corresponding images reconstructed by the FBP algorithm. Comparing the reconstructed images, we see that bilateral filtering preserves the structures of the phantom while suppressing some of the noise. The ROF model removes most of the noise, but the structures (edges) are also diffused, which is unacceptable in real applications. The Poisson-ROF model removes most of the noise while also preserving the details, see Fig. 2(e'). Profiles of each image are given in Fig. 3.
To compare the three denoising methods quantitatively, we computed three quantities that are widely used in the literature to measure image quality: the signal-to-noise ratio (SNR), the normalized mean absolute deviation (NMAD) and the normalized root mean squared error (NRMSE).
The SNR reflects the noise level in the signal, while NMAD and NRMSE measure the deviation between two images in a way that is meaningful to human vision. From Table 1 and Table 2, we see that both the projection raw data and the CT images processed by our model have the best quality.
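The exact formulas behind Tables 1 and 2 are not reproduced here; the sketch below implements one common convention for each quantity (reference image `ref`, test image `x`), which is an assumption rather than the paper's definition:

```python
import numpy as np

def snr(x, ref):
    """Signal-to-noise ratio in dB (one common convention; assumed here)."""
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2))

def nmad(x, ref):
    """Normalized mean absolute deviation."""
    return np.sum(np.abs(x - ref)) / np.sum(np.abs(ref))

def nrmse(x, ref):
    """Normalized root mean squared error."""
    return np.sqrt(np.sum((x - ref) ** 2) / np.sum(ref ** 2))

# Small sanity check: a constant offset of 10% of the signal level.
ref = np.full(10, 2.0)
x = ref + 0.2
print(snr(x, ref), nmad(x, ref), nrmse(x, ref))  # 20 dB, 0.1, 0.1
```

Under these conventions higher SNR is better, while lower NMAD and NRMSE are better.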
4.2. Real CT data
We obtained two sets of projection raw data from an industrial CT system. The first set comes from scanning a cylindroid object made of volcanic rock, and the second from scanning a line-pair phantom used to test spatial resolution. The number of channels per view is 3710, with a total of 720 views evenly spaced over a circular track of 360 degrees. The detector bin size is 0.083 mm, the distance from the source to the center of the turntable is 800 mm, and the distance from the source to the detectors is 1050 mm. The integration time for each view is 5 ms. All raw data are denoised and then transformed into images by the FBP algorithm.
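From these geometry numbers one can estimate the effective sampling at the isocenter: in a fan-beam system the magnification is the source-to-detector distance over the source-to-isocenter distance, so a detector bin projects to bin/magnification at the turntable center. This is a standard relation, sketched here as a quick check:

```python
# Fan-beam geometry of the industrial CT system described above.
src_to_iso = 800.0    # mm, source to center of turntable
src_to_det = 1050.0   # mm, source to detectors
bin_size = 0.083      # mm, detector bin size

magnification = src_to_det / src_to_iso     # 1.3125
bin_at_iso = bin_size / magnification       # effective sampling at isocenter
print(magnification, round(bin_at_iso, 4))  # 1.3125 0.0632
```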
4.2.1. Volcanic rock at different dose
The cylinder of volcanic rock has a diameter of 100 mm, and we set the X-ray source to 160 kVp to acquire the raw data. We first scanned the object with a tube current of 8 mA and reconstructed a high-dose CT image, shown in Fig. 4, and then reduced the current to 2 mA and 1 mA for the low-dose experiments. The reconstructed images are shown in Fig. 5 and Fig. 6. As the current decreases, the noise level grows rapidly, see Fig. 5(a) and Fig. 6(a). After denoising, the CT images display visibly better quality. When zooming in and comparing the details in Fig. 7, we can see that bilateral filtering and the ROF model tend to blur the images, whereas our model keeps the fine structures while removing much of the noise.
We can also observe that for bilateral filtering and the ROF model, the blurring is not evenly distributed: there is less blurring near the center and more blurring far away from it. Suppose that the scanned object is cylindrical with isotropic material and that the X-ray beam is parallel, see Fig. 8. Point A is close to the boundary while B is near the center. Denote by D_{A,α} and D_{B,α} the raw data scanned along the X-rays penetrating A and B at angle α, and by D_{A,β} and D_{B,β} the raw data at angle β. Clearly D_{A,α} is larger than the others, which means D_{A,α} has a higher SNR under Poisson-like noise. Thus the noise levels of the raw data related to A are anisotropic across scanning angles, while they are almost isotropic for B. However, bilateral filtering and the ROF model are designed for Gaussian white noise, which is distributed isotropically. Hence, the CT images reconstructed from raw data processed by bilateral filtering or the ROF model are more blurred far from the center and much less so in the central region. Our model, on the other hand, adapts automatically to different noise levels, so the blurring effects are almost unobservable.
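The claim that higher-count rays have a higher SNR follows directly from Poisson statistics: for a count with mean m, SNR = m/√m = √m. A quick empirical check (the mean values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# For Poisson counts with mean m, SNR = mean/std = sqrt(m): rays through less
# material (larger counts, like D_{A,alpha} in Fig. 8) are relatively less noisy.
ratios = {}
for m in (100, 400, 1600):
    d = rng.poisson(m, size=200_000)
    ratios[m] = d.mean() / d.std()
print(ratios)  # approximately {100: 10.0, 400: 20.0, 1600: 40.0}
```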
4.2.2. Line-pair phantom at low dose
To test the spatial resolution of our algorithm, we scanned a line-pair phantom at 120 kVp and 4 mA. The reconstructed image serves as the ideal-resolution reference, as shown in Fig. 9. We then decreased the current to 0.5 mA and scanned the phantom again, reconstructing images from the low-dose data both without denoising and after denoising with the Poisson-ROF model. The results are shown in Fig. 10.
As shown in Fig. 10(a), in the CT image reconstructed from the low-dose data without denoising, the bars can hardly be distinguished. When the raw data are preprocessed by the Poisson-ROF model, the reconstructed image has visibly better quality and resolution.
5. Conclusion
In this paper, we propose a modified ROF model (Poisson-ROF) and a corresponding fast algorithm. The original ROF model is based on a white Gaussian noise assumption and is therefore not designed for processing the projection raw data from CT systems, which are polluted by Poisson-like noise. The new idea is to modify the data term of the original ROF model on the basis of a statistical analysis. Furthermore, we use a relaxed version of Chambolle's algorithm to solve the new model. Experiments comparing the proposed model with the original ROF model and bilateral filtering validate its advantages.
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under grants 60971131, 61127003 and 60972140, and by the Beijing Education Committee under grants PHR20110509 and KZ201110028034.
References and links
1. A. Berrington de Gonzalez, M. Mahesh, K. Kim, M. Bhargavan, R. Lewis, F. Mettler, and C. Land, “Projected cancer risks from computed tomographic scans performed in the United States in 2007,” Arch. Intern. Med. 169, 2071 (2009). [CrossRef] [PubMed]
2. A. Schilham, B. van Ginneken, H. Gietema, and M. Prokop, “Local noise weighted filtering for emphysema scoring of low-dose CT images,” IEEE Trans. Med. Imaging 25, 451–463 (2006). [CrossRef] [PubMed]
3. M. Tabuchi, N. Yamane, and Y. Morikawa, “Adaptive Wiener filter based on Gaussian mixture model for denoising chest X-ray CT image,” in IEEE Proceedings of SICE 2007 Annual Conference (IEEE, 2007), pp. 682–689. [CrossRef]
4. E. Sidky, C. Kao, and X. Pan, “Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT,” J. X-Ray Sci. Technol. 14, 119–139 (2006).
5. P. La Rivière and D. Billmire, “Reduction of noise-induced streak artifacts in X-ray computed tomography through spline-based penalized-likelihood sinogram smoothing,” IEEE Trans. Med. Imaging 24, 105–111 (2005). [CrossRef] [PubMed]
6. P. La Rivière, “Penalized-likelihood sinogram smoothing for low-dose CT,” Med. Phys. 32, 1676 (2005). [CrossRef] [PubMed]
7. T. Li, X. Li, J. Wang, J. Wen, H. Lu, J. Hsieh, and Z. Liang, “Nonlinear sinogram smoothing for low-dose X-ray CT,” IEEE Trans. Nucl. Sci. 51, 2505–2513 (2004). [CrossRef]
8. J. Wang, T. Li, H. Lu, and Z. Liang, “Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography,” IEEE Trans. Med. Imaging 25, 1272–1283 (2006). [CrossRef] [PubMed]
9. L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992). [CrossRef]
10. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (Society of Photo Optical, 2003), Vol. 114.
11. K. Lange and R. Carson, “EM reconstruction algorithms for emission and transmission tomography,” J. Comput. Assist. Tomogr. 8, 306 (1984). [PubMed]
12. T. Le, R. Chartrand, and T. Asaki, “A variational approach to reconstructing images corrupted by Poisson noise,” J. Math. Imaging Vision 27, 257–263 (2007). [CrossRef]
13. A. Sawatzky, C. Brune, J. Müller, and M. Burger, “Total variation processing of images with Poisson statistics,” in Computer Analysis of Images and Patterns (Springer, 2009), pp. 533–540. [CrossRef]
14. F. Luisier, T. Blu, and M. Unser, “Image denoising in mixed Poisson-Gaussian noise,” IEEE Trans. Image Process. 20(3), 696–708 (2011). [CrossRef]
15. B. Zhang, J. Fadili, and J. Starck, “Wavelets, ridgelets, and curvelets for Poisson noise removal,” IEEE Trans. Image Process. 17, 1093–1108 (2008). [CrossRef] [PubMed]
16. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Service Center, 1988), p. 327.
17. H. Lu, T. Hsiao, X. Li, and Z. Liang, “Noise properties of low-dose CT projections and noise treatment by scale transformations,” in IEEE Nuclear Science Symposium Conference Record 2001 (IEEE, 2001), Vol. 3, pp. 1662–1666.
18. A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vision 20, 89–97 (2004). [CrossRef]
19. T. Chan and P. Mulet, “On the convergence of the lagged diffusivity fixed point method in total variation image restoration,” SIAM J. Numer. Anal. 36, 354–367 (1999). [CrossRef]
20. M. Unger, T. Pock, and H. Bischof, “Continuous globally optimal image segmentation with local constraints,” in Computer Vision Winter Workshop at Slovenian Pattern Recognition Society, Ljubljana, Slovenia (2008).
21. T. Goldstein and S. Osher, “The split Bregman method for L1 regularized problems,” SIAM J. Imaging Sci. 2, 323–343 (2009). [CrossRef]
22. L. Rudin and S. Osher, “Total variation based image restoration with free local constraints,” in Proceedings of the IEEE International Conference on Image Processing 1994 (IEEE, 1994), Vol. 1, pp. 31–35.
23. G. Gilboa, N. Sochen, and Y. Zeevi, “Estimation of optimal PDE-based denoising in the SNR sense,” IEEE Trans. Image Process. 15, 2269–2280 (2006). [CrossRef] [PubMed]
24. B. Wohlberg and Y. Lin, “UPRE method for total variation parameter selection,” Tech. Rep., Los Alamos National Laboratory (LANL) (2008).
25. S. Babacan, R. Molina, and A. Katsaggelos, “Parameter estimation in TV image restoration using variational distribution approximation,” IEEE Trans. Image Process. 17, 326–339 (2008). [CrossRef] [PubMed]
26. H. Gach, C. Tanase, and F. Boada, “2D & 3D Shepp-Logan phantom standards for MRI,” in IEEE 19th International Conference on Systems Engineering 2008 (IEEE, 2008), pp. 521–526. [CrossRef]