
Fast iterative reconstruction of differential phase contrast X-ray tomograms


Abstract

Differential phase-contrast is a recent technique in the context of X-ray imaging. In order to reduce the specimen’s exposure time, we propose a new iterative algorithm that can achieve the same quality as FBP-type methods while using substantially fewer angular views. Our approach is based on 1) a novel spline-based discretization of the forward model and 2) an iterative reconstruction algorithm using the alternating direction method of multipliers. Our experimental results on real data suggest that the method allows the number of required views to be reduced by at least a factor of four.

© 2013 Optical Society of America

1. Introduction

X-ray phase-contrast imaging (PCI) is a promising alternative to absorption-based computed tomography (CT) for visualizing many structures in biological samples, in particular soft tissues. PCI is based on the phase shift induced by the propagation of a coherent wave through the investigated object. Various PCI methods have been developed, including analyzer-based [1–3], interferometric [4–6] and free-space-propagation methods [7–9]. These methods differ substantially in terms of the physical signal that is measured and the required experimental setup.

Weitkamp et al. [6] and Momose et al. [10] recently developed a new X-ray phase-contrast imaging method based on grating interferometry (GI). It has attracted increasing interest in a variety of fields owing to its unique combination of imaging characteristics. First, GI provides a high sensitivity to electron-density variations, down to 0.18 e/nm³ [11]. This makes the technique particularly suitable for soft-tissue specimens. It has been applied successfully to biological samples such as insects [6, 11], rat-brain tissue, and even human breast tissue [12]. Second, GI produces three complementary types of information, namely attenuation, phase-shift and dark-field measurements. In differential phase-contrast imaging (DPCI), one focuses exclusively on the phase information, which in principle allows for reconstructing the refractive-index distribution of the object. Third, GI does not require a highly monochromatic source, which means that conventional laboratory X-ray tubes can be used. The combination of the aforementioned characteristics makes GI suitable for a broad range of applications, such as material sciences (e.g., material testing), biomedical research (e.g., monitoring drug effects), or even clinical diagnostics (e.g., mammography).

DPCI essentially yields the derivative of the Radon transform of the object’s refractive-index map. This map can therefore be retrieved using a suitable variant of the filtered back-projection (FBP) algorithm known from conventional tomography. However, the FBP method requires a large number of viewing angles, which makes the acquisition time very long. The long exposure to radiation can also damage biological samples. This provides a strong motivation for developing algorithms that can handle fewer views, so as to reduce the exposure time.

1.1. Contributions

To this end, it is natural to formulate the reconstruction as an inverse problem that is regularized using prior information on the sample. The first step in such a formulation is the discretization of the forward model. The contributions of this paper are summarized as follows:

  • The proposal of a new discretization scheme for derivatives of the Radon transform based on B-spline calculus. We show that the model lends itself to an analytical treatment, yielding a fast and accurate implementation.
  • The design of an iterative reconstruction algorithm taking advantage of the proposed discretization and using a combination of TV and Tikhonov regularization. Our method follows an augmented-Lagrangian optimization principle and makes use of the conjugate gradient method to solve the linear step in the alternating direction method of multipliers (ADMM).
  • The application of a problem-specific preconditioner that speeds up the convergence of the linear optimization step considerably.
  • The experimental evaluation of the method using real data.

An early version of the proposed algorithm was presented at ISBI 2012 [13].

1.2. Related work

While ADMM has been adapted to several imaging modalities, e.g., magnetic resonance imaging [14] and CT [15], it has not yet been used in the context of DPCI. In fact, only a small number of contributions have proposed to apply iterative reconstruction techniques to DPCI [16–18]. While these techniques also rely on TV regularization, they differ from the present work in the way the forward model is discretized and in terms of the optimization algorithm.

We now briefly review the characteristics of the aforementioned works. Qi et al. [16] express the derivative in the projection domain using derivatives along the two axes in the image domain. Then they implement these derivatives with finite differences. A basic steepest descent algorithm is used to solve the corresponding TV problem.

Köhler et al. [17] and Xu et al. [18] discretize the forward model using Kaiser-Bessel window functions. The advantage of these functions is that the derivative of their Radon transform is known analytically. However, they fail to satisfy the partition of unity, which is a necessary condition for controlling the approximation error [19]. Note that [18] also presents results obtained with rectangular basis functions (splines of degree zero). However, the Radon transform of these functions cannot be differentiated analytically, so the authors have to resort to a numerical approximation of the derivative. In contrast, our work is based on higher-degree splines, which allows for fully analytic derivations. Concerning the optimization algorithm, [17] uses the Newton-Raphson method with a separable paraboloidal surrogate, while [18] relies on an adaptive-steepest-descent algorithm combined with projections onto convex sets [20].

1.3. Outline

In Section 2, we briefly review the physics of differential phase-contrast imaging. The end result is that DPCI can be described mathematically in terms of derivative variants of the Radon transform. We discuss the properties of these operators in Section 3. In Section 4, we propose a discretization of the forward model based on B-spline functions, as well as a fast and accurate implementation. In Section 5, we describe our reconstruction scheme, which is based on ADMM, to solve the regularized reconstruction problem. The framework and the proposed reconstruction method are validated in Section 6 through experiments with real data. We summarize and conclude our work in Section 7.

2. Physical model

An X-ray plane wave can be characterized by its intensity and phase. However, only the intensity is directly measurable. Therefore, to extract the phase, it is necessary to map this information into intensity patterns. In DPCI, this is achieved by using grating interferometers. The principle has been described in detail in [6, 21, 22]; we briefly review it here.

Figure 1 shows the physical setup of DPCI. It consists of an object on a rotation stage, a phase grating (G1) and an analyzer absorption grating (G2), both placed behind the sample. Typically, the phase grating produces a phase shift of π on the incident wave.

Fig. 1 Differential phase-contrast tomography is based on a grating interferometer setup. The phase grating introduces a phase shift of π in the transmitted wave. The absorption grating is necessary for measuring the received wave given the limited resolution of the detector. The diagram on the right shows the received intensity for one pixel of the CCD camera with respect to the position of the grating. The red curve corresponds to the situation where the object is in the illumination path. The black curve shows the received intensity when there is no object.


In order to specify the physical model of DPCI, we first need to set the geometry: θ = (cosθ, sinθ) is a unit vector that is orthogonal to the projection direction and y is the projection coordinate depicted in Figure 1. The spatial coordinate of the 2-D object is denoted by x = (x1, x2).

The phase shift induced by the object on the transmitted beam is given by

$$\phi(y,\theta)=\frac{2\pi}{\lambda}\,\mathcal{R}\{f(\mathbf{x})\}(y,\theta)=\frac{2\pi}{\lambda}\int_{\mathbb{R}^2} f(\mathbf{x})\,\delta\bigl(y-\langle\mathbf{x},\boldsymbol{\theta}\rangle\bigr)\,\mathrm{d}\mathbf{x},$$
where f(x) = n(x) − 1, n(x) is the real part of the object’s refractive index, θ is the angle of view of the monochromatic wave illuminating the object, and ℛ{f(x)}(y, θ) is the Radon transform of f(x).

Definition 1 ([23]). The (2-dimensional) Radon transform ℛ : L₂(ℝ²) → L₂(ℝ × [0, π]) maps a function on ℝ² into the set of its integrals over the lines of ℝ². Specifically, if θ = (cos θ, sin θ) and y ∈ ℝ, then

$$\mathcal{R}\{f(\mathbf{x})\}(y,\theta)=\int_{\mathbb{R}^2} f(\mathbf{x})\,\delta\bigl(y-\langle\mathbf{x},\boldsymbol{\theta}\rangle\bigr)\,\mathrm{d}\mathbf{x}=\int_{\mathbb{R}} f\bigl(y\boldsymbol{\theta}+t\boldsymbol{\theta}^{\perp}\bigr)\,\mathrm{d}t,$$
where θ⊥ is the unit vector perpendicular to θ.

The object introduces changes in the propagation direction of the illumination wave. The refraction angle is proportional to the derivative of the phase of the output wave with respect to y:

$$\alpha(y,\theta)=\frac{\lambda}{2\pi}\,\frac{\partial\phi(y,\theta)}{\partial y},$$
where α(y, θ) is the refraction angle in radians and λ is the wavelength.

After passing through the object, the X-ray beam reaches the phase grating, which introduces a periodic phase modulation. It essentially splits the beam into its first two diffraction orders, so that a periodic interference pattern perpendicular to the optical axis is formed. To measure this pattern with high resolution, one then uses a phase-stepping technique (PST) [6]. To that end, an absorption grating with the same periodicity and orientation as the pattern is placed in front of the detector. In this technique, the absorption grating is moved perpendicular to the optical axis and the intensity signal in each pixel of the detector plane is recorded as a function of the grating position x_g.

When an object is placed on the optical path, the illumination wave is refracted which causes a local shift of the interference pattern. This displacement is given by

$$\Delta x_g(y,\theta)=d\,\alpha(y,\theta),$$
where d is the distance between the two gratings. Combining this relation with (1) and (3) yields
$$g(y,\theta)=\frac{1}{d}\,\Delta x_g(y,\theta)=\frac{\partial\mathcal{R}\{f\}}{\partial y}(y,\theta).$$
For our purpose, this equation characterizes the forward model of DPCI: f is the quantity of interest and g is the physically measurable signal.
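In practice, g has to be estimated from the recorded phase-stepping curves. The paper does not spell out this extraction step, but a common choice (shown here as a hedged sketch, with our own array conventions and function name) is to take the phase of the first Fourier harmonic of the stepping curve in each pixel, with and without the object, and to convert the phase difference into the lateral fringe shift Δx_g:

```python
import numpy as np

def dpc_signal(I_sample, I_ref, p2, d):
    """Estimate g = (1/d) * Delta x_g from phase-stepping curves (sketch, not the paper's code).

    I_sample, I_ref : arrays of shape (S, ...) with the intensity recorded at S equidistant
                      grating positions covering one period p2, with and without the object
                      (if several periods are scanned, the relevant harmonic index changes).
    p2              : period of the absorption grating.
    d               : distance between the two gratings.
    """
    phi_s = np.angle(np.fft.fft(I_sample, axis=0)[1])   # phase of the first harmonic (object)
    phi_r = np.angle(np.fft.fft(I_ref, axis=0)[1])      # phase of the first harmonic (reference)
    dphi = np.angle(np.exp(1j * (phi_s - phi_r)))       # phase difference, wrapped to (-pi, pi]
    dxg = dphi * p2 / (2.0 * np.pi)                     # lateral shift Delta x_g of the pattern
    return dxg / d                                      # refraction angle alpha = dR{f}/dy
```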

3. Review of the derivative variants of the Radon transform and generalized filtered back-projection

Since the measurements are related to the Radon transform and its derivatives, we start by establishing some higher-level mathematical properties of these operators. The nth derivative of the Radon transform of a function f(x) is denoted by

$$\mathcal{R}^{(n)}f(y,\theta)=\frac{\partial^n}{\partial y^n}\,\mathcal{R}f(y,\theta).$$

The derivatives of the Radon Transform are linear operators with the following properties:

  • Scale invariance:
    $$\mathcal{R}^{(n)}\{f(\alpha\mathbf{x})\}(y,\theta)=\alpha^{n-1}\,\mathcal{R}^{(n)}f(\alpha y,\theta),\qquad\alpha\in\mathbb{R}^{+}.$$
  • Pseudo-distributivity with respect to convolution:
    $$\mathcal{R}^{(n)}\{(f*g)(\mathbf{x})\}(y,\theta)=\bigl(\mathcal{R}^{(n)}f(\cdot,\theta)*\mathcal{R}g(\cdot,\theta)\bigr)(y)=\bigl(\mathcal{R}f(\cdot,\theta)*\mathcal{R}^{(n)}g(\cdot,\theta)\bigr)(y).$$
  • Projected translation invariance:
    $$\mathcal{R}^{(n)}\{f(\cdot-\mathbf{x}_0)\}(y,\theta)=\mathcal{R}^{(n)}f\bigl(y-\langle\mathbf{x}_0,\boldsymbol{\theta}\rangle,\theta\bigr).$$

To derive the necessary relations for the rest of the paper, we define a new operator, the Hilbert transform along the second coordinate.

Definition 2. The Hilbert transform along the x2 axis, 2 : L2(ℝ2) → L2(ℝ2), is defined in the Fourier domain as

$$\widehat{\mathcal{H}_2\{f\}}(\omega_1,\omega_2)=-\mathrm{i}\,\mathrm{sgn}(\omega_2)\,\hat f(\omega_1,\omega_2),$$
where (ω1, ω2) are the spatial frequency coordinates corresponding to (x1, x2).

Proposition 1. The sequential application of the Radon transform, the nth derivative operator and the adjoint of the Radon transform on function f(x) ∈ L2(ℝ2) is

$$\mathcal{R}^{*}\Bigl\{\frac{\partial^n}{\partial y^n}\mathcal{R}\{f\}(y,\theta)\Bigr\}(\mathbf{x})=2\pi\,(-1)^{n}\,\mathcal{H}_2^{\,n}\,(-\Delta)^{\frac{n-1}{2}}\{f\}(\mathbf{x}),$$
where (−Δ)^{1/2} is the fractional Laplace operator with frequency response ‖ω‖ and (−Δ)^{n/2} denotes its n-fold application.

Proof. Let g(y, θ) = ℛf(y, θ). The Fourier Slice Theorem states that:

$$\hat g(\omega,\theta)=\hat f(\omega\cos\theta,\omega\sin\theta).$$

The Fourier transform of the nth derivative of g(y, θ) with respect to y is (iω)ⁿ ĝ(ω, θ). Thus,

$$(\mathrm{i}\omega)^n\,\hat g(\omega,\theta)=\mathrm{i}^n\,\mathrm{sgn}^n(\omega)\,|\omega|^{n}\,\hat f(\omega\cos\theta,\omega\sin\theta),$$
where |ω| = ‖ω‖ with ω = (ω₁, ω₂) = ω(cos θ, sin θ). For θ ∈ (0, π), sgn(ω) = sgn(ω sin θ) = sgn(ω₂). The space-domain equivalent is
$$\frac{\partial^n}{\partial y^n}\mathcal{R}f(y,\theta)=\mathcal{R}\Bigl\{\bigl((-1)^n\,\mathcal{H}_2^{\,n}\,(-\Delta)^{\frac{n}{2}}\{f\}\bigr)(\mathbf{x})\Bigr\}(y,\theta),\qquad\theta\in(0,\pi).$$
Therefore we have
$$\mathcal{R}^{*}\Bigl\{\frac{\partial^n}{\partial y^n}\mathcal{R}\{f\}(y,\theta)\Bigr\}(\mathbf{x})=\mathcal{R}^{*}\mathcal{R}\Bigl\{\bigl((-1)^n\,\mathcal{H}_2^{\,n}\,(-\Delta)^{\frac{n}{2}}\{f\}\bigr)(\mathbf{x})\Bigr\}(y,\theta),\qquad\theta\in(0,\pi),$$
which, owing to the property ℛ*ℛ = 2π(−Δ)^{−1/2}, yields the desired result.

Equation (11) is a key relation. It implies that, in order to solve the inverse problem

$$\mathcal{R}^{(n)}f(y,\theta)=g(y,\theta),$$
a direct approach is to first apply the Radon adjoint to the measured data and then apply the inverse of the operator 2π(−1)ⁿℋ₂ⁿ(−Δ)^{(n−1)/2}. The frequency response of this inverse is $\frac{1}{2\pi\,\mathrm{i}^n\,\mathrm{sgn}^n(\omega_2)\,\|\boldsymbol{\omega}\|^{n-1}}$.

An equivalent form of (11), using the fact that (∂/∂y)* = −∂/∂y, is

$$\mathcal{R}^{(n)*}\bigl\{\bigl(q*\mathcal{R}^{(n)}f(\cdot,\theta)\bigr)(y)\bigr\}(\mathbf{x})=f(\mathbf{x}),$$
where ℛ^(n)* is the adjoint of the nth derivative of the Radon transform and the filter q(y) has frequency response
$$\hat q(\omega_y)=\frac{1}{2\pi}\times\frac{1}{|\omega_y|^{2n-1}}.$$
Equation (17) is the basis for the generalized filtered back-projection (GFBP). The full procedure is described in Algorithm 1.

Algorithm 1. Generalized filtered back-projection (GFBP) for the inverse problem ℛ_θ^(n) f(y) = g_θ(y).
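For illustration, the following numpy sketch implements the two steps of Algorithm 1 for the DPCI case n = 1: filtering of each projection with q, then application of ℛ^(1)*, i.e., back-projection of the negative derivative of the filtered data. The grid conventions, the nearest-neighbour interpolation and the handling of the unrecoverable DC component are our own simplifications, not the paper's implementation.

```python
import numpy as np

def gfbp(g, thetas, dy, n=1):
    """Generalized FBP sketch for R^(n) f = g.

    g      : (P, K) sinogram of the n-th Radon derivative.
    thetas : (P,) projection angles in (0, pi).
    dy     : detector sampling step.
    Returns a K x K reconstruction on the grid x = (j - K/2) * dy (our convention).
    """
    P, K = g.shape
    # Combine the filter q (|w|^{1-2n} / (2*pi)) with the adjoint derivative (-i w)^n
    # into a single 1-D frequency response applied to every projection.
    w = 2.0 * np.pi * np.fft.rfftfreq(K, d=dy)
    H = np.zeros_like(w, dtype=complex)
    H[1:] = (-1j * w[1:])**n / (2.0 * np.pi * np.abs(w[1:])**(2 * n - 1))  # DC set to 0
    g_f = np.fft.irfft(np.fft.rfft(g, axis=1) * H, n=K, axis=1)
    # Back-projection (adjoint of the Radon transform), angular weight pi/P.
    xs = (np.arange(K) - K / 2) * dy
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")
    f = np.zeros((K, K))
    for p, th in enumerate(thetas):
        y = X1 * np.cos(th) + X2 * np.sin(th)
        j = np.round(y / dy + K / 2).astype(int)       # nearest detector sample
        inside = (j >= 0) & (j < K)
        f[inside] += g_f[p, j[inside]]
    return f * (np.pi / P)
```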

4. Discretization scheme and implementation of the forward model

In this section we define discrete versions of the Radon transform and its derivatives which are consistent with the continuous-domain operators. This is the basis for a fast and accurate implementation of the DPCI forward model.

4.1. Discrete model of the Radon transform and its derivatives

We propose a formulation that relies on a practical generalized-sampling scheme [19]. Let φ(x) ∈ L₂(ℝ²) denote the generator of a Riesz basis of the approximation space

$$V(\varphi)=\Bigl\{f(\mathbf{x})=\sum_{\mathbf{k}\in\mathbb{Z}^2}c_{\mathbf{k}}\,\varphi(\mathbf{x}-\mathbf{k})\;:\;c\in\ell_2(\mathbb{Z}^2)\Bigr\}.$$

The orthogonal projection of the object f(x) on the reconstruction space V is

$$f(\mathbf{x})=\sum_{\mathbf{k}\in\mathbb{Z}^2}c_{\mathbf{k}}\,\varphi_{\mathbf{k}}(\mathbf{x}),$$
where φ_k(x) = φ(x − k) and c_k = 〈f, φ̃(· − k)〉, in which φ̃(x) is the dual of φ(x), with Fourier transform $\hat{\tilde\varphi}(\boldsymbol{\omega})=\hat\varphi(\boldsymbol{\omega})\big/\sum_{\mathbf{k}\in\mathbb{Z}^2}\bigl|\hat\varphi(\boldsymbol{\omega}-2\pi\mathbf{k})\bigr|^{2}$.

Using the linearity and translation invariance properties of the Radon transform, the Radon transform of f(x) is

$$\mathcal{R}\{f\}(y,\theta)=\sum_{\mathbf{k}\in\mathbb{Z}^2}c_{\mathbf{k}}\,\mathcal{R}\{\varphi_{\mathbf{k}}\}(y,\theta),$$
where
$$\mathcal{R}\{\varphi_{\mathbf{k}}\}(y,\theta)=\mathcal{R}\{\varphi\}\bigl(y-\langle\boldsymbol{\theta},\mathbf{k}\rangle,\theta\bigr),$$
and ℛ{φ}(y, θ) = ℱ⁻¹{φ̂(ωθ)}(y). Likewise, we obtain the derivatives of the Radon transform:
$$\mathcal{R}^{(n)}\{f\}(y,\theta)=\sum_{\mathbf{k}\in\mathbb{Z}^2}c_{\mathbf{k}}\,\mathcal{R}^{(n)}\{\varphi_{\mathbf{k}}\}(y,\theta),$$
where
$$\mathcal{R}^{(n)}\{\varphi_{\mathbf{k}}\}(y,\theta)=\mathcal{R}^{(n)}\{\varphi\}\bigl(y-\langle\boldsymbol{\theta},\mathbf{k}\rangle,\theta\bigr).$$
Based on Eqs. (22) and (24) we need to choose an appropriate generating function and to determine its Radon transform analytically. In this work we use a tensor-product B-spline, which provides a good compromise between support size and approximation order [24]:
$$\varphi(\mathbf{x})=\beta^{m}(\mathbf{x})=\beta^{m}(x_1)\,\beta^{m}(x_2),$$
where β^m is the centered univariate B-spline of degree m,
$$\beta^{m}(x)=\Delta_1^{m+1}\frac{x_+^{m}}{m!}\;\xleftrightarrow{\;\mathcal{F}\;}\;\hat\beta^{m}(\omega)=\left(\frac{\sin(\omega/2)}{\omega/2}\right)^{m+1},$$
and Δ_h^n f(y) denotes the n-fold iteration of the finite-difference operator Δ_h f(y) = (f(y + h/2) − f(y − h/2))/h.

Proposition 2. The n-th derivative of the Radon transform of the tensor-product B-spline is given by

$$\mathcal{R}^{(n)}\{\beta^{m}(\mathbf{x})\}(y,\theta)=\Delta_{\cos\theta}^{m+1}\Delta_{\sin\theta}^{m+1}\frac{y_+^{\,2m-n+1}}{(2m-n+1)!}=\sum_{k_1=0}^{m+1}\sum_{k_2=0}^{m+1}(-1)^{k_1+k_2}\binom{m+1}{k_1}\binom{m+1}{k_2}\,\frac{\Bigl(y+\bigl(\tfrac{m+1}{2}-k_1\bigr)\cos\theta+\bigl(\tfrac{m+1}{2}-k_2\bigr)\sin\theta\Bigr)_{+}^{2m-n+1}}{(2m-n+1)!\,\cos^{m+1}\theta\,\sin^{m+1}\theta}.$$
Proof. We define g(y, θ) = ℛ^(n){β^m}(y, θ). By an application of the Fourier-slice theorem,
$$g(y,\theta)=\mathcal{F}^{-1}\bigl\{(\mathrm{j}\omega)^n\,\hat\beta^{m}(\omega\cos\theta)\,\hat\beta^{m}(\omega\sin\theta)\bigr\}(y).$$
Let us now focus on the Fourier-domain expression
$$(\mathrm{j}\omega)^n\hat\beta^{m}(\omega\cos\theta)\hat\beta^{m}(\omega\sin\theta)=(\mathrm{j}\omega)^n\left(\frac{\sin\bigl(\frac{\omega\cos\theta}{2}\bigr)}{\frac{\omega\cos\theta}{2}}\right)^{m+1}\left(\frac{\sin\bigl(\frac{\omega\sin\theta}{2}\bigr)}{\frac{\omega\sin\theta}{2}}\right)^{m+1}=(\mathrm{j}\omega)^n\left(\frac{\mathrm{e}^{\mathrm{j}\frac{\omega\cos\theta}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\omega\cos\theta}{2}}}{\mathrm{j}\omega\cos\theta}\right)^{m+1}\left(\frac{\mathrm{e}^{\mathrm{j}\frac{\omega\sin\theta}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\omega\sin\theta}{2}}}{\mathrm{j}\omega\sin\theta}\right)^{m+1}=\left(\frac{\mathrm{e}^{\mathrm{j}\frac{\omega\cos\theta}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\omega\cos\theta}{2}}}{\cos\theta}\right)^{m+1}\left(\frac{\mathrm{e}^{\mathrm{j}\frac{\omega\sin\theta}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\omega\sin\theta}{2}}}{\sin\theta}\right)^{m+1}\frac{1}{(\mathrm{j}\omega)^{2m-n+2}}.$$
Taking the inverse Fourier transform of this expression yields the desired result.

To discretize our forward model, we need to evaluate the derivatives of the Radon transform of f at the locations (y_j, θ_i), where y_j = jΔ_y and θ_i = iΔ_θ:

$$\mathcal{R}^{(n)}f(y_j,\theta_i)=\sum_{\mathbf{k}\in\mathbb{Z}^2}c_{\mathbf{k}}\,\mathcal{R}^{(n)}\varphi_{\mathbf{k}}(y_j,\theta_i).$$
The values ℛ^(n)φ_k(y_j, θ_i) are then determined using Proposition 2.
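As an illustration, the closed form of Proposition 2 translates directly into a few lines of numpy. The sketch below assumes angles for which both cos θ and sin θ are nonzero (at multiples of π/2 the expression degenerates and the separable 1-D limit would have to be used instead); the function name and defaults are ours.

```python
import numpy as np
from math import comb, factorial

def radon_deriv_bspline(y, theta, m=3, n=1):
    """R^(n) of the tensor-product B-spline beta^m at (y, theta), via Proposition 2.

    Valid only when cos(theta) != 0 and sin(theta) != 0 (assumption of this sketch).
    """
    c, s = np.cos(theta), np.sin(theta)
    p = 2 * m - n + 1                              # degree of the one-sided power functions
    y = np.asarray(y, dtype=float)
    total = np.zeros_like(y)
    for k1 in range(m + 2):
        for k2 in range(m + 2):
            t = y + ((m + 1) / 2 - k1) * c + ((m + 1) / 2 - k2) * s
            total += ((-1) ** (k1 + k2) * comb(m + 1, k1) * comb(m + 1, k2)
                      * np.where(t > 0.0, t ** p, 0.0))
    return total / (factorial(p) * c ** (m + 1) * s ** (m + 1))
```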

4.2. Matrix formulation of the forward model

To describe the forward model in a more concise way, we now introduce an equivalent matrix formulation of Eq. (30)

$$\mathbf{g}=\mathbf{H}\mathbf{c},$$
where c is a vector whose entries are the B-spline coefficients c_k, g is the vector of measurements, and H is the system matrix defined by
$$[\mathbf{H}]_{(i,j),\mathbf{k}}=\mathcal{R}^{(1)}\varphi_{\mathbf{k}}(y_j,\theta_i).$$
The corresponding adjoint operator is represented by the conjugate transpose of the system matrix.

In our implementation we use cubic B-spline basis functions and take advantage of the closed-form expression (28). In the next section, we present an effective algorithm for calculating the forward model and its adjoint.

4.3. Fast implementation

The calculation of the Radon transform (or its variants) involves the application of the system matrix on the B-spline coefficients of the object. However the system matrix H cannot be stored explicitly due to its size. To circumvent this problem we exploit the translation-invariance of the Radon transform. This property implies that all the matrix entries in Eq. (32) can be derived from a single derivative of the Radon transform, namely that of the generating function φ:

$$[\mathbf{H}]_{(i,j),\mathbf{k}}=\mathcal{R}^{(1)}\varphi\bigl(y_j-\langle\mathbf{k},\boldsymbol{\theta}_i\rangle,\theta_i\bigr).$$

To improve the speed, we oversample ℛ^(n)φ(y, θ) on a fine grid Y × Θ, with 100 samples along each angular direction, and store the values in a lookup table L. To compute the matrix entries, we define the mapping

$$I:\;\mathbb{R}\times[0,\pi]\;\to\;\{1,2,\ldots,K\}\times\{1,2,\ldots,P\},\qquad(y,\theta)\mapsto(j,i),$$
where K is the number of samples along each direction, P is the number of projections, and (Y(j), Θ(i)) is the sample of Y × Θ that is nearest to (y, θ). Therefore, we have
$$[\mathbf{H}]_{(i,j),\mathbf{k}}\approx L_{I\left(y_j-\langle\mathbf{k},\boldsymbol{\theta}_i\rangle,\;\theta_i\right)}.$$

Note that the algorithm can easily be parallelized, since projections corresponding to different angles are completely independent of each other. We designed a multithreaded implementation for an 8-core workstation which allows for 8 simultaneous projection computations. Similarly, for the adjoint of the forward model, the computation can be parallelized with respect to each object point.

In summary, our implementation is based on an accurate continuous-to-discrete model. Moreover it is fast thanks to the use of look-up tables and multi-threading. Note that our method could also be adapted to a fan-beam geometry, which can always be mapped back to the parallel one. This would lead to a non-uniform sampling pattern but our method can account for this at no additional cost (thanks to our look-up-table-based implementation).
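The look-up-table idea can be sketched as follows. Here `rn_spline(y, theta)` stands for any routine that evaluates ℛ^(1){β³}(y, θ), for instance the closed-form function sketched above; for simplicity we tabulate one fine y-profile per projection angle rather than a joint fine grid in (y, θ), and the centered-grid convention is our own assumption.

```python
import numpy as np

def build_lut(rn_spline, thetas, y_fine):
    """Tabulate R^(1){beta^3}(y, theta_i) on a fine y-grid, one row per projection angle."""
    return np.array([rn_spline(y_fine, th) for th in thetas])      # shape (P, len(y_fine))

def forward_lut(c, thetas, y_det, lut, y_fine, m=3):
    """Apply the system matrix H to the B-spline coefficients c (K x K) via LUT lookups."""
    K = c.shape[0]
    step = y_fine[1] - y_fine[0]
    g = np.zeros((len(thetas), len(y_det)))
    ks = np.arange(K) - K // 2                                     # centered integer knots
    K1, K2 = np.meshgrid(ks, ks, indexing="ij")
    for i, th in enumerate(thetas):
        centers = K1 * np.cos(th) + K2 * np.sin(th)                # <k, theta_i> per coefficient
        radius = 0.5 * (m + 1) * (abs(np.cos(th)) + abs(np.sin(th)))   # support of the profile
        for k1, k2, ctr in zip(K1.ravel(), K2.ravel(), centers.ravel()):
            ck = c[k1 + K // 2, k2 + K // 2]
            if ck == 0.0:
                continue
            sel = np.abs(y_det - ctr) <= radius                    # detector samples hit by phi_k
            idx = np.clip(np.round((y_det[sel] - ctr - y_fine[0]) / step).astype(int),
                          0, len(y_fine) - 1)                      # nearest fine-grid sample
            g[i, sel] += ck * lut[i, idx]
    return g
```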

4.4. B-spline-based discretization scheme vs. Kaiser-Bessel window functions

To compare the performance of the two families of basis functions, we test their consistency with a continuously-defined sample, both with respect to the forward model and with respect to the reconstruction problem. It is difficult to do this with real data for two reasons. First, we do not have access to the ground truth. Second, owing to the limited number of viewing angles and the presence of noise, one needs to apply some regularization which has a significant effect on the quality of the reconstruction when the problem is ill-posed (see Section 5). To factor out this effect, we introduce a synthetic sample for which the derivative of the Radon transform can be computed analytically.

Proposition 3. The derivative of the Radon transform of the isotropic function

$$p_a(\mathbf{x})=\begin{cases}(a^2-\|\mathbf{x}\|^2)^2, & \|\mathbf{x}\|\le a\\[2pt] 0, & \text{otherwise,}\end{cases}$$
where $\|\mathbf{x}\|=\sqrt{x_1^2+x_2^2}$, is
$$\frac{\partial\,\mathcal{R}p_a(y,\theta)}{\partial y}=-\frac{16}{3}\,y\,\bigl(a^2-y^2\bigr)^{\frac{3}{2}}.$$

Proof. Since the function is isotropic, its Radon transform is independent of the direction. For simplicity we consider θ = 0 whose projection coordinate is parallel to the x1 axis. Thus we have

$$\frac{\partial\,\mathcal{R}p_a(y,\theta)}{\partial y}=\frac{\partial}{\partial y}\,2\int_0^{\sqrt{a^2-y^2}}\bigl(a^2-y^2-x_2^2\bigr)^2\,\mathrm{d}x_2,$$
which yields the desired result.

The linearity and the projected translation invariance of the derivative of the Radon transform then allow for an analytic computation of the projection of any function of the form

$$f(\mathbf{x})=\sum_{k}\alpha_k\,p_{a_k}(\mathbf{x}-\mathbf{x}_k),$$
where α_k ∈ ℝ and x_k ∈ ℝ². Figure 2(a) shows the analytic phantom that we consider as the object of interest. It was generated using 10 random radii a_k, 10 random locations x_k and 10 random intensities α_k. Its differentiated sinogram, computed for 1800 viewing angles chosen uniformly between 0 and π, is displayed in Figure 2(b).
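The analytic derivative sinogram of such a phantom can be evaluated in a few lines; the sketch below combines Proposition 3 with the projected-translation-invariance property. The function name and the numerical ranges in the usage example (radii, positions, amplitudes) are arbitrary choices of ours, since the text does not specify them.

```python
import numpy as np

def deriv_sinogram_phantom(centers, radii, amps, y, thetas):
    """Analytic values of dR{f}/dy for f(x) = sum_k amps[k] * p_{radii[k]}(x - centers[k])."""
    centers = np.asarray(centers, dtype=float)        # shape (K, 2)
    g = np.zeros((len(thetas), len(y)))
    for i, th in enumerate(thetas):
        t = np.array([np.cos(th), np.sin(th)])
        for xk, a, amp in zip(centers, radii, amps):
            yy = y - xk @ t                            # projected translation invariance
            inside = np.abs(yy) < a
            g[i, inside] += amp * (-16.0 / 3.0) * yy[inside] * (a**2 - yy[inside]**2) ** 1.5
    return g

# Usage example (arbitrary parameters): 10 random bumps, 1800 uniform angles in [0, pi).
rng = np.random.default_rng(0)
g = deriv_sinogram_phantom(rng.uniform(-0.5, 0.5, (10, 2)), rng.uniform(0.05, 0.2, 10),
                           rng.uniform(0.5, 1.5, 10), np.linspace(-1.0, 1.0, 512),
                           np.linspace(0.0, np.pi, 1800, endpoint=False))
```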

Fig. 2 (a) Analytic phantom. (b) Analytical values of the derivative of the Radon transform of the analytic phantom.

In our first experiment we computed the derivative of the Radon transform of the object using our discretization scheme and the one proposed in [18], which is based on Kaiser-Bessel functions. The signal-to-noise ratios (SNR) of both methods were computed with respect to the analytical projection of the phantom. The results are shown in Table 1 and demonstrate that the projections obtained with B-spline functions are more accurate than those obtained with the Kaiser-Bessel functions used in [18], and this despite the fact that the initial object is perfectly isotropic and the B-splines are not.

Table 1. Comparison of the projection and reconstruction accuracy using cubic B-splines and Kaiser-Bessel functions with the parameters proposed in [18].

In our second experiment, we used the analytical projections along 1800 viewing angles as the measurement and we reconstructed the object by minimizing the quadratic cost function

$$J(\mathbf{c})=\frac{1}{2}\,\|\mathbf{H}\mathbf{c}-\mathbf{g}\|^2,$$
where H is the discretized forward operator, c contains the expansion coefficients in the chosen basis and g is the measurement vector. We use the sampled version of the object on a 1024 × 1024 grid as the ground truth. Since the data is correctly sampled and noise-free, there is no need for regularization, and the reconstruction was performed using the standard CG algorithm. We used the same basis functions as in our first experiment. The results are shown in the last row of Table 1 and again demonstrate the advantage of our spline-based formulation. They confirm that the partition-of-unity condition provides a significant advantage in terms of reconstruction quality. This condition is necessary and sufficient for the approximation error to vanish as the grid size tends to zero [19]. Moreover, for a constrained support size, polynomial B-splines are known to provide the largest-possible order of approximation, and hence the minimal approximation error [26].
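A minimal way to set up this unregularized least-squares reconstruction is to run CG on the normal equations, wrapping the forward model and its adjoint in a linear operator; `Hop` and `Hadj` below are placeholder callables, not functions defined in the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def reconstruct_ls(Hop, Hadj, g, shape, maxiter=200):
    """Minimize 0.5*||Hc - g||^2 by applying CG to H^T H c = H^T g.

    Hop(c)  : applies the discretized forward model to an image of the given shape.
    Hadj(y) : applies its adjoint to a sinogram, returning an image of the given shape.
    """
    N = int(np.prod(shape))
    A = LinearOperator((N, N),
                       matvec=lambda v: Hadj(Hop(v.reshape(shape))).ravel(),
                       dtype=float)
    b = Hadj(g).ravel()
    c, info = cg(A, b, maxiter=maxiter)
    return c.reshape(shape)
```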

5. Image reconstruction

We have already presented a direct method for inverting (31), namely the generalized filtered back-projection (GFBP) described in Algorithm 1. However, for large images with a limited number of measurements, direct methods such as FBP are not accurate enough. Thus, in this section, we formulate the reconstruction as a penalized least-squares optimization problem.

The cost function we want to minimize is:

$$J(\mathbf{c})=\frac{1}{2}\,\|\mathbf{H}\mathbf{c}-\mathbf{g}\|^2+\Psi(\mathbf{c}),$$
where g is the measurement vector and the regularization term Ψ(c) is chosen based on the following considerations. First, we observe that the null space of the derivative of the Radon transform contains the constant (zero-frequency) component. To compensate for this, we include the energy of the coefficients in the regularization term. Second, to enhance the edges in the reconstructed image, we impose a sparsity constraint in the gradient domain. This leads to
$$\Psi(\mathbf{c})=\frac{\lambda_1}{2}\,\|\mathbf{c}\|^2+\lambda_2\sum_{\mathbf{k}}\bigl\|\{\mathbf{L}\mathbf{c}\}_{\mathbf{k}}\bigr\|_1,$$
where λ₁ and λ₂ are regularization parameters and {Lc}_k ∈ ℝ² denotes the gradient of the image at position k. More precisely, the gradient is computed based on our continuous-domain image model, using the following property.

Proposition 4. Let f(x) = Σ_{k∈ℤ²} c_k β^n(x − k). The gradient of f on the Cartesian grid is

$$\begin{aligned}\frac{\partial f}{\partial x_1}[k_1,k_2]&=\bigl((h_1[\cdot,k_2]*c[\cdot,k_2])[k_1,\cdot]*b_2[k_1,\cdot]\bigr)[k_1,k_2]\\ \frac{\partial f}{\partial x_2}[k_1,k_2]&=\bigl((h_2[k_1,\cdot]*c[k_1,\cdot])[\cdot,k_2]*b_1[\cdot,k_2]\bigr)[k_1,k_2],\end{aligned}$$
where k₁, k₂ ∈ ℤ, h_i[k₁,k₂] = β^{n−1}(k_i + ½) − β^{n−1}(k_i − ½), and b_i[k₁,k₂] = β^n(k_i) for i = 1, 2.
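For cubic splines, the filters of Proposition 4 reduce to the short kernels h = [1/2, 0, −1/2] and b = [1/6, 2/3, 1/6], so the discrete gradient amounts to two separable 1-D convolutions per component. The sketch below shows this for an image of coefficients `c`; the boundary handling (`mode='reflect'`) is our own choice.

```python
import numpy as np
from scipy.ndimage import convolve1d

def spline_gradient(c):
    """Gradient of f(x) = sum_k c_k beta^3(x - k) sampled on the integer grid (Prop. 4).

    h[k] = beta^2(k + 1/2) - beta^2(k - 1/2) = [1/2, 0, -1/2] for k = -1, 0, 1
    b[k] = beta^3(k)                         = [1/6, 2/3, 1/6] for k = -1, 0, 1
    """
    h = np.array([0.5, 0.0, -0.5])
    b = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
    fx1 = convolve1d(convolve1d(c, h, axis=0, mode="reflect"), b, axis=1, mode="reflect")
    fx2 = convolve1d(convolve1d(c, h, axis=1, mode="reflect"), b, axis=0, mode="reflect")
    return np.stack([fx1, fx2])               # shape (2, K1, K2): {Lc}_k for every pixel k
```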

High-dimensional sparsity-regularized problems are typically solved using the family of iterative shrinkage/thresholding algorithms (ISTA). The convergence speed of these methods depends on the conditioning of HᵀH. In our case, the derivative of the Radon transform is a very ill-conditioned operator, leading to very slow convergence. To overcome this difficulty, we use a variable-splitting scheme so as to map our general optimization problem into simpler ones. Specifically, we introduce the auxiliary variable u [14, 27–30] and reformulate the TV problem (42) as the linear equality-constrained problem

$$\mathbf{c}^{*}=\arg\min_{\mathbf{c},\mathbf{u}}\Bigl\{\frac{1}{2}\|\mathbf{H}\mathbf{c}-\mathbf{g}\|^2+\frac{\lambda_1}{2}\|\mathbf{c}\|^2+\lambda_2\sum_{\mathbf{k}}\|\mathbf{u}_{\mathbf{k}}\|_1\Bigr\}\quad\text{subject to}\quad\mathbf{u}=\mathbf{L}\mathbf{c}.$$
We can map this into an unconstrained problem by considering the augmented-Lagrangian functional
$$\mathcal{L}_{\mu}(\mathbf{c},\mathbf{u},\boldsymbol{\alpha})=\frac{1}{2}\|\mathbf{H}\mathbf{c}-\mathbf{g}\|^2+\frac{\lambda_1}{2}\|\mathbf{c}\|^2+\lambda_2\sum_{\mathbf{k}}\|\mathbf{u}_{\mathbf{k}}\|_1+\boldsymbol{\alpha}^T(\mathbf{L}\mathbf{c}-\mathbf{u})+\frac{\mu}{2}\|\mathbf{L}\mathbf{c}-\mathbf{u}\|^2,$$
where α is a vector of Lagrange multipliers. To minimize (45), we adopt a cyclic update scheme known as the alternating direction method of multipliers (ADMM), which consists of the iterations
$$\begin{cases}\mathbf{c}_{k+1}\leftarrow\arg\min_{\mathbf{c}}\;\mathcal{L}_{\mu}(\mathbf{c},\mathbf{u}_k,\boldsymbol{\alpha}_k)\\ \mathbf{u}_{k+1}\leftarrow\arg\min_{\mathbf{u}}\;\mathcal{L}_{\mu}(\mathbf{c}_{k+1},\mathbf{u},\boldsymbol{\alpha}_k)\\ \boldsymbol{\alpha}_{k+1}\leftarrow\boldsymbol{\alpha}_k+\mu(\mathbf{L}\mathbf{c}_{k+1}-\mathbf{u}_{k+1}).\end{cases}$$
ℒ_μ(c, u_k, α_k) is a quadratic function of c whose gradient is
$$\nabla\mathcal{L}_{\mu}(\mathbf{c},\mathbf{u}_k,\boldsymbol{\alpha}_k)=\underbrace{\bigl(\mathbf{H}^T\mathbf{H}+\mu\mathbf{L}^T\mathbf{L}+\lambda_1\mathbf{I}\bigr)}_{\mathbf{A}}\,\mathbf{c}-\underbrace{\Bigl(\mathbf{H}^T\mathbf{g}+\mu\mathbf{L}^T\bigl(\mathbf{u}_k-\tfrac{\boldsymbol{\alpha}_k}{\mu}\bigr)\Bigr)}_{\mathbf{b}}.$$
We minimize ℒ_μ(c, u_k, α_k) iteratively, using the conjugate-gradient (CG) algorithm to solve the equation Ac = b. Since A has a large condition number, it is helpful to introduce a preconditioning matrix M. This matrix is chosen such that M⁻¹(HᵀH + μLᵀL + λ₁I) has a condition number close to 1. To design this problem-specific preconditioner, we use the following property of the forward model.

Proposition 5. The successive application of the nth derivative of the Radon transform and its adjoint is a highpass filter with frequency response proportional to ‖ω‖^{2n−1}:

$$\mathcal{R}^{(n)*}\mathcal{R}^{(n)}\{f\}(\mathbf{x})=2\pi\times(-\Delta)^{\frac{2n-1}{2}}\{f\}(\mathbf{x}).$$
Proof. This follows from (∂/∂y)* = −∂/∂y and Proposition 1.

This shows that ℛ^(1)*ℛ^(1) has a frequency response that is proportional to ‖ω‖, while LᵀL is a discretized Laplace operator whose continuous-domain frequency response is ‖ω‖². Thus, the preconditioner M⁻¹ that we use in the discrete domain is the discrete filter that approximates the frequency response $\frac{1}{\|\boldsymbol{\omega}\|+\mu\|\boldsymbol{\omega}\|^2+\lambda_1}$.
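Such a preconditioner can be applied with two FFTs per call, together with a short preconditioned-CG routine for the linear step. The sketch below assumes a unit sampling step and ignores the 2π constant in front of ℛ^(1)*ℛ^(1); both simplifications are ours.

```python
import numpy as np

def make_preconditioner(shape, mu, lam1):
    """Fourier-domain filter approximating (H^T H + mu L^T L + lam1 I)^{-1}.

    H^T H is modelled by |w| and L^T L by |w|^2 (up to constants), as suggested by Prop. 5.
    """
    w1 = 2.0 * np.pi * np.fft.fftfreq(shape[0])
    w2 = 2.0 * np.pi * np.fft.fftfreq(shape[1])
    W = np.sqrt(w1[:, None] ** 2 + w2[None, :] ** 2)
    D = W + mu * W ** 2 + lam1
    return lambda r: np.real(np.fft.ifft2(np.fft.fft2(r) / D))

def pcg(A, b, Minv, x0, n_iter=2):
    """A few preconditioned conjugate-gradient iterations for A x = b (A, Minv are callables)."""
    x = x0.copy()
    r = b - A(x)
    z = Minv(r)
    p = z.copy()
    rz = np.vdot(r, z)
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rz / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        z = Minv(r)
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```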

A standard result from the theory of iterative shrinkage/thresholding algorithms [31, 32] is that the minimizer of ℒ_μ(c_{k+1}, u, α_k) with respect to u is given by the shrinkage (soft-thresholding) function

$$\mathbf{u}_{k+1}=\max\Bigl\{\Bigl|\mathbf{L}\mathbf{c}_{k+1}+\frac{\boldsymbol{\alpha}_k}{\mu}\Bigr|-\frac{\lambda_2}{\mu},\,0\Bigr\}\;\mathrm{sgn}\Bigl(\mathbf{L}\mathbf{c}_{k+1}+\frac{\boldsymbol{\alpha}_k}{\mu}\Bigr),$$
applied componentwise. The complete reconstruction method is summarized in Algorithm 2 below. Note that in standard implementations of the CG algorithm the initial estimate is often chosen to be zero. Here we use a warm initialization, in the sense that the starting point of each inner PCG run is the outcome of the previous one (see step 4).

Algorithm 2. ADMM-PCG reconstruction method with warm initialization.
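Under our naming of the operator callables (`Hop`, `Hadj`, `grad`, `grad_adj`), and reusing the `pcg` and `make_preconditioner` helpers sketched above, the outer ADMM loop can be written as follows. This is a sketch of the structure described in the text, not the authors' code.

```python
import numpy as np

def admm_dpc(Hop, Hadj, grad, grad_adj, g, shape,
             lam1, lam2, mu, n_outer=5, n_inner=2, Minv=None):
    """ADMM sketch for 0.5||Hc - g||^2 + (lam1/2)||c||^2 + lam2 * sum_k ||{Lc}_k||_1.

    grad(c) returns the stacked gradient (2, K1, K2); grad_adj is its adjoint.
    """
    c = np.zeros(shape)
    u = np.zeros((2,) + shape)
    a = np.zeros_like(u)
    if Minv is None:
        Minv = lambda r: r                           # no preconditioning by default
    A = lambda x: Hadj(Hop(x)) + mu * grad_adj(grad(x)) + lam1 * x
    Hg = Hadj(g)
    for _ in range(n_outer):
        b = Hg + mu * grad_adj(u - a / mu)           # right-hand side b of the linear step
        c = pcg(A, b, Minv, x0=c, n_iter=n_inner)    # linear step, warm-started with previous c
        gc = grad(c)
        v = gc + a / mu
        u = np.maximum(np.abs(v) - lam2 / mu, 0.0) * np.sign(v)   # componentwise shrinkage
        a = a + mu * (gc - u)                        # multiplier update
    return c
```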

6. Experimental analysis

6.1. Performance metrics

We use the structural similarity measure (SSIM) [33, 34] and the signal-to-noise ratio (SNR) to quantify the quality of the reconstructed images. SSIM is a similarity measure, proposed by Wang et al., that compares the luminance, contrast and structure of two images. It is computed over a window of size R × R around each image pixel. The SSIM of two images x and x̂ over the specified window is

$$\mathrm{SSIM}(\mathbf{x},\hat{\mathbf{x}})=\frac{(2\mu_x\mu_{\hat x}+C_1)(2\sigma_{x\hat x}+C_2)}{(\mu_x^2+\mu_{\hat x}^2+C_1)(\sigma_x^2+\sigma_{\hat x}^2+C_2)},$$
where C₁ and C₂ are small constants that avoid instability, μ_x and μ_x̂ denote the empirical means of x and x̂ over the window, σ_x² and σ_x̂² their empirical variances, and σ_xx̂ their covariance over the same window. In our experiments, we choose C₁ = C₂ = (0.001 L)², where L is the dynamic range of the pixel values. The SSIM of the whole image is obtained by averaging the SSIM over all pixels. It takes values between 0 and 1, with 1 corresponding to the highest similarity.

Our other quality measure is the SNR. If x is the oracle and x̂ the reconstructed image, we define

$$\mathrm{SNR}(\mathbf{x},\hat{\mathbf{x}})=\max_{a,b\in\mathbb{R}}\;20\log_{10}\frac{\|\mathbf{x}\|_2}{\|\mathbf{x}-(a\hat{\mathbf{x}}+b)\|_2}.$$
Higher values of the SNR correspond to a better match between the oracle and the reconstructed image.
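Both metrics are straightforward to compute; the sketch below uses a uniform R × R window for the SSIM (the window size and window type are our choices, the text only specifies the constants) and a least-squares fit for the affine registration inside the SNR.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def snr_db(x, xhat):
    """Regressed SNR: max over a, b of 20*log10(||x|| / ||x - (a*xhat + b)||)."""
    A = np.stack([xhat.ravel(), np.ones(xhat.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, x.ravel(), rcond=None)
    return 20.0 * np.log10(np.linalg.norm(x)
                           / np.linalg.norm(x.ravel() - (a * xhat.ravel() + b)))

def mean_ssim(x, xhat, R=8, L=None):
    """Mean SSIM over R x R moving windows, with C1 = C2 = (0.001*L)^2 as in the text."""
    L = (x.max() - x.min()) if L is None else L
    C1 = C2 = (0.001 * L) ** 2
    mx, my = uniform_filter(x, R), uniform_filter(xhat, R)
    vx = uniform_filter(x * x, R) - mx ** 2          # local variance of x
    vy = uniform_filter(xhat * xhat, R) - my ** 2    # local variance of xhat
    cxy = uniform_filter(x * xhat, R) - mx * my      # local covariance
    s = ((2 * mx * my + C1) * (2 * cxy + C2)) / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
    return float(s.mean())
```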

6.2. Parameter selection

The proposed method has several parameters, which are tuned as follows. From a statistical point of view, the Bayesian estimator for the inverse problem g = Hc + n with additive white Gaussian noise can be written as

$$\hat{\mathbf{c}}=\arg\min_{\mathbf{c}}\Bigl\{\frac{1}{2}\|\mathbf{H}\mathbf{c}-\mathbf{g}\|^2-\sigma_n^2\log p(\mathbf{c})\Bigr\},$$
where σ_n² is the noise variance and p(c) is the prior density of the signal. Assuming that the gradient samples u_k = {Lc}_k are i.i.d. Laplace distributed, we have
$$-\log p(\mathbf{c})\propto\frac{1}{\sigma_u}\sum_{\mathbf{k}}\|\mathbf{u}_{\mathbf{k}}\|_1,$$
where σ_u² is the variance of the gradient values. On the other hand, by definition SNR = 10 log₁₀(‖Hc‖₂²/(Nσ_n²)), where N is the number of pixels in the image; therefore σ_n² = 10^{−0.1 SNR}‖Hc‖₂²/N. In practice, ‖Hc‖₂² can be approximated by ‖g‖₂², and σ_u² is proportional to Σ_k‖u_k‖₂², which is empirically estimated from ‖g‖₂². This leads to the following rule of thumb for setting the TV regularization parameter: λ₂ ∝ 10^{−0.1 SNR}‖g‖₂. The TV parameter is thus proportional to the norm of the measurements, with a proportionality constant related to their signal-to-noise ratio.

The Tikhonov parameter is chosen as small as possible and is set to λ₁ = 10⁻⁵. Here we use λ₂ = 10⁻³‖g‖₂. Based on our experience, the parameter μ can be chosen ten times larger than λ₂. The number of inner iterations for solving the linear step does not affect the convergence of the proposed technique, only its speed; we suggest choosing it as small as possible, e.g., 2 or 3. We use cubic B-splines (m = 3) as basis functions.
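Put together, these rules of thumb amount to the following default settings (a sketch; the helper name is ours):

```python
import numpy as np

def default_parameters(g):
    """Default regularization parameters following the rules of thumb in the text."""
    lam1 = 1e-5                          # Tikhonov parameter, kept as small as possible
    lam2 = 1e-3 * np.linalg.norm(g)      # TV parameter, proportional to ||g||_2
    mu = 10.0 * lam2                     # penalty parameter, about ten times lam2
    n_inner = 2                          # small number of inner PCG iterations
    return lam1, lam2, mu, n_inner
```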

6.3. Experimental results

To validate our reconstruction method, we conducted experiments with real data acquired at the TOMCAT beamline of the Swiss Light Source at the Paul Scherrer Institute in Villigen, Switzerland. The synchrotron light is delivered by a 2.9 T super-bending magnet and the energy of the X-ray beam is 25 keV [35]. We used nine phase steps over two periods to measure the displacement of the diffraction pattern described in Section 2. For each step, a complete tomogram was acquired over 180 degrees; we used 721 uniformly distributed projection angles. Image acquisition was performed with a CCD camera whose pixel size is 7.4 μm.

For our experiments we used a rat-brain sample embedded in liquid paraffin at room temperature. This embedding is necessary to match the refractive index of the sample with that of its environment, so that the small-refraction-angle approximation holds. Finally, the projections were post-processed, including flat-field and dark-field corrections, to extract the phase gradient.

Figure 3 gives insight into the convergence behavior of our algorithm. In comparison with the established FISTA algorithm, one observes a drastic acceleration. This results from the combination of 1) the ADMM solver, 2) our problem-specific preconditioner and 3) our warm initialization of the inner PCG iterations. One can also observe that the convergence speed is not very sensitive to the number of inner iterations when warm initialization is used.

Fig. 3 Convergence-speed comparison of different iterative techniques for solving our regularized problem.


In practice, our reconstruction scheme appears to converge after 5 outer iterations, each involving 2 inner PCG iterations. A PCG iteration essentially requires one application of the forward operator and one application of its adjoint. Since the computational complexities of the two operators are comparable, the overall cost is equivalent to 2 × 2 × 5 = 20 evaluations of the forward operator. Based on these considerations, we expect our algorithm to be about 6 times faster than the method described in [18]. This figure is based on a personal communication with the main authors of [18], who state that their algorithm converges in about 65 iterations, each being equivalent in cost to one of our inner iterations.

We further investigated the performance of the direct (filtered back-projection) and the proposed iterative reconstruction techniques on a coronal rat-brain section. The images reconstructed from 721 and 181 viewing angles using GFBP and our method are shown in Figure 4, with the figures of merit (SNR and SSIM) indicated below each image. As the number of views is reduced, the influence of the regularization becomes predominant compared to the choice of a specific basis function. This was confirmed by the fact that the reconstructions obtained by running the same algorithm with a different set of basis functions (Kaiser-Bessel with α = 10.4) were essentially indistinguishable from those in Figure 4(b, e). On the other hand, the effect of choosing an optimized set of basis functions is more significant in the well-posed scenario without regularization, as demonstrated in Section 4.4.

Fig. 4 Comparison of the reconstruction results for 721 viewing angles (first column) and 181 viewing angles (second column). (a,d) GFBP, (b,e) the iterative ADMM, (c,f) GFBP with smoothing kernel. The sub-images correspond to the region between the thalamus and the hippocampus (top), a part of the thalamus (middle) and the fornix (bottom). Notice the oscillatory artifacts produced by GFBP. Applying a smoothing kernel reduces the artifacts but also blurs the reconstruction.


The GFBP reconstruction contains artifacts at the boundary of the image and around specific anatomical features. For example, the bottom-right sub-images in the reconstructions of Figure 4 show the mammillothalamic tract in this coronal section. One can clearly see oscillatory artifacts in the GFBP reconstruction. These could mislead a biologist or an automated analysis system when identifying the nuclei and interneurons in that region. The middle-right and top-right sub-images show a part of the thalamus and the region between the thalamus and the hippocampus, respectively.

To reduce the aforementioned artifacts, we also implemented a smoothed version of the GFBP algorithm. Specifically, we modified the filter in the third step of Algorithm 1 as follows:

$$\hat q(\omega_y)=\frac{1}{2\pi}\,\frac{1}{|\omega_y|}\times h^{k}(\omega_y),$$
where h(ωy) is a lowpass filter and k is an exponent that acts as a smoothing parameter. We chose h(ω) to be the standard Hamming window. The reconstructions are shown in the bottom row of Figure 4. They suggest that with this smoothed version of GFBP there is a trade-off between artifacts and image contrast. Note that for these experiments the parameter k was optimized so as to achieve the best SNR.

Visually, the image reconstructed from 721 angles with our proposed technique is more faithful than the GFBP reconstruction. We therefore consider it as our gold standard for investigating the dependence of our algorithm on the number of views, as shown in Figure 5. The SNR and SSIM values are computed over the main region of the sample, which includes the brain. They suggest that our method can reduce the number of views by at least a factor of four while essentially maintaining the quality of the standard reconstruction method (FBP with a complete set of views).

Fig. 5 The SNR and SSIM metrics for images reconstructed from a subset of projections.


7. Conclusion

Until now, in-vivo tomography with grating interferometry has faced the challenge of large dose deposition, which potentially harms the specimen, e.g., in small-rodent scanners. To reduce the total scanning time, we presented an exact B-spline formulation for the discretization of the derivative variants of the Radon transform that avoids any numerical differentiation. We formulated the reconstruction as an inverse problem with a combination of regularization terms and introduced a new iterative algorithm that uses the alternating direction method of multipliers to solve the TV-regularized reconstruction problem. An important practical twist is the introduction of a problem-specific preconditioner, which significantly speeds up the quadratic optimization step of the algorithm. Finally, we presented experimental results that validate the proposed discretization method and the corresponding iterative technique. Our findings confirm that ADMM is quite competitive for solving TV-regularized problems. Moreover, our method allows for a substantial dose reduction while preserving the image quality of FBP-type methods. This is a crucial step towards the diffusion of DPCI in medicine and biology.

Acknowledgment

This work was funded (in part) by ERC Grant ERC-2010-AdG 267439-FUN-SP, the Swiss National Science Foundation under Grant 200020-144355 and the Center for Biomedical Imaging of the Geneva-Lausanne Universities and EPFL.

References and links

1. V. Ingal and E. Beliaevskaya, “X-ray plane-wave tomography observation of the phase contrast from a non-crystalline object,” J. Phys. D: Appl. Phys 28, 2314–2317 (1995). [CrossRef]  

2. T. Davis, D. Gao, T. Gureyev, A. Stevenson, and S. Wilkins, “Phase-contrast imaging of weakly absorbing materials using hard X-rays,” Nature 373, 595–598 (1995). [CrossRef]  

3. D. Chapman, S. Patel, and D. Fuhrman, “Diffraction enhanced X-ray imaging,” Phys., Med. and Bio. 42, 2015–2025 (1997). [CrossRef]  

4. U. Bonse and M. Hart, “An X-ray interferometer,” Appl. Phys. Lett. 6, 155–156 (1965). [CrossRef]  

5. A. Momose, T. Takeda, Y. Itai, and K. Hirano, “Phase-contrast X-ray computed tomography for observing biological soft tissues,” Nat. Med. 2, 473–475 (1996). [CrossRef]   [PubMed]  

6. T. Weitkamp, A. Diaz, C. David, F. Pfeiffer, M. Stampanoni, P. Cloetens, and E. Ziegler, “X-ray phase imaging with a grating interferometer,” Opt. Express 13, 6296–6304 (2005). [CrossRef]   [PubMed]  

7. A. Snigirev, I. Snigireva, V. Kohn, S. Kuznetsov, and I. Schelekov, “On the possibilities of X-ray phase-contrast microimaging by coherent high-energy synchrotron radiation,” Rev. Sci. Instrum. 66, 5486–5492 (1997). [CrossRef]  

8. K. A. Nugent, T. E. Gureyev, D. F. Cookson, D. Paganin, and Z. Barnea, “Quantitative phase imaging using hard X-rays,” Phys. Rev. Lett. 77, 2961–2964 (1996). [CrossRef]   [PubMed]  

9. S. W. Wilkins, T. E. Gureyev, D. Gao, A. Pogany, and A. W. Stevenson, “Phase-contrast imaging using polychromatic hard X-rays,” Nature 384, 335–338 (1996). [CrossRef]  

10. A. Momose, S. Kawamoto, I. Koyama, Y. Hamaishi, K. Takai, and Y. Suzuki, “Demonstration of X-ray talbot interferometry,” Jap. Jour. of Appl. Phys. 42, L866–L868 (2003). [CrossRef]  

11. F. Pfeiffer, O. Bunk, C. Kottler, and C. David, “Tomographic reconstruction of three-dimensional objects from hard X-ray differential phase contrast projection images,” Nucl. Instrum. Methods Phys. Res. A 580, 925–928 (2007). [CrossRef]  

12. M. Stampanoni, Z. Wang, T. Thüring, C. David, E. Roessl, M. Trippel, R. Kubik-Huch, G. Singer, M. Hohl, and N. Hauser, “The first analysis and clinical evaluation of native breast tissue using differential phase-contrast mammography,” Invest. Radiol. 46, 801–806 (2011). [CrossRef]  

13. M. Nilchian and M. Unser, “Differential phase-contrast X-ray computed tomography: From model discretization to image reconstruction,” Proc. of the Ninth IEEE Inter. Symp. on Biomed. Imag.: From Nano to Macro (ISBI’12), 90–93 (2012).

14. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. Imag. Proc. 20, 681–695 (2011). [CrossRef]  

15. S. Ramani and J. A. Fessler, “A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction,” IEEE Trans. Med. Imag. 31.3, 677–688 (2012). [CrossRef]  

16. Z. Qi, J. Zambelli, N. Bevins, and G. Chen, “A novel method to reduce data acquisition time in differential phase contrast computed tomography using compressed sensing,” Proc. of SPIE 7258, 4A1–8 (2009).

17. T. Köhler, B. Brendel, and E. Roessl, “Iterative reconstruction for differential phase contrast imaging using spherically symmetric basis functions,” Med. phys. 38, 4542–4545 (2011). [CrossRef]  

18. Q. Xu, E. Y. Sidky, X. Pan, M. Stampanoni, P. Modregger, and M. A. Anastasio, “Investigation of discrete imaging models and iterative image reconstruction in differential X-ray phase-contrast tomography,” Opt. Express 20, 10724–10749 (2012). [CrossRef]   [PubMed]  

19. M. Unser, “Sampling–50 years after Shannon,” Proc. IEEE 88, 569–587 (2000). [CrossRef]  

20. E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol. 53, 4777–4807 (2008). [CrossRef]  

21. A. Momose, W. Yashiro, Y. Takeda, Y. Suzuki, and T. Hattori, “Phase tomography by X-ray talbot interferometry for biological imaging,” Jpn. J. Appl. Phys. 45, 5254–5262 (2006). [CrossRef]  

22. F. Pfeiffer, C. Grünzweig, O. Bunk, G. Frei, E. Lehmann, and C. David, “Neutron phase imaging and tomography,” Phys. Rev. Lett. 96, 215505-1–4 (2006). [CrossRef]  

23. F. Natterer, The Mathematics of Computed Tomography (John Wiley and sons, 1986).

24. H. Meijering, J. Niessen, and A. Viergever, “Quantitative evaluation of convolution-based methods for medical image interpolation,” Med. Imag. Anal. 5, 111–126 (2001). [CrossRef]  

25. A. Entezari, M. Nilchian, and M. Unser, “A box spline calculus for the discretization of computed tomography reconstruction problems,” IEEE Trans. Med. Imag. 31, 1532 –1541 (2012). [CrossRef]  

26. P. Thévenaz, T. Blu, and M. Unser, “Interpolation revisited [medical images application],” IEEE Trans. Med. Imag. 19, 739–758 (2000). [CrossRef]  

27. Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Jour. on Imag. Sci. 1, 248–272 (2008). [CrossRef]  

28. T. Goldstein and S. Osher, “The split bregman method for l1-regularized problems,” SIAM Jour. on Imag. Sci. 2, 323–343 (2009). [CrossRef]  

29. M. Ng, P. Weiss, and X. Yuan, “Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods,” SIAM Jour. on Sci. Comp. 32, 2710–2736 (2010). [CrossRef]  

30. B. Vandeghinste, B. Goossens, J. De Beenhouwer, A. Pizurica, W. Philips, S. Vandenberghe, and S. Staelens, “Split-bregman-based sparse-view CT reconstruction,” in “Fully 3D 2011 proc. ,” 431–434 (2011).

31. I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Comm. Pure Appl. Math. 57, 1413–1457 (2004). [CrossRef]  

32. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Imag. Sci. 2, 183–202 (2009). [CrossRef]  

33. Z. Wang and A. Bovik, “A universal image quality index,” IEEE Sig. Proc. Lett. 9, 81 –84 (2002). [CrossRef]  

34. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Imag. Proc. 13, 600–612 (2004). [CrossRef]  

35. S. McDonald, F. Marone, C. Hintermuller, G. Mikuljan, C. David, F. Pfeiffer, and M. Stampanoni, “Advanced phase-contrast imaging using a grating interferometer,” Sync. Rad. 16, 562–572 (2009). [CrossRef]  
