Optica Publishing Group

StaticCodeCT: single coded aperture tensorial X-ray CT

Open Access

Abstract

Coded aperture X-ray CT (CAXCT) is a new low-dose imaging technology that promises far-reaching benefits in industrial and clinical applications. It places varying coded apertures (CAs) in front of the X-ray source to partially block the radiation. The ill-posed inverse reconstruction problem is then solved using $\ell_1$-norm-based iterative reconstruction methods. Unfortunately, to attain high-quality reconstructions, the CA patterns must change in concert with the view angles, making the implementation impractical. This paper proposes a simple yet radically different approach to CAXCT, coined StaticCodeCT, that uses a single-static CA in the CT gantry, thus making the imaging system amenable to practical implementations. Rather than using conventional compressed sensing algorithms for recovery, we introduce a new reconstruction framework for StaticCodeCT. Namely, we synthesize the missing measurements using low-rank tensor completion principles that exploit the multi-dimensional data correlation and low-rank nature of a 3-way tensor formed by stacking the 2D coded CT projections. Then, we use the FDK algorithm to recover the 3D object. Computational experiments using experimental projection measurements exhibit gains of up to 10% in the normalized root mean square distance of the reconstruction with the proposed method compared with those attained by alternative low-dose systems.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

X-ray transmission computed tomography (CT) exploits the intrinsic X-ray absorption coefficient of objects and their thickness to recover an image from a set of line-integral projections measured at different angles [1,2]. Since the introduction of the first commercial scanner, X-ray CT has advanced rapidly in terms of speed, resolution, and geometry; however, its basic principles have remained consistent [3]. Notably, most practical CT applications still use variations of the Feldkamp-Davis-Kress (FDK) algorithm given its low memory requirements and computational efficiency [4]. Nonetheless, the FDK algorithm requires a large number of measurements to obtain a high-quality reconstruction, which, in medical imaging, results in the patients’ increased exposure to ionizing radiation [5,6]. Thus, the research community is continuously looking to develop methodologies that reduce radiation dosage while maintaining reconstruction quality [7]. State-of-the-art low-dose CT methods can be classified into two categories: (I) dose-optimized CT hardware design (e.g., [8,9]) and (II) dose-optimized reconstruction algorithm development (e.g., [10,11]). A well-studied example of the latter is sparse view acquisition, which uses iterative reconstruction algorithms with additional regularization constraints to solve the resulting ill-posed inverse problem [12,13]. However, as the number of views decreases, sparse view acquisition falls short of obtaining high-quality reconstructions. This behavior is evidenced in Fig. 1, which depicts the reconstruction of a 128 $\times$ 128 walnut phantom [14] using a CT fan-beam system with the settings described in [15] and sparse view angles for different subsampling scenarios. Additionally, current implementations of sparse view acquisition, such as rapidly switching the X-ray source on and off, are difficult to execute in a clinical setting [16]. 
In this manuscript, we study alternative low-dose solutions that result in less degradation of the reconstruction quality compared to the artifacts shown in Fig. 1.

Structured illumination was proposed as an alternative to sparse sample acquisition to overcome these limitations. An example of this is coded aperture X-ray CT (CAXCT), which places coded apertures (CAs) in front of the X-ray source path to reduce the number of rays illuminating the object. Then, the inverse problem is solved using compressed sensing (CS) algorithms [17]. A major drawback in CAXCT is that the highest-quality reconstructions require the CAs to change in concert with the view angles, thus making the implementation impractical. Other research groups have explored the possibility of using a single collimator. A notable example, referred to as “SparseCT”, has been studied extensively [16,18]. In this modality, a multi-slit collimator is placed in front of the X-ray source to partially block the radiation. However, its implementation adds significant mechanical complexity to the system, since the collimator is repeatedly moved tangentially to the motion of the rotating gantry. Additionally, the reconstruction speed of SparseCT and CAXCT is lower given that they rely on iterative optimization algorithms.


Fig. 1. Sparse view CT example. (a) 128 $\times$ 128 walnut phantom [14]. CT reconstruction using an iterative algorithm and total variation regularization for (b) 50%, (c) 25%, and (d) 10% of the full-sampling view angles. Note that the reconstruction quality decreases substantially when using less than 25% of the full data.


This paper proposes a simple yet radically different approach to CAXCT, coined StaticCodeCT, that uses a single-static CA in the CT gantry, thus making the imaging system amenable to practical implementations. Additionally, rather than using CS reconstruction methods, we propose a novel two-step framework to reconstruct the object. First, we construct a three-dimensional tensor by stacking the 2D measurements vertically. We then exploit its low-rank nature and multi-dimensional data correlation to estimate the missing projection data. Finally, given the synthesized complete set of measurements, we use the FDK algorithm to accurately recover the 3D datacube. Computational experiments demonstrate the superiority of the proposed framework compared to other low-dose systems.

2. Coded aperture X-ray CT (CAXCT)

In cone-beam X-ray CT, the X-ray source travels in a circular trajectory around the object, and X-ray projections are taken at $P$ different angles using an $M_1 \times M_2$ 2D detector. At each detector position $j$, for $j=1,\ldots ,M_1M_2P$, the measurements are given by Beer-Lambert’s law of attenuation: $I_j = I_{0j} \exp \left (-\int _{\ell } \mu (\ell )d \ell \right )$, where $I_{0j}$ is the incident X-ray intensity and $\mu (\ell )$ is the linear attenuation coefficient of the object at position $\ell$. For simplicity, the remainder of this work deals with the post-log data instead of working directly with $I_j$, i.e. $\textbf {y}_j = \ln (I_{0j})-\ln (I_j) = \int _{\ell } \mu (\ell )d\ell$. In real systems, only a discrete number of measurements are available, thus the imaging model must be discretized. For an object of dimensions $N\times N\times N$, the line integral projections are attenuated by the object as described by the ray path $\textbf {H}_j$, i.e., $\textbf {y}_j = \textbf {H}_j\textbf {x}$, where $\textbf {x} \in \mathbb {R}^{N^3}$ is a vectorized representation of the linear attenuation coefficients of the object. Thus, the discrete-to-discrete formulation can be written as a finite linear system of equations of the form $\textbf {y}=\textbf {H}\textbf {x}$, where $\textbf {H} \in \mathbb {R}^{M_1M_2P \times N^3}$ is the system matrix, with $\textbf {H}_{ji}$ equal to the intersection length of the $j^{th}$ ray with the $i^{th}$ voxel for $i=1,\ldots ,N^3$, and $\textbf {y} \in \mathbb {R}^{M_1M_2P}$ is a vector containing all the measurements.
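
As a quick numerical sanity check, the post-log conversion and the discrete model $\textbf{y}=\textbf{H}\textbf{x}$ can be sketched with toy, hypothetical sizes:

```python
import numpy as np

# Toy sketch of the measurement model (hypothetical sizes): H_ji holds the
# intersection length of ray j with voxel i, and x the attenuation coefficients.
rng = np.random.default_rng(0)
n_rays, n_voxels = 6, 4          # stand-ins for M1*M2*P and N^3
H = rng.uniform(0.0, 1.0, (n_rays, n_voxels))
x = rng.uniform(0.1, 0.5, n_voxels)

# Discrete forward model: post-log line integrals y = H x.
y = H @ x

# Equivalent raw-intensity view via Beer-Lambert, then the post-log
# conversion y_j = ln(I0_j) - ln(I_j).
I0 = 1.0e4
I = I0 * np.exp(-y)
y_postlog = np.log(I0) - np.log(I)
assert np.allclose(y, y_postlog)
```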

Commercial clinical scanners predominantly use variations of the FDK algorithm due to its high speed. However, in low-dose scenarios such as sparse view acquisition and structured illumination CT, only a few measurements are available; hence, the inverse problem becomes severely ill-posed and the FDK algorithm fails. In these cases, iterative reconstruction methods may be used in conjunction with regularization techniques to reconstruct the image accurately [12,19]. Figure 2 depicts a structured illumination system, coined CAXCT, for a cone-beam geometry [20]. Note that the measurements are subsampled by placing a 2D CA $T_{p}$ in front of the cone-beam X-ray source at each angle position $s_p$, for $p=1,\ldots, P$, effectively modifying the sensing matrix. When there is a one-to-one correspondence between the CA elements and the detector elements, the sensing matrix is formed by the rows of the matrix $\textbf{H}$ associated with the detector elements corresponding to the unblocking CA elements. Mathematically, the CAs are represented by a binary matrix, $\textbf{C}=[\textbf{c}_1,\ldots,\textbf{c}_{M_1M_2P}]$, where the column vector $\textbf{c}_j \in \mathbb{R}^{D}$ is a zero vector if the $j^{th}$ detector element is associated with a blocking CA element. The remaining non-zero $\textbf{c}_j$ vectors form the standard basis in $\mathbb{R}^{D}$, where $D$ is the number of unblocked elements in the system. Then, the measurements in this system are obtained as $\textbf{y}=\textbf{C}\textbf{H}\textbf{x}$, and the linear attenuation coefficients, $\textbf{x}$, can be recovered by solving the following regularization problem:

$$\hat{\textbf{x}}=\underset{\textbf{x}}{\textrm{argmin }}\|\textbf{y}-\textbf{C}\textbf{H}\textbf{x}\|_2^2+\lambda\boldsymbol{\Phi}(\textbf{x}),$$
where $\lambda$ is a regularization constant, $\|\cdot \|_2$ corresponds to the $\ell _2$ norm, and $\boldsymbol {\Phi }(\textbf {x})$ represents any prior information about the data. Different algorithms have been proposed to solve (1) for cone-beam CT, including optimal first-order methods for strongly convex functions using total variation (TV) regularization [21]; and the gradient projection for sparse reconstruction (GPSR) algorithm [22], which, following the CS theory, is able to reconstruct the object with high probability if $\textbf {x}$ is sparse in some representation basis $\boldsymbol {\Psi }$ and the basis is incoherent with the sensing matrix $\textbf {C}\textbf {H}$ [17]. The theory of CS indicates that the best results make use of completely random measurements. However, since transmission CT confines the measurements to lines traversing the object, the sensing matrix in CT is highly structured and cannot be arbitrarily designed to be completely random. Nonetheless, Kaganovsky et al. examined several CS strategies for CAXCT in [20]. They demonstrated that using CAs reduces scattered radiation while keeping the same illumination energy in the transmitted rays, resulting in a higher signal-to-noise ratio than that achieved by lowering the exposure time or lowering the X-ray source power. Furthermore, compared to sparse view sampling, CAXCT allows the use of fewer measurements for the same image quality and leads to better image quality for the same number of measurements [17]. They also concluded that the reconstructions with the highest quality in CAXCT are obtained when a different CA is placed in front of the X-ray source at each view-angle [20]. Nonetheless, the implementation of such a system would be impractical in real applications. An alternative solution is to decrease the number of CAs used in the system. 
However, when a single CA is used, current reconstruction methods collapse given that parts of the object would be highly under-sampled [20]. Other research groups have explored moving a multi-slit structure while the gantry rotates [16,18]. The implementation of both systems, however, has high mechanical complexity.
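
Problem (1) with an $\ell_1$ prior, $\boldsymbol{\Phi}(\textbf{x})=\|\textbf{x}\|_1$, can be sketched with a generic proximal-gradient (ISTA) loop. The paper's references use GPSR and TV solvers, so this is only an illustrative stand-in, with toy, hypothetical sizes and a randomly drawn system matrix:

```python
import numpy as np

# ISTA sketch for problem (1) with Phi(x) = ||x||_1 (toy, hypothetical sizes).
rng = np.random.default_rng(1)
n_det, n_vox = 60, 20
H = rng.standard_normal((n_det, n_vox)) / np.sqrt(n_det)

# Binary CA: C selects the rows of H tied to unblocked detector elements.
ca = rng.random(n_det) < 0.5
CH = H[ca]

x_true = np.zeros(n_vox)
x_true[[2, 7, 11]] = [1.0, -0.5, 0.8]        # sparse toy object
y = CH @ x_true

lam = 1e-3
step = 1.0 / np.linalg.norm(CH, 2) ** 2      # 1/L, L = Lipschitz constant
x = np.zeros(n_vox)
for _ in range(500):
    z = x - step * CH.T @ (CH @ x - y)       # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```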


Fig. 2. CAXCT system. A different CA $T_p$ is placed in front of the rotating source at each location $s_p$. StaticCodeCT uses a single-static code $T_{p}$ for all $p$.


2.1 StaticCodeCT

This work overcomes these limitations by proposing a single-static CA X-ray CT system coined StaticCodeCT. In this system the CA is static with respect to the X-ray tube. Furthermore, rather than using iterative reconstruction algorithms directly on the measured projection data, we formulate the sensing problem in a tensorial form. Then, we approximate the missing data based on the tensor representation of the measurements. Particularly, the un-observed entries in the 3D tensor representation of the projection data are estimated using tensor completion algorithms. Then, given that a complete set of measurements is available, we can use conventional FDK algorithms for fast image reconstruction. To illustrate the motivation to develop a different reconstruction framework, the sampling density (SD) of the CAXCT system is analyzed. Abbas et al. introduced the SD in [23] to quantify the sensing quality in the context of X-ray CT from a CS perspective. They concluded that a higher and more uniform SD distribution was likely to produce high-quality reconstructions. They defined SD as:

$$SD_i=\textstyle \sum_{j=1}^{M_1M_2P}\textbf{H}_{ji}.$$

Recall, $\textbf{H}_{ji}$ corresponds to the intersection length of the $j^{th}$ ray with the $i^{th}$ voxel, $M_1 \times M_2$ are the detector dimensions, and $P$ is the number of view-angles. Using (2), the SD distribution of a cone-beam system is calculated for three scenarios: (I) when no CAs are used (Figs. 3(a-b)); (II) the StaticCodeCT system (Figs. 3(c-d)); and (III) CAXCT with different CAs at each X-ray source position (Figs. 3(e-f)). Having no CAs produces a uniform distribution with a high mean value. Similarly, CAXCT with different CAs produces a uniform SD distribution, ideal for iterative reconstruction. In contrast, the SD distribution of the StaticCodeCT system has multiple rings of high and low values, and some pixels in the center of the image are not sensed in the axial slices. Moreover, the sagittal slices have multiple artifacts in a horizontal band along the center. These non-uniformities produce ring artifacts in the axial reconstructions, and in the sagittal slices the artifacts follow the distribution shown in Fig. 3(d), as will be demonstrated in Section 4. These limitations motivated us to develop new image reconstruction strategies for the StaticCodeCT system. A possible solution could be to apply learning-based methods and to use training data to estimate the forward model instead of relying on the physical tomographic model. However, this would result in a daunting task for large imaging systems [19]. Moreover, learning-based algorithms are currently inadequate, in that small changes in the object, noise levels, and calibration can severely degrade the imaging system’s performance. Even worse, they can lead to the synthesis of structures that are not supported by the measurements. A more robust solution may be hybrid algorithms that use model-based and data-driven methods jointly, as implemented here. 
Particularly, this work aims to estimate the missing data on the 2D detector before reconstructing the 3D object, such that the SD at the reconstruction stage is equivalent to the traditional CT system without CAs (Figs. 3(a-b)). The proposed approach exploits the highly structured 2D projections obtained in cone-beam CT, which can be considered low-rank natural grayscale images. Figure 4(a) depicts the experimental projection data at $s_{635}=284.2^{\circ }$ of a Juglans hopeiensis walnut, and Fig. 4(b) depicts the emulation of having a CA of 25% transmittance, i.e., a quarter of the full-dose. The black pixels correspond to blocked X-ray paths, and the remaining pixels correspond to X-rays that pass through the object. Note that the coded measurements resemble an image with missing patches (Fig. 4(b1)). Thus, image inpainting techniques, such as matrix completion, could be applied to estimate the missing entries from the partially observed measurements. Possible approaches to the estimation problem would be to reorder the detector measurements at each angle into an $M_1\times M_2P$ concatenated matrix or to solve the problem independently per view-angle. Furthermore, a reasonable recovery attempt is to find the matrix with the lowest rank that agrees with the observed measurements. Mathematically, the convex relaxation of the low-rank minimization for matrix completion can be formulated as follows [24]:

$$\hat{\textbf{X}}=\underset{\textbf{X}}{\textrm{argmin }} ||\textbf{X}||_{\ast} \textrm{ s.t. } \textbf{X}_{ij}=\textbf{O}_{ij} \textrm{ with } (i,j) \in \Omega,$$
where $\hat {\textbf {X}}$ are the estimated measurements, $||\cdot ||_{\ast }$ is the nuclear norm, $\textbf {O}$ are the partially observed measurements, and $\Omega$ is the set of indices of the observed entries. Figure 4(c) depicts the result of solving (3) for the coded measurements at $s_{635}=284.2^{\circ }$ using [25]. As shown in the normalized absolute error of the estimation in Figs. 4(d-d1), this matricization ignores the intrinsic multi-dimensional structure of the projection data and produces multiple errors. Consequently, a more natural and compact representation of the projection data is needed, as described next.
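
A minimal sketch of a solver for (3), in the softImpute style of alternating singular-value shrinkage with re-imposition of the observed entries (toy, hypothetical sizes; the algorithm of [25] is different but related):

```python
import numpy as np

# SoftImpute-style sketch for the nuclear-norm matrix completion in (3).
rng = np.random.default_rng(2)
m, n, r = 30, 30, 2
O = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-2 data
omega = rng.random((m, n)) < 0.5                               # observed set

X = np.where(omega, O, 0.0)
tau = 1.0                                                      # shrinkage level
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt                    # shrink spectrum
    X[omega] = O[omega]                                        # data constraint
```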

3. Tensorial reconstruction framework

Consider a cone-beam CT scan of an object using an $M_1\times M_2$ detector. Given a full circle scan with $P$ view angles, the set of measurements can be arranged as a 3-way tensor, $\mathcal{Y}\in \mathbb{R}^{M_1\times M_2\times P}$. We build the tensor by stacking the 2D measurements vertically. For the StaticCodeCT system, all the 2D measurements will exhibit the same blocking pattern as depicted in Fig. 5. Additionally, vertical lines of missing data will appear in the lateral and horizontal slices of the tensor, at the locations corresponding to the blocking elements on the CA. Note also that sinogram-like structures form in the horizontal slices under the proposed construction. The measurement estimation problem can then be reformulated as a low-rank tensor completion problem:

$$\hat{\mathcal{S}}=\underset{\mathcal{S}}{\textrm{argmin}} \textrm{ rank}\left(\mathcal{S}\right) \textrm{ s.t. } \mathcal{P}_{\Omega}\left(\mathcal{S}\right)=\mathcal{P}_{\Omega}\left(\mathcal{Y}\right),$$
where $\hat{\mathcal{S}}$ is the estimate of the projection data tensor, $\mathcal{Y}$ is the observed incomplete tensor, $\mathcal{P}_{\Omega}\left(\mathcal{Y}\right)=\mathcal{Y}(i,j,k)$ for $\left(i,j,k\right) \in \Omega$, otherwise $\mathcal{P}_{\Omega}\left(\mathcal{Y}\right)=0$, and $\Omega$ is the set of indices of the detector elements associated with the unblocking CA elements. The tensor rank, however, does not have a unique definition [26]. Thus, multiple research efforts have been devoted to finding approximate solutions to the low-rank tensor completion problem [27–33]. The following subsection develops the tensor formulation used in this work.
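
The operator $\mathcal{P}_{\Omega}$ in (4) is plain entrywise masking; for a single static CA, the same 2D pattern repeats at every view angle, e.g.:

```python
import numpy as np

# The sampling operator P_Omega of (4): keep observed entries, zero the rest.
rng = np.random.default_rng(3)
Y = rng.random((4, 5, 6))                           # toy M1 x M2 x P tensor

ca = rng.random((4, 5)) < 0.25                      # static CA, 25% open
omega = np.broadcast_to(ca[:, :, None], Y.shape)    # same pattern at each angle

P_omega_Y = np.where(omega, Y, 0.0)
assert np.allclose(P_omega_Y[omega], Y[omega])
assert np.all(P_omega_Y[~omega] == 0.0)
```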


Fig. 3. 2D SD distributions of the volume’s central axial (top) and sagittal (bottom) slices, when using: (a-b) No CA, (c-d) StaticCodeCT, and (e-f) CAXCT. Note that concentric rings of low and high density correspond to the CA pattern in the single CA case.



Fig. 4. 2D walnut CT projection at $s_{635}=284.2^{\circ }$ for (a) No CA, (b) CA with 25% transmittance. (c) Recovered projection using matrix completion [25] and its (d) normalized absolute error. (e) Recovered projection using the proposed approach and (f) its normalized absolute error. (a$_1$-f$_1$) Zoom of the highlighted regions in (a-f). Note that using only the low rank of the 2D projections is not enough to estimate the missing data.



Fig. 5. StaticCodeCT tensor representation of experimental projection data of a Juglans hopeiensis walnut.


3.1 Tensor formulation

To formulate the tensor completion framework the following definitions need to be introduced:

Tensor indexing: The elements in the $\mathcal {Y}\in \mathbb {R}^{M_1\times M_2\times P}$ tensor are indexed as $\mathcal {Y}(i,j,k)$. Frontal slices can be obtained as $\mathcal {Y}^k=\mathcal {Y}(:,:,k)$, lateral slices as $\mathcal {Y}_j=\mathcal {Y}(:,j,:)$, and horizontal slices as $\mathcal {Y}(i,:,:)$. The frontal slices $\mathcal {Y}^k$ correspond to the 2D measurements at each view-angle and the central horizontal slice of the tensor, $\mathcal {Y}(M_1/2,:,:)$, corresponds to the fan-beam sinogram of the central axial slice of the object. Figure 5 depicts 3 slices in each direction of the tensor. Finally, the $i,j^{th}$ tube fiber is denoted as $\mathcal {Y}_{ij}=\mathcal {Y}(i,j,:)$ [31].
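
In NumPy, with toy dimensions, these indexing conventions read:

```python
import numpy as np

# Tensor indexing conventions from the text, with toy sizes.
M1, M2, P = 4, 5, 6
Y = np.arange(M1 * M2 * P, dtype=float).reshape(M1, M2, P)

frontal = Y[:, :, 2]      # Y^k      : 2D measurement at view-angle k
lateral = Y[:, 3, :]      # Y_j      : lateral slice
horizontal = Y[1, :, :]   # Y(i,:,:) : horizontal (sinogram-like) slice
tube = Y[1, 3, :]         # Y_ij     : tube fiber along the angle axis

assert frontal.shape == (M1, M2)
assert lateral.shape == (M1, P)
assert horizontal.shape == (M2, P)
assert tube.shape == (P,)
```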

Tensor Norms: For tensors $\mathcal{X},\mathcal{Y}$, the inner product is defined as $\langle \mathcal{X},\mathcal{Y} \rangle = \sum_{i,j,k}\mathcal{X}(i,j,k)\mathcal{Y}(i,j,k)$, and the Frobenius norm of $\mathcal{Y}$ is defined as the square root of the sum of all its elements squared, $||\mathcal{Y}||_F=\sqrt{\sum_{i,j,k}|\mathcal{Y}(i,j,k)|^2}$.
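
Both quantities reduce to elementwise operations, e.g.:

```python
import numpy as np

# Tensor inner product and Frobenius norm, per the definitions above.
rng = np.random.default_rng(4)
X = rng.standard_normal((4, 5, 6))
Y = rng.standard_normal((4, 5, 6))

inner = np.sum(X * Y)                     # <X, Y>
fro = np.sqrt(np.sum(np.abs(Y) ** 2))     # ||Y||_F

assert np.isclose(inner, np.vdot(X, Y))   # same as the flattened dot product
assert np.isclose(fro, np.linalg.norm(Y)) # same as the flattened l2 norm
```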

As previously mentioned, there are multiple methods to approximate the low-rank structure of a tensor. The CANDECOMP/PARAFAC (CP) decomposition, for example, factorizes a tensor into a sum of rank-one tensors; however, it has been shown that computing such a factorization for a specific tensor is NP-hard [26]. Alternatively, other research groups (e.g., [33,34]) have developed algorithms that solve the low-rank tensor completion problem through matricization of the tensor, as a convex surrogate for minimizing the Tucker rank; the underlying Tucker decomposition factorizes a tensor into a core tensor transformed by a matrix along each of its dimensions [26]. However, the unfolding performed on the tensor does not fully exploit its multi-dimensional data correlation. In this work, we use the Tensor Nuclear Norm (TNN), which was proposed as an alternative tensor rank approximation to preserve the intrinsic structure of third-order tensors [35]. The TNN is based on the tensor singular value decomposition and is defined as $||\mathcal{Y}||_{\textrm{TNN}} = \sum_{k=1}^{P} ||\mathcal{\hat{Y}}^k||_{\ast}$, i.e., the sum of singular values of the frontal slices of the tensor $\mathcal{\hat{Y}}$ obtained by applying the Fourier transform along the tube fibers of $\mathcal{Y}$ [28,35]. Additionally, given that fine details might not be well captured by the low-rank regularization, a data-driven regularization term, $\Phi\left(\mathcal{S}\right)$, is added to the cost function. Then, the problem in (4) is reformulated as follows:

$$\hat{\mathcal{S}}=\underset{\mathcal{S}}{\textrm{argmin }}||\mathcal{S}||_{\textrm{TNN}}+\lambda \Phi \left(\mathcal{S}\right) \textrm{ s.t. } \mathcal{P}_{\Omega}\left(\mathcal{S}\right)=\mathcal{P}_{\Omega}\left(\mathcal{Y}\right).$$
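
The TNN in (5) can be evaluated directly from its definition; `tnn` below is a hypothetical helper following the text (FFT along the tube fibers, then the sum of nuclear norms of the Fourier-domain frontal slices):

```python
import numpy as np

# TNN per the definition above. (Helper name `tnn` is ours.)
def tnn(Y):
    Yf = np.fft.fft(Y, axis=2)            # transform along the tube fibers
    return sum(np.linalg.svd(Yf[:, :, k], compute_uv=False).sum()
               for k in range(Y.shape[2]))

rng = np.random.default_rng(5)
A = rng.random((5, 6, 1))
# With a single frontal slice the FFT is the identity, so the TNN reduces
# to the ordinary nuclear norm of that slice.
nuc = np.linalg.svd(A[:, :, 0], compute_uv=False).sum()
assert np.isclose(tnn(A), nuc)
```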

For mathematical convenience, two auxiliary tensors $\mathcal {Q}$ and $\mathcal {Z}$ are introduced to rewrite (5) as:

$$\hat{\mathcal{S}}=\underset{\mathcal{S,Q,Z}}{\textrm{argmin }}||\mathcal{Q}||_{\textrm{TNN}}+\lambda \Phi \left(\mathcal{Z}\right) + \mathcal{I} \left(\mathcal{S}\right) \textrm{ s.t. } \mathcal{Q}=\mathcal{S}, \mathcal{Z}=\mathcal{S},$$
where $\mathcal {I} \left (\mathcal {S}\right )$ is an indicator function that takes a value of $0$ if $\mathcal {P}_{\Omega }\left (\mathcal {S}\right )=\mathcal {P}_{\Omega }\left (\mathcal {Y}\right )$ otherwise $\mathcal {I} \left (\mathcal {S}\right )=\infty$. Following the Alternating Direction Method of Multipliers (ADMM) procedure described in [36], the augmented Lagrangian of (6) is calculated as: $L = ||\mathcal {Q}||_{\textrm {TNN}}+\lambda \Phi \left (\mathcal {Z}\right ) + \mathcal {I} \left (\mathcal {S}\right )+ \beta /2||\mathcal {S}-\mathcal {Q}||_F^2+\langle \mathcal {S}-\mathcal {Q},\Lambda _1\rangle + \beta /2||\mathcal {S}-\mathcal {Z}||_F^2+\langle \mathcal {S}-\mathcal {Z},\Lambda _2\rangle$, where $\Lambda _1$ and $\Lambda _2$ are the Lagrangian multipliers. Then, the un-observed measurements are estimated by iteratively solving for $\mathcal {Q,Z,}$ and $\mathcal {S}$ independently.

$\mathcal {Q}$ subproblem: Let $\mathcal {Q}(\ell )$ denote the $\ell ^{th}$ iteration. Then by fixing $\Lambda _1(\ell )$ and $\mathcal {S}(\ell )$, $\mathcal {Q}(\ell +1)$ is obtained by solving:

$$\underset{\mathcal{Q}}{\textrm{argmin }} ||\mathcal{Q}(\ell)||_{\textrm{TNN}}+\beta/2||\mathcal{S}(\ell)-\mathcal{Q}(\ell)+\Lambda_1(\ell)/\beta||_F^2,$$
where the optimal solution can be obtained by singular value thresholding of the tensor $\mathcal {Q}(\ell )$ with parameter $1/\beta$ [36].

$\mathcal {Z}$ subproblem: Fixing $\Lambda _2(\ell )$ and $\mathcal {S}(\ell )$, $\mathcal {Z}(\ell +1)$ is obtained by solving:

$$\underset{\mathcal{Z}}{\textrm{argmin }}\lambda \Phi \left(\mathcal{Z}(\ell)\right)+\beta/2||\mathcal{S}(\ell)-\mathcal{Z}(\ell)+\Lambda_2(\ell)/\beta||_F^2.$$

The solution to this problem is equivalent to evaluating the proximal operator of the regularizer [36]. Furthermore, under the plug-and-play (PnP) framework, this operator can be replaced by the denoising of the tensor $\mathcal{S}(\ell)+ \Lambda_2(\ell) / \beta$. That is, the term $\Phi\left(\mathcal{S}(\ell)\right)$ in (5) can be implicitly replaced by a denoiser prior. Multiple state-of-the-art denoisers can be flexibly used as a modular part under the PnP framework, such as TV regularization, block-matching and 3D filtering (BM3D), or convolutional neural network (CNN) based denoisers. FFDNet is a recent CNN-based image denoising method used in practical applications given its low computational load [37]. Compared to other existing deep-learning-based denoisers, FFDNet has multiple properties desirable for the current implementation: faster execution time, lower memory requirements, and the flexibility to handle a wide range of noise levels with a single network model [38]. Here, we used the pre-trained FFDNet denoiser for gray-scale images on the frontal slices of the tensor independently in each ADMM iteration.

$\mathcal {S}$ subproblem: With the previously obtained tensors $\mathcal {Q}(\ell +1),\mathcal {Z}(\ell +1)$ and the tensors $\Lambda _1(\ell )$ and $\Lambda _2(\ell )$, $\mathcal {S}(\ell +1)$ is obtained as

$$\underset{\mathcal{S}}{\textrm{argmin }}\mathcal{I} (\mathcal{S}(\ell))+ \frac{\beta}{2} ||\mathcal{S}(\ell)-\mathcal{Q}(\ell+1)+\frac{\Lambda_1(\ell)}{\beta}||_F^2 +\frac{\beta}{2}||\mathcal{S}(\ell)-\mathcal{Z}(\ell+1)+\frac{\Lambda_2(\ell)}{\beta}||_F^2.$$

If $\left (i,j,k\right ) \in \Omega$, the solution is $\mathcal {S}(\ell +1)=\mathcal {P}_{\Omega }(\mathcal {Y})$; otherwise $\mathcal {S}(\ell +1)=1/2(\mathcal {Q}(\ell +1)+\mathcal {Z}(\ell +1))$ $-1/(2\beta )$ $(\Lambda _1(\ell )+\Lambda _2(\ell ))$ [36].

Lagrange Multipliers Update: By definition, the updates to the Lagrange multiplier tensors are obtained as $\Lambda _1(\ell +1)=\Lambda _1(\ell )+\beta \left (\mathcal {S}(\ell +1)-\mathcal {Q}(\ell +1)\right )$ and $\Lambda _2(\ell +1)=\Lambda _2(\ell )+\beta \left (\mathcal {S}(\ell +1)-\mathcal {Z}(\ell +1)\right )$.

Figure 6(a) depicts the experimental projection data at $s_{400}=178.9^{\circ}$ of a Juglans hopeiensis walnut. To illustrate the importance of the deep prior, consider a StaticCodeCT system with a CA with 25% transmittance, that is, 75% of the measurements in the tensor are not observed. First, we solve the low-rank minimization problem considering only the TNN of the tensor, without regularization. The normalized absolute error of the recovered projection at $s_{400}$ is depicted in Fig. 6(b). On the other hand, Figs. 6(c-d) depict the normalized error of the recovered projections obtained using the previously described ADMM framework with two different denoisers to solve the $\mathcal{Z}$ subproblem: the BM3D [39] and the pre-trained FFDNet denoiser, respectively. As can be seen, the errors obtained with both regularizers are very similar, and they are significantly lower than those obtained using only the TNN. For the remainder of this paper, we only consider the pre-trained FFDNet denoiser, since it has a faster execution time and provides slightly better results. The computational complexity per iteration for an $n_1\times n_2\times n_3$ tensor is $O(n_3 \textrm{ min}(n_1^2n_2,n_1n_2^2)+n_1n_2n_3\log{n_3}+n_1n_2n_3n_ln_fn_k)$ using a CNN with $n_l$ layers, $n_f$ features, and $n_k$ kernel pixels [36]. Here, $n_1,n_2,$ and $n_3$ are in the order of $N$ and $n_ln_fn_k \sim N$; thus, the approximate complexity per iteration is $O(N^4)$.


Fig. 6. (a) 2D walnut CT projection at $s_{400}=178.9^{\circ }$. Recovered projection using (b) TNN without regularization, (c) TNN using BM3D [39] to solve the $\mathcal {Z}$ subproblem, and (d) TNN using the pre-trained FFDNet denoiser to solve the $\mathcal {Z}$ subproblem. Mean squared error (MSE) is calculated for each projection.


3.2 Initialization

To find a suitable initialization point for the tensor ADMM algorithm, the matrix completion problem can be independently formulated for the tensor frontal slices. In this setting, the problem is to recover the original set of measurements $\mathcal {S}^{p}$ from its degraded observations $\textbf {Q}^{(p)}=\mathcal {P}_{\Omega }\left (\mathcal {S}^p\right )+V=\mathcal {P}_{\Omega }\left (\mathcal {Y}^{p}\right )$, where $V$ is additive white Gaussian noise with standard deviation $\sigma$ and $p$ indexes the view-angle. Due to the ill-posedness of the problem, a regularization term $\boldsymbol {\Phi }\left (\mathcal {S}\right )$ is added to the matrix completion formulation as follows:

$$\hat{\mathcal{S}}^{p}=\underset{\mathcal{S}^{p}}{\textrm{argmin }}\frac{1}{2\sigma^2}\|\textbf{Q}^{(p)}-\mathcal{P}_{\Omega}\left(\mathcal{S}^{p}\right)\|_2^2+\lambda \boldsymbol{\Phi}\left(\mathcal{S}^{p}\right).$$

Motivated by the performance of CNNs on image denoising tasks, Zhang et al. proposed solving the matrix completion problem in (10) based on a CNN denoiser prior and the half quadratic splitting (HQS) algorithm [40]. The HQS algorithm introduces an auxiliary variable $\textbf {R}^{(p)}$ to decouple the data fidelity term and the regularization term, as shown below,

$$\hat{\mathcal{S}}^{p}=\underset{\mathcal{S}^{p},\textbf{R}^{(p)}}{\textrm{ argmin }}\frac{1}{2\sigma^2}\|\textbf{Q}^{(p)}-\mathcal{P}_{\Omega}\left(\mathcal{S}^{p}\right)\|_2^2 + \lambda \boldsymbol{\Phi}\left(\textbf{R}^{(p)}\right) + \frac{\delta}{2} \|\textbf{R}^{(p)}-\mathcal{S}^{p}\|_2^2,$$
where $\delta(\ell)$ is a penalty term that increases in value at each iteration. From this formulation, two independent subproblems can be solved iteratively to reconstruct $\mathcal{S}^{(p)}$:
$$ \hat{\mathcal{S}}^{p}(\ell+1)=\underset{\mathcal{S}^{(p)}}{\textrm{ argmin }}\|\textbf{Q}^{(p)}-\mathcal{P}_{\Omega}\left(\mathcal{S}^{p}\right)\|_2^2+ \delta(\ell) \sigma^2 \|\mathcal{S}^{(p)}-\textbf{R}^{(p)}(\ell)\|_2^2, $$
$$ \hat{\textbf{R}}^{(p)}(\ell+1)=\underset{\textbf{R}^{(p)}}{\textrm{ argmin }}\frac{\delta(\ell)}{2} \|\textbf{R}^{(p)}-\mathcal{S}^{(p)}(\ell+1)\|_2^2 + \lambda \boldsymbol{\Phi}\left(\textbf{R}^{(p)}\right). $$

The first subproblem, (12), is a regularized quadratic least squares problem with the following closed-form solution: $\hat{\mathcal{S}}^{p}(\ell+1)=\left(\mathcal{P}_{\Omega}\left(\textbf{Q}^{(p)}\right)+\delta(\ell)\sigma^2 \textbf{R}^{(p)}(\ell)\right)\oslash\left(\mathcal{P}_{\Omega}\left(\textbf{I}\right)+\delta(\ell)\sigma^2 \right)$, where $\textbf{I}$ is the identity matrix and $\oslash$ denotes element-wise division [41]. The second subproblem, (13), corresponds to denoising the estimated measurements $\hat{\mathcal{S}}^{p}(\ell+1)$ with a CNN-based Gaussian denoiser with noise level $\sqrt{1/\delta(\ell)}$. In this work, we calculate the initial value of $\delta(\ell)$ using cross-validation, and the architecture proposed by Zhang et al. in [40] is used to solve (13); details on the iterative calculation of the parameter can be found in [41]. Finally, the complete algorithm used in this manuscript to recover the 3D object is summarized in Algorithm 1. The maximum iterations parameter for the HQS algorithm, maxIterHQS, was tuned manually, and the maximum iterations parameter for the ADMM algorithm, maxIterADMM, was set to 50 to maintain a reasonable run-time.
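
The closed-form solution of (12) is inexpensive to verify numerically; in the toy sketch below, `R` stands in for a (crude) denoiser output and all sizes are hypothetical:

```python
import numpy as np

# Closed-form HQS data subproblem (12) for one view-angle slice.
# `mask` plays the role of P_Omega; noiseless toy observations.
rng = np.random.default_rng(7)
M1, M2 = 8, 8
S_true = rng.random((M1, M2))
mask = rng.random((M1, M2)) < 0.25                  # 25% transmittance CA
Q = np.where(mask, S_true, 0.0)                     # observed measurements

sigma, delta = 0.01, 50.0
R = np.full((M1, M2), S_true.mean())                # crude denoiser estimate
S_hat = (Q + delta * sigma**2 * R) / (mask.astype(float) + delta * sigma**2)

# Unobserved entries equal R; observed ones are pulled toward the data.
assert np.allclose(S_hat[~mask], R[~mask])
assert np.allclose(S_hat[mask], S_true[mask], atol=0.01)
```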

4. Results

4.1 Cone beam micro CT data

We used a Rigaku GX 130 CT scanner, located in the Advanced Material Characterization Laboratory at the University of Delaware, to obtain projection data of a Juglans hopeiensis walnut. This micro CT scanner has a microfocus 130 kV/300 $\mu$A X-ray source and a $2352 \times 2944$ flat panel detector (FPD) with a 49.5 $\mu$m pixel size. The distances from the X-ray source to the center of rotation and to the FPD are 120 mm and 224 mm, respectively. Lastly, the sample remains stationary while the gantry rotates in a circular trajectory. For this experiment, we operated the X-ray tube at 130 kV and 61 $\mu$A, and we used aluminum and copper filters of 5 mm and 0.06 mm thickness, respectively. Projections were obtained at $P=803$ view-angles uniformly distributed around the object. Only the center $1842 \times 1890$ region of the FPD was used, since the projections of the walnut were confined to this area, and pixel binning was performed to attain $614 \times 630$ projections. Figures 7(a-c) depict the data structure of the mid frontal, lateral, and horizontal slices of the measurements’ tensor for the described full-dose dataset. The first simulation experiment compares three low-dose systems: SparseCT [16,18] with a moving collimator, StaticCodeCT with a random CA, and sparse view angles. Without loss of generality, the collimators’ pixel elements and the CA elements are assumed to have a one-to-one correspondence with the detector. Additionally, for a fair comparison, all systems are fixed to reduce the radiation dosage by 75%. For the SparseCT system, the collimator was formed by 13 open columns and 39 blocked columns and moved linearly at 30 Hz. We emulated the SparseCT projections by selecting the measurements corresponding to the open columns on the collimator at a particular point in time. As depicted in Figs.
7(d-f), the multi-slit collimator structure appears in the frontal slices of the measurement tensor, and the horizontal slices show a “zigzag” sampling pattern due to the linear motion of the collimator. The sampling in SparseCT produces close-to-uniform SD maps. Thus, $\ell _1$-norm-based iterative regularization algorithms are typically employed to solve (1) for the SparseCT system. Next, consider the tensor structure created by StaticCodeCT with a random static CA of 25% transmittance. We emulated the StaticCodeCT projections by setting to zero the FPD measurements corresponding to the locations of the blocking elements on the CA. As depicted in Figs. 7(g-i), the frontal slices of the tensor exhibit the same random pattern as the CA, and the horizontal and lateral slices show sparse column sampling. Finally, in the sparse view angles’ case, the projections are uniformly subsampled to use a quarter of the available data, i.e., the number of view angles is reduced to $P=201$. Given that projections are taken at a subsampled set of angles, the tensor contains horizontal lines of missing data, as shown in Figs. 7(j-l). The frontal slices of the tensor alternate between measuring the object and a zero matrix at the view angles that are not observed.
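The three sampling schemes can be emulated as binary masks applied to the measurement tensor. The following is an idealized sketch (function names, dimensions, and the per-view shift are illustrative, not the paper's implementation):

```python
import numpy as np

def sparsect_mask(views, rows, cols, open_cols=13, block_cols=39, shift=1):
    """Idealized moving multi-slit collimator: groups of open/blocked
    detector columns that translate linearly from view to view."""
    period = open_cols + block_cols
    m = np.zeros((views, rows, cols), dtype=bool)
    for p in range(views):
        m[p] = ((np.arange(cols) + p * shift) % period) < open_cols
    return m

def staticcode_mask(views, rows, cols, transmittance=0.25, seed=0):
    """Single static random coded aperture: the same random blocking
    pattern is applied to the detector at every view angle."""
    ca = np.random.default_rng(seed).random((rows, cols)) < transmittance
    return np.broadcast_to(ca, (views, rows, cols)).copy()

def sparseview_mask(views, rows, cols, keep_every=4):
    """Sparse view angles: entire projections are kept or dropped."""
    m = np.zeros((views, rows, cols), dtype=bool)
    m[::keep_every] = True
    return m

# each scheme measures roughly 25% of the full-dose data
m_sparsect = sparsect_mask(8, 16, 52)
m_static = staticcode_mask(8, 16, 52)
m_views = sparseview_mask(8, 16, 52)
```

Multiplying the full-dose tensor by a mask and zeroing the rest reproduces the frontal, lateral, and horizontal slice patterns shown in Fig. 7.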

Fig. 7. Frontal, horizontal, and lateral tensor slices for (a-c) no coding, (d-f) SparseCT, (g-i) StaticCodeCT, (j-l) Sparse view angles.

In practical applications, evaluating the image quality of reconstructions obtained from real data is not trivial, since it depends on the structural complexity of the object under inspection and the ground truth is generally unavailable [42]. Here, we use visualization-based evaluation and quantitative metrics. Specifically, the normalized root mean square distance (NRMSD) as defined in [43] and the structural similarity index (SSIM) are used to assess the reconstruction quality. If the reconstruction is approximately equal to the original image, then the NRMSD$\sim$0% and the SSIM$\sim$1. The FDK cone-beam reconstruction algorithm from the ASTRA tomography toolbox [44] was used to reconstruct the $512\times 512\times 512$ FDK-reference volume of the walnut with a voxel size of 90 $\mu$m using all the $P=803$ projections. Figure 8(a) depicts its central axial slice. As previously mentioned, CS-based algorithms are typically used for the SparseCT system reconstructions. In this work, we used 3D-TV regularization and the algorithm in [21]. Figure 8(b) depicts the central axial slice of the SparseCT reconstruction. The enlarged section of the highlighted portion of the image, shown in Fig. 8(b1), points to good reconstruction quality, although some smoothing of the fine features can be observed compared to the enlarged section of the reference volume depicted in Fig. 8(a1). To demonstrate that the StaticCodeCT sampling scheme results in multiple artifacts when iterative optimization algorithms are used, Fig. 8(c) depicts the reconstruction obtained using the algorithm in [21] to reconstruct the object using the StaticCodeCT system. Note that the reconstruction contains severe circular artifacts. Next, we used our proposed tensorial approach. That is, the matrix completion framework described in Section 3.2 is used to solve (10) for each view-angle. 
Then, these estimated projections are stacked vertically to form the initial guess for the measurements’ tensor $\mathcal {S}$. Subsequently, the ADMM algorithm is used to solve (6) and obtain the final estimate of the measurements’ tensor. Figure 4(e) depicts the estimated projection at $s_{635}$ using the proposed approach. As evidenced by the normalized absolute error of the recovered projection in Fig. 4(f-f1), the proposed method accurately estimates the missing measurements. At this point, we used the FDK algorithm to reconstruct the 3D object, given that a set of cone-beam CT measurements equivalent to a full-dose scan is available. The central axial slice of this reconstruction is depicted in Fig. 8(d). Note that significantly higher fidelity is attained and that the NRMSD and SSIM are improved, showing that the proposed system can obtain smooth reconstructions using conventional and fast CT algorithms. Figure 8(e) depicts the reconstruction obtained using sparse view-angles and the FDK algorithm for comparison. To demonstrate the importance of the deep prior regularization in (5) and the chosen tensor-low-rank definition, Fig. 8(f) depicts the reconstruction obtained using a different tensor completion algorithm to estimate the missing data. Here, we used the smooth PARAFAC tensor completion algorithm with quadratic variation minimization, which considers the low-rankness of the tensor and imposes a smoothness constraint [30]. Note that multiple artifacts appear in the reconstruction since the projection data is not correctly estimated. Figure 8(g) depicts the reconstruction obtained when only the algorithm in [36] is used to estimate the missing data, together with the FDK algorithm for reconstruction. Note that the procedure in [36] provides a reconstruction with a quality close to Fig. 8(d). However, some of the ring artifacts remain in the axial slices and some line artifacts remain in the sagittal slices, as evidenced in Fig. 8(g1) and Fig.
9(g1), respectively. It should be noted that the sparse view angles reconstruction presented in Fig. 8(e) can be enhanced by using an iterative algorithm with regularization, as shown in Fig. 8(h). However, for a comparable number of iterations of the iterative algorithms, the proposed approach is still superior, as demonstrated by the quantitative and qualitative results. For all algorithms, the hyper-parameters were tuned manually to obtain the best NRMSD and SSIM in each case.
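For reference, the NRMSD metric used throughout this section can be computed per slice as follows. This is a sketch using one common definition consistent with Herman [43] (RMS error normalized by the RMS deviation of the reference from its mean); the exact normalization in the paper's implementation may differ:

```python
import numpy as np

def nrmsd(recon, ref):
    """Normalized root mean square distance, in percent. Identical
    images give 0%; larger values indicate a worse reconstruction."""
    num = np.sum((recon - ref) ** 2)
    den = np.sum((ref - ref.mean()) ** 2)
    return 100.0 * np.sqrt(num / den)

# hypothetical reference slice and a perturbed reconstruction
ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noisy = ref + 0.05 * np.sin(np.arange(64).reshape(8, 8))
```

Because the metric is normalized by the reference contrast, it is comparable across slices with different intensity ranges.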

Fig. 8. Juglans hopeiensis central axial slice reconstructions for: (a) full-dose CT data and FDK algorithm. (b) SparseCT and algorithm in [21]. (c) StaticCodeCT and algorithm in [21]. (d) StaticCodeCT and tensorial approach. (e) Sparse view angles and FDK. (f) StaticCodeCT and algorithm in [30]. (g) StaticCodeCT using [36]. (h) Sparse view angles and algorithm in [21]. Depicted NRMSD and SSIM were calculated per slice. (a$_1$-h$_1$) Zoom of the highlighted regions in (a-h). StaticCodeCT reconstructions present higher SSIM and lower NRMSD compared to the other methods.

Fig. 9. Juglans hopeiensis central sagittal slice reconstructions for: (a) full-dose CT data and FDK algorithm. (b) SparseCT and algorithm in [21]. (c) StaticCodeCT and algorithm in [21]. (d) StaticCodeCT and tensorial approach. (e) Sparse view angles and FDK. (f) StaticCodeCT and algorithm in [30]. (g) StaticCodeCT using [36]. (h) Sparse view angles and algorithm in [21]. Depicted NRMSD and SSIM were calculated per slice. (a$_1$-h$_1$) Zoom of the highlighted regions in (a-h). StaticCodeCT reconstructions present higher SSIM and lower NRMSD compared to the other methods.

To demonstrate the 3D imaging capabilities of all methods, Fig. 9 depicts the sagittal central slice reconstructions of the walnut for all cases. As depicted in the enlarged highlighted regions, using TV regularization to solve the inverse problem for the StaticCodeCT system produces numerous artifacts compared with the proposed reconstruction. In the axial slices, the artifacts follow concentric circles, while in the sagittal slices the artifacts are confined to a horizontal band in the middle of the slice. Additionally, the NRMSD obtained using the proposed method is approximately 5% and 3% lower than the NRMSD obtained using a conventional CS algorithm and the SparseCT system, respectively. The volume SSIM values attained with SparseCT and StaticCodeCT were 0.8 and 0.96, respectively. This example illustrates that StaticCodeCT with the proposed reconstruction method leads to better performance when reconstructing the object from highly under-sampled data.

4.2 Low-dose Mayo CT database

The proposed algorithm was evaluated in simulations emulating the sampling effect of StaticCodeCT on a helical CT medical scanner. Namely, we used data from the “Low Dose CT Image and Projection Data” database (LDCT-and-Projection-data) [45]. The data corresponds to a helical sinogram of a chest examination from patient 120. Analogous to the procedure used with the circular cone-beam data presented in the previous section, the 3D tensor is built with the projections of the helical scan and has dimensions 736 $\times$ 64 $\times$ 12302 (detector channels $\times$ detector rows $\times$ view angles). The ground truth is set as the reconstruction obtained using all the projection data available from the dataset. The size of the reconstructed volume in each case is 512 $\times$ 512 $\times$ 344 with 0.74 mm $\times$ 0.74 mm $\times$ 1 mm resolution. In this experiment, we reconstructed the volume in Python using NumPy and the operator discretization library (ODL) [46].

We emulated StaticCodeCT acquisitions using a random static CA with 25% transmittance, i.e., a quarter of the original radiation dosage. As in the previous experiments, we assumed a one-to-one correspondence between the elements of the CA and the elements of the detector. Then we used the method described in Section 3 to estimate the missing projection data. For comparison, we also reconstructed the data cube using the 1/4th-dose tube-current projection data available in [45] for the described dataset. We reconstructed the volume in each case with the same algorithm as the full-dose scan. Then, we calculated the NRMSD and SSIM between the low-dose image volumes and the reference image volume for the central axial slice as well as for the $200^{th}$ coronal slice. The results are shown in Fig. 10. The volume SSIM values attained with the 1/4th tube current and StaticCodeCT were 0.78 and 0.97, respectively. Even though both low-dose methods show image degradation in their reconstructions, StaticCodeCT results in less noisy reconstructions, higher SSIM, and lower NRMSD for an equivalent radiation dosage.

Fig. 10. Reconstructions of the $172^{th}$ axial slice (top) and $200^{th}$ coronal slice (bottom) for: (a-b) spiral CT full-dose reference scan; (c-d) StaticCodeCT with a CA of 25% transmittance, and (e-f) 1/4th tube current reduction. Depicted NRMSD and SSIM were calculated per slice. (a$_1$-f$_1$) Zoom of the highlighted regions in (a-f). StaticCodeCT reconstructions show up to 10% gains in the NRMSD compared to results attained by the industry standard of lowering the tube current.

4.3 Additional quantitative indicators

As additional quantitative indicators of the proposed reconstruction quality, we calculate the peak signal-to-noise ratio (PSNR) of the axial slices of the reconstructed volumes in Sections 4.1 and 4.2. The PSNR is calculated as $\textrm {PSNR}=10\log _{10}\left (R^2/\textrm {MSE}\right )$, where $R$ is the maximum value in the slice and the MSE between two $N \times N$ slices $I_{1}$ and $I_2$ is $\textrm {MSE}=\sum _{x,y}[I_1(x,y)-I_2(x,y)]^2/N^2$. Figure 11 depicts the quantitative results. Note that the PSNR is also higher for the proposed StaticCodeCT system compared with the other approaches. Furthermore, it can be seen that the top performers based on the PSNR are StaticCodeCT with the proposed approach and StaticCodeCT using only the tensor completion approach.
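Per slice, the PSNR computation above amounts to the following short sketch (array names are illustrative; the formula follows the definition in the text):

```python
import numpy as np

def psnr(ref_slice, test_slice):
    """PSNR = 10*log10(R^2 / MSE), with R the maximum value in the
    reference slice and MSE the mean squared error over all pixels."""
    mse = np.mean((ref_slice - test_slice) ** 2)
    R = ref_slice.max()
    return 10.0 * np.log10(R**2 / mse)

a = np.ones((8, 8))
# a uniform error of 0.1 on unit-max data gives MSE = 0.01, i.e. 20 dB
```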

A third experiment using the micro CT Rigaku system was performed to evaluate the effect of the proposed reconstruction framework on the modulation transfer function (MTF) of the system. In this scenario, the distances from the X-ray source to the center of rotation and the FPD remain 120 mm and 224 mm, respectively. We operated the X-ray tube at 130 kV and 61 $\mu$A, and we used aluminum and copper filters of 5 mm and 0.06 mm thickness, respectively. Projections were obtained at $P=459$ view-angles uniformly distributed around the object. Only the center $2272 \times 2864$ region of the FPD was used, and pixel binning was performed to attain $812 \times 1024$ projections. The reconstruction volume was set to $512 \times 512 \times 512$ with a pixel size of 0.144 $\mu$m. Figure 12(a) depicts an axial slice of the reconstructed micro CT phantom used in this experiment. In particular, this slice contains a slanted edge useful for the MTF calculation, as described in [47]. Here, we averaged 10 slices where the slanted edge is reconstructed and calculated the MTF for the system without a CA and for a system with a CA of 25% transmittance using the proposed StaticCodeCT framework. Figure 12(b) depicts both MTF curves. As evidenced, the MTF obtained with StaticCodeCT closely resembles the original system MTF.
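A simplified 1D version of the slanted-edge MTF estimate can be sketched as follows. The full procedure in [47] also projects the 2D edge onto an oversampled grid and applies windowing, which is omitted here; the logistic edge model is purely illustrative:

```python
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    """Simplified slanted-edge MTF: differentiate the edge spread
    function (ESF) to obtain the line spread function (LSF), then take
    the magnitude of its Fourier transform, normalized to unity at
    zero frequency."""
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(esf), d=dx)
    return freqs, mtf / mtf[0]

# illustration: a sharp edge keeps contrast at high frequencies,
# while a blurred edge rolls off faster
x = np.linspace(-5, 5, 256)
dx = x[1] - x[0]
esf_sharp = (x > 0).astype(float)
esf_blur = 1.0 / (1.0 + np.exp(-x / 0.8))  # logistic-blurred edge
freqs, m_sharp = mtf_from_esf(esf_sharp, dx)
_, m_blur = mtf_from_esf(esf_blur, dx)
```

A sharper edge yields a flatter MTF curve; the faster roll-off of the blurred edge is exactly the resolution loss this experiment checks for after coded sampling and inpainting.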

Fig. 11. Axial PSNR of the reconstructions in Section 4.1 and Section 4.2. In Section 4.1 the top performers are StaticCodeCT with the proposed approach and using the algorithm in [36].

Fig. 12. (a) MicroCT phantom axial slice [47]. (b) MTF of the CT system.

5. Conclusions and future work

We accurately reconstructed a 3D volume using StaticCodeCT with 75% less radiation dosage, using real projection data from a micro-CT cone-beam system and a medical helical CT system. Given the resulting non-uniformity of the sensing process, this work presented a tensor completion framework to estimate a complete set of cone-beam projections that can be used with conventional reconstruction algorithms such as FDK and SIRT. Unlike other methods, the proposed approach considers the low-rankness of the measurements rather than regularizing the reconstructed object. Furthermore, to improve the run-time of the algorithm, an initialization using matrix completion with a deep learning prior is applied to the axial slices to speed up the tensor completion approach used to inpaint the tensor of measurements. The performance of the proposed methods was compared to state-of-the-art algorithms and alternative low-dose CT systems. As shown in Section 4.1, the proposed algorithm for StaticCodeCT leads to reconstructions with lower NRMSD than those obtained using CS reconstruction algorithms, which present multiple ring artifacts in the axial slices given that a single-static CA is analogous to having damaged pixels on the FPD. Additionally, results with alternative inpainting algorithms present numerous artifacts compared to the proposed tensorial approach with deep prior regularization. Moreover, compared to the state-of-the-art tube current reduction, we attained a 10% improvement in the NRMSD when using StaticCodeCT. Finally, it is worth noting that the proposed low-dose framework allows for further dose reduction by combining structured illumination with other low-dose architectures such as tube current reduction and sparse view angles. In these settings, the proposed approach can be used to estimate the missing measurements at the FPD as previously described, and then CS algorithms can be used to solve the resulting problem.

Regarding the proposed algorithm's complexity, each iteration of the ADMM algorithm for tensor completion requires approximately $O(N^4)$ computations, and fast implementations of the FDK algorithm require approximately $O(N^3\log N)$ [48]. However, the methods used to solve (8) are well-suited for parallel computation on a GPU. Thus, the runtime is dominated by the algorithm used to solve (7), which has complexity $O(N^3\log N)$. On the other hand, the TV regularization algorithm using the unknown-parameter Nesterov (UPN) algorithm described in [21] requires approximately $O(N^4)$ computations per iteration due to the back-projection operation [49]. Thus, this work provides a fast alternative for low-dose CT reconstruction.

Scattering was not considered during reconstruction. Scatter-corrected data would increase the consistency of the imaging system model with the physics of image formation and thus would help enhance the image quality of the reconstructed CT images. However, compared to conventional CT, using CAs in front of the X-ray source presents a potential advantage, since an efficient scatter estimation and correction can be calculated from the prior knowledge of the CA [50]. Furthermore, as demonstrated in our previous work [51–54], random CAs are not optimal in CAXCT. The experimental demonstration of StaticCodeCT requires elaboration on the optimal CAs, the effects of penumbra and geometric magnification on the proposed system, and scattering. As such, these topics are beyond the scope of this paper and will be addressed in a future manuscript.

Funding

University of Delaware (Blue Hen Proof of Concept Grant, UNIDEL); National Science Foundation (CIF 1717578).

Acknowledgements

We thank Dr. McCollough et al. for making available the spiral CT data used in Section 4.2, which was collected under grants EB017095 and EB017185 [45].

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in Section 4.1 will be available in [55] in October 2021; nonetheless, they are available from the corresponding author upon reasonable request. Data underlying the results presented in Section 4.2 are available in [45].

References

1. M. Prokop and M. Galanski, Spiral and Multislice Computed Tomography of the Body (Thieme, 2003).

2. G. Wang, H. Yu, and B. De Man, “An outlook on x-ray CT research and development,” Med. Phys. 35(3), 1051–1064 (2008). [CrossRef]  

3. G. N. Hounsfield, “Computerized transverse axial scanning (tomography): Part 1. description of system,” Br. J. Radiol. 46(552), 1016–1022 (1973). [CrossRef]  

4. X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Probl. 25(12), 123009 (2009). [CrossRef]  

5. W. A. Kalender, “Dose in x-ray computed tomography,” Phys. Med. Biol. 59(3), R129–R150 (2014). [CrossRef]  

6. D. J. Brenner and E. J. Hall, “Computed tomography - an increasing source of radiation exposure,” N. Engl. J. Med. 357(22), 2277–2284 (2007). [CrossRef]  

7. J. Z. Liang, P. J. La Riviere, G. El Fakhri, S. J. Glick, and J. Siewerdsen, “Guest editorial low-dose ct: What has been done, and what challenges remain?” IEEE Trans. Med. Imag. 36(12), 2409–2416 (2017). [CrossRef]  

8. S. S. Hsieh and N. J. Pelc, “The feasibility of a piecewise-linear dynamic bowtie filter,” Med. Phys. 40(3), 031910 (2013). [CrossRef]  

9. S. Ouadah, M. Jacobson, J. W. Stayman, T. Ehtiati, C. Weiss, and J. H. Siewerdsen, “Task-driven orbit design and implementation on a robotic C-arm system for cone-beam CT,” Proc. SPIE 10132, 105–111 (2017). [CrossRef]  

10. D. Wu, K. Kim, G. El Fakhri, and Q. Li, “Iterative low-dose CT reconstruction with priors trained by artificial neural network,” IEEE Trans. Med. Imag. 36(12), 2479–2486 (2017). [CrossRef]  

11. E. Sidky, C.-M. Kao, and X. Pan, “Accurate image reconstruction from few-view limited-angle data in divergent-beam ct,” J. X-Ray Sci. Technol. 14, 119–139 (2006).

12. J. S. Jorgensen, E. Y. Sidky, and X. Pan, “Quantifying admissible undersampling for sparsity-exploiting iterative image reconstruction in X-ray CT,” IEEE Trans. Med. Imag. 32(2), 460–473 (2013). [CrossRef]  

13. K. Hamalainen, A. Kallonen, V. Kolehmainen, and M. Lassas, “Sparse tomography,” J. Comput. Methods Sci. Eng. 35(3), B644–B665 (2013). [CrossRef]  

14. K. Hamalainen, L. Harhanen, A. Kallonen, A. Kujanpaa, E. Niemi, and S. Siltanen, “Tomographic x-ray data of a walnut,” arXiv:1502.04064v1 (2015).

15. A. P. Cuadros and G. R. Arce, “Coded aperture optimization in compressive X-ray tomography: a gradient descent approach,” Opt. Express 25(20), 23833–23849 (2017). [CrossRef]  

16. M. J. Muckley, B. Chen, T. Vahle, T. O’Donnell, F. Knoll, A. D. Sodickson, D. K. Sodickson, and R. Otazo, “Image reconstruction for interrupted-beam x-ray CT on diagnostic clinical scanners,” Phys. Med. Biol. 64(15), 155007 (2019). [CrossRef]  

17. D. J. Brady, A. Mrozack, K. MacCabe, and P. Llull, “Compressive tomography,” Adv. Opt. Photonics 7(4), 756–813 (2015). [CrossRef]  

18. T. Lee, C. Lee, J. Baek, and S. Cho, “Moving beam-blocker-based low-dose cone-beam ct,” IEEE Trans. Nucl. Sci. 63(5), 2540–2549 (2016). [CrossRef]  

19. M. T. McCann and M. Unser, “Biomedical image reconstruction: From the foundations to deep neural networks,” Found. Trends Signal Process. 13(3), 283–357 (2019). [CrossRef]  

20. Y. Kaganovsky, D. Li, A. Holmgren, H. Jeon, K. P. MacCabe, D. G. Politte, J. A. O’Sullivan, L. Carin, and D. J. Brady, “Compressed sampling strategies for tomography,” J. Opt. Soc. Am. A 31(7), 1369–1394 (2014). [CrossRef]  

21. T. Jensen, J. Jørgensen, P. Hansen, and S. Jensen, “Implementation of an optimal first-order method for strongly convex total variation regularization,” BIT Numer. Math. 52(2), 329–356 (2012). [CrossRef]  

22. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Topics Signal Process. 1(4), 586–597 (2007). [CrossRef]  

23. S. Abbas, T. Lee, S. Shin, R. Lee, and S. Cho, “Effects of sparse sampling schemes on image quality in low-dose ct,” Med. Phys. 40(11), 111915 (2013). [CrossRef]  

24. E. J. Candes and B. Recht, “Exact low-rank matrix completion via convex optimization,” in Proc. 46th Allerton Conference on Communication, Control, and Computing, (2008), pp. 806–812.

25. J.-F. Cai, E. J. Candes, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. Optim. 20(4), 1956–1982 (2010). [CrossRef]  

26. T. Kolda and B. Bader, “Tensor decompositions and applications,” SIAM Rev. 51(3), 455–500 (2009). [CrossRef]  

27. Q. Zhao, L. Zhang, and A. Cichocki, “Bayesian CP factorization of incomplete tensors with automatic rank determination,” IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1751–1763 (2015). [CrossRef]  

28. Z. Zhang, G. Ely, S. Aeron, N. Hao, and M. Kilmer, “Novel methods for multilinear data completion and de-noising based on tensor-svd,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 3842–3849.

29. J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013). [CrossRef]  

30. T. Yokota, Q. Zhao, and A. Cichocki, “Smooth parafac decomposition for tensor completion,” IEEE Trans. on Signal Processing 64(20), 5423–5436 (2016). [CrossRef]  

31. Y.-F. Li, K. Shang, and Z.-H. Huang, “Low Tucker rank tensor recovery via ADMM based on exact and inexact iteratively reweighted algorithms,” J. Comput. Appl. Math. 331, 64–81 (2018). [CrossRef]  

32. N. D. Sidiropoulos and A. Kyrillidis, “Multi-way compressed sensing for sparse low-rank tensors,” IEEE Signal Processing Letters 19(11), 757–760 (2012). [CrossRef]  

33. C. F. Caiafa and A. Cichocki, “Multidimensional compressed sensing and their applications,” WIREs Data Min. Knowl. Discov. 3(6), 355–380 (2013). [CrossRef]  

34. Y. Chen, C. Hsu, and H. M. Liao, “Simultaneous tensor decomposition and completion using factor priors,” IEEE Trans. Pattern Anal. Mach. Intell. 36(3), 577–591 (2014). [CrossRef]  

35. M. E. Kilmer, K. Braman, N. Hao, and R. C. Hoover, “Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging,” SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013). [CrossRef]  

36. X.-L. Zhao, W.-H. Xu, T.-X. Jiang, Y. Wang, and M. K. Ng, “Deep plug-and-play prior for low-rank tensor completion,” Neurocomputing 400, 137–149 (2020). [CrossRef]  

37. K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn-based image denoising,” IEEE Trans. Image Process. 27(9), 4608–4622 (2018). [CrossRef]  

38. L. Fan, F. Zhang, H. Fan, and C. Zhang, “Brief review of image denoising techniques,” Vis. Comput. Ind. Biomed. Art 2(1), 7 (2019). [CrossRef]  

39. Y. Mäkinen, L. Azzari, and A. Foi, “Exact transform-domain noise variance for collaborative filtering of stationary correlated noise,” in 2019 IEEE International Conference on Image Processing (ICIP), (2019), pp. 185–189.

40. K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for image restoration,” in Proc. IEEE. Comput. Soc. Conf. Comput. Vis. Pattern Recognit., (2017), pp. 3929–3938.

41. W. Zuo, K. Zhang, and L. Zhang, Convolutional Neural Networks for Image Denoising and Restoration (Springer International Publishing, 2018), pp. 93–123.

42. J. Bian, J. H. Siewerdsen, X. Han, E. Y. Sidky, J. L. Prince, C. A. Pelizzari, and X. Pan, “Evaluation of sparse-view reconstruction from flat-panel-detector cone-beam CT,” Phys. Med. Biol. 55(22), 6575–6599 (2010). [CrossRef]  

43. G. Herman, Fundamentals of computerized tomography: image reconstruction from projections (Springer Verlag, 2009).

44. W. van Aarle, W. J. Palenstijn, J. Cant, E. Janssens, F. Bleichrodt, A. Dabravolski, J. D. Beenhouwer, K. J. Batenburg, and J. Sijbers, “Fast and flexible x-ray tomography using the astra toolbox,” Opt. Express 24(22), 25129–25147 (2016). [CrossRef]  

45. C. McCollough, B. Chen, D. Holmes III, X. Duan, Z. Yu, L. Yu, S. Leng, and J. Fletcher, “Data from low dose CT image and projection data,” The Cancer Imaging Archive, 2020, https://doi.org/10.7937/9npb-2637.

46. J. Adler, H. Kohr, and O. Oktem, “Operator discretization library (odl),” (2021). https://github.com/odlgroup/odl.

47. L. Y. Du, J. Umoh, H. N. Nikolov, S. I. Pollmann, T.-Y. Lee, and D. W. Holdsworth, “A quality assurance phantom for the performance evaluation of volumetric micro-CT systems,” Phys. Med. Biol. 52(23), 7087–7108 (2007). [CrossRef]  

48. S. Xiao, Y. Bresler, and D. C. Munson, “Fast feldkamp algorithm for cone-beam computer tomography,” in Proc. of IEEE ICIP, vol. 2 (2003), pp. II–819.

49. J. Dahl, P. Hansen, S. Jensen, and T. Jensen, “Algorithms and software for total variation image reconstruction via first-order methods,” Numer. Algorithms 53(1), 67–92 (2010). [CrossRef]  

50. L. Ren, F.-F. Yin, I. J. Chetty, D. A. Jaffray, and J.-Y. Jin, “Feasibility study of a synchronized-moving-grid (smog) system to improve image quality in cone-beam computed tomography (cbct),” Med. Phys. 39(8), 5099–5110 (2012). [CrossRef]  

51. A. P. Cuadros, C. Peitsch, H. Arguello, and G. R. Arce, “Coded aperture optimization for compressive x-ray tomosynthesis,” Opt. Express 23(25), 32788–32802 (2015). [CrossRef]  

52. X. Ma, Q. Zhao, A. Cuadros, T. Mao, and G. R. Arce, “Source and coded aperture joint optimization for compressive x-ray tomosynthesis,” Opt. Express 27(5), 6640–6659 (2019). [CrossRef]  

53. T. Mao, A. P. Cuadros, X. Ma, W. He, Q. Chen, and G. R. Arce, “Fast optimization of coded apertures in x-ray computed tomography,” Opt. Express 26(19), 24461–24478 (2018). [CrossRef]  

54. T. Mao, A. P. Cuadros, X. Ma, W. He, Q. Chen, and G. R. Arce, “Coded aperture optimization in x-ray tomography via sparse principal component analysis,” IEEE Trans. Comput. Imag. 6, 73–86 (2020). [CrossRef]  

55. A. Cuadros, “Sparse sampling in x-ray computed tomography via spatial and spectral coded illumination,” Ph.D. thesis, University of Delaware (2020).




Figures (12)

Fig. 1. Sparse view CT example. (a) 128 $\times$ 128 walnut phantom [14]. CT reconstruction using an iterative algorithm and total variation regularization for (b) 50%, (c) 25%, and (d) 10% of the full-sampling view angles. Note that the reconstruction quality decreases substantially when using less than 25% of the full data.
Fig. 2. CAXCT system. A different CA $T_p$ is placed in front of the rotating source at each location $s_p$. StaticCodeCT uses a single static code $T_{p}$ for all $p$.
Fig. 3. 2D SD distributions of the volume’s central axial (top) and sagittal (bottom) slices when using: (a-b) no CA, (c-d) StaticCodeCT, and (e-f) CAXCT. Note that the concentric rings of low and high density in the single-CA case correspond to the CA pattern.
Fig. 4. 2D walnut CT projection at $s_{635}=284.2^{\circ }$ for (a) no CA and (b) a CA with 25% transmittance. (c) Recovered projection using matrix completion [25] and (d) its normalized absolute error. (e) Recovered projection using the proposed approach and (f) its normalized absolute error. (a$_1$-f$_1$) Zoom of the highlighted regions in (a-f). Note that exploiting only the low-rank structure of the individual 2D projections is not enough to estimate the missing data.
Fig. 5. StaticCodeCT tensor representation of experimental projection data of a Juglans hopeiensis walnut.
Fig. 6. (a) 2D walnut CT projection at $s_{400}=178.9^{\circ }$. Recovered projection using (b) TNN without regularization, (c) TNN using BM3D [39] to solve the $\mathcal {Z}$ subproblem, and (d) TNN using the pre-trained FFDNet denoiser to solve the $\mathcal {Z}$ subproblem. The mean squared error (MSE) is reported for each projection.
Fig. 7. Frontal, horizontal, and lateral tensor slices for (a-c) no coding, (d-f) SparseCT, (g-i) StaticCodeCT, and (j-l) sparse view angles.
Fig. 8. Juglans hopeiensis central axial slice reconstructions for: (a) full-dose CT data and the FDK algorithm; (b) SparseCT and the algorithm in [21]; (c) StaticCodeCT and the algorithm in [21]; (d) StaticCodeCT and the tensorial approach; (e) sparse view angles and FDK; (f) StaticCodeCT and the algorithm in [30]; (g) StaticCodeCT using [36]; (h) sparse view angles and the algorithm in [21]. The depicted NRMSD and SSIM were calculated per slice. (a$_1$-h$_1$) Zoom of the highlighted regions in (a-h). The StaticCodeCT reconstructions exhibit higher SSIM and lower NRMSD than the other methods.
Fig. 9. Juglans hopeiensis central sagittal slice reconstructions for: (a) full-dose CT data and the FDK algorithm; (b) SparseCT and the algorithm in [21]; (c) StaticCodeCT and the algorithm in [21]; (d) StaticCodeCT and the tensorial approach; (e) sparse view angles and FDK; (f) StaticCodeCT and the algorithm in [30]; (g) StaticCodeCT using [36]; (h) sparse view angles and the algorithm in [21]. The depicted NRMSD and SSIM were calculated per slice. (a$_1$-h$_1$) Zoom of the highlighted regions in (a-h). The StaticCodeCT reconstructions exhibit higher SSIM and lower NRMSD than the other methods.
Fig. 10. Reconstructions of the $172^{nd}$ axial slice (top) and $200^{th}$ coronal slice (bottom) for: (a-b) the spiral CT full-dose reference scan, (c-d) StaticCodeCT with a CA of 25% transmittance, and (e-f) a fourfold tube-current reduction. The depicted NRMSD and SSIM were calculated per slice. (a$_1$-f$_1$) Zoom of the highlighted regions in (a-f). The StaticCodeCT reconstructions show up to 10% gains in NRMSD compared with the industry standard of lowering the tube current.
Fig. 11. Axial PSNR of the reconstructions in Section 4.1 and Section 4.2. In Section 4.1, the top performers are StaticCodeCT with the proposed approach and with the algorithm in [36].
Fig. 12. (a) MicroCT phantom axial slice [47]. (b) MTF of the CT system.

Equations (13)


$\hat{x} = \underset{x}{\operatorname{argmin}} \; \| y - CHx \|_2^2 + \lambda \Phi(x),$
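As an illustrative sketch of the regularized reconstruction problem above (Eq. 1), the classic iterative shrinkage-thresholding algorithm (ISTA) alternates a gradient step on the data-fidelity term with soft-thresholding for an $\ell_1$ prior $\Phi$. The operator name `CH`, the step-size rule, and all parameters below are assumptions for a small dense toy problem, not the paper's implementation:

```python
import numpy as np

def ista_reconstruct(y, CH, lam=0.1, n_iter=200):
    """Minimize (1/2)||y - CH x||_2^2 + lam*||x||_1 with ISTA.

    CH stands in for the coded system matrix (code C applied to the
    CT operator H); the l1 norm plays the role of Phi(x).
    """
    step = 1.0 / np.linalg.norm(CH, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(CH.shape[1])
    for _ in range(n_iter):
        grad = CH.T @ (CH @ x - y)            # gradient of (1/2)||y - CH x||^2
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```

In the noiseless, overdetermined toy setting the iterates converge close to the true sparse vector; the real CT operator would be applied matrix-free.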
$SD_i = \sum_{j=1}^{M_1 M_2} PH_{ji}.$
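A toy illustration of the sampling-density computation in Eq. (2): $SD_i$ accumulates column $i$ of the coded system matrix over all $M_1 M_2$ measurements. The small binary matrix `PH` below is a stand-in for the (very large) product of the code and the projection operator, so the exact construction is an assumption:

```python
import numpy as np

# Hypothetical small coded system matrix: M1*M2 = 6 coded measurements,
# N = 4 voxels.  Entries are 0/1 depending on whether a ray through a
# voxel is blocked by the coded aperture.
rng = np.random.default_rng(1)
M1, M2, N = 3, 2, 4
PH = (rng.random((M1 * M2, N)) < 0.5).astype(float)

# Sampling density per voxel: SD_i = sum over j of PH[j, i] (column sums).
SD = PH.sum(axis=0)
```

A voxel with a small $SD_i$ is traversed by few unblocked rays, which is what produces the low-density rings visible in Fig. 3 for the single-CA case.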
$\hat{X} = \underset{X}{\operatorname{argmin}} \; \|X\|_* \quad \textrm{s.t.} \quad X_{ij} = O_{ij} \textrm{ with } (i,j) \in \Omega,$
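The matrix-completion problem of Eq. (3) can be approximated with the well-known soft-impute scheme: soft-threshold the singular values, then re-impose the observed entries. The function below is a toy sketch under those assumptions (names and parameters are not from the paper):

```python
import numpy as np

def svt_complete(O, mask, tau=0.05, n_iter=300):
    """Soft-impute sketch of nuclear-norm matrix completion:
    min ||X||_*  s.t.  X agrees with O on the observed set (mask)."""
    X = np.where(mask, O, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X = np.where(mask, O, X)                  # re-impose known entries
    return X
```

For a low-rank matrix with enough observed entries this fills the gaps far better than zero-filling, which is the comparison made in Fig. 4(c-d).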
$\hat{\mathcal{S}} = \underset{\mathcal{S}}{\operatorname{argmin}} \; \operatorname{rank}(\mathcal{S}) \quad \textrm{s.t.} \quad P_{\Omega}(\mathcal{S}) = P_{\Omega}(\mathcal{Y}),$
$\hat{\mathcal{S}} = \underset{\mathcal{S}}{\operatorname{argmin}} \; \|\mathcal{S}\|_{\textrm{TNN}} + \lambda \Phi(\mathcal{S}) \quad \textrm{s.t.} \quad P_{\Omega}(\mathcal{S}) = P_{\Omega}(\mathcal{Y}).$
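The proximal operator of the TNN term is commonly computed through the t-SVD: take an FFT along the third mode, soft-threshold the singular values of every frontal slice in the Fourier domain, and invert the FFT. A minimal sketch assuming this standard construction (not the paper's code):

```python
import numpy as np

def tnn_shrink(T, tau):
    """Proximal operator of the tensor nuclear norm (TNN) via the t-SVD:
    per-slice singular-value soft-thresholding in the Fourier domain."""
    F = np.fft.fft(T, axis=2)                     # FFT along mode 3
    for k in range(F.shape[2]):
        U, s, Vt = np.linalg.svd(F[:, :, k], full_matrices=False)
        F[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.real(np.fft.ifft(F, axis=2))        # back to the spatial domain
```

With `tau = 0` the operator is the identity, and any positive `tau` can only shrink the tensor, which is the behavior the tests below check.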
$\hat{\mathcal{S}} = \underset{\mathcal{S},\mathcal{Q},\mathcal{Z}}{\operatorname{argmin}} \; \|\mathcal{Q}\|_{\textrm{TNN}} + \lambda \Phi(\mathcal{Z}) + I(\mathcal{S}) \quad \textrm{s.t.} \quad \mathcal{Q} = \mathcal{S}, \; \mathcal{Z} = \mathcal{S},$
$\underset{\mathcal{Q}}{\operatorname{argmin}} \; \|\mathcal{Q}^{(\ell)}\|_{\textrm{TNN}} + \frac{\beta}{2} \|\mathcal{S}^{(\ell)} - \mathcal{Q}^{(\ell)} + \Lambda_1^{(\ell)}/\beta\|_F^2,$
$\underset{\mathcal{Z}}{\operatorname{argmin}} \; \lambda \Phi(\mathcal{Z}^{(\ell)}) + \frac{\beta}{2} \|\mathcal{S}^{(\ell)} - \mathcal{Z}^{(\ell)} + \Lambda_2^{(\ell)}/\beta\|_F^2.$
$\underset{\mathcal{S}}{\operatorname{argmin}} \; I(\mathcal{S}^{(\ell)}) + \frac{\beta}{2} \left\|\mathcal{S}^{(\ell)} - \mathcal{Q}^{(\ell+1)} + \frac{\Lambda_1^{(\ell)}}{\beta}\right\|_F^2 + \frac{\beta}{2} \left\|\mathcal{S}^{(\ell)} - \mathcal{Z}^{(\ell+1)} + \frac{\Lambda_2^{(\ell)}}{\beta}\right\|_F^2.$
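Read together, the split problem and its three subproblems form a standard ADMM loop: a TNN-shrinkage step for $\mathcal{Q}$, a denoising step for $\mathcal{Z}$, a consensus step for $\mathcal{S}$ followed by re-imposing the observed entries, and dual updates. The sketch below substitutes a simple ridge proximal step for the learned denoiser $\Phi$ and is an illustration under those assumptions, not the authors' solver:

```python
import numpy as np

def _tnn_shrink(T, tau):
    """Proximal operator of the TNN via the t-SVD (FFT along mode 3)."""
    F = np.fft.fft(T, axis=2)
    for k in range(F.shape[2]):
        U, s, Vt = np.linalg.svd(F[:, :, k], full_matrices=False)
        F[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.real(np.fft.ifft(F, axis=2))

def admm_complete(Y, mask, beta=1.0, lam=0.01, n_iter=100):
    """ADMM sketch: Q carries the TNN term, Z a ridge stand-in for the
    denoiser prior, S enforces consistency with the observed entries."""
    S = np.where(mask, Y, 0.0)
    L1 = np.zeros_like(S)
    L2 = np.zeros_like(S)
    for _ in range(n_iter):
        Q = _tnn_shrink(S + L1 / beta, 1.0 / beta)      # Q subproblem
        Z = beta * (S + L2 / beta) / (beta + lam)       # Z subproblem (ridge prox)
        S = (Q - L1 / beta + Z - L2 / beta) / 2.0       # consensus S update
        S = np.where(mask, Y, S)                        # indicator I(S)
        L1 += beta * (S - Q)                            # dual updates
        L2 += beta * (S - Z)
    return S
```

On a tensor with low tubal rank, this loop fills the coded-out entries substantially better than zero-filling, mirroring the behavior reported for the full pipeline.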
$\hat{S}_p = \underset{S_p}{\operatorname{argmin}} \; \frac{1}{2\sigma^2} \|Q_{(p)} - P_{\Omega}(S_p)\|_2^2 + \lambda \Phi(S_p).$
$\hat{S}_p = \underset{S_p, R_{(p)}}{\operatorname{argmin}} \; \frac{1}{2\sigma^2} \|Q_{(p)} - P_{\Omega}(S_p)\|_2^2 + \lambda \Phi(R_{(p)}) + \frac{\delta}{2} \|R_{(p)} - S_p\|_2^2,$
$\hat{S}_p^{(\ell+1)} = \underset{S_{(p)}}{\operatorname{argmin}} \; \|Q_{(p)} - P_{\Omega}(S_p)\|_2^2 + \delta^{(\ell)} \sigma^2 \|S_{(p)} - R_{(p)}^{(\ell)}\|_2^2,$
$\hat{R}_{(p)}^{(\ell+1)} = \underset{R_{(p)}}{\operatorname{argmin}} \; \frac{\delta^{(\ell)}}{2} \|R_{(p)} - S_{(p)}^{(\ell+1)}\|_2^2 + \lambda \Phi(R_{(p)}).$
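The last two subproblems alternate a data-consistency step, which is separable and closed form when $P_\Omega$ is a binary mask, with a denoising step on $R_{(p)}$. The sketch below replaces the trained denoiser with a 3×3 mean filter purely for illustration; the function name and parameters are assumptions:

```python
import numpy as np

def refine_projection(Q_p, mask, sigma=0.05, delta=1.0, n_iter=30):
    """Half-quadratic-splitting sketch of the per-projection refinement:
    closed-form data-consistency step + denoising step (mean filter
    standing in for the trained denoiser)."""
    S = np.where(mask, Q_p, 0.0)
    R = S.copy()
    w = 1.0 + delta * sigma ** 2
    for _ in range(n_iter):
        # S-step: elementwise minimizer.  Observed pixels blend the
        # measurement with the denoised estimate; missing pixels take R.
        S = np.where(mask, (Q_p + delta * sigma ** 2 * R) / w, R)
        # R-step: 3x3 mean filter as a crude proximal map of lam*Phi.
        pad = np.pad(S, 1, mode='edge')
        R = sum(pad[i:i + S.shape[0], j:j + S.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    return S
```

For a smooth projection, the filled-in missing pixels end up close to their neighbors' values, reducing the error well below that of the zero-filled coded projection.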