Optica Publishing Group

Tomographic absorption spectroscopy based on dictionary learning

Open Access

Abstract

Tomographic absorption spectroscopy (TAS) has an advantage over other optical imaging methods for practical combustor diagnostics: optical access is needed in a single plane only, and the access can be limited. However, practical TAS often suffers from limited projection data. In these cases, priors such as smoothness and sparseness can be incorporated to mitigate the ill-posedness of the inversion problem. This work investigates use of dictionary learning (DL) to effectively extract useful a priori information from the existing dataset and incorporate it in the reconstruction process to improve accuracy. We developed two DL algorithms; our numerical results suggest that they can outperform classical Tikhonov reconstruction under moderate noise conditions. Further testing with experimental data indicates that they can effectively suppress reconstruction artifacts and obtain more physically plausible solutions compared with the inverse Radon transform.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Combustion diagnostics have become indispensable for both fundamental combustion studies and industrial applications [1–3]. Optical sensing techniques are currently the dominant approaches due to their non-invasiveness, fast response, and versatility in measuring a variety of thermophysical quantities [1,4–9]. Among these techniques, laser absorption spectroscopy offers ease of implementation, high sensitivity, species specificity, and minimal requirements for optical access, making it extremely amenable to practical combustor diagnosis [4,10]. However, due to its line-of-sight (LOS) nature, it cannot provide spatially-resolved measurements.

To overcome this limitation, tomographic absorption spectroscopy (TAS) has been introduced in which multiple LOS measurements along various orientations are de-convolved together to recover the distributions of temperature, species concentration and pressure [5,6,11]. However, for practical applications such as engine measurements, the space for deploying laser beams is highly restricted and only 10s of beams are available [12]. Thus, the number of equations in the inverse problem of TAS is typically far less than that of the unknowns in the target field, making the problem severely ill-posed [13,14]. To alleviate the ill-posedness, supplementary information is essential for the reconstruction process to obtain a solution that conforms to the physical reality. For example, spatial smoothness is a commonly used prior, which can be incorporated into the inversion process via the so-called Tikhonov regularization [13,15].

Data-driven approaches have also been adopted in TAS to extract useful information from existing datasets and incorporate it into the reconstruction process. For example, Huang et al. designed convolutional neural networks (CNNs) to map the limited projections to the corresponding reconstructions [16,17]. The trained networks can then quickly produce the target field given the input projections. According to those studies, CNNs can not only speed up the reconstruction process but also improve the noise immunity over classical iterative methods.

Dictionary learning (DL) is another data-driven approach, which relies on the principle of compressed sensing. Since the 1990s, it has been extensively investigated and successfully applied in signal processing and image recovery [18–21]. For example, DL has been applied to landscape imagery [22], face recognition [23] and later to biomedical imaging [24,25], including computed tomography (CT) and magnetic resonance imaging (MRI). Different from deep learning [17], DL does not need a large training set, since a limited number of training phantoms can be divided into a large number of small patches [26–28], and with the application of compressed sensing (CS), the characteristics of these patches can be extracted by dictionary learning algorithms [29–35]. Recently, DL has been successfully applied in spectral CT [36,37]. Kong et al. [36] improved the so-called prior image constrained compressed sensing (PICCS) by using the information extracted by DL as an additional prior besides total variation (TV). Wu et al. [37] combined image domain material decomposition (IDMD) with DL, which exhibited superior performance compared with the IDMD-TV method. However, the aforementioned CT applications used over 100,000 beams to reconstruct 512×512 grids, whereas, as mentioned above, only tens of LOS measurements are available in TAS. Therefore, the applicability of DL for TAS needs to be thoroughly investigated.

In this work, we propose a reconstruction technique based on DL for limited-data TAS. The method performs reconstruction efficiently via simple matrix calculations, with a computational time of about 2 ms; thus, real-time data processing can be envisaged. Furthermore, a regularized version of this approach was developed by incorporating Tikhonov regularization, and is hereafter named the DL-Tik method. The performance of both methods was tested with synthetic phantoms under various simulation conditions, and comparative studies against Tikhonov reconstruction demonstrated their superiority. Finally, both methods were tested with the raw data of a previous experiment conducted by E.F. Nasir et al. [38]. The results indicate that the DL-based methods can effectively suppress artifacts and obtain more physically plausible reconstructions than the inverse Radon transform.

2. Theory

2.1 Mathematical background of TAS

The mathematical principle of TAS can be found in many recent studies [4,11]; a brief summary is provided here. The basic physical principle behind TAS is Beer's law, described as

$$b(\lambda )={-} \ln \frac{{{I_t}(\lambda )}}{{{I_0}(\lambda )}} = P\smallint Z(l )S({T(l ),\lambda } )\varphi ({\lambda ,T(l ),Z(l )} )dl = \smallint x({\lambda ,T(l ),Z(l )} )dl, $$
where b is the absorbance, defined as the negative logarithm of the ratio of the transmitted light intensity ${I_t}$ to the incident intensity ${I_0}$; P is the pressure in the ROI, which is assumed to be uniform in most applications; Z the concentration of the absorbing species, e.g., H2O; l the position along the LOS; S the spectral line strength, which depends only on the local temperature ($T$) and the wavelength ($\lambda $); $\varphi $ the line shape function, which can be approximated with a Voigt profile for general combustion applications [39]; and x the absorption coefficient.

Figure 1 illustrates the principle of TAS. The ROI is discretized into M = n×n grids and the physical parameters in each grid are assumed to be uniform. Thus, for the i-th beam, the relationship between the absorption coefficient and the measured absorbance at a certain wavelength $\lambda $ can be expressed as

$${b_i}(\lambda )= \mathop \sum \limits_j {x_j}(\lambda )L({i,j} ), $$
where ${b_i}$ is the absorbance; ${x_j}$ the absorption coefficient in the j-th grid; and $L({i,j} )$ the length of the i-th beam within the j-th grid. The equations for all the beams at a particular wavelength can be written in a matrix form as
$${\boldsymbol b} = {\boldsymbol Ax}, $$
where ${\boldsymbol b} \in \mathbb{\textrm{R}}^N$ is the vector of absorbance; ${\boldsymbol x} \in \mathbb{\textrm{R}}^{\boldsymbol M}$ is the vector of absorption coefficients at the specified wavelength; and ${\boldsymbol A} \in \mathbb{\textrm{R}}^{{\boldsymbol N} \times {\boldsymbol M}}$ is the weight matrix, which is dictated by the beam arrangement.
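The weight matrix in Eq. (3) is determined purely by the geometry of the beams and the grid. As an illustrative sketch (not from the paper; the function name and the sampling-based approximation are our own), one row of the weight matrix, i.e., the chord lengths L(i, j) of a single beam, can be approximated by densely sampling points along the ray and accumulating the per-sample path length into the cell each sample falls in:

```python
import numpy as np

def beam_weights(x0, y0, x1, y1, n, extent=1.0, samples=100_000):
    """Approximate the chord lengths L(i, j) of one beam through an
    n-by-n grid covering [0, extent]^2 by dense sampling along the ray."""
    t = (np.arange(samples) + 0.5) / samples
    xs = x0 + t * (x1 - x0)
    ys = y0 + t * (y1 - y0)
    dl = np.hypot(x1 - x0, y1 - y0) / samples   # path length per sample
    w = np.zeros(n * n)
    inside = (xs >= 0) & (xs < extent) & (ys >= 0) & (ys < extent)
    ix = (xs[inside] / extent * n).astype(int)  # column index of each sample
    iy = (ys[inside] / extent * n).astype(int)  # row index of each sample
    np.add.at(w, iy * n + ix, dl)               # accumulate length per cell
    return w  # one row of the weight matrix A

# a horizontal beam crossing a 4x4 grid: each traversed cell gets ~0.25
row = beam_weights(-0.5, 0.5, 1.5, 0.5, n=4)
```

An exact implementation would compute the ray-grid intersections analytically (e.g., with Siddon's algorithm), but this approximation suffices to assemble A for the small grids used here.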


Fig. 1. Illustration of tomographic absorption spectroscopy. The ROI is discretized into M = n×n grids. The intersection of the i-th beam with the j-th grid is highlighted in yellow and denoted as L(i,j).


Over the past decades, numerous algorithms have been adopted for solving Eq. (3), among which the algebraic reconstruction technique (ART) and its variants are the most popular, especially for limited-data tomography [40–42]. Other algorithms, such as maximum likelihood expectation maximization (MLEM) [43], Tikhonov reconstruction [13] and singular value decomposition (SVD) [44], are also widely used. In this work, we explore the feasibility of DL for TAS when only a small training set is available.

2.2 Method of dictionary learning

The reconstruction based on DL consists of two main steps: the generation of a redundant dictionary from a training set, and the reconstruction of the absorption coefficients from the projection data and the dictionary.

Firstly, it is preferable to employ training phantoms with local characteristics similar to those of the target field to be reconstructed. The purpose of training is to represent the training phantoms in a sparse form. Assume C training phantoms are employed, each vectorized into a column of dimension M = n×n. To effectively extract the local information, the phantoms are divided into square patches, each with a size of P = p×p grids. A linear operator ${\boldsymbol E}$ is defined to extract patches from a phantom, which maps $\mathbb{\textrm{R}}^{M \times 1}$ into $\mathbb{\textrm{R}}^{P \times O}$, where $O \ge M/P$ is the total number of patches extracted. Its inverse operation can be directly applied as ${{\boldsymbol E}^{ - 1}}$. Similarly, the operator ${\boldsymbol E}$ can also map a series of columns $\mathbb{\textrm{R}}^{M \times C}$ into a matrix $\mathbb{\textrm{R}}^{P \times ({O \times C} )}$. The process of sparse coding can be expressed as

$${\boldsymbol EX} = {\boldsymbol DY}, $$
where ${\boldsymbol X} \in \mathbb{\textrm{R}}^{M \times C}$ is a matrix, each column of which, ${{\boldsymbol x}_{\boldsymbol i}} \in \mathbb{\textrm{R}}^{M \times 1}$, contains a vectorized phantom; ${\boldsymbol D} \in \mathbb{\textrm{R}}^{P \times K}$ is the redundant dictionary, each column of which is called an atom with P elements, consistent with the dimension of an image patch, and K is a parameter reflecting the redundancy of the dictionary; and ${\boldsymbol Y} \in \mathbb{\textrm{R}}^{K \times ({O \times C} )}$ is the sparse representation of the training set under the dictionary ${\boldsymbol D}$.

The optimal dictionary-based sparse representation can be obtained by solving the following minimization problem as

$$\min {\|{{\boldsymbol EX} - {\boldsymbol DY}}\|_F}\;\;s.t.\;{\|{{{\boldsymbol y}_i}}\|_0} \le t, $$
where ${{\boldsymbol y}_i}$ is the i-th column of ${\boldsymbol Y}$ and t is the upper bound of the sparsity.

The minimization problem can be solved iteratively, with each iteration comprising a sparse-coding stage and a codebook-updating stage. Matrices ${\boldsymbol Y}$ and ${\boldsymbol D}$ are updated alternately between these two stages. Taking the k-th iteration as an example: in the sparse-coding stage, the matrix ${{\boldsymbol D}^{(k )}}$ is held constant while the matrix ${{\boldsymbol Y}^{({k + 1} )}}$ is solved from Eq. (5) using a compressed sensing recovery algorithm, e.g., orthogonal matching pursuit (OMP) [45]. Then, in the codebook-updating stage, the matrix ${{\boldsymbol Y}^{({k + 1} )}}$ is held constant, and the dictionary matrix ${{\boldsymbol D}^{({k + 1} )}}$ is updated column by column according to Eq. (5) using the K-SVD algorithm [30]. Finally, the dictionary-generating process is terminated when the residual ${\|{{\boldsymbol EX} - {\boldsymbol DY}}\|_F}$ becomes less than a small positive number $\varepsilon $.
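The alternating procedure can be sketched as follows. The OMP routine below follows the standard algorithm; for brevity the codebook update uses a simple least-squares (MOD-style) step in place of the full atom-by-atom K-SVD update, so this is an illustrative approximation of the training loop rather than the authors' exact implementation (all function names are our own):

```python
import numpy as np

def omp(D, s, t):
    """Orthogonal matching pursuit: sparse code y with at most t nonzeros
    such that D @ y approximates the signal s."""
    residual, support = s.copy(), []
    y = np.zeros(D.shape[1])
    for _ in range(t):
        k = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        y[:] = 0.0
        y[support] = coef
        residual = s - D @ y
    return y

def train_dictionary(patches, K, t, iters=10, seed=0):
    """Alternate sparse coding (OMP) and a MOD-style codebook update.
    patches: (P, n_patches) matrix; returns D (P, K) and codes Y."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((patches.shape[0], K))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        Y = np.column_stack([omp(D, p, t) for p in patches.T])
        D = patches @ np.linalg.pinv(Y)          # least-squares update
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)     # renormalize atoms
    Y = np.column_stack([omp(D, p, t) for p in patches.T])
    return D, Y
```

With the K-SVD update, each atom and its coefficients would instead be refined from a rank-1 SVD of the residual restricted to the patches that use that atom.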

After obtaining the dictionary and sparse representation of the training phantoms, the relationship between the projection data and the sparse coding needs to be derived. The absorbance projections of the training phantoms can be calculated with

$${\boldsymbol B} = {\boldsymbol AX}, $$
where ${\boldsymbol B} \in \mathbb{\textrm{R}}^{N \times C}$ consists of C columns of projections corresponding to the same number of training phantoms. Note that Eq. (6) is just an extension of Eq. (3), since the corresponding columns in ${\boldsymbol X}$ and ${\boldsymbol B}$ satisfy the relationship in Eq. (3). From Eqs. (4) and (6), it can be noticed that matrices ${\boldsymbol Y}$ and ${\boldsymbol B}$ have a linear relationship, which can be summarized by
$${\boldsymbol f}({\boldsymbol Y} )= {\boldsymbol B}, $$
where ${\boldsymbol f}$ is a linear transformation that maps $\mathbb{\textrm{R}}^{K \times ({O \times C} )}$ into $\mathbb{\textrm{R}}^{N \times C}$. Reshaping ${\boldsymbol Y} \in \mathbb{\textrm{R}}^{K \times ({O \times C} )}$ into ${\boldsymbol Y^{\prime}} \in \mathbb{\textrm{R}}^{({K \times O} )\times C}$, Eq. (7) can be rewritten in matrix form as
$${\boldsymbol FY^{\prime}} = {\boldsymbol B}, $$
where ${\boldsymbol F} \in \mathbb{\textrm{R}}^{N \times ({K \times O} )}$ is the matrix form of the linear transformation ${\boldsymbol f}$. Thus, the corresponding columns of ${\boldsymbol Y^{\prime}}$ and ${\boldsymbol B}$ satisfy the same relationship as well
$${\boldsymbol Fy}{{^{\prime}}_i} = {{\boldsymbol b}_i}, $$
where ${\boldsymbol y}{{^{\prime}}_i}$ and ${{\boldsymbol b}_i}$ are the i-th column of ${\boldsymbol Y^{\prime}}$ and ${\boldsymbol B}$ respectively. At the same time, ${\boldsymbol y}{{^{\prime}}_i}$ and ${{\boldsymbol b}_i}$ are also the sparse coding and the projection data of the i-th training phantom respectively. As long as the relationship between the projection data and the sparse coding of the corresponding phantom can be established, we can extend the relationship from the training to the applications.

Assuming the projection data of the field to be reconstructed has the same relationship with the sparse coding as the training cases, we have

$${\boldsymbol y}{{^{\prime}}_r} = {{\boldsymbol F}^{ - 1}}{{\boldsymbol b}_r}, $$
where ${\boldsymbol y}{{^{\prime}}_r}$ and ${{\boldsymbol b}_r}$ are the sparse coding of the phantom and its absorbance projection, respectively, and ${{\boldsymbol F}^{ - 1}}$ is the inverse transformation of ${\boldsymbol F}$. However, since the closed form of ${\boldsymbol F}$ is unknown, we cannot calculate its inverse directly; instead, it can be found from ${\boldsymbol Y^{\prime}}$ as
$${{\boldsymbol F}^{ - 1}} = {\boldsymbol Y^{\prime}}{{\boldsymbol B}^ + }, $$
where ${{\boldsymbol B}^ + }$ is the Moore-Penrose pseudoinverse of ${\boldsymbol B}$, which is easy to calculate since the matrix ${\boldsymbol B}$ only contains a small number of columns.
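With the training codings and projections in hand, Eq. (11) reduces to a single pseudoinverse computation. A minimal sketch with randomly generated stand-in data (the shapes follow the text; the data are placeholders):

```python
import numpy as np

# Stand-in training data with the shapes used in the text:
# B holds C projection columns (N beams each); Yp is the reshaped
# sparse coding Y' with K*O rows.
rng = np.random.default_rng(1)
N, C, KO = 32, 16, 200
B = rng.standard_normal((N, C))
Yp = rng.standard_normal((KO, C))

F_inv = Yp @ np.linalg.pinv(B)   # Eq. (11): F^{-1} = Y' B^+
```

Because C is small (here 16 columns against 32 beams), B has full column rank in general, so B^+ B = I and F_inv maps each training projection back to its sparse coding exactly.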

In practice, to reduce the computational cost, the all-zero rows of matrix ${\boldsymbol Y^{\prime}}$ are removed. The matrix after this operation is denoted as ${{\boldsymbol Y}_U}({ \in \mathbb{\textrm{R}}^{U \times C}},t \le U \le K )$. Similarly, other matrices related to ${\boldsymbol Y}$ should be modified accordingly. For example, ${{\boldsymbol D}_U} \in \mathbb{\textrm{R}}^{P \times U}$ can be obtained from ${\boldsymbol D}$ with its corresponding columns deleted. Obviously, it satisfies

$${{\boldsymbol D}_U}{{\boldsymbol Y}_U} = {\boldsymbol DY}, $$
thus we have
$${\boldsymbol F}_U^{ - 1} = {{\boldsymbol Y}_U}{{\boldsymbol B}^ + }, $$
where ${\boldsymbol F}_U^{ - 1}$ is the corresponding inverse transformation.

Combining the analysis detailed above, the reconstruction can be performed as

$${\boldsymbol x} = {{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b}. $$
This formulation takes full advantage of both the projection data (a posteriori information) and the a priori information extracted from the training data to reconstruct the fields of absorption coefficients. After the initialization process detailed above, the reconstruction only involves matrix multiplication, which gives it an overwhelming advantage in computational efficiency over conventional iterative approaches.
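The patch operators E and E^{-1} appearing in the reconstruction formula are simple reshaping operations. A sketch for the non-overlapping case (function names are our own; the paper does not specify whether its patches overlap):

```python
import numpy as np

def extract_patches(img, p):
    """Operator E: split an n-by-n image into non-overlapping p-by-p
    patches, returned as columns of a (p*p, O) matrix (assumes p | n)."""
    n = img.shape[0]
    cols = [img[i:i + p, j:j + p].reshape(-1)
            for i in range(0, n, p) for j in range(0, n, p)]
    return np.column_stack(cols)

def assemble_patches(patches, n, p):
    """Operator E^{-1}: reassemble the n-by-n image from patch columns."""
    img = np.empty((n, n))
    k = 0
    for i in range(0, n, p):
        for j in range(0, n, p):
            img[i:i + p, j:j + p] = patches[:, k].reshape(p, p)
            k += 1
    return img
```

For non-overlapping patches the two operators are exact inverses of each other, so a round trip through them reproduces the image.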

2.3 TAS based on DL and regularization

The essence of DL-based reconstruction is to recover the target field piece by piece according to both the training data and the measured projection data. This piece-by-piece nature may occasionally cause discontinuities at the borders of the patches and degrade the accuracy of reconstruction. To mitigate this problem, supplementary information that enforces smoothness can be incorporated into the reconstruction process via Tikhonov regularization as

$$\min \left( {\|{{\boldsymbol Ix} - {{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b}}\|_2^2 + {\gamma ^2}\|{{\boldsymbol Lx}}\|_2^2} \right), $$
where ${\boldsymbol L} \in \mathbb{\textrm{R}}^{M \times M}$ is the Laplacian matrix defined as [46]
$${{\boldsymbol L}_{ij}} = \left\{ {\begin{array}{cl} {1,}&{i = j}\\ { - 1/{n_i},}&{j\;\textrm{neighbors}\;i}\\ {0,}&{\textrm{otherwise}} \end{array}} \right.$$
where ${n_i}$ is the total number of pixels neighboring the i-th pixel. The regularization factor $\gamma $ can be determined using the so-called L-curve method [13].
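For a 4-connected neighborhood (an assumption on our part; the text does not specify the connectivity), the Laplacian matrix defined above can be assembled as:

```python
import numpy as np

def laplacian(n):
    """Laplacian matrix for an n-by-n grid: L[i, i] = 1 and
    L[i, j] = -1/n_i for each of the n_i 4-connected neighbors j of i."""
    M = n * n
    L = np.eye(M)
    for r in range(n):
        for c in range(n):
            i = r * n + c
            nbrs = [(r + dr) * n + (c + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < n and 0 <= c + dc < n]
            for j in nbrs:
                L[i, j] = -1.0 / len(nbrs)
    return L
```

Each row sums to zero, so L annihilates constant fields and penalizes only local variation.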

The solution of Eq. (15) can be expressed as

$${\boldsymbol x} = {({{{\boldsymbol I}^T}{\boldsymbol I} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol I}^T}{{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b} = {({{\boldsymbol I} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b}. $$
For comparative purpose, the reconstruction solely using Tikhonov regularization is implemented in the following simulative studies as
$${\boldsymbol x} = {({{{\boldsymbol A}^T}{\boldsymbol A} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol A}^T}{\boldsymbol b}. $$
Note that the $\gamma $ here can be different from that of Eq. (17) and needs to be determined with the L-curve method as well.
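Both closed-form solutions above are linear solves. A generic sketch of the Tikhonov solution of Eq. (18) (the DL-Tik solution of Eq. (17) is obtained by replacing A with the identity and b with the DL estimate):

```python
import numpy as np

def tikhonov(A, b, L, gamma):
    """Closed-form Tikhonov solution: x = (A^T A + gamma^2 L^T L)^{-1} A^T b."""
    lhs = A.T @ A + gamma**2 * (L.T @ L)
    # solve the normal equations rather than forming the inverse explicitly
    return np.linalg.solve(lhs, A.T @ b)
```

Using `np.linalg.solve` on the normal equations is cheaper and numerically safer than computing the matrix inverse that appears in the closed-form expression.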

3. Simulative studies

3.1 Design of simulative studies

Simulative studies were performed with a large set of artificial phantoms of absorption coefficient at a specific wavelength under various assumed experimental conditions. The simulative study consists of the following steps: 1) prepare a series of training phantoms discretized into M = 40×40 = 1600 grids, and use them to compute the dictionary ${{\boldsymbol D}_U}$ and the inverse transformation matrix ${{\boldsymbol F}^{ - 1}}$; 2) generate another phantom ${{\boldsymbol x}_0}$ and calculate its absorbance projection ${{\boldsymbol b}_0}$ according to Eq. (3) using the weight matrix defined by the beam arrangement; 3) add uniform noise to the projections to mimic practical experimental conditions, yielding ${{\boldsymbol b}_m}$; 4) reconstruct the phantom from ${{\boldsymbol b}_m}$ with each of the methods mentioned above, denoting the result ${{\boldsymbol x}_r}$; and 5) compare ${{\boldsymbol x}_r}$ with ${{\boldsymbol x}_0}$ to assess the reconstruction accuracy. The number of beams assumed was 32, far fewer than the number of grids.

Figure 2 presents 16 example phantoms for training and reconstruction. Phantom 1 contains two Gaussian peaks and is referred to as the double-Gauss phantom. The two Gaussian peaks were randomly positioned, and their widths were randomly selected within a certain range. All simulative studies in this work were implemented with MATLAB R2020b on a computer with an Intel Core i7-DMI2-X79 PHC 2.60 GHz CPU.


Fig. 2. (a) Phantom 1, namely double-Gauss phantom for testing and (b) the corresponding cases for dictionary training.


To quantitatively compare the algorithms in the following studies, the relative error and structural similarity (SSIM) [47] were adopted as the metrics to assess the reconstructions.

The relative error $er$ is defined as

$$er = {\|{{{\boldsymbol x}_r} - {{\boldsymbol x}_0}}\|_2}/{\|{{{\boldsymbol x}_0}}\|_2}, $$
where ${{\boldsymbol x}_r}$ and ${{\boldsymbol x}_0}$ refer to the reconstruction and the phantom, respectively.

The SSIM is defined as

$$\textrm{SSIM} = ({2{u_r}{u_0} + {c_1}} )({2{\sigma_{r0}} + {c_2}} )/({u_r^2 + u_0^2 + {c_1}} )({\sigma_r^2 + \sigma_0^2 + {c_2}} ), $$
where u is the mean value, $\sigma $ is the standard deviation, and ${\sigma _{r0}}$ is the covariance of ${{\boldsymbol x}_r}$ and ${{\boldsymbol x}_0}$, and ${c_1} = 1 \times {10^{ - 4}}$ and ${c_2} = 9 \times {10^{ - 4}}$ are constants.
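Both metrics can be computed directly from their definitions. Note that this is the global, single-window form of SSIM given in Eq. (20); common image-processing implementations instead average SSIM over local windows:

```python
import numpy as np

def relative_error(xr, x0):
    """Eq. (19): ||x_r - x_0||_2 / ||x_0||_2."""
    return np.linalg.norm(xr - x0) / np.linalg.norm(x0)

def ssim(xr, x0, c1=1e-4, c2=9e-4):
    """Eq. (20), evaluated globally over the whole field."""
    ur, u0 = xr.mean(), x0.mean()
    sr, s0 = xr.std(), x0.std()
    sr0 = ((xr - ur) * (x0 - u0)).mean()     # covariance of the two fields
    return ((2 * ur * u0 + c1) * (2 * sr0 + c2)) / \
           ((ur**2 + u0**2 + c1) * (sr**2 + s0**2 + c2))
```

A perfect reconstruction gives a relative error of 0 and an SSIM of 1; values degrade monotonically as the fields diverge.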

3.2 Determination of DL parameters

There are several critical parameters that affect the generation of a dictionary. For a controlled test, the other simulation conditions were fixed, including the phantoms for training and testing, the noise level, and the number of beams and their arrangement. For learning algorithms, one might expect that the more training phantoms used, the more information is harvested and the more physically plausible the solution obtained. However, this may not be true, as the reconstruction process also relies on the behavior of the matrix ${\boldsymbol B}$, whose condition number does not necessarily decrease as the number of training phantoms increases. Therefore, the number of training phantoms is one significant parameter that needs to be optimized.

DL and DL-Tik were both tested on the double-Gauss phantom, which was divided into 40×40 grids as mentioned above. Four parallel projections, each with eight equidistant beams, were assumed. The projections were uniformly spaced with an angular difference of 45°, and 2% uniformly distributed random noise was added to the projections to mimic practical situations. The number of training phantoms varied from 5 to 45. The simulation was repeated 30 times for each number of phantoms, and the training phantoms were regenerated for each simulation case. The training process was terminated when the residual ${\|{{\boldsymbol EX} - {\boldsymbol DY}}\|_F}$ became smaller than 0.01. The relative reconstruction error and the condition number of matrix B as a function of the number of training phantoms are shown in Fig. 3(a).


Fig. 3. Results for DL parameter study. (a) The reconstruction error and the condition number as a function of the number of training phantoms; (b) Evolution of residual in the training process for different number of dictionary elements; (c) Reconstruction error as a function of the number of dictionary elements and (d) Computational time as a function of the number of dictionary elements.


As can be seen, for both DL and DL-Tik the reconstruction error initially decreases as the number of training phantoms increases, but then quickly rises to its maximum when the number of training phantoms equals the number of beams. This phenomenon can be traced back to the calculation of ${{\boldsymbol B}^ + }$, i.e., the Moore-Penrose pseudoinverse of ${\boldsymbol B}$. The number of columns of ${\boldsymbol B} \in \mathbb{\textrm{R}}^{N \times C}$ is the number of training phantoms and the number of rows is the number of beams; the condition number of B grows as the numbers of rows and columns become closer, resulting in an increased error. Figure 3(a) confirms this tendency.

Besides, the number of elements in the dictionary, i.e., its redundancy, is another key factor that influences the performance of DL. To study the effect of this parameter, dictionaries with 25 to 400 elements were generated for reconstruction using both DL and DL-Tik, with the other simulation conditions the same as in the aforementioned cases. For these cases, the number of iterations required for convergence was considerably influenced by the number of elements. For different numbers of training cases and elements, the evolution of the residual in the iteration process is shown in Fig. 3(b). The relative errors and the computational cost as a function of the number of elements are shown in Fig. 3(c) and (d), respectively.

As can be seen, the residual in the training process oscillates significantly before convergence, and dictionaries with more elements require more iterations to converge. There is no prominent relation between the error and the number of elements. The error of DL-Tik was smaller than that of DL, while the computational cost of DL was much lower than that of DL-Tik. Moreover, the computational time of both DL and DL-Tik varies little with the number of elements. On the other hand, more training time is required as the number of elements increases, due to the additional iterations before convergence. Since the number of elements has no obvious influence on the errors or the reconstruction time of DL and DL-Tik, we recommend choosing it according to the time available for training.

3.3 Results and discussions

3.3.1 Results for general discussion and comparisons of algorithms

To test the general performance of the algorithms, simulative studies were conducted with a large number of testing phantoms. In these implementations, the beam arrangement was the same as in Section 3.2, 2% uniform noise was added to the projections, and 16 training phantoms were used to generate dictionaries with 800 elements. The reconstructions of three selected phantoms are shown in Fig. 4. Phantom 1 was introduced in Section 3.1. Phantom 2 consists of one Gaussian peak and one Gaussian valley. Phantom 3 contains three Gaussian peaks and is thus referred to as the triple-Gauss phantom. Contours are drawn in different views to illustrate the reconstructions, and the relative errors and SSIM are compared as well.


Fig. 4. The phantoms and the corresponding reconstructions of the three methods with the relative error and SSIM listed in the figure.


For Phantoms 1 and 3, DL-Tik exhibited the best performance among the three methods, with the lowest error and the largest SSIM. The positions and heights of the peaks recovered by DL-Tik also matched the phantoms best. The performance of DL was slightly inferior to DL-Tik, while Tikhonov reconstruction performed worst, producing over-smoothed results. For Phantom 2, the three approaches performed similarly.

In addition, the computational costs for these phantoms are shown in Table 1. DL is the most computationally efficient, with a computational time on the order of milliseconds, indicating its potential for real-time applications. Meanwhile, the computational time of Tikhonov regularization was about 0.12 s, and that of DL-Tik was twice this value.


Table 1. Computational time for three phantoms reconstructed by the three methods.

3.3.2 Robustness against deviated training

In practical applications, the characteristics of the available training phantoms may deviate from those of the target field to be reconstructed, so the influence of such deviation should be assessed. To study this effect, the training sets used to generate dictionaries for reconstructing Phantom 1 and Phantom 3 in Section 3.3.1 were exchanged: 16 double-Gauss training phantoms were used to generate a dictionary for reconstructing Phantom 3, and 16 triple-Gauss training phantoms for Phantom 1. Results for Phantom 2 are omitted as redundant; the same conclusions would be reached if Phantom 2 were used in place of Phantom 1 or Phantom 3. Other simulation conditions were kept the same as in Section 3.3.1. The corresponding reconstructions are shown in Fig. 5.


Fig. 5. The phantoms and the corresponding reconstructions. Phantom 1 was reconstructed with a dictionary generated from triple-Gauss training phantoms, and Phantom 3 with double-Gauss training phantoms.


It can be seen that, even with a deviated training set, DL-Tik still performs well. The relative error and SSIM suggest that the reconstructions replicate the ground truth well, though they are inferior to the results in Section 3.3.1. This can be explained by the fact that DL uses patches from the training phantoms as the elements for reconstruction, and patches from deviated phantoms still share similarities with the test phantoms; as long as the similar features of the patches are recorded, effective reconstructions can be accomplished. However, comparison with the results in Section 3.3.1 indicates that using priors more similar to the ground truth yields more accurate reconstruction.

3.3.3 Effects of beam configurations

This section investigates the effect of the beam configuration which has a dramatic impact on the reconstruction accuracy as it determines the weight matrix [4850]. Simulative cases were implemented with Phantom 3. As in Section 3.2, four projections each with 40 equidistant beams were assumed. The simulation was repeated 100 times, and each time 32 beams were randomly selected from the 160 beams. The results of the assessment metrics of the three algorithms including the mean, the maximum, the minimum and the standard deviations (σ) are shown in Fig. 6.


Fig. 6. Relative error and SSIM for reconstructions with only 32 randomly selected beams. The mean error and SSIM are displayed as bars, the maximum and minimum values are represented by error bars, and the standard deviations are listed in the chart.


On the one hand, compared with the results in Section 3.3.1, the errors with randomly chosen beams were larger than those of uniformly spaced beams, and the SSIMs were smaller, especially for DL and DL-Tik. On the other hand, although the mean values of errors and SSIMs of the three methods were close, the maximum error and minimum SSIM of Tikhonov reconstruction deviated from the mean values dramatically, indicating poor reconstruction stability. The standard deviations of error and SSIM of Tikhonov reconstruction were also much larger than those of DL and DL-Tik, suggesting that DL and DL-Tik were less sensitive to the beam arrangement than Tikhonov reconstruction.

3.3.4 Influence of noise level

Measurement noise is ubiquitous in practical applications and its influence on the algorithms was also tested. Phantoms with three Gaussian peaks similar to Phantom 3 were used here for both training and testing. 16 training and 30 testing phantoms were randomly generated and 32 beams were assumed, whose arrangement was the same as that in Section 3.3.1. Noisy projections were simulated with a noise level ranging from 0.5% to 10%. The reconstruction results are summarized in Fig. 7.


Fig. 7. (a) Error and (b) SSIM of reconstructions under a noise level ranging from 0.5% to 10%.


As can be seen, both DL-Tik and Tikhonov reconstruction show good noise immunity. As the noise level increases, the relative error of the two methods increases by only 3% and the SSIM decreases slightly. Moreover, in terms of relative error and SSIM, DL-Tik outperforms Tikhonov reconstruction at almost all noise levels, especially when the noise level is low. The DL method works well at low noise levels; however, as the noise level increases, its error grows rapidly and its SSIM drops below 0.3, indicating poor noise resistance. This behavior can be inferred from the reconstruction process of these methods, which can be summarized by

$${{\boldsymbol x}_0} + {{\boldsymbol x}_e} = {\boldsymbol V}({{{\boldsymbol b}_0} + {\boldsymbol e}} ),$$
where ${{\boldsymbol x}_0}$ is the ground truth, ${{\boldsymbol x}_e}$ the reconstruction error caused by noise, ${{\boldsymbol b}_0}$ the noise-free component of the projection, and ${\boldsymbol e}$ the measurement noise. ${\boldsymbol V}$ is the reconstruction operator, which equals ${{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}$ for DL, ${({{{\boldsymbol A}^T}{\boldsymbol A} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol A}^T}$ for Tikhonov, and ${({{\boldsymbol I} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}$ for DL-Tik. Since the operator is linear, Eq. (21) can be decomposed into ${{\boldsymbol x}_0} = {\boldsymbol V}{{\boldsymbol b}_0}$ and ${{\boldsymbol x}_e} = {\boldsymbol Ve}$. Note that for Tikhonov reconstruction and DL-Tik, the operator ${\boldsymbol V}$ contains the Laplacian matrix ${\boldsymbol L}$. Therefore, the measurement noise ${\boldsymbol e}$ is smoothed when mapped into ${{\boldsymbol x}_e}$ by ${\boldsymbol V}$. Since the noise is assumed to be random with zero mean, ${{\boldsymbol x}_e}$ tends toward zero as well. This is why Tikhonov and DL-Tik have good noise immunity. In contrast, DL lacks such a smoothing operator and is thus sensitive to noise. In conclusion, DL-Tik had the best performance over a wide range of noise levels.
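This smoothing effect can be reproduced numerically. The sketch below is our own illustration, not the paper's implementation: it builds an arbitrary ill-conditioned system matrix and a simple 1-D Laplacian-type smoothing matrix (both chosen for illustration only), then compares how strongly measurement noise ${\boldsymbol e}$ is amplified by a Tikhonov-style operator versus a plain pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# synthetic ill-conditioned system matrix with decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vt, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                 # condition number ~1e6
A = U @ np.diag(s) @ Vt

# simple 1-D Laplacian-type smoothing matrix (illustrative choice)
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

gamma = 1e-2
V_tik = np.linalg.solve(A.T @ A + gamma**2 * L.T @ L, A.T)  # Tikhonov operator
V_pinv = np.linalg.pinv(A)                                  # unregularized operator

e = 0.01 * rng.standard_normal(n)   # zero-mean measurement noise
x_e_tik = V_tik @ e                 # noise mapped through the regularized operator
x_e_pinv = V_pinv @ e               # noise mapped through the pseudo-inverse
```

For an ill-conditioned system, `np.linalg.norm(x_e_tik)` is orders of magnitude smaller than `np.linalg.norm(x_e_pinv)`, which is the mechanism behind the noise immunity discussed above.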

3.3.5 Effect of number of projections

This section investigates the effect of the number of beams on the reconstructions. The phantoms were the same as in Section 3.3.4, and a 2% noise level was used. The number of beams varied from 24 to 160, with the beams randomly selected from the 160 beams used in Section 3.3.3. Each simulation case had a different number of beams and was repeated 30 times. The mean relative error and SSIM of these cases are shown in Fig. 8.


Fig. 8. (a) Error and (b) SSIM of reconstructions under different beam numbers. Beams were selected randomly from the 160 beams in Section 3.3.3.


As seen, as the number of beams varies, DL-Tik has the smallest relative error and the largest SSIM in most simulation cases. As for the other two methods, DL performs better when the number of beams is smaller than 90, and vice versa. Interestingly, as the number of beams increases, the errors of DL-Tik and Tikhonov reconstruction decrease consistently. This can be explained by the fact that more beams provide more spatial information, which leads to a more physically plausible solution. For DL, however, no similar trend was observed: as the number of beams increases, its error first increases and then starts to decrease once the number passes a certain point. This phenomenon is attributed to the growth of the condition number of the matrix B. As discussed in Section 3.2, when the number of training cases approaches the number of beams, the condition number of B increases rapidly, introducing numerical errors in the calculation of the pseudo-inverse. This explanation was confirmed by recording and analyzing the condition numbers for all cases.
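The condition-number effect is easy to reproduce. The sketch below is our illustration with random stand-in matrices (not the paper's phantoms or system matrix): it forms ${\boldsymbol B} = {\boldsymbol A}{\boldsymbol X}$ and shows that its conditioning degrades sharply as the number of training columns approaches the number of beams.

```python
import numpy as np

rng = np.random.default_rng(2)
n_beams, n_pix = 32, 400
A = rng.standard_normal((n_beams, n_pix))     # stand-in system matrix

def cond_of_B(n_train):
    """Condition number of B = A X for a random training set of n_train fields."""
    X = rng.standard_normal((n_pix, n_train))  # columns = training fields
    return np.linalg.cond(A @ X)

cond_few = cond_of_B(8)    # far fewer columns than rows: well conditioned
cond_near = cond_of_B(31)  # nearly square: smallest singular value collapses
```

The near-square case yields a much larger condition number, so the pseudo-inverse used in the DL mapping becomes numerically fragile, consistent with the trend observed in Fig. 8.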

On the other hand, the DL-Tik curves show that once the number of beams reaches about 70, the gains in SSIM and the reductions in error slow down. In other words, there are diminishing returns from further increasing the number of beams. These results can serve as a reference for balancing experimental cost and complexity against reconstruction accuracy.

4. Experimental studies

To further test the performance of the proposed algorithms, studies were performed on experimental data from the recent work of Nasir and Sanders, who applied TAS to measure ammonia in diesel engine exhaust [38]. In their work, the ROI was probed with a near-infrared quantum cascade laser (QCL). 30 equiangular projections were obtained, each comprising 71 equidistant parallel LOS measurements. The ROI was discretized into 71×71 grids, the distribution of the ammonia absorption coefficient was reconstructed with the inverse Radon transform [51], and the mole fraction of ammonia was then recovered. Absorbance data at 100 timestamps within an overall period of 500 ms were recorded. In this section, the proposed algorithms are applied to post-process the raw data from Nasir et al. [38] to test their effectiveness.

In our work, the reconstruction of Case 1 in Nasir et al.'s work [38] was repeated to evaluate the performance of our methods. The reconstructed absorption coefficient distributions at 5-50 ms, i.e., the inverse Radon reconstructions at the first ten timestamps, were used as the training set for the recovery of each subsequent timestamp. In addition, when conducting DL-Tik, the smoothness prior was applied only to the central region, whose extent was determined from the inverse Radon transform. Figure 9 presents the reconstructions of the inverse Radon transform, DL, and DL-Tik at different timestamps. Comparing the DL and DL-Tik results with those of the inverse Radon transform, the reconstructions are similar in structure, indicating that both DL and DL-Tik are effective. Moreover, unlike the inverse Radon reconstructions, which feature numerous artifacts, the reconstructions from DL and DL-Tik are smoother and more physically plausible.
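To make the dictionary-based pipeline concrete, here is a heavily simplified sketch of the mapping ${\boldsymbol x} = {{\boldsymbol E}^{-1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{-1}}{\boldsymbol b}$ with ${{\boldsymbol F}^{-1}} = {\boldsymbol Y}{{\boldsymbol B}^{+}}$. It is our illustration under strong assumptions: the patch operator ${\boldsymbol E}$ is taken as the identity, and an orthonormal SVD basis of the training fields stands in for the learned sparse dictionary.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_beams, n_train = 64, 32, 16

A = rng.random((n_beams, n_pix))  # stand-in system matrix (beam path lengths)
X = rng.random((n_pix, n_train))  # training fields, one per column

# "dictionary" D: orthonormal basis spanning the training fields (E = I assumed)
D, _, _ = np.linalg.svd(X, full_matrices=False)
Y = D.T @ X                       # representation coefficients of the training set

B = A @ X                         # training projections, B = A X
F_inv = Y @ np.linalg.pinv(B)     # reconstruction map F^{-1} = Y B^+

def reconstruct(b):
    """Map measured projections b to a field via x = D F^{-1} b."""
    return D @ (F_inv @ b)

# sanity check: a field inside the training span is recovered from its projections
x_true = X @ rng.random(n_train)  # linear combination of training fields
x_rec = reconstruct(A @ x_true)
```

For fields within the span of the training set the recovery is exact up to round-off, which illustrates why the quality of the training set (here, the early inverse Radon reconstructions) governs the quality of the DL reconstructions.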


Fig. 9. Reconstructions of NH3 mole fractions with three algorithms for different timestamps, including 125 ms, 250 ms, 375 ms and 495 ms.


To assess the performance of the algorithms, the residual in the projections and the computational time were computed and are listed in Table 2; the values are averages over 30 cases. The residual, namely the discrepancy between the projections predicted according to Eq. (6) and the measured projections, differed little between the inverse Radon transform and DL. DL-Tik had a slightly larger residual, which may result from the effect of regularization. DL required the least computational time among the algorithms, about 2 ms, which is less than the time interval between two measurements. Consequently, it has the potential to enable real-time data processing.


Table 2. Reconstruction residual and average computational time for processing the experimental data.

The performance of DL and DL-Tik was also tested with reduced projection data. Two strategies were adopted to reduce the data: the first reduces the number of projections while keeping the number of beams in each projection the same; the second uses fewer beams in each projection. The reconstructions with reduced projection data are shown in Fig. 10, where the number of projections is denoted by 'A' and the number of beams in each projection by 'T'. For example, the reconstruction using the full measurement data is denoted 30A×71T; the notations 10A×71T and 30A×24T correspond to the two strategies, respectively.


Fig. 10. Reconstructed NH3 mole fractions at 125 ms with different beam arrangements. Notations A and T indicate the number of projections and the number of beams in each projection, respectively.


For the inverse Radon transform, fewer projections resulted in more streak-like artifacts, and fewer beams in each projection significantly reduced the resolution. In contrast, it is hard to discern differences between the DL and DL-Tik reconstructions under the different beam arrangements, suggesting their robustness under restricted beam-arrangement conditions.

5. Conclusions

This work proposed DL-based methods for TAS with limited projection data. Simulation studies were conducted to compare their performance with Tikhonov reconstruction. The results suggested that DL and DL-Tik outperform Tikhonov regularization under moderate noise. The good reconstructions obtained by DL and DL-Tik with a deviated training set demonstrated their robustness. Additionally, different beam arrangements can cause large variations in the performance of Tikhonov reconstruction, whereas DL and DL-Tik yield stable reconstructions. DL is easily affected by noise, while Tikhonov reconstruction and DL-Tik have better noise immunity. As the number of beams increases, the DL reconstructions are degraded by the large condition number of the matrix B, while DL-Tik has the best performance in most application scenarios. DL and DL-Tik were also tested with experimental data to recover two-dimensional fields of NH3 mole fraction. The results showed that they can effectively suppress artifacts and recover a more physically plausible solution than the inverse Radon transform. Moreover, their performance remains stable even as the number of beams decreases, while the inverse Radon reconstructions deteriorate significantly. Nevertheless, the DL-based methods are not flawless and need further improvement. For instance, the current method requires a good number of projections and probing beams, which may be unavailable in some practical applications due to limited space and optical access. How to maintain the same level of performance with much more restricted projection data is therefore a topic for future work. One possible solution is to use high-quality artificial data for training so as to fully exploit the a priori information and relax the dependency on posterior data, i.e., measurements.

Funding

National Natural Science Foundation of China (51976122, 52061135108); Foundation of Science and Technology on Combustion and Explosion Laboratory (6142603200508); National Science and Technology Major Project (2017-III-0007-0033).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in [38].

References

1. C. S. Goldenstein, R. M. Spearrin, J. B. Jeffries, and R. K. Hanson, “Infrared laser-absorption sensing for combustion gases,” Prog. Energy Combust. Sci. 60, 132–176 (2017). [CrossRef]  

2. J. A. Nwaboh, Z. Qu, O. Werhahn, and V. Ebert, “Interband cascade laser-based optical transfer standard for atmospheric carbon monoxide measurements,” Appl. Opt. 56(11), E84–E93 (2017). [CrossRef]  

3. M. G. Allen, “Diode laser absorption sensors for gas-dynamic and combustion flows,” Meas. Sci. Technol. 9(4), 545–562 (1998). [CrossRef]  

4. W. W. Cai and C. F. Kaminski, “Tomographic absorption spectroscopy for the study of gas dynamics and reactive flows,” Prog. Energy Combust. Sci. 59, 1–31 (2017). [CrossRef]  

5. S. J. Grauer, J. Emmert, S. T. Sanders, S. Wagner, and K. J. Daun, “Multiparameter gas sensing with linear hyperspectral absorption tomography,” Meas. Sci. Technol. 30(10), 105401 (2019). [CrossRef]  

6. C. Wei, K. K. Schwarm, D. I. Pineda, and R. M. Spearrin, “Volumetric laser absorption imaging of temperature, CO and CO2 in laminar flames using 3D masked Tikhonov regularization,” Combustion and Flame (2020).

7. Z. Lang, S. Qiao, Y. He, and Y. Ma, “Quartz tuning fork-based demodulation of an acoustic signal induced by photo-thermo-elastic energy conversion,” Photoacoustics 22, 100272 (2021). [CrossRef]  

8. Y. Ma, Y. Hu, S. Qiao, Y. He, and F. K. Tittel, “Trace gas sensing based on multi-quartz-enhanced photothermal spectroscopy,” Photoacoustics 20, 100206 (2020). [CrossRef]  

9. S. Qiao, Y. Ma, Y. He, P. Patimisco, A. Sampaolo, and V. Spagnolo, “Ppt level carbon monoxide detection based on light-induced thermoelastic spectroscopy exploring custom quartz tuning forks and a mid-infrared QCL,” Opt. Express 29(16), 25100–25108 (2021). [CrossRef]  

10. J. Foo and P. A. Martin, “Tomographic imaging of reacting flows in 3D by laser absorption spectroscopy,” Appl. Phys. B 123(5), 160 (2017). [CrossRef]  

11. W. W. Cai and C. F. Kaminski, “A tomographic technique for the simultaneous imaging of temperature, chemical species, and pressure in reactive flows using absorption spectroscopy with frequency-agile lasers,” Appl. Phys. Lett. 104(3), 034101 (2014). [CrossRef]  

12. N. Terzija, J. L. Davidson, C. A. Garcia-Stewart, P. Wright, K. B. Ozanyan, S. Pegrum, T. J. Litt, and H. McCann, “Image optimization for chemical species tomography with an irregular and sparse beam array,” Meas. Sci. Technol. 19(9), 094007 (2008). [CrossRef]  

13. D. Calvetti, S. Morigi, L. Reichel, and F. Sgallari, “Tikhonov regularization and the L-curve for large discrete ill-posed problems,” J. Comput. Appl. Mathematics 123(1-2), 423–446 (2000). [CrossRef]  

14. K. J. Daun, S. J. Grauer, and P. J. Hadwin, “Chemical species tomography of turbulent flows: Discrete ill-posed and rank deficient problems and the use of prior information,” J. Quant. Spectrosc. Radiat. Transfer 172, 58–74 (2016). [CrossRef]  

15. A. N. Tikhonov, “Inverse problems in heat conduction,” Journal of Engineering Physics 29(1), 816–820 (1975). [CrossRef]  

16. J. Q. Huang, H. C. Liu, J. H. Dai, and W. W. Cai, “Reconstruction for limited-data nonlinear tomographic absorption spectroscopy via deep learning,” J. Quant. Spectrosc. Radiat. Transfer 218, 187–193 (2018). [CrossRef]  

17. J. Q. Huang, J. A. Zhao, and W. W. Cai, “Compressing convolutional neural networks using POD for the reconstruction of nonlinear tomographic absorption spectroscopy,” Comput. Phys. Commun. 241, 33–39 (2019). [CrossRef]  

18. B. A. Olshausen and D. J. Field, “Sparse coding with an overcomplete basis set: A strategy employed by V1?” Vision Res. 37(23), 3311–3325 (1997). [CrossRef]  

19. K. Engan, K. Skretting, and J. H. Husoy, “Family of iterative LS-based dictionary learning algorithms, ILS-DLA, for sparse signal representation,” Digital Signal Processing 17(1), 32–49 (2007). [CrossRef]  

20. M. D. Plumbley, “Dictionary learning for L1-exact sparse coding,” in International Conference on Independent Component Analysis and Signal Separation (2007), pp. 406–413.

21. X. Chen, Z. Du, J. Li, X. Li, and H. Zhang, “Compressed sensing based on dictionary learning for extracting impulse components,” Signal Processing 96, 94–109 (2014). [CrossRef]  

22. I. M. Daniela, P. B. Steven, C. R. Joel, and G. Chandana, “Undercomplete learned dictionaries for land cover classification in multispectral imagery of Arctic landscapes using CoSA: clustering of sparse approximations,” in Proc. SPIE (2013).

23. Q. Zhang and B. Li, “Discriminative K-SVD for dictionary learning in face recognition,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010), pp. 2691–2698.

24. S. Ravishankar and Y. Bresler, “MR Image Reconstruction From Highly Undersampled k-Space Data by Dictionary Learning,” IEEE Trans. Med. Imaging 30(5), 1028–1041 (2011). [CrossRef]  

25. C. Zhang, T. Zhang, M. Li, C. Peng, Z. Liu, and J. Zheng, “Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares,” BioMedical Engineering OnLine 15(1), 66 (2016). [CrossRef]  

26. I. Tosic and P. Frossard, “Dictionary Learning,” IEEE Signal Processing Magazine (2011).

27. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15(12), 3736–3745 (2006). [CrossRef]  

28. J. C. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, Vols 1-12 (IEEE, New York, 2008), pp. 2378.

29. M. Zhu, D. Luo, and L. Yi, “Generalized PCA algorithm for feature extraction,” Comput. Eng. Appl. 44, 38–40 (2008).

30. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006). [CrossRef]  

31. J. Mairal, J. Ponce, G. Sapiro, A. Zisserman, and F. Bach, “Supervised dictionary learning,” in Neural Information Processing Systems (2008), pp. 1033–1040.

32. R. Mazhar and P. D. Gader, “EK-SVD: Optimized dictionary design for sparse representations,” in International Conference on Pattern Recognition (2008), pp. 1–4.

33. M. Yaghoobi, T. Blumensath, and M. E. Davies, “Regularized dictionary learning for sparse approximation,” in European Signal Processing Conference (2008), pp. 1–5.

34. X. Deng, D. Wang, L. Cheng, and S. Kong, “Traffic sign recognition using dictionary learning method,” in WRI Global Congress on Intelligent Systems (2010), pp. 372–375.

35. S. Yang, Z. Liu, M. Wang, F. Sun, and L. Jiao, “Multitask dictionary learning and sparse representation based single-image super-resolution reconstruction,” Neurocomputing 74(17), 3193–3203 (2011). [CrossRef]  

36. H. H. Kong, X. X. Lei, L. Lei, Y. B. Zhang, and H. Y. Yu, “Spectral CT Reconstruction Based on PICCS and Dictionary Learning,” IEEE Access 8, 133367–133376 (2020). [CrossRef]  

37. W. Wu, H. Yu, P. Chen, F. Luo, F. Liu, Q. Wang, Y. Zhu, Y. Zhang, J. Feng, and H. Yu, “Dictionary learning based image-domain material decomposition for spectral CT,” Phys. Med. Biol. 65(24), 245006 (2020). [CrossRef]  

38. E. F. Nasir and S. T. Sanders, “Laser absorption tomography for ammonia measurement in diesel engine exhaust,” Appl. Phys. B 126(11), 178 (2020). [CrossRef]  

39. J. J. Olivero and R. L. Longbothum, “Empirical fits to the Voigt line width: a brief review,” J. Quant. Spectrosc. Radiat. Transfer 17(2), 233–236 (1977). [CrossRef]  

40. R. Gordon, R. Bender, and G. T. Herman, “Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and x-ray photography,” J. Theor. Biol. 29(3), 471–481 (1970). [CrossRef]  

41. C. Badea and R. Gordon, “Experiments with the nonlinear and chaotic behaviour of the multiplicative algebraic reconstruction technique (MART) algorithm for computed tomography,” Phys. Med. Biol. 49(8), 1455–1474 (2004). [CrossRef]  

42. A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the art algorithm,” Ultrasonic imaging 6(1), 81–94 (1984). [CrossRef]  

43. A. P. Dempster, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. 39(1), 1–22 (1977). [CrossRef]  

44. P. C. Hansen, “The truncated SVD as a method for regularization,” Bit 27(4), 534–553 (1987). [CrossRef]  

45. S. Chen, S. A. Billings, and W. Luo, “Orthogonal least squares methods and their application to non-linear system identification,” Int. J. Control 50(5), 1873–1896 (1989). [CrossRef]  

46. K. J. Daun, “Infrared species limited data tomography through Tikhonov reconstruction,” in Proceedings of the ASME Summer Heat Transfer Conference 2009, Vol. 1 (2009), pp. 187–196.

47. Z. B. Zhang, X. Y. Wang, G. A. Zheng, and J. G. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 9 (2017). [CrossRef]  

48. Q. Wang, T. Yu, H. Liu, J. Huang, and W. Cai, “Optimization of camera arrangement for volumetric tomography with constrained optical access,” J. Opt. Soc. Am. B 37(4), 1231–1339 (2020). [CrossRef]  

49. T. Yu, B. Tian, and W. W. Cai, “Development of a beam optimization method for absorption-based tomography,” Opt. Express 25(6), 5982–5999 (2017). [CrossRef]  

50. S. J. Grauer, P. J. Hadwin, and K. J. Daun, “Bayesian approach to the design of chemical species tomography experiments,” Appl. Opt. 55(21), 5772–5782 (2016). [CrossRef]  

51. A. C. Kak, M. Slaney, and G. Wang, “Principles of Computerized Tomographic Imaging,” Med. Phys. 29(1), 107 (2002). [CrossRef]  




Figures (10)

Fig. 1.
Fig. 1. Illustration of tomographic absorption spectroscopy. The ROI is discretized into M = n×n grids. The intersection of the i-th beam with the j-th grid is highlighted in yellow and denoted as L(i,j).
Fig. 2.
Fig. 2. (a) Phantom 1, namely double-Gauss phantom for testing and (b) the corresponding cases for dictionary training.
Fig. 3.
Fig. 3. Results for DL parameter study. (a) The reconstruction error and the condition number as a function of the number of training phantoms; (b) Evolution of residual in the training process for different number of dictionary elements; (c) Reconstruction error as a function of the number of dictionary elements and (d) Computational time as a function of the number of dictionary elements.
Fig. 4.
Fig. 4. The phantoms and the corresponding reconstructions of the three methods with the relative error and SSIM listed in the figure.
Fig. 5.
Fig. 5. The phantoms and the corresponding reconstructions. Phantom 1 was reconstructed with a dictionary generated from triple-Gauss training phantoms, and Phantom 3 with a dictionary from double-Gauss training phantoms.
Fig. 6.
Fig. 6. Relative error and SSIM for reconstructions with only 32 randomly selected beams. The mean error and SSIM are displayed as bar charts, with the maximum and minimum values represented by error bars. Standard deviations are listed in the chart.
Fig. 7.
Fig. 7. (a) Error and (b) SSIM of reconstructions under a noise level ranging from 0.5% to 10%.
Fig. 8.
Fig. 8. (a) Error and (b) SSIM of reconstructions under different beam numbers. Beams were selected randomly from the 160 beams in Section 3.3.3
Fig. 9.
Fig. 9. Reconstructions of NH3 mole fractions with three algorithms for different timestamps, including 125 ms, 250 ms, 375 ms and 495 ms.
Fig. 10.
Fig. 10. Reconstructed NH3 mole fractions at 125 ms with different beam arrangements. Notations A and T indicate the number of projections and the number of beams in each projection, respectively.

Tables (2)


Table 1. Computational time for three phantoms reconstructed by the three methods.


Table 2. Reconstruction residual and average computational time for processing the experimental data.

Equations (21)

$$b(\lambda ) = - \ln \frac{{{I_t}(\lambda )}}{{{I_0}(\lambda )}} = \int P\,Z(l)\,S({T(l),\lambda } )\,\varphi ({\lambda ,T(l),Z(l)} )\,dl = \int x({\lambda ,T(l),Z(l)} )\,dl, \tag{1}$$
$${b_i}(\lambda ) = \sum\nolimits_j {{x_j}(\lambda )L({i,j} )} , \tag{2}$$
$${\boldsymbol b} = {\boldsymbol A}{\boldsymbol x}, \tag{3}$$
$${\boldsymbol E}{\boldsymbol X} = {\boldsymbol D}{\boldsymbol Y}, \tag{4}$$
$$\min {\|{{\boldsymbol E}{\boldsymbol X} - {\boldsymbol D}{\boldsymbol Y}} \|_F}\quad \textrm{s}\textrm{.t}\textrm{.}\quad {\|{{{\boldsymbol y}_i}} \|_0} \le t, \tag{5}$$
$${\boldsymbol B} = {\boldsymbol A}{\boldsymbol X}, \tag{6}$$
$$f({\boldsymbol Y} )= {\boldsymbol B}, \tag{7}$$
$${\boldsymbol F}{\boldsymbol Y} = {\boldsymbol B}, \tag{8}$$
$${\boldsymbol F}{{\boldsymbol y}_i} = {{\boldsymbol b}_i}, \tag{9}$$
$${{\boldsymbol y}_r} = {{\boldsymbol F}^{ - 1}}{{\boldsymbol b}_r}, \tag{10}$$
$${{\boldsymbol F}^{ - 1}} = {\boldsymbol Y}{{\boldsymbol B}^ + }, \tag{11}$$
$${{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol Y}_{\boldsymbol U}} = {\boldsymbol D}{\boldsymbol Y}, \tag{12}$$
$${\boldsymbol F}_{\boldsymbol U}^{ - 1} = {{\boldsymbol Y}_{\boldsymbol U}}{{\boldsymbol B}^ + }, \tag{13}$$
$${\boldsymbol x} = {{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b}. \tag{14}$$
$$\min ({{{\|{{\boldsymbol I}{\boldsymbol x} - {{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b}} \|}^2} + \gamma {{\|{{\boldsymbol L}{\boldsymbol x}} \|}^2}} ), \tag{15}$$
$${L_{ij}} = \left\{ {\begin{array}{ll} {1,}&{i = j}\\ { - 1/{n_i},}&{j \in \textrm{neighbors of } i}\\ {0,}&{\textrm{otherwise}} \end{array}} \right. \tag{16}$$
$${\boldsymbol x} = {({{{\boldsymbol I}^T}{\boldsymbol I} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol I}^T}{{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b} = {({{\boldsymbol I} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol E}^{ - 1}}{{\boldsymbol D}_{\boldsymbol U}}{{\boldsymbol F}^{ - 1}}{\boldsymbol b}. \tag{17}$$
$${\boldsymbol x} = {({{{\boldsymbol A}^T}{\boldsymbol A} + {\gamma^2}{{\boldsymbol L}^T}{\boldsymbol L}} )^{ - 1}}{{\boldsymbol A}^T}{\boldsymbol b}. \tag{18}$$
$${e_r} = {\|{{{\boldsymbol x}_r} - {{\boldsymbol x}_0}} \|_2}/{\|{{{\boldsymbol x}_0}} \|_2}, \tag{19}$$
$$\textrm{SSIM} = \frac{{({2{u_r}{u_0} + {c_1}} )({2{\sigma_{r0}} + {c_2}} )}}{{({u_r^2 + u_0^2 + {c_1}} )({\sigma_r^2 + \sigma_0^2 + {c_2}} )}}, \tag{20}$$
$${{\boldsymbol x}_0} + {{\boldsymbol x}_e} = {\boldsymbol V}({{{\boldsymbol b}_0} + {\boldsymbol e}} ). \tag{21}$$