
Integrated approach to the data processing of four-dimensional datasets from phase-contrast x-ray tomography

Open Access

Abstract

Phase contrast X-ray tomography (PCT) enables the study of systems consisting of elements with similar atomic numbers. Processing datasets acquired using PCT is nontrivial because of the low-pass characteristics of the commonly used single-image phase-retrieval algorithm. In this study, we introduce an image processing methodology that simultaneously utilizes both the phase and attenuation components of an image obtained at a single detector distance. This method, combined with a regularized Perona-Malik filter and a bias-corrected fuzzy C-means algorithm, allows for the automated segmentation of data acquired through four-dimensional PCT. Using this integrated approach, the three-dimensional coarsening morphology of an Al-29.9 wt% Si alloy can be analyzed.

© 2014 Optical Society of America

1. Introduction

Phase contrast x-ray tomography (PCT) enables the study of weakly absorbing samples, as well as systems consisting of elements with similar atomic numbers. This is because the real part of the refractive index, δ, dominates over the imaginary part, β, in PCT experiments [1, 2]. In propagation-based PCT, a phase map is commonly obtained by applying phase-retrieval algorithms to projections collected with a setup similar to that shown in Fig. 1 [3, 4]. Filtered back projection [5] is then applied to these phase maps in order to recover the refractive index decrement during reconstruction. This two-step approach [6] of phase retrieval followed by backprojection will hereafter be referred to as PAG [3]. On the other hand, the one-step approach of using the filtered back projection algorithm to reconstruct images directly from the traditional absorption-based projections, collected at the same sample-to-detector distance (R2), will be referred to as FBP. Robust segmentation of the PCT reconstructions is crucial for quantitative analysis of 3D structures, e.g., measurements of interfacial area and interfacial curvature [7].

Fig. 1 PCT experiment, where R2 is the detector distance, θ is the projection angle, and λ is the wavelength of the propagating wave. The frame (x, y, z) is the reference frame, while (r1, r2) lie in the plane of the imaging detector [4].

The growing size of data collected during the experiments, typically on the order of terabytes, renders manual segmentation impractical. Furthermore, such an approach does not offer a high degree of reproducibility, accuracy, or consistency. However, automated segmentation of PCT reconstructions is nontrivial for the following reasons:

  • During the phase-retrieval step in the PAG approach, the tomograms undergo a smoothing operation. In other words, the single-image phase-retrieval algorithms that are conventionally used show inherently low-pass characteristics [8, 9], which, in our study, leads to diffuse interfaces in the PAG reconstructions. Smoothing of the interface makes it difficult to determine accurately the interfacial morphology.
  • On the other hand, the FBP images are characterized by dark-bright fringes at the interfaces, giving rise to the so-called halo effect [10]. In the near-field or short propagation regime, only the first-order Fresnel fringes are visible in the reconstructions [1, 2]. These images are very challenging to segment.

Conventional image processing techniques, such as histogram thresholding and k-means clustering, fail to provide reliable results; for example, the “halo effect” described above leads to spurious edges in the binary image [10]. Nevertheless, a few attempts have been made to semi-automatically segment PCT datasets. Ref. [11] used a weak watershed transform assembly to increase segmentation robustness; in this method, however, the centroids of the particles must be marked by a user prior to segmentation. Ref. [12] presented a learning classifier to guide a constrained statistical shape model to fit the data; such a method is overly deterministic and does not allow particle shapes outside of the classifier to be segmented. For the same reason, the rotating kernel transform [13] is also ineffective in segmenting multiple particles of variable shape and size. To circumvent the problems in segmenting PCT images, Ref. [14] proposed combining FBP and PAG images in Fourier space in their study of lung alveoli; the resulting two-dimensional composite images were then easily thresholded [14].

We propose an integrated image processing methodology that utilizes both phase and absorption contrast, derived by applying the PAG and FBP algorithms separately to the raw PCT data, along with a suite of data processing methods, to allow the automated segmentation of large datasets acquired through 4D tomography. Unlike the work of Ref. [14], the images are combined in real space, and because of the complexity of the microstructure and the sensitivity of our measurements, a more robust image processing procedure is necessary. The hybrid images feature an improved contrast-to-noise ratio and spatial resolution, thereby enabling the automated segmentation of such images by non-linear diffusion filtering and fuzzy logic. The binary images are then combined to reveal the 3D microstructures, in this case for an Al-29.9 wt% Si alloy coarsening in time. To our knowledge, this is the first time that 4D PCT has been used to study the 3D interfacial morphologies of an alloy consisting of elements with similar atomic numbers.

2. Materials and methods

2.1. Experimental methodology

Al-Si rods of composition 29.9 wt% Si were prepared by Ames Laboratory [15], see Ref. [16] for experimental details. The 4D propagation-based phase contrast tomography experiment was conducted ex situ at the TOmographic Microscopy and Coherent rAdiology ExperimenTs (TOMCAT) beamline of the Swiss Light Source (Paul Scherrer Institut, Switzerland) [17]. The sample-to-detector distance was set to 110.0 mm and optimized for the sample volume and a monochromatic X-ray energy of 28 keV. Such a setup satisfied the near-field condition for PCT [3, 18].

The alloy consisted of large, interconnected Si laths in a eutectic matrix, see Fig. 2. The sample was placed in a custom-made isothermal furnace, and the Si laths were allowed to coarsen at 590 °C, just above the eutectic temperature of 577 °C. After 10 minutes, the sample was taken out of the furnace and tomographic projections were collected at room temperature when the sample was fully solid. The sample was then reheated to above the eutectic temperature, and kept at 590 °C for a subsequent 10 minutes, continuing the coarsening process. This cycle was repeated for six time iterations.

Fig. 2 PAG images of the Al-29.9 wt% Si sample coarsening for an elapsed time of 50 minutes. (a) As-cast microstructure is shown. Samples were coarsened in an isothermal furnace for (b–f) 10 minute increments at 590 °C.

Between each coarsening iteration, 1001 projections were collected over 180°. Phase retrieval and subsequent reconstruction of the images were conducted on-site using Paganin’s algorithm [3] and a modified Gridrec algorithm [19]. Additionally, FBP reconstructions of the data were produced at Northwestern University following the experiment and are used in the comparisons below. Each resulting dataset is 1525×1525×1598 μm, with a voxel size of 0.74×0.74×0.74 μm. Figure 2 shows the PAG reconstructions at various coarsening times.

2.2. Multimodal reconstruction technique

2.2.1. Conceptual outline

The phase map, ϕ(r1, r2; θ), of each projection is calculated during the phase-retrieval step of the PAG technique [3, 18, 20]. Taking the inverse Radon transform [5] of ϕ(r1, r2; θ) gives the backprojected image, μPAG(x, y, z),

$$\mu_{\mathrm{PAG}}(x, y, z) \propto \delta(x, y, z) \tag{1}$$
While it is easy to qualitatively differentiate between components in this δ-map, it is challenging to quantitatively characterize the system because of its low-pass characteristics. On the other hand, when the projections are backprojected without the intermediate phase-retrieval step, the reconstructed image intensity, μFBP(x, y, z), is such that [4]

$$\mu_{\mathrm{FBP}}(x, y, z) \propto \nabla^2 \delta(x, y, z) + \mu_{\mathrm{atten}}(x, y, z) + \mu_{\mathrm{mixed}}(x, y, z) \tag{2}$$

where μatten(x, y, z) is the linear attenuation coefficient, given by

$$\mu_{\mathrm{atten}}(x, y, z) = \frac{4\pi}{\lambda}\, \beta(x, y, z) \tag{3}$$

and μmixed(x, y, z) is a function of μatten(x, y, z) and δ(x, y, z) [4]. The dominant term in Eq. (2) is the Laplacian of the refractive index decrement, ∇²δ(x, y, z), since the β(x, y, z) term in Eq. (3) does not provide appreciable contrast in the images. While the FBP image offers sharper interfaces due to ∇²δ(x, y, z), the grayscale intensity levels of the components are very similar.

A hybrid PCT reconstruction, μ+(x, y, z), is a linear combination of the PAG and FBP images, such that

$$\mu_{+}(x, y, z) = c_1\, \mu_{\mathrm{PAG}}(x, y, z) + (1 - c_1)\, \mu_{\mathrm{FBP}}(x, y, z) \tag{4}$$
Here 0 ≤ c1 ≤ 1; the combination retains the strong contrast present in the PAG image and the sharp interfaces found in the FBP image. In other words, the FBP image is a natural source of image sharpening. Thus, it is possible to extract two sets of data, PAG and FBP, from a single PCT experiment, and their linear combination provides a hybrid reconstruction suitable for quantitative analysis. We anticipate that this multimodal approach is generally applicable to weakly absorbing samples (in which β ≈ 0) imaged in the near-field regime; these conditions give rise to the edge enhancement of μFBP and the low-pass characteristics of μPAG. However, the parameter c1, which reflects the contribution of the PAG image to the hybrid reconstruction, may differ between samples and require sample-specific tuning.
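In practice, Eq. (4) amounts to a per-voxel weighted sum of the two co-registered reconstructions. The following MATLAB sketch illustrates the blending for a single 2D slice; the file names, the normalization of each modality to [0, 1], and the choice c1 = 0.5 are illustrative assumptions rather than the exact implementation used in this work.

% Blend co-registered PAG and FBP slices per Eq. (4) (illustrative sketch).
muPAG = double(imread('slice_pag.tif'));   % hypothetical file names
muFBP = double(imread('slice_fbp.tif'));

% Rescale each modality to [0,1] so that c1 weights comparable ranges.
rescale01 = @(I) (I - min(I(:))) ./ (max(I(:)) - min(I(:)));
muPAG = rescale01(muPAG);
muFBP = rescale01(muFBP);

c1 = 0.5;                                  % PAG weight, 0 <= c1 <= 1
muHybrid = c1*muPAG + (1 - c1)*muFBP;      % hybrid reconstruction, Eq. (4)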

2.2.2. Multimodal image analysis

The scalar c1 in Eq. (4) is determined by optimizing μ+(x, y) with respect to two image quality metrics: contrast-to-noise ratio (CNR) and sharpness (SH). The notation (x, y) indicates a 2D slice of the 3D (x, y, z) volume. In other words, whereas Ref. [14] tuned the propagation distance for optimal image quality, we optimize the relative contributions of PAG and FBP images in μ+(x, y), at a single sample-to-detector distance. In this way, varying the scalar allows for robust segmentation of our PCT images. CNR is defined as

$$\mathrm{CNR} = 2 \left( \frac{|S_f - S_b|}{\sigma_f + \sigma_b} \right) \tag{5}$$
where S and σ are the mean and standard deviation, respectively, of the pixel values in the foreground, f, and background, b, regions. For our images, the foreground refers to the Si laths and the background refers to the eutectic matrix. CNR was determined by manually tracing over the interfaces of the Si laths in a representative number of 2D images.
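A minimal sketch of the CNR measurement in Eq. (5), assuming that fgMask and bgMask are logical masks for the Si laths and the eutectic derived from the manually traced interfaces (both masks, and the variable muHybrid from the earlier sketch, are assumptions):

% Contrast-to-noise ratio of a hybrid slice, Eq. (5).
Sf   = mean(muHybrid(fgMask));    sigf = std(muHybrid(fgMask));
Sb   = mean(muHybrid(bgMask));    sigb = std(muHybrid(bgMask));
CNR  = 2 * abs(Sf - Sb) / (sigf + sigb);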

Although SH lacks a precise technical definition, sharpness is intuitively related to the fineness of the resolvable details. Ref. [21] developed an algorithm to determine the overall SH of an image; we use their global single-parameter sharpness model, implemented as the ratio between the output energy of an ideal high-pass filter and that of an ideal band-pass filter [21]:

$$\mathrm{SH} = \int_{\bar{\xi} \in H} |\mu_{+}(\bar{\xi})|^2 \, d\bar{\xi} \Big/ \int_{\bar{\xi} \in B} |\mu_{+}(\bar{\xi})|^2 \, d\bar{\xi} \tag{6}$$

where ξ̄ = (ξx, ξy) are the Cartesian frequency coordinates, and H and B are the high-pass and band-pass frequency ranges, respectively. Additionally, the resolution of the multimodal images is cross-checked using the method described by Ref. [22]. First, the power spectral density (PSD) of an arbitrary line profile in the image is computed. This PSD converges to a value defined as the noise baseline. The resolution is obtained by taking twice the value of the PSD at the noise baseline and matching it to a spatial frequency, kres [22]. The spatial resolution, xres, is then calculated as

$$x_{\mathrm{res}} = \frac{2\pi}{k_{\mathrm{res}}} \tag{7}$$

This resolution criterion is applicable provided the maximum spatial frequency of the image is less than one-half of the sampling frequency of the line profile (i.e., the test image is over-sampled) [22]. Once the optimal scalar c1 is determined from the above metrics, the resultant multimodal image, μ+, is segmented according to the procedure described in the following sections.
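As an illustration of Eq. (6), the sketch below estimates SH for a 2D slice by comparing the spectral power above a high-pass cutoff with the power in a band-pass range. The cutoff values follow those quoted in Sec. 3.1, but the radially symmetric ideal filters and the simple FFT-based estimate are simplifying assumptions, not the implementation of Ref. [21]; the value of kres in the final two lines is a hypothetical placeholder for a measured quantity.

% Global sharpness metric, Eq. (6), for a 2D slice (simplified sketch).
F  = fftshift(fft2(muHybrid));
P  = abs(F).^2;                                  % spectral power
[ny, nx] = size(muHybrid);
[fx, fy] = meshgrid(((1:nx) - floor(nx/2) - 1)/nx, ((1:ny) - floor(ny/2) - 1)/ny);
fr = hypot(fx, fy);                              % radial frequency, cycles/pixel

kHigh = 0.32;                                    % high-pass cutoff (k1 in Sec. 3.1)
band  = [0.02, 0.2];                             % band-pass range
SH = sum(P(fr >= kHigh)) / sum(P(fr >= band(1) & fr <= band(2)));

% Resolution estimate of Eq. (7): kres (hypothetical value here) is read off
% where the PSD of a line profile meets twice its noise baseline, see Fig. 5.
kres = 0.2;
xres = 2*pi / kres;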

2.3. Segmentation of hybrid images

2.3.1. Regularized Perona-Malik filter

Our work aims to characterize the evolution of primary Si laths in an Al-Si system. Advanced image processing techniques, such as the Expectation Maximization/Maximization of Posterior Marginals (EM/MPM) algorithm [23, 24], which was recently applied to materials datasets, fail to segment PCT images with their current image models. As a result, this approach cannot be used to characterize the primary Si laths, which requires removing the smaller fluctuations in the matrix. These fluctuations are a result of the eutectic lamellae, which appear at the same intensity level as the primary laths and are an artifact of the quenching process. Moreover, it is necessary to enhance the edges of the primary Si laths in order to obtain a robust segmentation.

Blurring the eutectic constituent and enhancing the interfaces of the primary Si laths can be accomplished by using a nonlinear diffusion filter, such as a regularized Perona-Malik filter (RPM) [25, 26], which is applied on all 2D (x, y) slices of the 3D (x, y, z) dataset. In this method, a filtered image u(x, y, t), where t is time, is obtained as the solution of the diffusion equation

$$\frac{\partial u_\sigma(x, y, t)}{\partial t} = \mathrm{Div}\!\left( D\!\left( |\nabla u_\sigma(x, y, t)|^2 \right) \nabla u_\sigma(x, y, t) \right) \tag{8}$$
where
$$D\!\left( |\nabla u_\sigma(x, y, t)|^2 \right) = \exp\!\left( -\frac{|\nabla u_\sigma(x, y, t)|^2}{\kappa^2} \right) \tag{9}$$
$$u_\sigma(x, y, t) = K_\sigma \otimes u(x, y, t) \tag{10}$$
$$u(x, y, 0) = \mu_{+}(x, y) \tag{11}$$
Here D(|∇uσ(x, y, t)|²) is the nonlinear diffusion coefficient, κ is the gradient threshold parameter, Kσ is a Gaussian structuring element with standard deviation σ, and ⊗ denotes the convolution operation [25]. The input to the algorithm is the hybrid, multimodal image, see Eq. (11). The iterative convolution of the image u(x, y, t) with Kσ in Eq. (10) regularizes the Perona-Malik model, so that RPM is robust against local noise at scales smaller than or equal to σ [25]. This means that gradients arising from the lamellae are effectively removed, provided they are smaller than the Gaussian kernel. To minimize the local fluctuations in the eutectic, the images are also pre-processed with a combination of erosion and dilation operations and median filtering.

The gradient threshold parameter, κ, is commonly fixed at a user-defined value [26]. A fixed κ that is too small, however, may misinterpret large gradients due to noise as edges it should preserve, while a κ that is too large may delete edges and small structures during the diffusion process [27]. Thus, κ is updated such that it is proportional to the average noise in u(x, y, t) at any given time. Noise can be estimated from morphological filters, see Refs. [28, 29]; however, these morphological operations, e.g., erosion and dilation, are computationally expensive and thus we estimate average noise as

$$\kappa(t) \propto \kappa_0 \exp(-\omega t) \tag{12}$$
where ω is a constant. Equation (12) suggests that the noise in the image drops exponentially under many applications of the RPM filter; this is consistent with the decaying exponential form in Eq. (9). In this way, the gradient threshold parameter κ self-adapts to the image u after every iteration.
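One possible explicit time-stepping discretization of Eqs. (8)–(12) is sketched below. The time step dt, the 10×10 kernel size (quoted in Sec. 3.2), and the use of MATLAB's gradient/conv2 stencils are assumptions of this sketch rather than the exact numerical scheme used in this work.

% Regularized Perona-Malik filtering with a dynamic gradient threshold.
u      = muHybrid;                         % initial condition, Eq. (11)
kappa0 = 0.1;  omega = 0.05;  sigma = 0.5; % parameters from Sec. 3.2
nIter  = 1000; dt = 0.1;                   % dt is an assumed stable step size
Ksig   = fspecial('gaussian', [10 10], sigma);     % Gaussian kernel K_sigma

for t = 1:nIter
    kappa = kappa0 * exp(-omega * t);              % dynamic threshold, Eq. (12)
    usig  = conv2(u, Ksig, 'same');                % u_sigma = K_sigma (x) u, Eq. (10)
    [gx, gy] = gradient(usig);                     % regularized gradient
    D = exp(-(gx.^2 + gy.^2) / kappa^2);           % diffusivity, Eq. (9)
    [dxx, ~] = gradient(D .* gx);                  % d/dx of (D * du_sigma/dx)
    [~, dyy] = gradient(D .* gy);                  % d/dy of (D * du_sigma/dy)
    u = u + dt * (dxx + dyy);                      % explicit update of Eq. (8);
end                                                % u is re-smoothed next iteration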

2.3.2. Bias-corrected fuzzy c-means

The RPM-filtered images are characterized by uneven background illumination. These intensity inhomogeneities, or the bias field, arise from the extrinsic diffusional characteristics of RPM filtering as well as from the intrinsic dark-light fringes suggested by Eqs. (2) and (4). This effect often results in under- or over-segmentation when fixed histogram threshold values are used.

In order to estimate the bias field of an image, Ref. [30] introduced an algorithm based on fuzzy logic, known as bias-corrected fuzzy C-means (BCFCM). In particular, they modified the objective function, Jm, of the standard fuzzy C-means algorithm as

$$J_m = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{p} \, \| x_k - \nu_i \|^2 + \frac{\alpha}{N_R} \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{p} \left( \sum_{x_r \in \mathcal{N}_k} \| x_r - \nu_i \|^2 \right) \tag{13}$$
where c is the number of clusters, N is the number of pixels, xk is the k-th pixel of the measured data, uik is the degree of membership of xk in cluster i, p is the fuzziness coefficient, νi are the cluster prototypes (centroids), 𝒩k is the set of neighbors of xk, NR is the cardinality of 𝒩k, and α is a weighting parameter [30, 31, 32]. The notation ‖·‖ denotes the distance between a pixel and the centroid νi. More specifically, the regularizing effect of a pixel’s local neighborhood is controlled by the parameter α [30]. The goal of the BCFCM algorithm is thus to divide the data into two clusters, that of the primary silicon laths and that of the eutectic constituent, while taking into account the slowly varying bias field of the images.

BCFCM was originally developed for magnetic resonance imaging, though it can be applied to PCT data by modeling the RPM filtered image as

$$y_k = x_k + \beta_k \tag{14}$$
where xk and yk are the true and observed intensities of the k-th pixel, respectively, and βk is the bias field at the k-th pixel. Substituting xk = yk − βk into Eq. (13) and minimizing Jm subject to the constraints on the memberships, uik, leads to an expression for βk [30]. The bias-corrected image xk has a bimodal histogram and can therefore be robustly segmented using conventional histogram-based segmentation methods, such as Otsu’s method [33].
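For orientation, the sketch below shows only the standard fuzzy C-means core underlying Eq. (13), i.e., with the neighborhood term and the bias field omitted (α = 0, βk = 0); the full BCFCM update rules, including the estimate of βk, are given in Ref. [30]. The variable uFiltered stands for an RPM-filtered slice and is an assumption of this sketch.

% Two-cluster fuzzy C-means on the intensities of an RPM-filtered slice
% (simplified: no neighborhood regularization, no bias field).
x  = uFiltered(:)';                % 1 x N row vector of pixel intensities
p  = 1.4;  nIter = 100;            % fuzziness coefficient from Sec. 3.2
nu = [min(x); max(x)];             % initial centroids (Si laths, eutectic)

for it = 1:nIter
    d  = abs(bsxfun(@minus, nu, x)) + eps;     % c x N pixel-to-centroid distances
    U  = d .^ (-2/(p - 1));                    % unnormalized memberships u_ik
    U  = bsxfun(@rdivide, U, sum(U, 1));       % normalize so memberships sum to 1
    nu = (U.^p * x') ./ sum(U.^p, 2);          % centroid update
end
[~, labels] = max(U, [], 1);                   % hard labels for display
labelImage  = reshape(labels, size(uFiltered));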

2.3.3. Digital inpainting of voids

The BCFCM algorithm assumes a two-phase system. The presence of voids in the microstructure, which appear as dark regions in the image, results in misinterpretation of the bias field of the image. Thus, prior to the bias-field correction, it is necessary to

  1. determine if a given image has voids;
  2. camouflage any voids.
Hartigan’s dip test (HDT) is a statistical measure of the deviation of a distribution from unimodality [34]. It is a useful tool here because statistically significant voids manifest as a separate peak in the image histogram. For each image, if HDT determines that the histogram is not unimodal at a confidence level of 95%, the voids are inpainted by solving the steady-state diffusion equation with constant diffusivity (i.e., Laplace’s equation) only within the void; see Refs. [35, 36] for details. Following inpainting of the voids, the images are processed using RPM filtering and the BCFCM algorithm, respectively.
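The void camouflage itself can be sketched as a Jacobi iteration of Laplace's equation restricted to the void pixels, with all other pixels held fixed as Dirichlet boundary data. The intensity threshold used to build the void mask below is a hypothetical stand-in for whatever void detection follows the dip test, and the fixed iteration count replaces a proper convergence check.

% Inpaint voids by solving Laplace's equation inside the void mask.
u        = muHybrid;
voidMask = u < 0.1;                      % hypothetical dark-region threshold
for it = 1:2000                          % Jacobi iterations
    uAvg = 0.25 * (circshift(u, [1 0]) + circshift(u, [-1 0]) + ...
                   circshift(u, [0 1]) + circshift(u, [0 -1]));
    u(voidMask) = uAvg(voidMask);        % update only the void pixels
end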

Figure 3 summarizes the steps involved in processing PCT data using the proposed methodology. All the algorithms discussed are applied in 2D; however, the methods can be extended directly to 3D. The total time for data processing is approximately 10 hours for a 296×296×159 μm (34.6×10⁶ voxels) stack of grayscale images, using a single node on Quest, the supercomputer cluster at Northwestern University. The node contains two Intel Nehalem quad-core Xeon processors rated at 2.26 GHz. All code was written in MATLAB R2012a [37]. It should be noted that this work is intended as a proof of concept of our integrated approach; as such, the execution time quoted is only an upper bound, and parallelization or implementation in a compiled language could reduce it drastically.

Fig. 3 Flowchart of PCT image-processing steps, beginning with the X-ray projections (at top) and ending with binarized output (at bottom).

3. Results and discussion

3.1. Multimodal reconstruction

Figures 4(a)–4(e) show multimodal images, μ+, with varying contributions of μPAG and μFBP. In the extreme limits, Fig. 4(a) depicts μFBP, where c1 = 0 in Eq. (4), and Fig. 4(e) shows μPAG, where c1 = 1. In general, the dark gray structures represent the primary Si laths and the light gray background along with the fine elongated darker features represent the eutectic. While there is sufficient contrast between Si laths and the eutectic in the PAG image in Fig. 4(e), the interfaces are quite diffuse, due to the low-pass characteristics of the single-image phase-retrieval algorithm. On the other hand, the FBP reconstruction in Fig. 4(a) features prominent dark-bright Fresnel fringes at the interfaces of the Si laths, which are due to ∇2δ in Eq. (2).

Fig. 4 Multimodal images with (a) c1 = 0, (b) c1 = 0.25, (c) c1 = 0.5, (d) c1 = 0.75, and (e) c1 = 1. (f) These images were assessed with respect to CN̂R and SĤ, where k1 > k2 > k3 are three different high-pass cut-off frequencies used in measuring SĤ. The optimal trade-off between CN̂R and SĤ occurs at an intermediate value of c1.

To assess the quality of the multimodal images, we measured CNR and SH for each image. Since we are interested only in relative values of CNR and SH for comparison purposes, normalized values, denoted by CN̂R and SĤ, respectively, are used whenever possible. Specifically, CN̂R in Fig. 4(f) is normalized with respect to the minimum and maximum values of CNR, which occur at c1 = 0 and c1 = 1, respectively. It is also important to note that SĤ is sensitive to the filter cut-off frequencies in Eq. (6). Thus, we investigate three high-pass frequency cutoffs, denoted k1 > k2 > k3 in Fig. 4(f), where k1, k2, and k3 are 0.32, 0.28, and 0.24 pixels⁻¹, respectively; all SĤ values are normalized with respect to the minimum and maximum values of SH at cutoff k1. In all cases, the band-pass frequency range is defined as [0.02, 0.2] pixels⁻¹. As expected, SĤ increases with decreasing high-pass cut-off frequency for a given value of c1, since the output energy of the high-pass filter increases. Regardless of the cut-off frequency selected, the results indicate that FBP images are sharper than PAG images, while PAG images offer a greater contrast-to-noise ratio. The optimal reconstruction is a trade-off between these two image quality metrics, see Fig. 4(f). For all practical purposes, we use c1 = 0.5 in our hybrid images. The resulting hybrid image preserves the contrast between the Si laths and the eutectic, while also providing sharper interfaces for quantitative analysis.

Furthermore, spatial resolutions calculated using the power spectral density (PSD) approach are in good qualitative agreement with the relative sharpness estimates. Figure 5 shows the calculation of this resolution criterion, where |C(k)|2 is the spectral power of the detected signal, μs is the noise baseline, and kres is the maximum spatial frequency from Eq. (7). The PSDs shown in Fig. 5(a) and Fig. 5(b) have been calculated for line profiles in FBP and PAG images, respectively. It can be observed that the FBP image has a spatial resolution, xres, that is roughly 40% greater than that of the corresponding PAG image; as such, the FBP image provides a natural source of image sharpening, but at a cost of CNR.

Fig. 5 Measurement of the resolution criterion for a line profile in (a) the FBP image and (b) the PAG image, where |C(k)|² is the spectral power of the detected signal, μs is the noise baseline, and kres is the maximum spatial frequency. Spatial frequencies are given in units of inverse pixels, px⁻¹. The FBP image has a resolution approximately 40% greater than that of the PAG image.

The enhanced resolution of the FBP image can be understood by exploiting the Wiener-Khintchine theorem [38], which states that the autocorrelation function is the Fourier transform of the power spectrum (i.e., autocorrelation and PSD are a Fourier transform pair). According to Fig. 5(a), then, the PSD of the FBP image shows high autocorrelation for long wavelengths. For this reason, kres for μFBP is greater than kres for μPAG. This correlation in pixel intensity values over large distances is expected since only high-frequency, interfacial fringes manifest in the FBP reconstructions.
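The Fourier-pair relation invoked here can be checked numerically on any line profile: the inverse FFT of its (two-sided) PSD returns the circular autocorrelation, up to normalization. A minimal MATLAB sketch, where lineProfile is an assumed 1D intensity profile extracted from a reconstruction:

% Wiener-Khintchine check: autocorrelation as the inverse FFT of the PSD.
s   = lineProfile(:) - mean(lineProfile(:));   % zero-mean 1D signal
psd = abs(fft(s)).^2;                          % two-sided power spectral density
ac  = real(ifft(psd));                         % circular autocorrelation
ac  = ac / ac(1);                              % normalize to unity at zero lag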

3.2. Proposed segmentation method

For non-linear diffusion filtering, we choose κ0 = 0.1, ω = 0.05, σ = 0.5, a Kσ of 10×10 pixels, and 1,000 iterations of the RPM filter. The parameter σ is set in reference to the standard deviation of the grayscale intensity values in the eutectic. It is important to note that the quality of the filtered images is most sensitive to the gradient threshold parameter; see Fig. 6 for a comparison of static and dynamic κ. Static refers to κ being fixed for all iterations, while dynamic indicates that κ has the form given by Eq. (12). Whereas a static κ may lead to the smoothing of semantically important edges, and in the worst case to the deletion of small structures during the diffusion process, a dynamic κ preserves such edges, as previously discussed.

Fig. 6 (a) Plot of κ versus the number of iterations of the RPM algorithm. Static κ is fixed at a value of 0.05, while dynamic κ is given by Eq. (12) with κ0 = 0.1 and ω = 0.05. The filtered images produced using static κ and dynamic κ after 250 iterations are shown in (b) and (c), respectively. Dynamic κ preserves the edges of the Si laths better than the static case.

After approximately 1,000 iterations of the RPM filter, the image converges to a steady state. Figures 7(a)–7(d) show the effect of RPM filtering on the pre-processed image. Successive iterations of the RPM filter remove the intra-phase noise while preserving the interface positions. The interface width decreases with successive iterations because the strong gradients at the interfaces lie above the gradient threshold κ(t), so diffusion across them is inhibited. With RPM filtering, diffusion and edge detection thus interact in a single process.

Fig. 7 Segmentation steps on (a) 1D pre-processed, hybrid image: (b–d) isotropic, nonlinear diffusion smoothing with 50, 200, and 1,000 iterations, respectively, and final result (e) with bias-field corrections. The dotted line indicates that the eutectic-Si interface positions are preserved during the segmentation process, while the intra-phase noise is removed.

In the BCFCM algorithm, values of α = 1 × 10⁻⁵, p = 1.4, NR = 8, and 5,000 iterations were used to represent the image using two clusters, corresponding to the Si laths and the eutectic. The neighborhood parameter α is estimated as the inverse of the signal-to-noise ratio in the RPM-filtered image. Theoretically, the fuzziness coefficient p ∈ [1, ∞) [31]; the ideal value of p is problem-specific and was selected empirically for our dataset. Increasing p beyond 2, however, causes the clusters to overlap so significantly that the cluster boundaries become ill-defined. Using these parameters, the bias-corrected image is presented in Fig. 7(e).

Figure 8 summarizes the 2D segmentation steps. Figure 8(b) shows the inpainting of the void shown in the hybrid image (Fig. 8(a)). Following inpainting of the voids, the images are processed using RPM filtering and BCFCM algorithm, see Figs. 8(c)–8(f). The evaluation of the segmentation results, e.g., Fig. 8(f), is difficult since the ground truth is unknown. However, the automated segmentation approach can be compared to manual segmentation in terms of the adjusted Rand index (ARI) [39, 40]. ARI is a measure of similarity between the two methods, and varies between −1 and 1, where 1 indicates a perfect match. It is defined as

$$\mathrm{ARI} = \frac{\sum_{ij} \binom{n_{ij}}{2} - \left[ \sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2} \right] \Big/ \binom{n}{2}}{\frac{1}{2} \left[ \sum_i \binom{a_i}{2} + \sum_j \binom{b_j}{2} \right] - \left[ \sum_i \binom{a_i}{2} \sum_j \binom{b_j}{2} \right] \Big/ \binom{n}{2}} \tag{15}$$
where
$$a_i = \sum_j n_{ij}, \qquad b_j = \sum_i n_{ij}, \quad \text{and} \quad n = \sum_{ij} n_{ij} \tag{16}$$
and where nij is the number of voxels belonging to class i in the manual segmentation and to class j in the automated segmentation [39, 40]. For our images, there are two classes, corresponding to the Si laths and the eutectic matrix. The parenthesized terms in Eq. (15) are binomial coefficients.
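For two binary (two-class) segmentations, Eqs. (15) and (16) reduce to a 2×2 contingency table. A minimal MATLAB sketch, assuming 'manual' and 'auto' are logical images of the same size:

% Adjusted Rand index between a manual and an automated binary segmentation.
nij = [nnz( manual &  auto), nnz( manual & ~auto); ...
       nnz(~manual &  auto), nnz(~manual & ~auto)];    % contingency table
ai = sum(nij, 2);  bj = sum(nij, 1);  n = sum(nij(:)); % Eq. (16)

choose2 = @(m) m .* (m - 1) / 2;                       % binomial coefficient C(m,2)
sumIJ   = sum(sum(choose2(nij)));
expIdx  = sum(choose2(ai)) * sum(choose2(bj)) / choose2(n);
maxIdx  = 0.5 * (sum(choose2(ai)) + sum(choose2(bj)));
ARI     = (sumIJ - expIdx) / (maxIdx - expIdx);        % Eq. (15)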

Fig. 8 Segmentation steps on (a) 2D hybrid image: (b) pre-processing step with inpainted void, (c) isotropic, non-linear diffusion smoothing, (d) bias-field estimation and (e) subtraction from the RPM-filtered image, and (f) Otsu-thresholded output. Interface positions are preserved. All images are scaled to the range [0, 255].

Manual segmentation was conducted by tracing out the interfaces in the hybrid images. To overcome experimenter bias, we collected eight hand-generated segmentations of three hybrid PCT images from eight different people. Then, using Eqs. (15) and (16), we found that ARI = 0.91 ± 0.02. This ARI value is not intended to make a statement about the absolute accuracy of the automated segmentation, but rather to provide a relative comparison to the manual case. As such, the high ARI value indicates that, with the proposed automated method, we can achieve results similar to those of manual segmentation with significantly less effort and much greater reproducibility.

This segmentation technique takes into account the inherent structures and properties of the material being analyzed. For instance, the lamellar structure of the eutectic phase manifests as small (i.e., 2.5 ± 0.5 μm in width) intensity fluctuations in the hybrid images, while the Si laths are, on average, considerably larger. In order to isolate the Si laths from this finer lamellar structure, it is necessary to smooth the smaller fluctuations while enhancing the larger ones, through RPM filtering. This toolbox of algorithms for segmenting PCT images also has the potential to be tailored for other materials systems.

3.3. 3D reconstruction

Once binarized, the 2D images are combined to reveal the 3D microstructure at different time steps. Figure 9 reflects the extraordinary morphological and topological complexity of the primary Si laths that evolve during coarsening. Characterization of such structures for comparison to coarsening theory requires a fully three-dimensional analysis (Fig. 9). The increase in the size scale of the structure, and its increasingly isotropic character, are consistent with a coarsening process. For instance, Ref. [16] demonstrated qualitatively that the highly anisotropic Si laths do not evolve through a series of equilibrium Wulff shapes, and that there are interfaces with low mobility in the structure. For a quantitative microstructural characterization of the coarsening morphologies in Fig. 9, see Ref. [41].

Fig. 9 3D Si laths coarsening in time. The dark gray background is the eutectic. (a) As-cast microstructure is shown. Samples were coarsened in an isothermal furnace for (b–f) 10 minute increments at 590 °C. The ROI shown is 296×296×159 μm.

4. Conclusion

The proposed technique allows for the automated processing of PCT images. In agreement with Ref. [14], we find that near-field imaging is advantageous because it permits both phase-contrast and attenuation-based reconstructions of the same microstructure. The linear combination of these reconstructions, at a single sample-to-detector distance, offers the possibility of tuning the contrast-to-noise ratio and spatial resolution, thereby exploiting the advantages of both imaging modalities. The multimodal images can then be robustly segmented using algorithms from the computer vision, biomedical imaging, and art restoration communities; in particular, we use RPM filtering followed by the BCFCM method to obtain binarized datasets. Finally, these datasets reveal the 4D microstructural evolution of an Al-29.9 wt% Si alloy during coarsening.

Acknowledgments

This work was supported by the Multidisciplinary University Research Initiative (MURI) under award AFOSR FA9550-12-1-0458. Additional support was provided for J.W. Gibbs by the DOE NNSA Stewardship Science Graduate Fellowship under grant no. DE-FC52-08NA28752. The sample preparation and data acquisition were supported by the DOE under contract no. DE-FG02-99ER45782. J.L. Fife also acknowledges the CCMX for funding. We thank the staff at the TOMCAT beamline, especially Dr. Sarah Irvine and Gordan Mikuljan. This research utilized the Quest high performance computing facility at Northwestern University, which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.

References and links

1. P. Cloetens, R. Barrett, J. Baruchel, J. P. Guigay, and M. Schlenker, “Phase objects in synchrotron radiation hard x-ray imaging,” J. Phys. D Appl. Phys. 29, 133–146 (1996).

2. G. Margaritondo, Elements of Synchrotron Light: for Biology, Chemistry, and Medical Research (Oxford University, 2002).

3. D. Paganin, S. C. Mayo, T. E. Gureyev, P. R. Miller, and S. W. Wilkins, “Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object,” J. Microsc. 206, 33–40 (2002).

4. X. Wu, H. Liu, and A. Yan, “Phase-contrast x-ray tomography: Contrast mechanism and roles of phase retrieval,” Eur. J. Radiol. 68S, S8–S12 (2008).

5. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE, 1988).

6. A. Burvall, U. Lundström, P. A. Takman, D. H. Larsson, and H. M. Hertz, “Phase retrieval in x-ray phase-contrast imaging suitable for tomography,” Opt. Express 19, 10359–10376 (2011).

7. D. J. Rowenhorst and P. W. Voorhees, “Measurement of interfacial evolution in three dimensions,” Ann. Rev. Mater. Res. 42, 105–124 (2012).

8. S. C. Irvine, D. M. Paganin, S. Dubsky, R. A. Lewis, and A. Fouras, “Phase retrieval for improved multidimensional velocimetric analysis of x-ray blood flow speckle patterns,” Appl. Phys. Lett. 93, 153901 (2008).

9. D. M. Paganin, T. E. Gureyev, K. M. Pavlov, R. A. Lewis, and M. Kitchen, “Phase retrieval using coherent imaging systems with linear transfer functions,” Opt. Commun. 234, 87–105 (2004).

10. T. Otaki, “Artifact halo reduction in phase contrast microscopy using apodization,” Opt. Rev. 7, 119–122 (2000).

11. O. Debeir, N. Warzee, P. V. Ham, and C. Decaestecker, “Phase contrast image segmentation by weak watershed transform assembly,” in 5th IEEE Int. Symp. Biomed. Imaging (2008), pp. 724–727.

12. M. Hammon, A. Cavallaro, M. Erdt, P. Dankerl, M. Kirshner, K. Drechsler, S. Wesarg, M. Uder, and R. Janka, “Model-based pancreas segmentation in portal venous phase contrast-enhanced CT images,” J. Digit. Imaging 26, 1082–1090 (2013).

13. Y. K. Lee and W. T. Rhodes, “Nonlinear image processing by a rotating kernel transformation,” Opt. Lett. 15, 1383–1385 (1990).

14. G. Lovric, S. Barré, J. C. Schittny, M. Roth-Kleiner, M. Stampanoni, and R. Mokso, “Dose optimization approach to fast x-ray microtomography of the lung alveoli,” J. Appl. Cryst. 46, 856–860 (2013).

15. “Materials Preparation Center, Ames Laboratory, US DOE Basic Energy Sciences, Ames, IA, USA,” www.mpc.ameslab.gov.

16. E. B. Gulsoy, A. J. Shahani, J. W. Gibbs, J. L. Fife, and P. W. Voorhees, “Four-dimensional morphological evolution of coarsening of Aluminum Silicon alloy using phase-contrast x-ray tomography,” Mater. Trans., JIM 55, 161–164 (2014).

17. M. Stampanoni, A. Groso, A. Isenegger, G. Mikuljan, Q. Chen, A. Bertrand, S. Henein, R. Betemps, U. Frommherz, P. Böhler, D. Meister, M. Lange, and R. Abela, “Trends in synchrotron-based tomographic imaging: the SLS experience,” Proc. SPIE 6318, 63180M (2006).

18. T. Weitkamp, D. Haas, D. Wegrzynek, and A. Rack, “ANKAphase: Software for single-distance phase retrieval from inline x-ray phase-contrast radiographs,” J. Synchrotron Rad. 18, 617–629 (2011).

19. F. Marone and M. Stampanoni, “Regridding reconstruction algorithm for real-time tomographic imaging,” J. Synchrotron Rad. 19, 1029–1037 (2012).

20. M. A. Beltran, D. M. Paganin, K. Uesugi, and M. J. Kitchen, “2D and 3D x-ray phase retrieval of multi-material objects using a single defocus distance,” Opt. Express 18, 6423–6436 (2010).

21. D. Shaked and I. Tastl, “Sharpness measure: Towards automatic image enhancement,” in IEEE Int. Conf. Image Process. (2005), Vol. 1.

22. P. Modregger, D. Lübbert, P. Schäfer, and R. Köhler, “Spatial resolution in Bragg-magnified x-ray images as determined by Fourier analysis,” Phys. Stat. Solidi A 204, 2746–2752 (2007).

23. M. L. Comer and E. J. Delp, “Segmentation of textured images using a multiresolution Gaussian autoregressive model,” IEEE Trans. Image Process. 8, 408–420 (1999).

24. P. Simmons, P. Chuang, M. L. Comer, J. E. Spowart, M. D. Uchic, and M. de Graef, “Application and further development of advanced image processing algorithms for automated analysis of serial section image data,” Modelling Simul. Mater. Sci. Eng. 17, 025002 (2009).

25. F. Catté, P. L. Lions, J. M. Morel, and T. Coll, “Image selective smoothing and edge detection by nonlinear diffusion,” SIAM J. Numer. Anal. 29, 182–193 (1992).

26. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990).

27. B. Jähne and H. Haußecker, Computer Vision and Applications (Academic, 2000).

28. J. C. Russ, The Image Processing Handbook (CRC, 2010).

29. F. Voci, S. Eiho, N. Sugimoto, and H. Sekiguchi, “Estimating the gradient threshold in the Perona-Malik equation,” IEEE Signal Proc. Mag. 21, 39–46 (2004).

30. M. N. Ahmed, S. M. Yamany, A. A. Farag, and T. Moriarty, “Bias field estimation and adaptive segmentation of MRI data using a modified fuzzy c-means algorithm,” in Proc. IEEE Int. Conf. Computer Vision and Pattern Recogn. (1999), Vol. 1.

31. J. C. Bezdek, R. Ehrlich, and W. Full, “FCM: The fuzzy c-means clustering algorithm,” Comput. Geosci. 10, 191–203 (1984).

32. J. C. Dunn, “A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters,” J. Cybern. 3, 32–57 (1973).

33. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. SMC-9, 62–66 (1979).

34. J. A. Hartigan and P. M. Hartigan, “The dip test of unimodality,” Ann. Stat. 13, 70–83 (1985).

35. M. Bertalmío, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proc. SIGGRAPH 2000 (2000).

36. M. Mainberger, A. Bruhn, J. Weickert, and S. Forchhammer, “Optimising spatial and tonal data for homogeneous diffusion inpainting,” Pattern Recognit. 44, 1859–1873 (2011).

37. MATLAB, version 7.14.0 (R2012a) (The MathWorks Inc., Natick, Massachusetts, 2012).

38. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series (MIT, 1964).

39. L. Hubert and P. Arabie, “Comparing partitions,” J. Classif. 2, 193–218 (1985).

40. W. M. Rand, “Objective criteria for the evaluation of clustering methods,” J. Am. Stat. Assoc. 66, 846–850 (1971).

41. A. J. Shahani, E. B. Gulsoy, J. W. Gibbs, and P. W. Voorhees, “Four-dimensional morphological characterization of Al-Si alloy during coarsening” (2014), manuscript in preparation.
