Optica Publishing Group

Artifact-suppressing reconstruction of strongly interacting objects in X-ray near-field holography without a spatial support constraint

Open Access

Abstract

The phase problem is a well-known ill-posed reconstruction problem of coherent lens-less microscopic imaging, where only the squared magnitude of a complex wavefront is measured by a detector while the phase information of the wave field is lost. To retrieve the lost information, common algorithms rely either on multiple data acquisitions under varying measurement conditions or on the application of strong constraints such as a spatial support. In X-ray near-field holography, however, these methods are rendered impractical in the setting of time-sensitive in situ and operando measurements. In this paper, we forego the spatial support constraint and propose a projected gradient descent (PGD) based reconstruction scheme, combined with proper preprocessing and regularization, that significantly reduces artifacts in refractive reconstructions from only a single acquired hologram. We demonstrate the feasibility and robustness of our approach on different data sets obtained at the nano imaging endstation of P05 at PETRA III (DESY, Hamburg) operated by Helmholtz-Zentrum Hereon.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

The phase problem is a well-known ill-posed reconstruction problem of coherent lens-less microscopic imaging, where only the intensity of a complex wave field is measured by a detector while the phase information is lost [1–5]. In those imaging setups, the detector only measures the wavefront as an integral over a certain exposure time, which is proportional to the squared magnitude of the wavefront. In X-ray near-field holography [6–8], the object is fully illuminated by hard X-rays and in-line holograms are acquired in the so-called Fresnel regime. To minimize the X-ray dose on the objects as well as to maximize the framerate for in situ and operando measurements, the object has to be recovered from a single hologram. Naively, one would aim to recover the lost phase information in the detector plane. In practice, however, the reconstruction target is not the lost phases at the detector plane but typically the transmission function of the object, which characterizes the interaction with the X rays. This inverse problem is under-determined, since this function is complex-valued, whereas the detector data is real-valued.

While the reconstruction of weakly interacting samples works well with single-step approaches like the direct inversion using the contrast transfer function (CTF) [7] or alternating-projection-based algorithms like the famous Gerchberg-Saxton algorithm [9] and its successors [10–13], the reconstruction complexity increases with the interaction strength of the measured objects. For strongly interacting objects, many algorithms run into phase wrapping problems, take a very long time to converge to a solution, stagnate in the reconstruction process, are computationally expensive, or fail to reconstruct the object at all [14–16].

To compensate for missing information, numerous measurement techniques, additional object constraints and regularization schemes have been utilized. Naturally, some approaches alleviate the underdetermination by obtaining more measurements while slightly changing the experimental setup. This includes for example ptychography [17–19] or full-field imaging [20] with multiple distances [7,21–24].

In addition or as an alternative to using more data, approaches typically introduce prior knowledge about the measured object into the reconstruction scheme, related to general physical properties, the measured materials or the shape of the object. Here, some methods introduce an absorption and phase relationship, for example by strict constraints on the reconstructed materials [23] or coupling with joint sparsity regularization [25]. Also, the introduction of a finite spatial support assumption of the object, together with a non-negativity constraint that enforces a positive electron density, is a common approach [16,23,26,27].

Priors that are not necessarily bound to the object domain introduce more implicit assumptions about the object. Lasso and total variation regularization, for example, enforce sparsity in different domains like the wavelet, shearlet or gradient domain [28–30], while Tikhonov-type regularization introduces a softer bias towards smoothness and smaller values [23,25,29].

Unfortunately, each approach comes with a certain trade-off. Measuring more data is not always possible or desired, for example in single-pulse imaging at X-ray free electron lasers or in situ/operando studies of material degradation and battery charge cycles at other accelerator-based light sources like synchrotron radiation sources [31–34]. Employing a spatial support constraint implies that the object has to possess a finite spatial support. Furthermore, the support has to be either manually set before the actual reconstruction or automatically found during the reconstruction process [31,35], which adds further computational complexity and refinement parameters. This renders a spatial support constraint impractical for certain applications like time-sensitive in situ and operando measurements. Domain-specific regularization always introduces a reconstruction bias, whose strength depends on the regularized domain and the regularization weights. Strong weights or the wrong domain can cause blur and artifacts. To avoid optically and quantitatively incorrect images, regularization should be applied carefully and systematically. To this end, some key elements of recent iterative algorithms have been outlined as beneficial to solving the phase problem. These include the independent regularization of phase shift and absorption values [30], the stabilization of high frequencies [23] and the use of a Nesterov-accelerated gradient descent step [16].

Nevertheless, a universal reconstruction method for strongly interacting objects under in situ/operando conditions is still to be found. Especially for these experiments, certain restrictions such as multiple distance measurements, a spatial support, the assumption of a single material and computationally intensive regularizations such as total variation must be abandoned.

In this work, we extend the projected-gradient-descent (PGD) based approach [16], denoted as refAP, which has been developed for the direct reconstruction of the projected refractive index. We propose a preprocessing and reconstruction scheme that is able to reconstruct images from single holograms without a spatial support constraint while significantly suppressing artifacts. We demonstrate that the proposed combination of preprocessing (Alg. 1) and reconstruction algorithm (Alg. 5) is robust for experimental data and performs efficiently with a significantly reduced computation time compared to the unmodified refAP algorithm.

The paper is structured as follows. In the problem statement (Sec. 2), we go into the details of the current reconstruction issues and the sources of possible artifacts. We then give an overview of the current state of preprocessing and reconstruction methods in Sec. 3. In Sec. 4, we propose multiple techniques to reduce reconstruction artifacts and to improve the computation time with respect to the reference algorithm (refAP). In Sec. 5, we show reconstruction results of data from experiments at beamline P05 at PETRA III (DESY, Hamburg), which offers a lens-less X-ray in-line holography setup for nanotomography (Fig. 1) [8,36–38]. We then compare the different enhancements against the current refAP.


Fig. 1. Sketch of an experimental setup based on a Fresnel zone plate (FZP) for near-field holographic microscopy. The FZP focuses the incoming coherent monochromatic X rays to the focal spot located at a distance $f$ behind the zone plate. There, an order sorting aperture (OSA) is placed that blocks the higher diffraction orders of the FZP. The sample is put into the diverging cone-shaped beam of the FZP at a distance $z_{01}$. Behind the sample, the X rays propagate to the detector that is placed at the sample-to-detector distance $z_{12}$. To protect the detector from radiation damage, the direct beam is blocked by a beamstop behind the FZP [8].


An overview of the derived algorithms of Sec. 4 is shown in Appendix A. The data and the software underlying the results presented in this paper are available in Code 1, Ref. [39].

2. Problem statement

Figure 1 shows a setup for X-ray near-field holography, using the divergent illumination of a nanofocusing optics to enable microscopy. The goal of the reconstruction is to retrieve the projected refractive index $\tilde {O}$ of a measured specimen with respect to its spatial coordinates from a hologram. The object transmission function describes the interaction of an object with the X-ray illumination. Commonly used algorithms are grouped into direct methods, alternating projection (AP) based [9–13], and PGD-based algorithms.

To avoid the phase wrapping problem, i.e. to avoid the $2\pi$ phase ambiguity, a PGD-based algorithm, denoted as refAP, has been developed [16]. Nevertheless, AP and PGD-based algorithms both typically have problems with image artifacts unrelated to phase wrapping, as illustrated in Fig. 2. The images in Fig. 2 were obtained by an unmodified refAP algorithm. These distinct artifacts originate from several different sources:

  • Non-linear and non-convex optimization problems: These types of inverse problems tend to have multiple local minima, which can trap [14,15] or slow down iterative algorithms as they try to find a path to a global optimum. An algorithm may also converge to different local minima depending on the initial values.
  • Truncation of information: Due to the limited size of the detector, the hologram measured at the detector is truncated. If not handled properly, this loss of information can lead to different kinds of artifacts in the reconstruction. Ring-like or stripe-like artifacts that are particularly pronounced at the edges are common [40–42].
  • Forward model induced reconstruction bias: In X-ray near-field holography, the forward model is sensitive to the second spatial derivative of the measured object with respect to its refractive indices. This can be seen by calculating the Taylor expansion of the propagation kernel in the forward model ( [43], Eq. 4.113). The steepness of edges of the reconstructed object and the residual reconstruction error are directly and proportionally coupled in a way that steep edges have a higher contribution to the residual reconstruction error than areas with small changes of the refractive index. Hence, iterative reconstruction algorithms tend to first reconstruct the edges before they reconstruct the object’s interior. This causes a slow reconstruction speed of large structures which have large values for $\tilde {O}$ but small values for $\nabla \tilde {O}$ and can lead to artifacts, if the algorithm stops too early.
  • Regularization induced reconstruction bias: Regularization techniques always entail a bias. The more regularization is applied, the stronger this bias is. This can be exploited to derive a warm-up phase for the reconstruction but can also manifest in artifacts, e.g. blurred objects introduced by Tikhonov regularization [44,45], missing features from sparsity regularization [46,47] or staircase artifacts from total variation regularization [48–50]. The more parameters exist that have to be tuned for the reconstruction, the higher the risk for bias and artifacts that depend on the used regularization techniques.
  • Structured noise from illumination: Unstructured measurement noise is a general issue in inverse problems that can be addressed using regularization strategies such as Tikhonov regularization [44,45]. For structured noise, commonly used regularization strategies are less effective. Low spatial frequency artifacts are often observed, especially in object-free areas. In this particular measurement setup, various types of structured noise remain in the data, e.g. from the illumination. First, changes in the illumination patterns, for example due to electron refills of the storage ring at the synchrotron radiation source, are not covered by a flat-field correction approach [31,51,52]. Second, if the illumination of the measured object is larger than the reconstructed FOV, the measured hologram at the detector is partly superimposed by a wave field that has been propagated from outside of the FOV.


Fig. 2. Comparison of reconstruction artifacts between (a) a hologram and reconstructions (b) without and (c) with a spatial support constraint. The images in Fig. 2 were obtained by an unmodified refAP algorithm (Alg. 2 with and without a spatial support constraint). The scale bars indicate ${10}\;\mathrm{\mu m}$.


To counteract this list of problems, many reconstruction methods in X-ray near-field holography rely on the acquisition of diverse data sets such as multi-distance scans [7,22] or employ additional constraints such as a spatial support mask [26,27]. However, the application of these two particular approaches is either time consuming or object dependent. For in situ/operando measurements at the beamline, we have high demands on the reconstruction speed, quality, and robustness. Therefore, we have a strong need for a method that performs a high-quality reconstruction from only a single hologram. In this paper, we propose a combined preprocessing/reconstruction scheme that addresses most of the above problems and produces artifact-free phase images from a single hologram, without the need for a spatial support constraint.

3. Current state of reconstruction

3.1 Refractive forward model for holographic imaging

The physical properties of the object are encoded in the complex refractive index

$$n(x,y,z) = 1- \delta(x,y,z) + \text{i}\beta(x,y,z),$$
along the spatial coordinates $(x,y,z)$, where $\beta \in \mathbb {R}$ describes the absorption, i.e. the attenuation, $\delta \in \mathbb {R}$ the dispersion, i.e. the phase-shifting properties, and i is the imaginary unit. In this paper, we assume a sufficiently thin object to consider only the projection of the refractive indices over the object thickness $d$ in beam direction. The object is illuminated by a monochromatic wave field $\psi _0$. We follow the projection approximation of Paganin (Sec. 2.2, Eq. 2.39 of [43]) and obtain the transmission function that describes the exit wave $\psi _{\text {exit}}$ resulting from the interaction between the illumination and the object
$$\psi_{\text{exit}}(x,y) = \exp \left({-\text{i}k \int_{0}^d \delta(x,y,z) - \text{i}\beta(x,y,z)~dz}\right) \psi_0,$$
where $k=\frac {2\pi }{\lambda }$, with $\lambda$ the wavelength of the illumination. For the sake of simplicity of these equations, we substitute the exponent of the exponential function and define the refractive object by
$$\tilde{O}(x,y) ={-}k \int_{0}^d \delta(x,y,z) - \text{i}\beta(x,y,z)~dz = \phi(x,y) + \text{i}\mathrm{\mu}(x,y).$$

In the following, we omit the notation of the spatial coordinates. In the substitution, $\phi$ encodes the phase-shifting properties and $\mathrm{\mu}$ encodes the attenuation properties of the object. The reconstructions in this paper will aim to recover $\tilde {O}$. In idealized conditions, the object is illuminated by an aberration-free coherent beam with constant amplitude. In this case, the exit wave behind the object is described by

$$\psi_{\text{exit}}(\tilde{O}) = \exp ({\text{i}\tilde{O}}) A_0 \exp ({\text{i}\phi_0}),$$
with $A_0: \mathbb {R}^2 \mapsto \mathbb {R}$ the illumination amplitude and $\phi _0: \mathbb {R}^2 \mapsto \mathbb {R}$ the illumination phase offset. Both parameters are in general unknown and have to either be estimated or reconstructed separately. The measurement process of a hologram is then modeled by propagating $\psi _{\text {exit}}$ to the detector and taking the squared magnitude of the propagated wave field. We summarize the illumination-interaction-propagation process into an operator $\mathcal {D}_{\textrm {Fr}}(\tilde {O})$ and take the squared magnitude as an extra step:
$$\mathcal{D}_{\textrm{Fr}}(\tilde{O}) = \mathcal{F}^{{-}1} \circ \exp \left({-\text{i}\cdot\pi\frac{(k_x^2+k_y^2)}{\textrm{Fr}}}\right) \circ \mathcal{F} \circ \psi_{\text{exit}}(\tilde{O}),$$
$$\mathcal{I}_{\textrm{det}} = |\mathcal{D}_{\textrm{Fr}}(\tilde{O})|^2.$$

$\mathcal {D}_{\textrm {Fr}}$ is an approximation of the free-space propagation for the near field and is called the Fresnel propagator in operator form [43]. Here, the function $\mathcal {F}$ describes the 2D Fourier transform of the wave field and $\mathcal {F}^{-1}$ its inverse, respectively, with respect to the spatial coordinates $(x,y)$ and the coordinates in frequency space $(k_x,k_y)$. The Fresnel number $\textrm {Fr}$ depends on the geometry of the illuminating beam. In a parallel beam setup, the Fresnel number is derived from the pixel size of the detector $\Delta x$, the wavelength of the source $\lambda$ and the propagation distance between sample and detector $z_{12}$:

$$\textrm{Fr} = \frac{\Delta x^2}{\lambda z_{12}}.$$

In order to enable microscopy through holographic imaging, the object has to be placed into a divergent beam in a defocused position. The Fresnel number must then also incorporate the magnification through the cone beam, which can be derived from the Fresnel scaling theorem [43]. The theorem states that the cone beam setup can be transformed to a virtual parallel beam setup by calculating a new effective propagation distance $z_{12}^*=Mz_{12}$ from the object magnification $M$. The divergent beam magnifies the sample by a factor

$$M = \frac{z_{01} + z_{12}}{z_{01}}$$
that is given by the proportion of the total propagation distance $z_{01}+z_{12}$ to the focus-object distance $z_{01}$. The Fresnel number for a cone beam becomes
$$\textrm{Fr} = \frac{\Delta x^2}{\lambda M z_{12}}.$$
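To make the geometry concrete, the magnification and the effective Fresnel number can be evaluated in a few lines. The following sketch implements Eqs. (8) and (9); the function name and the numerical example values are our own illustrative choices, not actual beamline parameters:

```python
def cone_beam_fresnel(pixel_size, wavelength, z01, z12):
    """Effective Fresnel number of a cone-beam setup, Eqs. (8)-(9).

    The Fresnel scaling theorem maps the divergent-beam geometry to a
    virtual parallel-beam setup with magnification M and effective
    propagation distance M * z12.
    """
    M = (z01 + z12) / z01                          # magnification, Eq. (8)
    return pixel_size**2 / (wavelength * M * z12)  # Fresnel number, Eq. (9)

# Hypothetical example values: 1 um detector pixels, lambda = 0.1 nm,
# focus-object distance 10 mm, sample-detector distance 1 m.
Fr = cone_beam_fresnel(1e-6, 1e-10, 0.01, 1.0)
```

Note how the magnification $M$ enters the denominator, so moving the sample closer to the focus (smaller $z_{01}$) lowers the Fresnel number.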

3.2 Preprocessing

Reconstruction algorithms in X-ray near-field holography assume idealized conditions for the source and the setup components. An ideal setup has aberration-free optics and perfect coherent illumination, which are not fulfilled in practice. An important tool for any reconstruction is to prepare the acquired data in a preprocessing step such that the assumptions of the forward model are met while respecting issues and limitations of certain implementations. Two important issues that are typically handled in a preprocessing step are non-uniform static illumination and spectral leakage originating from the fast Fourier transform of a non-periodic data set.

3.2.1 Flat-field correction

To sufficiently approximate ideal conditions for a reconstruction, a flat-field correction is often applied to the detector data to correct the static part of the aberrations. For that, a synthetic flat-field image $\mathcal {I}_{\text {flat}}$ is first derived from measurements without an object in the beam by principal component analysis [51,52]. The raw data $\mathcal {I}_{\text {raw}}$ is then divided by this flat-field to yield a flat-field corrected hologram $\mathcal {I}_{\text {corr}}$:

$$\mathcal{I}_{\text{corr}} = \frac{\mathcal{I}_{\text{raw}}}{\mathcal{I}_{\text{flat}}}.$$
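As a minimal sketch of this correction, a synthetic flat field can be approximated by fitting a few principal components of the empty-beam stack to object-free pixels of the hologram. The function names, the component count, and the explicit object-free mask are our own simplifying choices and not the exact procedure of refs. [51,52]:

```python
import numpy as np

def synthetic_flat(flats, raw, mask, n_components=3):
    """Fit a synthetic flat field (sketch of the PCA approach [51,52]).

    flats: (K, H, W) stack of empty-beam images
    raw:   (H, W) raw hologram
    mask:  (H, W) boolean array, True where the hologram is object-free
    """
    K = flats.shape[0]
    X = flats.reshape(K, -1)
    mean = X.mean(axis=0)
    # principal directions of the flat-field variations
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:n_components]
    m = mask.ravel()
    # least-squares fit of the component weights on object-free pixels only
    c, *_ = np.linalg.lstsq(comps[:, m].T, raw.ravel()[m] - mean[m], rcond=None)
    return (mean + comps.T @ c).reshape(raw.shape)

def flat_field_correct(raw, flat):
    return raw / flat  # Eq. (10)
```

Restricting the fit to a mask avoids absorbing the object itself into the synthetic flat field.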

3.2.2 Padding

The acquired hologram consists of a discrete set of detector pixel values forming a 2D image. Part of the forward model is the Fourier transform, computed by its discrete implementation, the fast Fourier transform (FFT). For the FFT, all data and operators have to be sampled at a sufficient rate. For the Fresnel propagation operator, it has been shown that this function, which is a chirp, has to be sampled by at least $N\geq \frac {1}{\textrm {Fr}}$ pixels in each spatial dimension [53]. A possible method to match the required sampling rate in the frequency domain is to pad the hologram in the spatial domain. Various methods for padding data already exist [54]. Typical methods are padding with a constant value, a repetition of the marginal values, or mirroring the existing data in the orthogonal and diagonal directions.
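The sampling criterion directly fixes the minimal padded size; in practice one often rounds up further to an FFT-friendly length. A small sketch (the function names and the even-size convention are our own choices):

```python
import math

def min_padded_size(fresnel_number):
    """Smallest per-axis sample count satisfying N >= 1/Fr for the
    Fresnel chirp [53], rounded up to an even number so the hologram
    stays centered after padding."""
    n = math.ceil(1.0 / fresnel_number)
    return n + (n % 2)

def next_pow2(n):
    """Next power of two, a common FFT-friendly choice."""
    return 1 << (n - 1).bit_length()

n_min = min_padded_size(1e-3)   # Fr = 1e-3  ->  N >= 1000
n_fft = next_pow2(n_min)
```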

3.2.3 Windowing

Handling spectral leakage issues of the FFT has been widely studied in the field of signal processing. The FFT assumes that the input data is periodic, i.e. that the sampled data covers a full period and can therefore be repeated in each spatial direction. If the input data does not cover a full period of a signal, spectral leakage will occur in the form of a convolution between the true spectrum and a sinc-shaped function. A common approach to handle this problem is to transform the data into a pseudo-periodic signal before a Fourier transform is applied. To this end, a Hadamard product of the discrete data and a window function fades the data towards the margin. Such windows still create a convolution with a sinc-shaped function; however, they shape the main lobe, i.e. the passband, and the side lobes, i.e. the stopband, of their respective Fourier transforms according to desired properties. Some examples are the Hamming, Hann or Blackman windows [55,56].

3.3 Reconstruction problem and constraints

When the X-ray wave field arrives at the detector, the involved processes can be described by linear operations (Eq. (5)) except the measurement itself (Eq. (6)). Due to the measurement process, the phase information of the propagated wave field is lost and the object reconstruction becomes a non-linear inverse problem. Early papers on this topic [9,21] introduced algorithms that are based on alternating projections onto constraint sets for one or more constraints. They attempt to solve the feasibility problem

$$\text{find} \; \psi \in \mathcal{I}_{\textrm{det}} \cap \Omega$$
i.e., to find a point in the intersection of two sets, namely the set of wave fields consistent with the hologram $\mathcal {I}_{\textrm {det}}$ and a set of object constraints $\Omega$. These algorithms converge to points of shortest distance to the constraint sets and possess the property of minimizing a performance criterion named the Summed Distance Error (SDE) [14], stated as
$$\text{SDE}(\psi) = \left\lVert \mathcal{P}_{\mathcal{I}_{\textrm{det}}}\psi - \psi \right\rVert_2 + \left\lVert \mathcal{P}_{\Omega}\psi - \psi \right\rVert_2,$$
where $\mathcal {P}_x$ are the projections of $\psi$ onto the constraint sets.

In recent decades, huge progress has been made in the field of convex analysis, providing a mathematical framework and the necessary tools to approximate solutions to different kinds of inverse problems [57–59]. Many of the reconstruction algorithms used in X-ray holography can be associated with relatives [9–11] or have even been derived directly from concepts of convex analysis [12,13].

A common approach in convex optimization is to formulate an objective function that has to be optimized. The solution of this optimization problem yields the desired reconstruction result. An objective function typically consists of multiple terms, where at least one describes the consistency of a solution with the set of measurements and some additional terms to include prior knowledge and regularization. Here, given a single exposure hologram $\mathcal {I}_{\textrm {det}}$, the reconstruction problem for the refractive index in X-ray near-field holography can be formulated using a regularized least-squares approach

$$\tilde{O}^{{\ast}} = \mathop{\textrm{argmin}}\limits_{\tilde{O}} \frac{1}{2} {\left\lVert{|\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{\mathcal{I}_{\textrm{det}}}}\right\rVert}^{2}_{2} + \mathcal{X}_{\Omega}(\tilde{O}).$$

The term $\mathcal {X}_{\Omega }$ is an indicator function, defined by

$$\mathcal{X}_{\Omega}(\tilde{O})= \begin{cases} 0 & \tilde{O} \in \Omega \\ +\infty & \text{else} \end{cases},$$
that models prior knowledge of the setup and the object. An established choice of $\Omega$ is a constraint set that consists of the following:
$$\Omega = \Omega_P \cap \Omega_S,$$
with $\Omega _P$ being the set of physically valid reconstruction values that models the interaction between X rays and the object such that
$$\Omega_P=\;\left\{\forall\; x \in \tilde{O}: \textrm{Re}({x}) \in ({-}\infty,0], \; \textrm{Im}({x}) \in [0,\infty) \right\}.$$

$\Omega _S$ is the spatial support of the object. As stated in Sec. 2, we aim to omit this constraint.

4. Methods

4.1 Data preprocessing

In this section, we aim to reduce truncation artifacts that result from the limited detector size. We also take care of spectral leakage that is caused by the fast Fourier transform on the non-periodic measured hologram.

4.1.1 Padding

The proposed padding scheme for the preprocessing aims to reduce truncation artifacts that result from edges introduced at the hologram border. These edges appear for example as propagation fringes when a truncated wavefield is propagated from the detector plane to the object plane. To avoid this, edges must in general be avoided when the data is continued beyond the truncation border. In one dimension, this is easily achieved by a repetition of the marginal values of the hologram. In two dimensions it is more complex, since a simple repetition in one direction introduces edges in the orthogonal direction. Our approach here is to extend the data by mirroring the acquired hologram (see Fig. 3(a)) into each direction (see Fig. 3(b)). Although mirroring is not a physically correct model (the true continuation of the truncated hologram is unknown), it avoids truncation artifacts through consistency. At the mirroring border, we avoid sudden edges, while in the extended area, instead of truncation artifacts, the data will simply resolve to a copy of the reconstructed object. We then pad the mirrored hologram with the value $A_0$ of the constant probe model from Eq. (4) to the size necessary to match the sampling rate of the Fresnel propagation operator as described in Sec. 3.2. The result is shown in Fig. 3(c). We choose $A_0$ such that it is consistent with the flat-field corrected hologram. An ideal flat-field correction would normalize $A_0$ to one; however, after applying the flat-field approach as described in Sec. 3.2.1, an offset remains. Currently, the offset parameter $A_0$ has to be tuned manually.
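The mirror-then-pad extension described above can be sketched in NumPy as follows; the function name and the centering convention are our own illustrative choices:

```python
import numpy as np

def mirror_and_pad(holo, target_shape, a0=1.0):
    """Extend a flat-field corrected hologram as in Fig. 3.

    First mirror the data once into every direction (edge-free seams),
    then pad with the constant illumination estimate A0 up to the size
    required by the propagator sampling criterion.
    """
    H, W = holo.shape
    # one full reflection on each side -> (3H, 3W), cf. Fig. 3(b)
    mirrored = np.pad(holo, ((H, H), (W, W)), mode="symmetric")
    # constant padding with A0 to the target size, cf. Fig. 3(c)
    ph = max(target_shape[0] - mirrored.shape[0], 0)
    pw = max(target_shape[1] - mirrored.shape[1], 0)
    pads = ((ph // 2, ph - ph // 2), (pw // 2, pw - pw // 2))
    return np.pad(mirrored, pads, mode="constant", constant_values=a0)
```

Here `a0` is the manually tuned offset of the constant probe model discussed above.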


Fig. 3. Preprocessing scheme. The raw hologram is flat-field corrected a) by a PCA approach [51,52]. To solve the marginal problem, the flat-field corrected hologram is mirrored along each direction, yielding b). To match the required sampling rate of the Fresnel propagation convolution kernel, the result is padded by a constant which is the estimated illumination $A_0$, yielding c). A two dimensional fadeout mask (Fig. 4(c)) is then applied, yielding d).


4.1.2 Windowing

Figure 4 illustrates the steps for the creation of the fading window. To reduce spectral leakage in the frequency domain, we apply a two dimensional window function on the extended detector data, such that after mirroring and padding, the values of the original hologram in the center are unchanged. To create such a window, we modify one of the common fading windows, which are typically bell shaped. Note that an unmodified window would induce a fading into the original hologram that we aim to preserve. We start with a one dimensional Blackman window

$$y(w,x) = 0.42 - 0.5 \cdot \cos\left({\frac{2\pi x}{w - 1}}\right) + 0.08 \cdot \cos\left({\frac{4\pi x}{w - 1}}\right).$$

Here, $w$ is the window width and $x$ the one dimensional spatial coordinate. To preserve a fading-free area in the center, we split the one dimensional window (Fig. 4(a)) at the center and fill the gap with the constant value one until the desired starting and end points for the fading are reached. The result is then zero padded to match the size of the input data for the reconstruction. The new one dimensional window is shown in Fig. 4(b). From that, the two dimensional window function is derived by a repetition of the one dimensional function into each spatial direction, yielding Fig. 4(c), which leaves a rectangular shaped fading-free area in the center. The respective hologram with the applied window is shown in Fig. 3(d). The processing described here in Sec. 4.1 is summarized in Alg. 1 in the appendix.
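The modified window can be constructed in a few lines. The following sketch realizes the two dimensional version via an outer product of the one dimensional windows; the function name and the fade/plateau widths are free parameters of our own choosing:

```python
import numpy as np

def flat_top_blackman(total, flat, fade):
    """1D window of Fig. 4(b): Blackman fades of width `fade` on both
    sides of a flat plateau of ones of width `flat`, zero-padded to
    `total` samples."""
    w = 2 * fade                   # full Blackman width in Eq. (17)
    x = np.arange(w)
    b = (0.42 - 0.5 * np.cos(2 * np.pi * x / (w - 1))
              + 0.08 * np.cos(4 * np.pi * x / (w - 1)))
    # split the Blackman window and insert the fading-free plateau
    core = np.concatenate([b[:fade], np.ones(flat), b[fade:]])
    pad = total - core.size
    return np.pad(core, (pad // 2, pad - pad // 2))

# 2D window with a rectangular fading-free center, cf. Fig. 4(c)
win2d = np.outer(flat_top_blackman(64, 24, 8),
                 flat_top_blackman(64, 24, 8))
```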


Fig. 4. Window function. a) A one dimensional Blackman function is sampled to the size $w$ of Eq. (17). b) The window function is split in half and the gap filled with ones until the desired fading start and end are reached. The result is then zero padded until the window width matches the size of the input data for the reconstruction. From that, the two dimensional window function c) is derived by a repetition in each direction.


4.2 Reconstruction

In this section, we aim to reduce the image artifacts discussed initially in the problem statement and decrease the total runtime compared to the reference algorithm refAP. In the following subsections, we start first with the introduction of the reference algorithm without a support constraint for comparison. We then propose two modifications to address stagnation issues and artifacts and eventually combine these in a final algorithm.

4.2.1 Reference algorithm without a spatial support constraint

We begin with the construction of the reference algorithm that finds a solution for Eq. (13) under the constraint set $\Omega = \Omega _P$ without the spatial support constraint $\Omega _S$ discussed in Sec. 3.3. $\Omega _P$ is a constraint that enforces a positive electron density in the calculated solution, which is a convex set

$$\Omega_P=\;\left\{\forall\; x \in \tilde{O}: \textrm{Re}({x}) \in ({-}\infty,0], \; \textrm{Im}({x}) \in [-{\text{log}}(A_0),\infty) \right\}.$$

As a boundary for the set of physical solutions, $-{\text {log}}(A_0)$ is used instead of $0$, which is a direct consequence of the constant probe model $A_0$ introduced in Sec. 4.1.1. We get a projector $\mathcal {P}_{\Omega _P}$ of $\tilde {O}$ onto $\Omega _P$ by projecting the real and imaginary parts separately and pointwise onto the defined intervals:

$$\mathcal{P}_{\Omega_P}(\tilde{O}) = \text{min}(0,\textrm{Re}(\tilde{O})) + \text{i}~\text{max}(-{\text{log}}(A_0),\textrm{Im}(\tilde{O})).$$
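This projector is a one-liner in NumPy. In the sketch below, `O` holds the complex-valued $\tilde{O}$ on the pixel grid and `a0` the constant probe estimate; the function name is our own:

```python
import numpy as np

def project_physical(O, a0=1.0):
    """Pointwise projection onto Omega_P, Eq. (19): the phase shift
    Re(O) is clipped to non-positive values, the absorption Im(O) is
    bounded below by -log(A0)."""
    return np.minimum(0.0, O.real) + 1j * np.maximum(-np.log(a0), O.imag)
```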

For gradient descent methods, it is in general not recommended to use a standard gradient descent step of the form $x_{i+1} = x_i - \alpha \nabla f(x_i)$ for the optimization of a non-convex function $f$ [60,61]. This algorithm is known to struggle with functions that contain local minima, flat regions or ravines. Instead, we make use of the Nesterov accelerated gradient (NAG) [62,63] with momentum $\gamma$ and step size $\eta$. The modified gradient $\nabla g$ is given by:

$$\nabla g(\tilde{O}_i) = \gamma \nabla g(\tilde{O}_{i-1}) + \eta\nabla f(\tilde{O}_i-\gamma \nabla g(\tilde{O}_{i-1})),$$
with $\nabla g(\tilde {O}_0) = 0$. Here, $\nabla f(\tilde {O})$ is the non-accelerated gradient of the data fidelity term, given by [16,64]:
$$f(\tilde{O}) = \frac{1}{2} \lVert \lvert \mathcal{D}_{\textrm{Fr}}(\tilde{O}) \rvert - \sqrt{\mathcal{I}_\text{det}} \rVert^2_2$$
$$\nabla f(\tilde{O}) ={-} \text{i} \cdot \overline{\exp ({\text{i}\tilde{O}})}\cdot \mathcal{D}_{\textrm{Fr}}^{{-}1} \left( \mathcal{D}_{\textrm{Fr}}(\tilde{O}) - \sqrt{\mathcal{I}_{\textrm{det}}} \odot \text{sgn}(\mathcal{D}_{\textrm{Fr}}(\tilde{O}))\right)$$
$$\text{sgn}(x) = \begin{cases} 0 & \text{if}\;x = 0\\ \frac{x}{\lvert x \rvert} & \text{else} \end{cases}$$

Recalling the preprocessing scheme Alg. 1, we also incorporate the previous extension of the hologram as prior knowledge into the reconstruction. At the beginning of each iteration $i$, we apply the same preprocessing steps to the current solution $\tilde {O}_i$. The complete PGD algorithm with a Nesterov momentum accelerated gradient step is shown in Alg. 2 in the appendix.
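The resulting iteration can be sketched as follows. Here `grad_f` and `project` stand for the data fidelity gradient $\nabla f$ and the projector $\mathcal {P}_{\Omega _P}$; all names are illustrative, and the per-iteration preprocessing of $\tilde {O}_i$ is omitted:

```python
import numpy as np

def pgd_nesterov(O0, grad_f, project, eta, gamma, n_iter):
    """Projected gradient descent with a Nesterov accelerated gradient:
    g_i = gamma*g_{i-1} + eta*grad_f(O_i - gamma*g_{i-1}),
    followed by a projection onto the constraint set."""
    O = O0.copy()
    g = np.zeros_like(O0)                            # g_0 = 0
    for _ in range(n_iter):
        g = gamma * g + eta * grad_f(O - gamma * g)  # look-ahead gradient
        O = project(O - g)                           # step, then projection
    return O
```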

4.2.2 Regularization for $\textrm {Im}(\tilde {O})$

In order for Alg. 2 to reach a minimum of the target function Eq. (13) quickly, the Nesterov momentum $\gamma$ needs to be as high as possible. However, high momentum values entail the risk of overshooting in the gradient descent step, which causes overshooting artifacts (problem statement Fig. 2). Here, we aim to counteract the overshooting in single pixels by considering the numerical coupling between the absorption and phase shift values. In the forward model, the refractive index representation $\tilde {O}$ is transformed into the transmission function $O=\exp ({i\tilde {O}})$, which is essentially a polar coordinate representation of complex numbers. In each pixel of $O$, the resulting vector magnitude is given by $\lvert {O}\rvert =\exp ({-\textrm {Im}(\tilde {O})})$ and consequently decreases with increasing $\textrm {Im}(\tilde {O})$. For $\textrm {Im}(\tilde {O})\rightarrow \infty$, the vector magnitude vanishes and so does the contribution of the argument $\arg {O}=\textrm {Re}(\tilde {O})$ to the target function. Combined with an overestimation of the absorption $\textrm {Im}(\tilde {O})$ by the Nesterov accelerated gradient, this coupling thereby entails the risk of destabilizing the reconstruction of the phase shift values $\textrm {Re}(\tilde {O})$.

To stabilize the reconstruction, we propose the incorporation of a self-scaling, pixel-wise damping of the absorption values during the reconstruction. The reconstruction bias introduced by the damping should be as conservative as possible to still allow for the reconstruction of large absorption values. To this end, we choose a regularization only for $\textrm {Im}(\tilde {O})$. To prevent excessive penalization of large absorption values, which are part of the object and not the result of overshooting, we employ a simple (non-squared) $L_2$ regularization instead of a squared version as used, for example, in Tikhonov regularization. We extend the target function to:

$$\tilde{O}^* = \mathop{\textrm{argmin}}\limits_{\tilde{O}} \frac{1}{2} {\left\lVert{|\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{\mathcal{I}_{\textrm{det}}}}\right\rVert}^{2}_{2} + \mathcal{X}_{\Omega}(\tilde{O}) + \beta \lVert \textrm{Im}(\tilde{O})\rVert_2.$$

We can account for this additional term by a slight modification of the gradient in Alg. 2:

$$f(\tilde{O}) = \frac{1}{2} \lVert \lvert \mathcal{D}_{\textrm{Fr}}(\tilde{O}) \rvert - \sqrt{\mathcal{I}_\text{det}} \rVert^2_2 + \beta \lVert \textrm{Im}(\tilde{O})\rVert_2$$
$$\nabla f(\tilde{O}) ={-} \text{i} \cdot \overline{\exp ({\text{i}\tilde{O}})}\cdot \mathcal{D}_{\textrm{Fr}}^{{-}1} ( \mathcal{D}_{\textrm{Fr}}(\tilde{O}) - \sqrt{\mathcal{I}_{\textrm{det}}} \odot \text{sgn}(\mathcal{D}_{\textrm{Fr}}(\tilde{O})) ) + \mathcal{R}(\tilde{O})$$
$$\mathcal{R}(\tilde{O}) = \begin{cases} 0 & \text{if } \;{\textrm{Im}(\tilde{O})} = 0\\ \text{i}\beta\frac{\textrm{Im}(\tilde{O})}{\lVert \textrm{Im}(\tilde{O}) \rVert_2 } & \mathrm{else} \end{cases}.$$

The corresponding algorithm is the reference algorithm Alg. 2, extended by a modified gradient and a loop over different regularization parameters, which yields Alg. 3 in the appendix.
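Since the non-squared $L_2$ norm is not differentiable at $\textrm {Im}(\tilde {O})=0$, the zero case is handled explicitly, exactly as in $\mathcal {R}(\tilde {O})$ above. A minimal sketch (NumPy, function name is ours):

```python
import numpy as np

def absorption_regularizer_grad(O_tilde, beta):
    """Gradient of beta * ||Im(O)||_2 with respect to the complex
    object O: i * beta * Im(O) / ||Im(O)||_2. The damping is
    self-scaling because the denominator grows with the overall
    absorption level."""
    norm = np.linalg.norm(O_tilde.imag)
    if norm == 0.0:                      # subgradient choice at Im(O) = 0
        return np.zeros_like(O_tilde)
    return 1j * beta * O_tilde.imag / norm
```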

4.2.3 Suppression of high spatial frequencies

Here, we propose two approaches to counteract the slow reconstruction of low spatial frequencies. In general, this can be done either by increasing the contribution of low spatial frequencies or by decreasing the contribution of high spatial frequencies to the objective function.

First, we suppress the acceleration of high spatial frequencies by the Nesterov accelerated gradient, which, for high momentum values, causes overshooting of pixel groups down to single pixels. Here, we aim to compensate for the Fresnel propagation kernel’s sensitivity to the second derivative, which makes a momentum based gradient descent acceleration unstable for high spatial frequencies. Our goal is to reduce the momentum for high spatial frequencies while maintaining a high momentum for low spatial frequencies. To this end, we replace the momentum $\gamma$ by an operator $\tau$ which first transforms the Nesterov accelerated gradient into the Fourier domain and then applies Gaussian frequency-dependent weights:

$$\tau = \gamma\;\mathcal{F}^{{-}1} \circ \exp \left({-2\pi^2 (k_x^2 + k_y^2)\sigma^2}\right) \circ \mathcal{F},$$
which then yields the Nesterov accelerated gradient
$$\nabla g(\tilde{O}_i) = \tau \nabla g(\tilde{O}_{i-1}) + \eta\nabla f(\tilde{O}_i- \tau \nabla g(\tilde{O}_{i-1})).$$
with the parameters $\gamma$ and $\sigma$ included in $\tau$.

The second approach is a multigrid method. Here, an approximate low resolution solution is obtained on a coarse grid first. Formally, this can be done by solving the optimization problem
$$\tilde{O}^* = {S_{\uparrow}}\left[\mathop{\textrm{argmin}}\limits_{\tilde{O}} \frac{1}{2}{\left\lVert{|\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{{S_{\downarrow}}\mathcal{I}_{\textrm{det}}}}\right\rVert}^{2}_{2} + \mathcal{X}_{\Omega}(\tilde{O})\right],$$
where $S_{\downarrow }$ and $S_{\uparrow }$ are the corresponding down- and upsampling operators. The approximate solution to this problem is then used as an initial guess in the above optimization problem on a refined grid, represented by new down- and upsampling operators. This process is iterated until the original resolution is restored. A detailed breakdown of the algorithm can be found in Alg. 4 in the appendix.
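The frequency-dependent momentum operator $\tau$ can be sketched with FFTs. The frequency grid convention below (frequencies in units of 1/pixel via `fftfreq`) is our assumption and must match the actual discretization of $k_x$ and $k_y$:

```python
import numpy as np

def damped_momentum(g, sigma, gamma):
    """Momentum operator tau: FFT the accumulated gradient, attenuate
    high spatial frequencies with the Gaussian weight
    exp(-2*pi^2*(kx^2 + ky^2)*sigma^2), inverse FFT, scale by gamma."""
    ny, nx = g.shape
    kx = np.fft.fftfreq(nx)[None, :]   # spatial frequencies in 1/pixel
    ky = np.fft.fftfreq(ny)[:, None]
    w = np.exp(-2.0 * np.pi**2 * (kx**2 + ky**2) * sigma**2)
    return gamma * np.fft.ifft2(w * np.fft.fft2(g))
```

For $\sigma = 0$ the weights are all one and $\tau$ reduces to the scalar momentum $\gamma$; a spatially constant gradient (pure DC component) is likewise only scaled by $\gamma$, regardless of $\sigma$.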

4.2.4 Artifact-suppressing reconstruction method

We create the final algorithm by combining all proposed methods into a global artifact-suppressing reconstruction method, which we call ASRM. We start again with the reference algorithm Alg. 2 and first extend it by the weighted regularization term $\beta \lVert \textrm {Im}(\tilde {O}) \rVert _2$ of Sec. 4.2.2. We also add the high frequency suppression and the multigrid method from Sec. 4.2.3, i.e., we extend the objective function by a downsampling operator $S_{\downarrow }$ and an upsampling operator $S_{\uparrow }$, and replace the momentum by the modified operator $\tau$ in the Nesterov accelerated gradient. The objective function to reconstruct the complex refractive object is then

$$\tilde{O}^* = {S_{\uparrow}}\left[\mathop{\textrm{argmin}}\limits_{\tilde{O}} \frac{1}{2}{\left\lVert{|\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{{S_{\downarrow}}\mathcal{I}_{\textrm{det}}}}\right\rVert}^{2}_{2} + \mathcal{X}_{\Omega}(\tilde{O}) + \beta \lVert \textrm{Im}(\tilde{O}) \rVert_2\right].$$
In each iteration, a Nesterov accelerated gradient $\nabla g(\tilde {O})$ is calculated by
$$\tau = \gamma\;\mathcal{F}^{{-}1} \circ \exp \left({-2\pi^2 (k_x^2 + k_y^2)\sigma^2}\right) \circ \mathcal{F}$$
$$\nabla g(\tilde{O}_i) = \tau \nabla g(\tilde{O}_{i-1}) + \eta\nabla f(\tilde{O}_i-\tau \nabla g(\tilde{O}_{i-1})).$$
where $\nabla f(\tilde {O})$ is the analytical gradient of the data fidelity term combined with the $L_2$ regularization of the imaginary part of $\tilde {O}$:
$$f(\tilde{O}) = \frac{1}{2} \lVert \lvert \mathcal{D}_{\textrm{Fr}}(\tilde{O}) \rvert - \sqrt{S_{\downarrow}\mathcal{I}_\text{det}} \rVert^2_2 + \beta \lVert \textrm{Im}(\tilde{O})\rVert_2$$
$$\nabla f(\tilde{O}) ={-} \text{i} \cdot \overline{\exp ({\text{i}\tilde{O}})} \cdot \mathcal{D}_{\textrm{Fr}}^{{-}1} \left( \mathcal{D}_{\textrm{Fr}}(\tilde{O}) - \sqrt{{S_{\downarrow}}\mathcal{I}_{\textrm{det}}} \odot \text{sgn}(\mathcal{D}_{\textrm{Fr}}(\tilde{O})) \right) + \mathcal{R}(\tilde{O})$$
$$\text{sgn}(x) = \begin{cases} 0 & \text{if}\;x = 0\\ \frac{x}{\lvert x \rvert} & \text{else} \end{cases}$$
$$\mathcal{R}(\tilde{O}) = \begin{cases} 0 & \text{if } \;{\textrm{Im}(\tilde{O})} = 0\\ \text{i}\beta \frac{\textrm{Im}(\tilde{O})}{\lVert \textrm{Im}(\tilde{O}) \rVert_2 } & \text{else} \end{cases}$$

The summarized algorithm is shown in Alg. 5 in the appendix.
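The multigrid part of ASRM can be summarized as a coarse-to-fine wrapper around the inner reconstruction loop. The sketch below uses block averaging and nearest-neighbour replication in place of the bilinear resampling used in the paper; `solve` stands for the regularized inner PGD reconstruction, and all names are ours:

```python
import numpy as np

def multigrid(I_det, solve, factors=(4, 2, 1)):
    """Coarse-to-fine reconstruction: solve on a downsampled hologram,
    upsample the result (S_up) and use it as the initial guess on the
    next finer grid, until full resolution (factor 1) is reached."""
    O, f_prev = None, None
    for f in factors:
        n = I_det.shape[0] // f
        # S_down: block-average the hologram onto an n x n grid
        I = I_det.reshape(n, f, n, f).mean(axis=(1, 3))
        if O is None:
            O0 = np.zeros((n, n), dtype=complex)   # cold start on coarsest grid
        else:
            r = f_prev // f                        # refinement ratio
            O0 = np.kron(O, np.ones((r, r)))       # S_up by replication
        O = solve(I, O0)                           # inner PGD reconstruction
        f_prev = f
    return O
```

The sketch assumes a square hologram whose side length is divisible by all factors; the paper's implementation instead uses PyTorch's bilinear interpolation for $S_{\uparrow }$ and $S_{\downarrow }$.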

5. Experiments

5.1 Experimental setup

We obtained four datasets at the beamline P05 at PETRA III, located at DESY in Hamburg and operated by Helmholtz-Zentrum Hereon [36–38]. A Fresnel-zone-plate-based setup for NFH as shown in Fig. 1 was used [8]. The detector is a scintillator-coupled sCMOS camera (Hamamatsu C12849-101U) with a ${10}\;\mathrm{\mu}\textrm{m}$ Gadox scintillator, ${6.5}\;\mathrm{\mu}\textrm {m}$ pixel size, 16 bit image depth and $2048\times 2048$ pixels.

We chose the following samples for our demonstration: A spider attachment hair (Fig. 5) [65–67], the tip of a cactus needle (Fig. 6), a sample of a human tooth prepared by focused ion beam milling (Fig. 7) and a partly corroded biodegradable magnesium wire (Fig. 8) [32,68–70]. The measurement parameters for each of the samples are shown in Appendix B, Table 3.


Fig. 5. Spider hair. The result of Alg. 2 shows a clearly visible overshooting artifact in the dense area in the middle that emits fringes into the background. The object itself is optically well reconstructed in the panels of Alg. 3 and Alg. 4. However, both approaches still show LF artifacts across the whole phase image. The panel of Alg. 5 shows the best reconstruction. The dashed line indicates the position of the cross section. The scale bar indicates ${8}\;\mathrm{\mu}\textrm{m}$.


Fig. 6. Cactus needle. Alg. 2 has not completely reconstructed the object’s interior. In the background around the object, some LF variations are visible. The panel of Alg. 3 offers a better phase reconstruction but suffers from LF artifacts in the top area and LF variations in the background. The panels of Alg. 4 and Alg. 5 reconstructed the object almost identically, with significantly reduced LF variations. The dashed line indicates the position of the cross section. The scale bar indicates ${20}\;\mathrm{\mu}\textrm{m}$.


Fig. 7. Tooth. Algorithms Alg. 2 and Alg. 3 have not completely reconstructed the object’s interior. In the background, some LF variations are visible. In the cross section, the error increases towards the image border. Alg. 4 and Alg. 5 significantly reduce the LF variation in the cross section. Alg. 5 possesses the highest ratio of maximum reconstructed phase shift to noise in the object free area. The dashed line indicates the position of the cross section. The visible dense structures at the top and bottom of the image stem from the sample preparation. The scale bar indicates ${10}\;\mathrm{\mu}\textrm{m}$.


Fig. 8. Magnesium wire. Here, the effects of the regularization techniques are particularly visible. The quality of the reconstructed phase values increases gradually in the panel order Alg. 2, Alg. 3, Alg. 4, Alg. 5. Alg. 3 reduced the overshooting artifacts of Alg. 2, especially in the right area of the object, but was unable to reconstruct the object’s interior. The result of Alg. 4 is free of overshooting artifacts, but the object’s interior is still not completely reconstructed. Only the combination of all approaches, Alg. 5, yields a well reconstructed phase image with a completely reconstructed object interior and significantly reduced artifacts. The dashed line indicates the position of the cross section. The scale bar indicates ${50}\;\mathrm{\mu}\textrm{m}$.


5.2 Reconstruction setup

We ordered the samples by increasing complexity for the reconstruction: in the sample order given above, thickness and interaction strength with the X-ray illumination, i.e. phase shift and attenuation, increase. The detector data was preprocessed by Algorithm 1 before the actual reconstruction. The synthetic flat field $I_\text {flat}$ for the flat-field correction according to Eq. (10) was generated from 50 empty images with a PCA approach [51,52]. The preprocessed data was reconstructed with the reference method and the different methods outlined above:

Alg. 2: Reference PGD method refAP.
Alg. 3: Additional $L_2$ regularization for the absorption values of $\tilde {O}$.
Alg. 4: Suppression of high spatial frequencies.
Alg. 5: The combination of Alg. 3 and Alg. 4.

We implemented our algorithms in Python using the PyTorch library for GPU acceleration [54,71,72]. The reconstruction was performed on the Maxwell computing cluster at DESY on an NVIDIA A100 GPU with 40 GB memory [73].

5.3 Choice of parameters

The reconstruction parameters for each sample are shown in Appendix B, Table 4 and Table 5, and were heuristically determined for the four shown objects. All reconstruction variants are controlled by fixed iteration numbers, also to compare convergence. For the total iteration number, we found 2000 iterations to be a good reference for showing the artifact suppressing effects of our proposed approaches. For the multigrid approaches Alg. 4 and Alg. 5, we chose 700 iterations on the first grid, which is close to the observed overshooting peak in Fig. 2 of the problem statement. For the up- and downsampling $S_{\uparrow }$ and $S_{\downarrow }$, respectively, we used a bilinear interpolation, provided as part of PyTorch [54,72]. The required downsampling factors and number of iterations were determined by trial and error, but are consistent for the presented reconstructions. Three out of the available parameters remained sample dependent and had to be tuned beforehand and for each hologram separately: (i) the variance $\sigma$ of $\tau$ (Eq. (32)) as well as the two model parameters (ii) $A_0$ of the constant source model (Eq. (4)) and (iii) the optimal Fresnel number $\textrm {Fr}$ of the forward model (Eq. (5)). The regularization parameter $\beta$ (Eq. (31)) appeared to be in general very robust and could be chosen in the range from 0.1 to 10.

Note that in the last 500 iterations of each algorithm variant, we switched off our proposed regularization, i.e., $\beta =0$, $\tau =\gamma$ and no downsampling. This removes the regularization-induced reconstruction bias of our proposed approaches. The reconstruction result in the final iterations is then driven only by data-driven and physical constraints, i.e. the data fidelity term and the positive electron density constraint.

5.4 Analysis of reconstruction

In the following, we show the reconstruction results for each variant and horizontal cross sections. We measure the reconstruction quality in terms of the ability to reconstruct large phase shifts in the object’s interior while maintaining sharp object edges and low background noise in the object free area. We classify the remaining artifacts into categories, according to the respective artifacts in the problem statement Sec. 2:

LF: Low frequency noise
TA: Truncation artifacts
OA: Overshooting artifacts
WR: Weak reconstruction

The resulting Figs. 5 to 9 have the same structure and show the reconstruction variants as listed above in their respective panels. The resulting computation times for each algorithm variant and sample are shown in Table 1. The total computation time for each variant depends on the required sampling rate for the Fresnel propagation kernel and the grid sizes of the multigrid approach. The computation time was determined by an average over 20 reconstruction runs without initialization overhead and without intermediate plots.


Table 1. The total computation time depends on the required sampling rate for the Fresnel propagation kernel and the grid sizes of the multigrid approach. The array size used for the reconstruction and the computation time are shown in this table. The computation time was determined as an average over $20$ reconstructions without initialization overhead and intermediate plots.

6. Results

The reconstruction results for each algorithm variant are shown in Figs. 5 to 9.

For all samples and every algorithm variant, truncation artifacts TA are significantly less visible in the reconstruction results. The remaining artifacts are, depending on the algorithm variant, LF, OA and WR; Table 2 summarizes the occurring artifacts for the different samples and algorithm variants. The proposed algorithm variants perform as follows:


Table 2. Reconstruction artifacts that are present in the respective result panels of Figs. 5, 6, 7, 8. The truncation artifact (TA) issue was already resolved by preprocessing and is left out in this table. The abbreviations under the algorithm numbers correspond to the artifact types defined as (LF) low frequency noise, (OA) overshooting artifacts and (WR) weak reconstruction. An arrow downwards $\downarrow$ indicates a significant suppression of the artifact, an X indicates that the artifact is visibly present.

Algorithm 2, the reference algorithm, yields the worst results in all cases. It failed to reconstruct the object’s interior of the cactus needle, the tooth and the magnesium wire. The results of the spider hair and the magnesium wire possess clearly visible overshooting artifacts. Also, low frequency artifacts are very prominent.

Algorithm 3 reduced the overshooting artifacts of Alg. 2 that were visible in the spider hair and magnesium wire results. The algorithm still failed to completely reconstruct the object’s interior of the cactus needle, the tooth and the magnesium wire. The results also suffer from low frequency artifacts, either in the form of a static offset for the spider hair or background variations for the cactus needle, tooth and magnesium wire.

Algorithm 4 significantly suppressed all overshooting artifacts of Alg. 2. Compared to Alg. 3, it also reduced the background variations and improved the reconstruction of the object’s interior. For the cactus needle, the algorithm succeeded in suppressing artifacts in the reconstructed phase image, and it almost completely reconstructed the object’s interior of the tooth.

Algorithm 5 offers the best reconstruction quality compared to the other approaches. The combination of both approaches significantly reduced overshooting artifacts and low frequency noise. The algorithm also succeeded in reconstructing the object’s interior for all tested objects.


Fig. 9. Reconstructed absorption for the magnesium wire. The dashed line indicates the position of the cross section and was positioned at the most prominent overshooting artifact. The presence of the overshooting artifact decreases gradually in the panel order Alg. 2, Alg. 3 and Alg. 4. Left and right of the artifact, fringes appear in Alg. 2 and Alg. 3 where the absorption is partly reconstructed as zero. The fringes and the overshooting in the absorption disappear with Alg. 4. In the combined variant Alg. 5, the cross section is almost equal to that of Alg. 4. The scale bar indicates ${50}\;\mathrm{\mu}\textrm{m}$.


The magnesium wire is the only sample with considerable absorption. The reconstruction result of the absorption is shown in Fig. 9. Similar to the phase reconstruction, the presence of the overshooting artifacts decreased gradually in the order Alg. 2, Alg. 3 and Alg. 4. While an artifact suppression effect is already visible for Alg. 3, the multigrid approach Alg. 4 had the strongest effect; the combination with Alg. 3 (Alg. 5) leads to no further improvement in the absorption.

The computation times for each algorithm variant are shown in Table 1 and are an average of 20 reconstructions with the parameters given in Appendix B, Table 4 and Table 5. Due to the different Fresnel numbers of the forward model, the required sampling rate and therefore the array size were object dependent. The maximum computation time for the largest array size was 8 min 48 s (spider hair, cactus needle) with Alg. 3. The fastest reconstruction was performed in 43 s with Alg. 5 (tooth sample). Independent of the array size, the computation time could be reduced to one third by the multigrid approach of algorithm variant Alg. 4 and the ASRM method Alg. 5.

Further information on the behaviour of the algorithm variants is given in the appendix, in particular the mean squared error (MSE) of the target function (Appendix C, Fig. 10 and Fig. 11) and additional results of the reference algorithm refAP without a Nesterov accelerated gradient (also Appendix C, Fig. 10 and Fig. 11; Alg. 2, $\gamma =0$, violet).

7. Summary

In the problem statement Sec. 2, we investigated the reconstruction of objects with a state-of-the-art projected gradient descent approach refAP [16], without a support constraint and from a single hologram. The reconstruction result suffered from various artifacts that we sorted into different categories. We found that the algorithm overshoots the reconstructed phase values during the first few hundred iterations. It could also be seen that the algorithm had weak reconstruction capabilities for small image gradients. Due to the limited field of view of the detector, the truncated hologram also caused artifacts at the image border. In Sec. 4, we proposed methods to reduce the artifacts that were identified in Sec. 2:

  • i. To avoid truncation artifacts, we improved the preprocessing of measured holograms. This included the derivation of an appropriate padding scheme by mirroring the hologram into each direction and by adjusting the constant padding value with respect to the illumination offset $A_0$. Finally, we also applied a proper window function to the padded data. To suppress overshooting artifacts and to increase the reconstruction speed, we introduced a warm-up scheme for the projected gradient descent reconstruction. The new algorithm employs multiple regularization techniques.
  • ii. To suppress the overshooting, we regularized the absorption values of the reconstructed refractive object with a weighted $L_2$ regularization.
  • iii. To increase the reconstruction capability of low spatial frequencies, we combined a multigrid reconstruction with a suppression of high frequencies in the Nesterov accelerated gradient.

In Sec. 5, we tested our approaches on different kinds of real objects that were measured at the beamline P05 at PETRA III, located at DESY in Hamburg. The objects covered interaction strengths from weakly interacting samples which do not exceed phase shifts of $2\pi$ to multi material samples that have a phase range beyond $6\pi$. In general, each of our approaches shows quality improvements when compared to the reference refAP algorithm. Our preprocessing scheme (i) successfully suppressed truncation artifacts. The $L_2$ regularization of the absorption values (ii) significantly reduced overshooting artifacts and offered an improved reconstruction of the object envelope for three of four objects. The high frequency suppression method (iii) resulted in stronger improvements of the reconstruction quality than the regularization approach (ii). In all examples, no overshooting artifacts were visible and the low-frequency artifacts were reduced. For two of four objects, however, this standalone approach did not reconstruct the complete object envelope. The results of the HF suppression approach can be further improved by adding the $L_2$ regularization of the absorption values. The combination of all approaches successfully suppressed artifacts in all cases with a well reconstructed object envelope. In addition, we were able to decrease the computation time to one third with the multigrid reconstruction approach, for 2000 iterations and the tested data.

In this paper, we have demonstrated that the proposed preprocessing and reconstruction scheme enables a significant suppression of artifacts in the reconstruction of phase images without a spatial support using only a single hologram. The combination of the $L_2$ regularization for the absorption values and the high frequency suppression offers a reconstruction quality superior to each of the individual approaches. We showed that the implemented algorithm is robust and fast for a wide range of real objects that strongly differ in their interaction strength with the X-ray illumination. We could also identify a common set of parameters that are well suited and generalize over the tested objects. Two object dependent parameters, $\sigma$ for the Nesterov accelerated gradient weights and the illumination parameter $A_0$ in the forward model, had to be tuned manually. An automatic refinement is the subject of current research.

With the advent of fourth generation synchrotron radiation sources, coherent full-field imaging techniques are gaining more interest in new application areas, for example in high pressure physics [74], while for established science fields, the trend is towards dynamic studies of specimens, i.e. in situ and operando studies. The latter entails increasing requirements on the measurement process in terms of acquisition parameters and, consequently, on the reconstruction process. This work is a step towards simplified and automated processing of large amounts of holographic data sets, which could enable online reconstruction to monitor and control the state of dynamic measurements.

Appendix

A. Algorithms


Algorithm 1. Preprocessing of reconstruction input $\mathcal {I}_{\textrm {det}}$


Algorithm 2. Reconstruction of $\tilde {O}$ with the reference algorithm refAP


Algorithm 3. Reconstruction of $\tilde {O}$ with $L_2$ regularization for $\textrm {Im}(\tilde {O})$


Algorithm 4. Reconstruction of $\tilde {O}$ with high frequencies suppression


Algorithm 5. ASRM reconstruction of $\tilde {O}$ created from a combination of Alg. 3 and Alg. 4

B. Reconstruction parameters


Table 3. Parameters for the setup shown in Fig. 1, the calculated effective $\textrm {Fr}$ and the exposure time $t$.


Table 4. Sample dependent parameters. Values of $\sigma$ in $\tau$ given as full width at half maximum (FWHM) in pixels. $\sigma$ can be derived by calculating $\sigma =FWHM/(2\sqrt {2\ln {(2)}})$. Value $A_0$ for the constant probe model.


Table 5. Setup and sample independent reconstruction parameters for algorithms 2 to 5 that were used to generate the results in Figs. 5 to 9. The filter values for $\tilde {O}$ are given as full width at half maximum (FWHM) for real and imaginary parts in the form $\text {FWHM}_{\text {real}}/\text {FWHM}_{\text {imag}}$.

C. Mean squared error


Fig. 10. Progress of the target function for the spider hair and the cactus needle samples. The graphs show the mean squared error (MSE) of the data fidelity, i.e. $1/N \cdot \lVert |\mathcal {D}_{\textrm {Fr}}(\tilde {O})| - \sqrt {\mathcal {I}_{\textrm {det}}}\rVert _2^2$, for each algorithm as a mean value over all pixels inside the field of view, plotted over the iteration number.


Fig. 11. Progress of the target function for the tooth and magnesium wire samples. The graphs show the mean squared error (MSE) of the data fidelity, i.e. $1/N \cdot \lVert |\mathcal {D}_{\textrm {Fr}}(\tilde {O})| - \sqrt {\mathcal {I}_{\textrm {det}}}\rVert _2^2$, for each algorithm as a mean value over all pixels inside the field of view, plotted over the iteration number.


Funding

Helmholtz Association (HIDSS-0002 (DASHH), ZT-I-PF-4-027 (SmartPhase)); Deutsche Forschungsgemeinschaft (192346071 (SFB 986)).

Acknowledgments

We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were carried out at the PETRA III beamline P05: Beamtime-IDs (11010216 spider hair), (11014415 cactus needle, tooth), (11008588 magnesium wire). We thank Berit Zeller-Plumhoff for fruitful discussions and access to the data of the magnesium wire. We thank Michael Stueckelberger for the preparation of the cactus needle sample. We would like to thank Imke Greving for fruitful discussions and the support during the beamtimes at P05. This research was supported in part through the Maxwell computational resources operated at DESY. This research was supported by Hi-Acts, an innovation platform under the grant of the Helmholtz Association HGF.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data and software underlying the results presented in this paper are available in Code 1, Ref. [39].

References

1. J. R. Fienup, “Phase retrieval algorithms: a personal tour [invited],” Appl. Opt. 52(1), 45 (2013). [CrossRef]  

2. R. P. Millane, “Phase retrieval in crystallography and optics,” J. Opt. Soc. Am. A 7(3), 394 (1990). [CrossRef]  

3. D. R. Luke, J. V. Burke, and R. G. Lyon, “Optical wavefront reconstruction: theory and numerical methods,” SIAM Rev. 44(2), 169–224 (2002). [CrossRef]  

4. L. Taylor, “The phase retrieval problem,” IEEE Trans. Antennas Propag. 29(2), 386–391 (1981). [CrossRef]  

5. Y. Shechtman, Y. C. Eldar, O. Cohen, et al., “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015). [CrossRef]  

6. P. Cloetens, R. Barrett, J. Baruchel, et al., “Phase objects in synchrotron radiation hard x-ray imaging,” J. Phys. D: Appl. Phys. 29(1), 133–146 (1996). [CrossRef]  

7. P. Cloetens, W. Ludwig, J. Baruchel, et al., “Holotomography: quantitative phase tomography with micrometer resolution using hard synchrotron radiation x rays,” Appl. Phys. Lett. 75(19), 2912–2914 (1999). [CrossRef]  

8. S. Flenner, A. Kubec, C. David, et al., “Hard x-ray nano-holotomography with a Fresnel zone plate,” Opt. Express 28(25), 37514–37525 (2020). [CrossRef]  

9. R. W. Gerchberg, “A practical algorithm for the determination of plane from image and diffraction pictures,” Optik 35(2), 237–246 (1972).

10. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3(1), 27–29 (1978). [CrossRef]  

11. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118–123 (1987). [CrossRef]  

12. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Hybrid projection–reflection method for phase retrieval,” J. Opt. Soc. Am. A 20(6), 1025–1034 (2003). [CrossRef]  

13. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Problems 21(1), 37–50 (2005). [CrossRef]  

14. A. Levi and H. Stark, “Image restoration by the method of generalized projections with application to restoration from magnitude,” J. Opt. Soc. Am. A 1(9), 932–943 (1984). [CrossRef]  

15. J. R. Fienup and C. C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3(11), 1897–1907 (1986). [CrossRef]  

16. F. Wittwer, J. Hagemann, D. Brückner, et al., “Phase retrieval framework for direct reconstruction of the projected refractive index applied to ptychography and holography,” Optica 9(3), 295–302 (2022). [CrossRef]  

17. P. Thibault, M. Dierolf, A. Menzel, et al., “High-resolution scanning x-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

18. M. Kahnt, L. Grote, D. Brückner, et al., “Multi-slice ptychography enables high-resolution measurements in extended chemical reactors,” Sci. Rep. 11(1), 1500 (2021). [CrossRef]  

19. Y. Jiang, Z. Chen, Y. Han, et al., “Electron ptychography of 2D materials to deep sub-ångström resolution,” Nature 559(7714), 343–349 (2018). [CrossRef]  

20. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

21. D. L. Misell, “An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. test calculations,” J. Phys. D: Appl. Phys. 6(18), 2200–2216 (1973). [CrossRef]  

22. J. Hagemann, A.-L. Robisch, D. R. Luke, et al., “Reconstruction of wave front and object for inline holography from a set of detection planes,” Opt. Express 22(10), 11552–11569 (2014). [CrossRef]  

23. S. Huhn, L. M. Lohse, J. Lucht, et al., “Fast algorithms for nonlinear and constrained phase retrieval in near-field x-ray holography based on Tikhonov regularization,” Opt. Express 30(18), 32871–32886 (2022). [CrossRef]  

24. V. Davidoiu, B. Sixou, M. Langer, et al., “Non-linear iterative phase retrieval based on Frechet derivative,” Opt. Express 19(23), 22809–22819 (2011). [CrossRef]  

25. B. Sixou, V. Davidoiu, M. Langer, et al., “Absorption and phase retrieval with Tikhonov and joint sparsity regularizations,” Inverse Problems and Imaging 7(1), 267–282 (2013). [CrossRef]  

26. J. R. Fienup, T. R. Crimmins, and W. Holsztynski, “Reconstruction of the support of an object from the support of its autocorrelation,” J. Opt. Soc. Am. 72(5), 610–624 (1982). [CrossRef]  

27. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

28. S. Loock and G. Plonka, “Phase retrieval for Fresnel measurements using a shearlet sparsity constraint,” Inverse Problems 30(5), 055005 (2014). [CrossRef]  

29. V. Davidoiu, B. Sixou, M. Langer, et al., “Nonlinear approaches for the single-distance phase retrieval problem involving regularizations with sparsity constraints,” Appl. Opt. 52(17), 3977–3986 (2013). [CrossRef]  

30. K. Mom, M. Langer, and B. Sixou, “Nonlinear primal–dual algorithm for the phase and absorption retrieval from a single phase contrast image,” Opt. Lett. 47(20), 5389–5392 (2022). [CrossRef]  

31. J. Hagemann, M. Vassholz, H. Hoeppe, et al., “Single-pulse phase-contrast imaging at free-electron lasers in the hard x-ray regime,” J. Synchrotron Radiat. 28(1), 52–63 (2021). [CrossRef]  

32. S. Meyer, A. Wolf, D. Sanders, et al., “Degradation analysis of thin Mg-xAg wires using x-ray near-field holotomography,” Metals 11(9), 1422 (2021). [CrossRef]  

33. F. Sun, X. He, X. Jiang, et al., “Advancing knowledge of electrochemically generated lithium microstructure and performance decay of lithium ion battery by synchrotron x-ray tomography,” Mater. Today 27, 21–32 (2019). [CrossRef]  

34. Z. Zhang, K. Dong, K. A. Mazzio, et al., “Phase transformation and microstructural evolution of CuS electrodes in solid-state batteries probed by in situ 3D x-ray tomography,” Adv. Energy Mater. 13(2), 2203143 (2023). [CrossRef]  

35. S. Marchesini, H. He, H. N. Chapman, et al., “X-ray image reconstruction from a diffraction pattern alone,” Phys. Rev. B 68(14), 140101 (2003). [CrossRef]  

36. M. Ogurreck, F. Wilde, J. Herzen, et al., “The nanotomography endstation at the PETRA III imaging beamline,” J. Phys.: Conf. Ser. 425(18), 182002 (2013). [CrossRef]  

37. A. Haibel, M. Ogurreck, F. Beckmann, et al., “Micro- and nano-tomography at the GKSS imaging beamline at PETRA III,” in Developments in X-Ray Tomography VII, S. R. Stock, ed. (SPIE, 2010).

38. A. Haibel, F. Beckmann, T. Dose, et al., “Latest developments in microtomography and nanotomography at PETRA III,” Powder Diffr. 25(2), 161–164 (2010). [CrossRef]  

39. J. Dora, S. Flenner, and J. Hagemann, “A Python framework for the online reconstruction of x-ray near-field holography data,” Zenodo (2023) https://doi.org/10.5281/zenodo.8349365.

40. K. T. Block, M. Uecker, and J. Frahm, “Suppression of MRI truncation artifacts using total variation constrained data extrapolation,” Int. J. Biomed. Imaging 2008, 184123 (2008). [CrossRef]  

41. J. W. Gibbs, “Fourier’s series,” Nature 59(1539), 606 (1899). [CrossRef]  

42. L. Czervionke, J. Czervionke, D. Daniels, et al., “Characteristic features of MR truncation artifacts,” Am. J. Roentgenol. 151(6), 1219–1228 (1988). [CrossRef]  

43. D. Paganin, Coherent X-Ray Optics (Oxford University Press, 2006).

44. A. N. Tikhonov and V. Y. Arsenin, Solutions of ill-posed problems (V. H. Winston & Sons, Washington, D.C.: John Wiley & Sons, New York, 1977). Translated from the Russian, Preface by translation editor Fritz John, Scripta Series in Mathematics.

45. A. N. Tikhonov, A. Goncharsky, V. V. Stepanov, et al., Numerical methods for the solution of ill-posed problems, in Mathematics and Its Applications (Springer, Dordrecht, Netherlands, 2010).

46. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological) 58(1), 267–288 (1996). [CrossRef]  

47. I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” arXiv, arXiv:math/0307152v2 (2003). [CrossRef]  

48. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60(1-4), 259–268 (1992). [CrossRef]  

49. A. Chambolle and P.-L. Lions, “Image recovery via total variation minimization and related problems,” Numerische Mathematik 76(2), 167–188 (1997). [CrossRef]  

50. D. C. Dobson and F. Santosa, “Recovery of blocky images from noisy and blurred data,” SIAM J. Appl. Math. 56(4), 1181–1198 (1996). [CrossRef]  

51. C. Homann, T. Hohage, J. Hagemann, et al., “Validity of the empty-beam correction in near-field imaging,” Phys. Rev. A 91(1), 013821 (2015). [CrossRef]  

52. V. V. Nieuwenhove, J. D. Beenhouwer, F. D. Carlo, et al., “Dynamic intensity normalization using eigen flat fields in x-ray imaging,” Opt. Express 23(21), 27975–27989 (2015). [CrossRef]  

53. D. G. Voelz and M. C. Roggemann, “Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences,” Appl. Opt. 48(32), 6132 (2009). [CrossRef]  

54. “PyTorch reference, version 1.10.2,” (2019).

55. T. Butz, Fourier Transformation for Pedestrians, Undergraduate Lecture Notes in Physics, 2nd ed. (Springer International Publishing, Cham, Switzerland, 2015).

56. A. V. Oppenheim, R. W. Schafer, M. A. Yoder, et al., Discrete-time signal processing, 3rd ed. (Pearson, Upper Saddle River, NJ, 2009).

57. H. W. Engl, M. Hanke, and G. Neubauer, Regularization of Inverse Problems, in Mathematics and Its Applications, 1996th ed. (Springer, Dordrecht, Netherlands, 1996).

58. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (CRC Press, 2020).

59. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Phase retrieval, error reduction algorithm, and fienup variants: a view from convex optimization,” J. Opt. Soc. Am. A 19(7), 1334–1345 (2002). [CrossRef]  

60. S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv, arXiv:1609.04747 (2016).

61. R. S. Sutton, “Two problems with backpropagation and other steepest-descent learning procedures for networks,” in Proc. of Eighth Annual Conference of the Cognitive Science Society, (1986), pp. 823–831.

62. Y. E. Nesterov, “A method of solving a convex programming problem with convergence rate $O(1/k^2)$,” in Doklady Akademii Nauk, Vol. 269 (Russian Academy of Sciences, 1983), pp. 543–547.

63. N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural Networks 12(1), 145–151 (1999). [CrossRef]  

64. R. Xu, M. Soltanolkotabi, J. P. Haldar, et al., “Accelerated Wirtinger flow: a fast algorithm for ptychography,” (2018).

65. D. T. Roscoe and G. Walker, “The adhesion of spiders to smooth surfaces,” Bull. Br. Arachnol. Soc. 8(7), 224–226 (1991).

66. C. F. Schaber, S. Flenner, A. Glisovic, et al., “Hierarchical architecture of spider attachment setae reconstructed from scanning nanofocus x-ray diffraction data,” J. R. Soc. Interface 16(150), 20180692 (2019). [CrossRef]  

67. S. Niederegger and S. N. Gorb, “Friction and adhesion in the tarsal and metatarsal scopulae of spiders,” J. Comp. Physiol. A 192(11), 1223–1232 (2006). [CrossRef]  

68. F. Witte, “The history of biodegradable magnesium implants: a review,” Acta Biomater. 6(5), 1680–1692 (2010). The THERMEC'2009 Biodegradable Metals. [CrossRef]  

69. B. Zeller-Plumhoff, M. Gile, M. Priebe, et al., “Exploring key ionic interactions for magnesium degradation in simulated body fluid – a data-driven approach,” Corros. Sci. 182, 109272 (2021). [CrossRef]  

70. B. Zeller-Plumhoff, H. Helmholz, F. Feyerabend, et al., “Quantitative characterization of degradation processes in situ by means of a bioreactor coupled flow chamber under physiological conditions using time-lapse SRµCT,” Mater. Corros. 69(3), 298–306 (2018). [CrossRef]  

71. “Python language reference, version 3.8,” (2019).

72. A. Paszke, S. Gross, S. Chintala, et al., “Automatic differentiation in PyTorch,” in NIPS-W, (2017).

73. NVIDIA, “NVIDIA A100 tensor core GPU architecture,” (2020).

74. R. J. Husband, J. Hagemann, E. F. O’Bannon, et al., “Simultaneous imaging and diffraction in the dynamic diamond anvil cell,” Rev. Sci. Instrum. 93(5), 053903 (2022). [CrossRef]  

Supplementary Material (1)

Code 1: Python Repository

Data availability

Data and software underlying the results presented in this paper are available in Code 1, Ref. [39].

Figures (11)

Fig. 1. Sketch of an experimental setup based on a Fresnel zone plate (FZP) for near-field holographic microscopy. The FZP focuses the incoming coherent monochromatic X rays to the focal spot located at a distance $f$ behind the zone plate. There, an order sorting aperture (OSA) is placed that blocks the higher diffraction orders of the FZP. The sample is put into the diverging cone-shaped beam of the FZP at a distance $z_{01}$. Behind the sample, the X rays propagate to the detector that is placed at the sample-to-detector distance $z_{12}$. To protect the detector from radiation damage, the direct beam is blocked by a beamstop behind the FZP [8].
Fig. 2. Comparison of reconstruction artifacts: (a) the acquired hologram, and reconstructions (b) without and (c) with a spatial support constraint. The reconstructions were obtained by an unmodified refAP algorithm (Alg. 2, without and with a spatial support constraint). The scale bars indicate ${10}\;\mathrm{\mu m}$.
Fig. 3. Preprocessing scheme. The raw hologram is flat-field corrected a) by a PCA approach [51,52]. To solve the marginal problem, the flat-field corrected hologram is mirrored along each direction, yielding b). To match the required sampling rate of the Fresnel propagation convolution kernel, the result is padded with a constant, the estimated illumination $A_0$, yielding c). A two dimensional fadeout mask (Fig. 4c) is then applied, yielding d).
Fig. 4. Window function. a) A one dimensional Blackman function is sampled to the size $w$ of Eq. (17). b) The window function is split in half and the gap is filled with ones until the desired fading start and fading end are reached. The result is then zero padded until the window width matches the size of the input data for the reconstruction. From that, the two dimensional window function c) is derived by a repetition in each direction.
Fig. 5. Spider hair. The result of Alg. 2 shows a clearly visible overshooting artifact in the dense area in the middle that emits fringes into the background. The object itself is visually well reconstructed in the panels of Alg. 3 and Alg. 4. However, both approaches still show LF artifacts across the whole phase image. The panel of Alg. 5 shows the best reconstruction. The dashed line indicates the position of the cross section. The scale bar indicates ${8}\;\mathrm{\mu m}$.
Fig. 6. Cactus needle. Alg. 2 has not completely reconstructed the object’s interior, and some LF variations are visible in the background around the object. The panel of Alg. 3 offers a better phase reconstruction but suffers from LF artifacts in the top area and LF variations in the background. The panels of Alg. 4 and Alg. 5 reconstruct the object almost identically, with significantly reduced LF variations. The dashed line indicates the position of the cross section. The scale bar indicates ${20}\;\mathrm{\mu m}$.
Fig. 7. Tooth. The algorithms Alg. 2 and Alg. 3 have not completely reconstructed the object’s interior. Some LF variations are visible in the background. In the cross section, the error increases towards the image border. Alg. 4 and Alg. 5 significantly reduce the LF variation in the cross section. Alg. 5 possesses the highest ratio of maximum reconstructed phase shift to noise in the object free area. The dashed line indicates the position of the cross section. The visible dense structures at the top and bottom of the image stem from the sample preparation. The scale bar indicates ${10}\;\mathrm{\mu m}$.
Fig. 8. Magnesium wire. Here, the effects of the regularization techniques are particularly visible. The quality of the reconstructed phase values increases gradually in the panel order Alg. 2, Alg. 3, Alg. 4, Alg. 5. Alg. 3 reduces the overshooting artifacts of Alg. 2, especially in the right area of the object, but is unable to reconstruct the object’s interior. The result of Alg. 4 is free of overshooting artifacts, but the object’s interior is still not completely reconstructed. Only the combination of all approaches, Alg. 5, yields a well reconstructed phase image with a completely reconstructed interior and significantly reduced artifacts. The dashed line indicates the position of the cross section. The scale bar indicates ${50}\;\mathrm{\mu m}$.
Fig. 9. Reconstructed absorption for the magnesium wire. The dashed line indicates the position of the cross section and was positioned at the most prominent overshooting artifact. The presence of the overshooting artifact decreases gradually in the panel order Alg. 2, Alg. 3 and Alg. 4. To the left and right of the artifact, fringes appear in Alg. 2 and Alg. 3 where the absorption is partly reconstructed as zero. The fringes and the overshooting in the absorption disappear with Alg. 4. In the combined variant Alg. 5, the cross section is almost equal to that of Alg. 4. The scale bar indicates ${50}\;\mathrm{\mu m}$.
Fig. 10. Progress of the target function for the spider hair and the cactus needle samples. The graphs show the mean squared error (MSE) of the data fidelity, i.e. $1/N \cdot \lVert |\mathcal {D}_{\textrm {Fr}}(\tilde {O})| - \sqrt {\mathcal {I}_{\textrm {det}}}\rVert _2^2$, for each algorithm as a mean value over all pixels inside the field of view, plotted over the iteration number.
Fig. 11. Progress of the target function for the tooth and magnesium wire samples. The graphs show the mean squared error (MSE) of the data fidelity, i.e. $1/N \cdot \lVert |\mathcal {D}_{\textrm {Fr}}(\tilde {O})| - \sqrt {\mathcal {I}_{\textrm {det}}}\rVert _2^2$, for each algorithm as a mean value over all pixels inside the field of view, plotted over the iteration number.

Tables (10)

Table 1. The total computation time depends on the required sampling rate for the Fresnel propagation kernel and the grid sizes of the multigrid approach. The array sizes for the reconstruction and the computation times are shown in this table. The computation time was determined as an average over 20 reconstructions, without initialization overhead and intermediate plots.

Table 2. Reconstruction artifacts that are present in the respective result panels of Figs. 5, 6, 7, 8. The truncation artifact (TA) issue was already resolved by preprocessing and is left out of this table. The abbreviations under the algorithm numbers correspond to the artifact types defined as (LF) low frequency noise, (OA) overshooting artifacts and (WR) weak reconstruction. A downward arrow indicates a significant suppression of the artifact, an X indicates that the artifact is visibly present.

Algorithm 1. Preprocessing of reconstruction input $\mathcal{I}_{\textrm{det}}$

Algorithm 2. Reconstruction of $\tilde{O}$ with the reference algorithm refAP

Algorithm 3. Reconstruction of $\tilde{O}$ with $L_2$ regularization for $\operatorname{Im}(\tilde{O})$

Algorithm 4. Reconstruction of $\tilde{O}$ with high frequency suppression

Algorithm 5. ASRM reconstruction of $\tilde{O}$, created from a combination of Alg. 3 and Alg. 4

Table 3. Parameters for the setup shown in Fig. 1, the calculated effective Fresnel number $\textrm{Fr}$, and the exposure time $t$.

Table 4. Sample dependent parameters. Values of $\sigma$ in $\tau$ are given as full width at half maximum (FWHM) in pixels; $\sigma$ can be derived as $\sigma = \textrm{FWHM}/(2\sqrt{2\ln(2)})$. The value $A_0$ parameterizes the constant probe model.

Table 5. Setup and sample independent reconstruction parameters for algorithms 2 to 5 that were used to generate the results in Figs. 5 to 9. The filter values for $\tilde{O}$ are given as full width at half maximum (FWHM) for the real and imaginary parts, in the form $\textrm{FWHM}_{\textrm{real}}/\textrm{FWHM}_{\textrm{imag}}$.
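The FWHM-to-$\sigma$ conversion stated in Table 4 can be sketched as a one-line helper (the function name is ours):

```python
import math

def sigma_from_fwhm(fwhm):
    """Convert a full width at half maximum (FWHM), given in pixels,
    to the Gaussian standard deviation: sigma = FWHM / (2*sqrt(2*ln 2))."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```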

Equations (37)


$n(x, y, z) = 1 - \delta(x, y, z) + i\beta(x, y, z),$
$\psi_{\textrm{exit}}(x, y) = \exp\!\left(-ik\int_0^d \delta(x, y, z) - i\beta(x, y, z)\,\mathrm{d}z\right)\psi_0,$
$\tilde{O}(x, y) = -k\int_0^d \delta(x, y, z) - i\beta(x, y, z)\,\mathrm{d}z = \phi(x, y) + i\mu(x, y).$
$\psi_{\textrm{exit}}(\tilde{O}) = \exp(i\tilde{O})\,A_0\exp(i\phi_0),$
$\mathcal{D}_{\textrm{Fr}}(\tilde{O}) = \mathcal{F}^{-1}\exp\!\left(-i\pi\,\frac{k_x^2 + k_y^2}{\textrm{Fr}}\right)\mathcal{F}\,\psi_{\textrm{exit}}(\tilde{O}),$
$\mathcal{I}_{\textrm{det}} = |\mathcal{D}_{\textrm{Fr}}(\tilde{O})|^2.$
$\textrm{Fr} = \frac{\Delta x^2}{\lambda z_{12}}.$
$M = \frac{z_{01} + z_{12}}{z_{01}}$
$\textrm{Fr} = \frac{\Delta x^2}{\lambda M z_{12}}.$
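The propagator $\mathcal{D}_{\textrm{Fr}}$ is a single FFT-based convolution. A minimal NumPy sketch (our own function name; spatial frequencies are assumed in inverse-pixel units so that the Fresnel number is the only parameter, and the chirp sign follows the convention written above):

```python
import numpy as np

def fresnel_propagate(psi, fr):
    """Apply F^-1 exp(-i*pi*(kx^2+ky^2)/Fr) F to a 2D complex exit wave psi."""
    ny, nx = psi.shape
    kx = np.fft.fftfreq(nx)  # frequencies in units of 1/pixel
    ky = np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    kernel = np.exp(-1j * np.pi * (KX**2 + KY**2) / fr)
    return np.fft.ifft2(kernel * np.fft.fft2(psi))
```

Since the kernel has unit modulus the propagator is unitary, so a plane wave (constant `psi`) is reproduced unchanged; a simulated hologram is then `np.abs(fresnel_propagate(psi, fr))**2`.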
$\mathcal{I}_{\textrm{corr}} = \frac{\mathcal{I}_{\textrm{raw}}}{\mathcal{I}_{\textrm{flat}}}.$
$\textrm{find } \psi \in \mathcal{I}_{\textrm{det}} \cap \Omega$
$\textrm{SDE}(\psi) = \lVert P_{\mathcal{I}_{\textrm{det}}}\psi - \psi\rVert^2 + \lVert P_{\Omega}\psi - \psi\rVert^2,$
$\tilde{O}^* = \operatorname*{argmin}_{\tilde{O}} \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2 + X_{\Omega}(\tilde{O}).$
$X_{\Omega}(\tilde{O}) = \begin{cases} 0 & \tilde{O} \in \Omega \\ +\infty & \textrm{else}, \end{cases}$
$\Omega = \Omega_P \cap \Omega_S,$
$\Omega_P = \left\{ x \in \tilde{O} : \operatorname{Re}(x) \in [-\infty, 0],\ \operatorname{Im}(x) \in [0, \infty] \right\}.$
$y(w, x) = 0.42 - 0.5\cos\!\left(\frac{2\pi x}{w - 1}\right) + 0.08\cos\!\left(\frac{4\pi x}{w - 1}\right).$
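The Blackman function above, together with the split-and-fill construction described in Fig. 4, can be sketched as follows (a sketch under an assumed parameterization: `fade_start` is taken as the zero-padding width before the rising half; the names are ours):

```python
import numpy as np

def blackman_flat_window(n_total, w, fade_start):
    """1D fadeout window: a Blackman function of width w is split in half,
    the gap between the halves is filled with ones, and the result is
    zero padded to the total length n_total."""
    x = np.arange(w)
    bl = 0.42 - 0.5 * np.cos(2 * np.pi * x / (w - 1)) \
              + 0.08 * np.cos(4 * np.pi * x / (w - 1))
    rise, fall = bl[: w // 2], bl[w // 2 :]
    flat_len = n_total - 2 * fade_start - len(rise) - len(fall)
    return np.concatenate([np.zeros(fade_start), rise,
                           np.ones(flat_len), fall, np.zeros(fade_start)])
```

A 2D mask as in Fig. 4c can then be formed by repeating (or combining) the 1D window along each direction.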
$\Omega_P = \left\{ x \in \tilde{O} : \operatorname{Re}(x) \in [-\infty, 0],\ \operatorname{Im}(x) \in [\log(A_0), \infty] \right\}.$
$P_{\Omega_P}(\tilde{O}) = \min(0, \operatorname{Re}(\tilde{O})) + i\,\max(\log(A_0), \operatorname{Im}(\tilde{O})).$
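The projection $P_{\Omega_P}$ is an elementwise clip of the real and imaginary parts; a minimal NumPy sketch (assuming the constant probe estimate $A_0$; the function name is ours):

```python
import numpy as np

def project_omega_p(O, A0=1.0):
    """Project onto Omega_P: Re(O) <= 0 (non-positive phase shift) and
    Im(O) >= log(A0), applied elementwise to the complex array O."""
    return np.minimum(0.0, O.real) + 1j * np.maximum(np.log(A0), O.imag)
```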
$g(\tilde{O}_i) = \gamma\,g(\tilde{O}_{i-1}) + \eta\,\nabla f(\tilde{O}_i - \gamma\,g(\tilde{O}_{i-1})).$
$f(\tilde{O}) = \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2$
$\nabla f(\tilde{O}) = -i\,\overline{\exp(i\tilde{O})}\,\mathcal{D}_{\textrm{Fr}}^{-1}\!\left(\mathcal{D}_{\textrm{Fr}}(\tilde{O}) - \sqrt{\mathcal{I}_{\textrm{det}}}\,\operatorname{sgn}(\mathcal{D}_{\textrm{Fr}}(\tilde{O}))\right)$
$\operatorname{sgn}(x) = \begin{cases} 0 & \textrm{if } x = 0 \\ \frac{x}{|x|} & \textrm{else} \end{cases}$
$\tilde{O}^* = \operatorname*{argmin}_{\tilde{O}} \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2 + X_{\Omega}(\tilde{O}) + \beta\lVert\operatorname{Im}(\tilde{O})\rVert_2.$
$f(\tilde{O}) = \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - \sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2 + \beta\lVert\operatorname{Im}(\tilde{O})\rVert_2$
$\nabla f(\tilde{O}) = -i\,\overline{\exp(i\tilde{O})}\,\mathcal{D}_{\textrm{Fr}}^{-1}\!\left(\mathcal{D}_{\textrm{Fr}}(\tilde{O}) - \sqrt{\mathcal{I}_{\textrm{det}}}\,\operatorname{sgn}(\mathcal{D}_{\textrm{Fr}}(\tilde{O}))\right) + R(\tilde{O})$
$R(\tilde{O}) = \begin{cases} 0 & \textrm{if } \operatorname{Im}(\tilde{O}) = 0 \\ i\beta\,\frac{\operatorname{Im}(\tilde{O})}{\lVert\operatorname{Im}(\tilde{O})\rVert_2} & \textrm{else}. \end{cases}$
$\tau = \gamma\,\mathcal{F}^{-1}\exp\!\left(-2\pi^2(k_x^2 + k_y^2)\sigma^2\right)\mathcal{F},$
$g(\tilde{O}_i) = \tau\!\left(g(\tilde{O}_{i-1})\right) + \eta\,\nabla f\!\left(\tilde{O}_i - \tau(g(\tilde{O}_{i-1}))\right).$
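The operator $\tau$ combines the momentum damping factor $\gamma$ with a Gaussian low-pass filter applied in Fourier space. A NumPy sketch (frequencies again in inverse-pixel units, so $\sigma$ is in pixels as in Table 4; the function name is ours):

```python
import numpy as np

def tau(g, sigma, gamma=0.9):
    """Apply tau = gamma * F^-1 exp(-2*pi^2*(kx^2+ky^2)*sigma^2) F,
    i.e. damp the momentum g and suppress its high frequencies."""
    ny, nx = g.shape
    KX, KY = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    kernel = np.exp(-2.0 * np.pi**2 * (KX**2 + KY**2) * sigma**2)
    return gamma * np.fft.ifft2(kernel * np.fft.fft2(g))
```

For a constant momentum field only the zero-frequency component survives the filter (the kernel is 1 there), so the result is simply scaled by $\gamma$.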
$\tilde{O}^* = S\!\left[\operatorname*{argmin}_{\tilde{O}} \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - S\sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2 + X_{\Omega}(\tilde{O})\right],$
$\tilde{O}^* = S\!\left[\operatorname*{argmin}_{\tilde{O}} \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - S\sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2 + X_{\Omega}(\tilde{O}) + \beta\lVert\operatorname{Im}(\tilde{O})\rVert_2\right].$
$\tau = \gamma\,\mathcal{F}^{-1}\exp\!\left(-2\pi^2(k_x^2 + k_y^2)\sigma^2\right)\mathcal{F}$
$g(\tilde{O}_i) = \tau\!\left(g(\tilde{O}_{i-1})\right) + \eta\,\nabla f\!\left(\tilde{O}_i - \tau(g(\tilde{O}_{i-1}))\right).$
$f(\tilde{O}) = \frac{1}{2}\left\lVert |\mathcal{D}_{\textrm{Fr}}(\tilde{O})| - S\sqrt{\mathcal{I}_{\textrm{det}}}\right\rVert_2^2 + \beta\lVert\operatorname{Im}(\tilde{O})\rVert_2$
$\nabla f(\tilde{O}) = -i\,\overline{\exp(i\tilde{O})}\,\mathcal{D}_{\textrm{Fr}}^{-1}\!\left(\mathcal{D}_{\textrm{Fr}}(\tilde{O}) - S\sqrt{\mathcal{I}_{\textrm{det}}}\,\operatorname{sgn}(\mathcal{D}_{\textrm{Fr}}(\tilde{O}))\right) + R(\tilde{O})$
$\operatorname{sgn}(x) = \begin{cases} 0 & \textrm{if } x = 0 \\ \frac{x}{|x|} & \textrm{else} \end{cases}$
$R(\tilde{O}) = \begin{cases} 0 & \textrm{if } \operatorname{Im}(\tilde{O}) = 0 \\ i\beta\,\frac{\operatorname{Im}(\tilde{O})}{\lVert\operatorname{Im}(\tilde{O})\rVert_2} & \textrm{else} \end{cases}$