3D simulation of the image formation in soft x-ray microscopes

Open Access

Abstract

In water-window soft x-ray microscopy the studied object is typically larger than the depth of focus and the sample illumination is often partially coherent. This blurs out-of-focus features and may introduce considerable fringing. Understanding the influence of these phenomena on the image formation is therefore important when interpreting experimental data. Here we present a wave-propagation model operating in 3D for simulating the image formation of thick objects in partially coherent soft x-ray microscopes. The model is compared with present simulation methods as well as with experiments. The results show that our model predicts the image formation of transmission soft x-ray microscopes more accurately than previous models.

© 2014 Optical Society of America

1. Introduction

Water-window soft x-ray microscopy allows imaging of intact cells in their near-native “hydrated” state with nanoscale resolution [1]. However, the formation of the 2D images as well as their use in 3D tomographic reconstructions is not straightforward, primarily for two reasons: the depth of focus in these lens-based microscopes is typically smaller than the sample, and the sample illumination is often partially coherent. Accurate description of the image formation process shows promise to alleviate this problem, but previous computationally feasible models for thick objects are limited to 2D computations or incoherent imaging. Here we present an improved model for partially coherent microscopes based on 3D wave propagation and show that this model provides a superior description of the image formation.

Soft x-ray microscopes (XRMs) operating in the water window exploit the natural contrast between carbon and oxygen in the E = 284-543 eV energy region (λ = 4.37-2.29 nm) and rely on zone-plate optics for the high-resolution imaging. XRMs are operated at several synchrotron sources [2, 3] and laboratory-source-based microscopes are emerging [4]. The primary advantage of XRMs is their unique ability to image the intracellular structure of whole intact cells, in 2D as well as in 3D via tomographic reconstruction. Over the last few years, synchrotron-based microscopy has demonstrated significant progress and is now delivering results of high biological relevance [5, 6]. The challenge in XRM imaging is that the depth of focus (DOF), which depends on the wavelength (λ) and the numerical aperture of the zone-plate objective (NAzp) as λ/NAzp² [7], typically does not extend through the whole specimen, so that only part of the sample volume is imaged sharply. Furthermore, the illuminating condenser systems often have a smaller NA (NAill) than the zone plate, resulting in a coherence parameter [8] m = NAill/NAzp smaller than 1, i.e., partially coherent illumination. Such partially coherent illumination is often advantageous since it provides higher contrast for higher-spatial-frequency details [6]. However, the combination of partially coherent illumination with part of the sample volume being outside the DOF can introduce non-trivial defocus effects in the image, which, in turn, may cause non-obvious artifacts in the 3D reconstruction if not taken into proper consideration. The magnitude of these effects is sample dependent: images of small and/or weakly scattering samples are less influenced, while large and/or strongly scattering samples may produce significant imaging artifacts. With improved optics having higher numerical apertures the problem worsens, since the DOF is inversely proportional to the square of NAzp. Typically, state-of-the-art nanofabrication can produce zone plates that allow imaging down to 10 nm for thin objects, but their DOF is short [9].
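
For concreteness, a short worked example using the experimental parameters of Sect. 4.2 (λ = 2.4 nm, 40 nm outermost zone width, NAill = 0.02) and the standard zone-plate relation NAzp ≈ λ/(2Δr), where Δr is the outermost zone width:

$$ NA_{zp}\approx\frac{\lambda}{2\Delta r}=\frac{2.4\ \mathrm{nm}}{2\times 40\ \mathrm{nm}}=0.03,\qquad \mathrm{DOF}\approx\frac{\lambda}{NA_{zp}^{2}}=\frac{2.4\ \mathrm{nm}}{0.03^{2}}\approx 2.7\ \mu\mathrm{m},\qquad m=\frac{NA_{ill}}{NA_{zp}}=\frac{0.02}{0.03}\approx 0.67, $$

i.e., a DOF several times smaller than a typical 10-µm-thick specimen, and clearly partially coherent illumination.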

The complicated image formation process calls for a model that properly describes the resulting image. The purpose of the model is to make better use of resolution and contrast in 2D and 3D imaging through an improved understanding of the effects of defocus and the illumination settings. Transmission soft x-ray microscopes have been modelled previously, both for incoherent and for partially coherent illumination. For partially coherent illumination, Schneider modelled the image formation of thin objects, providing numerical solutions for 1D objects [10]. Von Hofsten et al. extended this method to 2D numerical calculations [11]. For 3D samples, Knöchel applied the methodology of Streibl [12] to x-ray microscopy [13]. However, this model is less applicable to thick 3D objects since it assumes weakly scattering objects (first Born approximation) [14], and the numerical implementation for typical XRM objects would be computationally very demanding. Bertilson et al. extended the partially coherent illumination model of [11] to thick objects by propagating the electric field through the object, but with a 2D treatment of the wave-propagation process [15]. Oton et al. studied the image formation process in the incoherent case, treating a thick 3D object with a defocus-dependent point spread function [16].

In the present paper we describe a 3D wave-propagation-based model for simulating the image formation process in partially coherent XRMs with thick 3D objects. The model takes into account the relevant parameters of the illumination, the object and the objective. We compare our 3D model with the two computationally feasible models for thick objects [15, 16] as well as with experiments, and conclude that it provides an improved description of the image formation in partially coherent x-ray microscopes. Such improved modelling can assist in optimizing experimental parameters in soft x-ray microscopes, contribute to improved qualitative interpretation of image data, and, potentially, allow retrieval of quantitative data for, e.g., tomographic reconstruction.

2. Theoretical background

Figure 1 illustrates the principal experimental arrangement for present operational soft x-ray microscopes. Here the object is illuminated by condenser optics, which images the x-ray source onto the object plane. The x-rays are absorbed and scattered by the object and collected by zone-plate optics which forms the intensity image in the detector plane. It is the task of any image formation model to predict this signal given the object and the microscope parameters.


Fig. 1 Schematic representation of the simulation model. The illumination is decomposed into plane waves that are individually defined by their wave vector k. In the wave-propagation step the electric field Ek is propagated forward in the z-direction for each incident plane wave Ek(za). In the imaging step the output Ek(zb) from the wave-propagation step is imaged coherently using Fourier-optics methods to the detector plane. The final partially coherent image is formed by taking the sum over all coherent images weighted with the illumination intensity.


2.1 Projection-based models

In the simplest model for thick objects it is assumed that the image intensity at the detector ID(Mx,My) is formed by rays travelling parallel to the optical axis. Each ray carries an intensity I(x,y,za) which is attenuated by the local absorption µ(x,y,z) of the object following the Beer-Lambert law,

$$ I_D(Mx,My)=\frac{1}{M^2}\,I(x,y,z_a)\times\exp\!\left[-\int_{z_a}^{z_b}\mu(x,y,z)\,dz\right], \tag{1} $$
where M is the magnification, and za and zb define a range outside of which µ(x,y,z) equals 0. The projection-based model is simple and applicable where the wave nature of the light can be ignored and the whole object is inside the depth of focus. These are valid assumptions when the blur is negligible, e.g., in classical lens-less projection imaging and in low-resolution microscopy, but they are generally not applicable to soft x-ray microscopy, where the optics introduce a resolution limit as well as a limited DOF. Furthermore, the projection-based model does not include the effects of coherence in the illumination, which have been shown to have significant influence on the image formation [15].
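
As an illustration, a minimal C++ sketch of this projection model, assuming (hypothetically) that the object is given as a voxel grid mu[z][y][x] of local absorption coefficients with voxel size dz along the optical axis; the names and layout are chosen here for illustration only:

#include <cmath>
#include <cstddef>
#include <vector>

// Projection (Beer-Lambert) image, Eq. (1): one ray per (x,y) column, parallel to z.
// mu[z][y][x] is the local absorption coefficient [1/m], dz the voxel size in z [m],
// I0 the incident intensity per ray, and M the magnification.
std::vector<std::vector<double>>
project_image(const std::vector<std::vector<std::vector<double>>>& mu,
              double dz, double I0, double M)
{
    const std::size_t nz = mu.size(), ny = mu[0].size(), nx = mu[0][0].size();
    std::vector<std::vector<double>> I_D(ny, std::vector<double>(nx, 0.0));
    for (std::size_t y = 0; y < ny; ++y) {
        for (std::size_t x = 0; x < nx; ++x) {
            double line_integral = 0.0;
            for (std::size_t z = 0; z < nz; ++z)      // line integral of mu along the ray
                line_integral += mu[z][y][x] * dz;
            I_D[y][x] = (I0 / (M * M)) * std::exp(-line_integral);
        }
    }
    return I_D;
}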

2.2 Point-spread-function-enhanced projection models

In point-spread-function-enhanced projection methods the modelling of the imaging system is improved by including a defocus-dependent point spread function (PSF). The method was first introduced for soft x-ray microscopy by Oton et al. [16]. In their interpretation, hereafter referred to as the PSF-enhanced method, the intensity ID(Mx,My) at the detector plane is given by (in our notation)

$$ I_D(Mx,My)=I_{zi}(Mx,My)-\int_{z_a}^{z_b}\Big(\mu(x,y,z)\,I_{zi}(Mx,My)\,e^{-\int_{z_a}^{z}\mu(x,y,\xi)\,d\xi}\Big)\ast_{x,y} h\big(x,y,D(z)\big)\,dz, \tag{2} $$
where Izi(Mx,My) is the intensity image at the detector without an object, za and zb are z-positions before and after the object, µ is the object’s local absorption, h is the PSF, D(z) is the amount of defocus at z, and ∗x,y denotes 2D convolution in the transverse plane. The PSF-enhanced method has the advantage of being analytic, albeit only for special cases, such as when the PSF is constant in the z-direction or the object is thin in comparison with the DOF. In the present paper we compare our 3D wave-propagation model (Sect. 3) with a numerical version of the PSF-enhanced method with a z-dependent PSF.
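
One straightforward discretization of Eq. (2), not necessarily identical to the implementation used for the comparisons below, replaces the integral by a sum over object slices of thickness Δz, with one 2D convolution per slice against the PSF at that defocus:

$$ I_D(Mx,My)\approx I_{zi}(Mx,My)-\sum_{n}\Big[\mu(x,y,z_n)\,I_{zi}(Mx,My)\,e^{-\sum_{m\le n}\mu(x,y,z_m)\,\Delta z}\Big]\ast_{x,y} h\big(x,y,D(z_n)\big)\,\Delta z. $$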

3. 3D wave-propagation model

Our image formation model consists of the three steps illustrated in Fig. 1: decomposition of the illumination into plane waves, propagation of each plane wave through the object, and imaging by the objective. As is common practice, we assume that scalar theory is applicable [11, 15–18]. Also, since the bandwidth of XRMs is typically small, the first two calculation steps are performed monochromatically with wavelength λ0, since the effect of a typical experimental bandwidth would only be noticeable in the imaging step. The model is an extension of the numerical model of Bertilson et al. [15], hereafter referred to as the 2D wave-propagation model. The major difference is that in our model all calculation tasks are performed in 3D, while the model of [15] is limited to 2D in the computationally intense wave-propagation step.

3.1 Illumination

XRMs use different condenser schemes (zone plates, capillary optics, normal-incidence mirrors, etc.) to collect the light and illuminate the object. Following [10] and [11], our model assumes that the condenser illumination consists of secondary source points. Each source point independently produces an electric field Ek, resulting in a plane wave with wave vector k, which is directed from the source point to the object and has the magnitude k0 = 2π/λ0. This method is applicable to any of the present condenser schemes. Assuming quasi-monochromaticity, it is convenient to separate the stationary envelope uk from the fast harmonic space- and time-dependent oscillation of Ek as

$$ E_k(\mathbf{r},t)=E_{0k}\,\exp\!\big(j(k_0 z-\omega t)\big)\,u_k(\mathbf{r}), \tag{3} $$
where j is the imaginary unit and ω = 2πc/λ0 is the angular frequency. At the input plane z = za of the wave-propagation step, the stationary envelope is uk(x,y,z = za) = exp(j(kxx + kyy)) for the plane wave with wave vector k = (kx,ky,kz).
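
For illustration, a minimal C++ sketch of this initialization for a single illumination point, assuming a hypothetical N × N transverse grid with pixel size dx and a propagation direction specified by tilt angles theta_x and theta_y (so that kx = k0 sin θx and ky = k0 sin θy):

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Stationary envelope u_k(x,y,z=za) = exp(j(kx*x + ky*y)) of one incident plane wave.
std::vector<std::complex<double>>
plane_wave_envelope(int N, double dx, double lambda0, double theta_x, double theta_y)
{
    const double pi = std::acos(-1.0);
    const double k0 = 2.0 * pi / lambda0;
    const double kx = k0 * std::sin(theta_x);
    const double ky = k0 * std::sin(theta_y);
    std::vector<std::complex<double>> u(static_cast<std::size_t>(N) * N);
    for (int iy = 0; iy < N; ++iy) {
        for (int ix = 0; ix < N; ++ix) {
            const double x = (ix - N / 2) * dx;     // grid centered on the optical axis
            const double y = (iy - N / 2) * dx;
            u[static_cast<std::size_t>(iy) * N + ix] =
                std::exp(std::complex<double>(0.0, kx * x + ky * y));
        }
    }
    return u;
}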

3.2 Propagation through the sample

The modelling of the propagation through the sample is based on the parabolic wave equation (PWE) [17]. In addition to scalar theory and quasi-monochromaticity, we assume that the medium has no free charges and is isotropic and non-dispersive. These are the same assumptions and approximations as are made in Fourier-optics theory [18] and also by previous models of XRM image formation [11, 15–17]. For each plane wave incident on the sample volume the PWE is solved in the form

$$ \frac{\partial u}{\partial z}=\frac{j}{2k_0}\left(\nabla_{x,y}^{2}+k_0^{2}\,\chi\right)u, \tag{4} $$
where χ is the space-dependent complex electric susceptibility of the object. The plane-wave illumination defines the boundary condition at z = za. At the x and y boundaries of the calculation volume, we employ the transparent boundary condition (TBC) [19], which allows energy to flow out but not to be reflected.

Equation (4) is solved numerically to obtain the electric field at the z = zb plane. The solution is calculated using the finite difference method (FDM) with a 4th-order Runge–Kutta (RK4) scheme [20]. In each iteration the solution un to the PWE is advanced by a step size ∆z in the z-direction as

$$ u_{n+1}=u_n+\frac{\Delta z}{6}\left(k_1+2k_2+2k_3+k_4\right)+\mathrm{TBC}_n, \tag{5} $$
where k1 - k4 are intermediate evaluations of ∂u/∂z, n is the iteration step, and TBCn is a term that manipulates the border elements in order to apply the transparent boundary condition. Defining f(z,u) = ∂u/∂z, the intermediate slopes k1 - k4 are calculated as

$$ \begin{aligned} k_1&=f(z_n,\,u_n), & k_2&=f\!\left(z_n+\tfrac{\Delta z}{2},\,u_n+\tfrac{\Delta z}{2}k_1\right),\\ k_3&=f\!\left(z_n+\tfrac{\Delta z}{2},\,u_n+\tfrac{\Delta z}{2}k_2\right), & k_4&=f\!\left(z_n+\Delta z,\,u_n+\Delta z\,k_3\right). \end{aligned} \tag{6} $$

The explicit RK4 method is chosen in favor of an implicit method such as the Crank–Nicolson method used in the 2D wave-propagation model [15]. Implicit methods normally have the advantage of higher numerical stability and accuracy, but require that a linear equation system be solved in each propagation step, resulting in long computational times. A general linear equation system may be written Ay = x, where A is an N × N band matrix, y and x are vectors of length N, and N is the number of elements in un. Solving this system requires O(N·B²) operations [21], where B is the bandwidth of A (the width of the tightest diagonal band that contains all non-zero elements). For the 2D case, where un is an M-element vector, A is a tridiagonal matrix (bandwidth B = 3) with N = M. The number of operations required to solve that system is thus O(N·3²) = O(N), which has proven to be a manageable complexity for the problem sizes in many calculation tasks [15, 22]. For the 3D case, where un is an M × M matrix, A grows to N = M². In addition, A acquires two extra non-zero diagonals at offsets ±M from the main diagonal, representing the coupling that the PWE now imposes along the additional dimension. The bandwidth B then equals 2M + 1, which yields a complexity of O(N·(2M + 1)²) = O(N·M²) = O(N²) for the number of operations required to solve the equation system. Both the increase in the size of the data set (from N = M to N = M²) and the higher order of complexity (from O(N) to O(N²)) result in a calculation time that makes it infeasible to solve the PWE with an implicit method for the problem sizes considered in this work.
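
As an order-of-magnitude illustration (ignoring constant prefactors) for the problem size used in Sect. 3.4, a 10 µm wide volume sampled at 10 nm gives M = 1000 transverse points per dimension, and the per-step operation counts become roughly

$$ \text{2D implicit: } \mathcal{O}(N\cdot 3^{2})\sim 10^{4}\ (N=M=10^{3}),\qquad \text{3D implicit: } \mathcal{O}\big(N(2M{+}1)^{2}\big)\sim 4\cdot 10^{12}\ (N=M^{2}=10^{6}),\qquad \text{3D explicit RK4: } \mathcal{O}(N)\sim 10^{6}. $$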

The RK4 method performs a fixed number of operations for each element in un at each propagation step, yielding a complexity of O(N) operations per propagation step. With this, the PWE is solvable in a reasonable amount of computational time despite the increase of the data set size from M to M². Finally, we note that we do not observe any unstable behavior for this method and that the step size can be determined by the desired sampling in the z-direction rather than by stability issues.
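
A minimal C++ sketch of one such propagation step, assuming (hypothetically) that the field u and the susceptibility χ of the current slice are stored as row-major N × N arrays of std::complex<double> with pixel size dx. Hadley's transparent boundary condition [19] is omitted for brevity; the right-hand side is simply set to zero at the grid edges, so the edge values stay fixed:

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

using cplx = std::complex<double>;
using Field = std::vector<cplx>;   // N*N transverse field, row-major

// Right-hand side of the parabolic wave equation, Eq. (4):
// f(u) = (j / 2k0) * (laplacian_xy(u) + k0^2 * chi * u), with a 5-point Laplacian.
static Field pwe_rhs(const Field& u, const Field& chi, int N, double dx, double k0)
{
    Field f(u.size(), cplx(0.0, 0.0));                 // edges stay zero (placeholder for the TBC)
    const cplx j_over_2k0(0.0, 1.0 / (2.0 * k0));
    for (int iy = 1; iy < N - 1; ++iy) {
        for (int ix = 1; ix < N - 1; ++ix) {
            const std::size_t c = static_cast<std::size_t>(iy) * N + ix;
            const cplx lap = (u[c - 1] + u[c + 1] + u[c - N] + u[c + N] - 4.0 * u[c]) / (dx * dx);
            f[c] = j_over_2k0 * (lap + k0 * k0 * chi[c] * u[c]);
        }
    }
    return f;
}

// One explicit RK4 step of size dz, Eqs. (5)-(6); chi is evaluated at this slice.
Field rk4_step(const Field& u, const Field& chi, int N, double dx, double dz, double k0)
{
    const Field k1 = pwe_rhs(u, chi, N, dx, k0);
    Field tmp(u.size());
    for (std::size_t i = 0; i < u.size(); ++i) tmp[i] = u[i] + 0.5 * dz * k1[i];
    const Field k2 = pwe_rhs(tmp, chi, N, dx, k0);
    for (std::size_t i = 0; i < u.size(); ++i) tmp[i] = u[i] + 0.5 * dz * k2[i];
    const Field k3 = pwe_rhs(tmp, chi, N, dx, k0);
    for (std::size_t i = 0; i < u.size(); ++i) tmp[i] = u[i] + dz * k3[i];
    const Field k4 = pwe_rhs(tmp, chi, N, dx, k0);

    Field next(u.size());
    for (std::size_t i = 0; i < u.size(); ++i)
        next[i] = u[i] + (dz / 6.0) * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
    return next;
}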

3.3 Imaging by objective

The electric field from each illumination direction that exits the object at z = zb is independently imaged coherently by the objective, as described in [18]. In short, the electric field ED,k that would be present at the detector if the only contribution to the illumination were the illumination point with wave vector k is given by the electric field after the wave propagation (Ek(zb) = E0k uk(zb)) filtered by an aperture function A and magnified by a factor M as

$$ E_{D,k}(Mx,My)=E_{0k}\,K_k(Mx,My)=E_{0k}\,\frac{1}{M}\,\mathcal{F}^{-1}\!\left\{\mathcal{F}\!\left\{u_k(z_b)\right\}\times A\right\}, \tag{7} $$
where Kk is the transmission function defined by the combined action of the object and the objective on the plane wave originating from the condenser with unit strength and wave vector k, and ℱ is the Fourier transform. The aperture function takes into account the zone plate’s numerical aperture (NAzp) and the defocus (∆f) relative to zb via
$$ A(\nu_x,\nu_y)=\begin{cases} \exp\!\left(\dfrac{j\pi\lambda\left(\nu_x^{2}+\nu_y^{2}\right)}{f^{-1}+\Delta f^{-1}}\right), & \sqrt{\nu_x^{2}+\nu_y^{2}}\le NA_{zp}/\lambda,\\ 0, & \sqrt{\nu_x^{2}+\nu_y^{2}}> NA_{zp}/\lambda, \end{cases} \tag{8} $$
where νx and νy are the spatial frequencies, and f is the focal length of the zone plate. The final intensity image ID at the detector is the incoherent sum of the intensity contributions from each illumination point,
$$ I_D=\sum_k I_{0k}\left|K_k\right|^{2}, \tag{9} $$
where I0k = |E0k|² defines the intensity distribution of the plane waves incident on the object. With Eq. (9) the image formation can be calculated for any intensity distribution of the condenser illumination, provided the condenser is incoherently illuminated. In particular, an arbitrary NAill is set by defining a circularly symmetric illumination shape where the largest angle of incidence is arcsin(NAill). Thus, the degree of partial coherence characterized by the coherence parameter m can be set arbitrarily. For a more general treatment of partial coherence that also includes the degree of coherence at the condenser we refer to the appendix. However, for transmission x-ray microscopes this effect is small and is therefore not included in the present work. Further possible refinements to our model include relaxing the monochromaticity condition to a narrow-bandwidth illumination in a manner similar to [15], and adapting the aperture function to take into account arbitrary aberrations of the optics in addition to the defocus considered here.
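
A minimal C++ sketch of this imaging step using FFTW, assuming the exit field uk(zb) is available on an N × N grid with pixel size dx. The hard aperture cutoff at NAzp/λ follows Eq. (8), but for brevity the defocus phase is written in the common Fourier-optics form exp(jπλΔf(νx² + νy²)) rather than the exact expression in Eq. (8), and the trivial coordinate magnification by M is left out:

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>
#include <fftw3.h>   // link with -lfftw3

using cplx = std::complex<double>;

// Coherent imaging of one exit field u_k(zb), cf. Eq. (7), followed by the incoherent
// accumulation of Eq. (9). u is an N x N row-major grid (pixel size dx) and is
// overwritten; I_D is the running detector image; I0k is the intensity weight of
// this illumination point. The defocus phase is an assumed stand-in for Eq. (8).
void accumulate_coherent_image(std::vector<cplx>& u, std::vector<double>& I_D,
                               double I0k, int N, double dx,
                               double lambda, double NA_zp, double defocus)
{
    const double pi = std::acos(-1.0);
    fftw_complex* buf = reinterpret_cast<fftw_complex*>(u.data());
    // FFTW_ESTIMATE does not overwrite the input arrays during planning.
    fftw_plan fwd = fftw_plan_dft_2d(N, N, buf, buf, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_2d(N, N, buf, buf, FFTW_BACKWARD, FFTW_ESTIMATE);

    fftw_execute(fwd);                                   // to the pupil (frequency) plane

    const double nu_cut = NA_zp / lambda;                // aperture cutoff frequency
    for (int iy = 0; iy < N; ++iy) {
        const double ny = ((iy < N / 2) ? iy : iy - N) / (N * dx);
        for (int ix = 0; ix < N; ++ix) {
            const double nx = ((ix < N / 2) ? ix : ix - N) / (N * dx);
            const double nu2 = nx * nx + ny * ny;
            const std::size_t c = static_cast<std::size_t>(iy) * N + ix;
            if (nu2 > nu_cut * nu_cut) {
                u[c] = 0.0;                              // blocked by the zone-plate aperture
            } else {
                const double phase = pi * lambda * defocus * nu2;  // assumed defocus term
                u[c] *= std::exp(cplx(0.0, phase));
            }
        }
    }

    fftw_execute(bwd);                                   // back to the image plane
    const double norm = 1.0 / (static_cast<double>(N) * N);  // FFTW is unnormalized
    for (std::size_t c = 0; c < u.size(); ++c) {
        const cplx K = u[c] * norm;                      // transmission function K_k
        I_D[c] += I0k * std::norm(K);                    // Eq. (9): incoherent sum
    }

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
}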

3.4 Computational issues

To speed up the calculations and gain better control of the memory usage, the algorithm was implemented in C/C++ instead of in an interpreted language. The time requirement on a standard desktop computer when simulating the image formation of a 10 µm × 10 µm × 10 µm object with 10 nm sampling is typically 1 minute per illumination point. We have so far used ~200 illumination points, giving a total time of about 3 h for a full simulation. Note that in a typical simulation task the majority of the time is spent on the wave-propagation step. This step, however, normally only needs to be performed once for any given object. After the first computation the image formation for that object can be quickly reevaluated with different microscope parameters.

All parts of the simulation are accessed through a graphical user interface which provides an overview of all available XRM parameters. This simplifies the setup of the simulation task. The graphical user interface also aids in the design of realistic phantoms with a level of detail and complexity that would otherwise not be manageable.

Finally, we note that the code can be run as a batch job, allowing it to be processed in parallel on a computer cluster. This parallelization is especially suited for generating a full tomographic data set in the same time it would take to generate the data for a single sample orientation.

4. Results and discussion

Here we first compare our 3D wave-propagation model with previous models in order to ensure consistency as well as to identify where the results of the 3D model differ from those of previous models. Second, we compare the models with experiments. It is concluded that our 3D model provides a significantly improved description of the image formation.

4.1 Comparison with previous models

For a quantitative comparison with previous models we simulated the XRM image formation with the same basic phantom as object. We used the 2D phantom that was employed in [15] (Fig. 2(a)) and extended it to “extended 2D” (Fig. 2(b)) and “full 3D” (Fig. 2(c)) phantoms. The phantoms are made of mylar discs (Fig. 2(a)), cylinders (Fig. 2(b)) and spheres (Fig. 2(c)) with diameters ranging from 66 nm to 280 nm and concentrations ranging from 100% to 78%. The mylar features were placed in a water-containing, 5-µm-diameter, 170-nm-thick cylindrical shell with 6.3% mylar concentration, and it was furthermore assumed that the 5-µm-diameter phantoms were embedded in a 10-µm-thick water/amorphous-ice layer.


Fig. 2 Comparison of computational models. (a) shows the original 2D phantom of [15] with mylar discs, (b) shows the extended 2D phantom with cylinders, and (c) shows the full 3D phantom with mylar spheres. (d) depicts the result of a defocus series when the algorithm of [15] is applied to the original 2D phantom. (e) shows the defocus-dependent intensity at x = 0 when the present 3D model is applied to the extended 2D phantom. In (f) the same calculation is performed for the full 3D object, indicating significant differences compared to the 2D cases. For comparison, the method of [16] is applied to the full 3D phantom and plotted at x = 0 in (g). Finally, in (h) and (i) the four methods are compared quantitatively. The magenta lines in (h) show how the intensity varies in the center of a mylar feature as a function of defocus, while the blue, green and red lines in (i) are y-profiles at different defocus positions. “Normalized intensity” here refers to the ratio between the image intensity with and without the object in the calculation volume.


The simulations were performed with the same microscope parameters as in the aperture-matched (m = 1) case in [15]: λ = 2.48 nm, zone-plate outermost zone width 30 nm, and condenser NA 0.04; parameters corresponding to the Stockholm laboratory microscope [23]. For each simulation a focus series from −9 µm to +9 µm was produced. Figure 2(d) shows the result of the 2D wave-propagation model [15], and Figs. 2(e) and 2(f) show the same focus series for our model in the x = 0 plane for the extended 2D cylinder phantom and the full 3D sphere phantom, respectively. Finally, the PSF-enhanced method [16] was applied to the full 3D phantom. Here we follow [16] by using the defocus-dependent PSF of an ideal lens with the same NA as the zone-plate optics. Figure 2(g) shows the resulting focus series in the x = 0 plane.

Already a visual inspection of the focus series in Figs. 2(d)-2(g) provides valuable qualitative information. The similarity of Figs. 2(d) and 2(e) indicates that the difference in numerical method between [15] and our model does not influence the result. Comparing Fig. 2(f) with 2(e), however, shows significant differences, indicating that the 2D or extended-2D calculations do not provide a sufficient description of the image formation for full 3D objects.

For a more quantitative comparison the intensity was plotted along different lines (Figs. 2(h) and 2(i)). Figure 2(h) depicts the quantitative intensity as a function of defocus along the magenta-colored lines, which pass through one of the mylar features. Following the qualitative observations above, we also note excellent quantitative agreement between the 2D wave-propagation method [15] and our method on the extended 2D phantom. As expected, the quantitative differences for the full 3D phantom are larger, yielding a significantly shorter DOF for the mylar feature of interest. The PSF-enhanced method [16] exhibits a larger modulation in the intensity than the other methods. Figure 2(i) shows quantitative intensity profiles in the y-direction at the three defocus positions given by the blue, green and red lines. Also here our method, when used on the extended 2D object, shows good agreement with [15], while the perturbations decay significantly faster with out-of-focus position when the full 3D object is used. The PSF-enhanced method again results in a higher modulation in the intensity plots.

We believe that the differences between the PSF-enhanced method and our 3D method are due to contradictory assumptions in [16]. The general theoretical framework of the PSF-enhanced method is based upon the image formation process being incoherent, but Eq. (5) in [16] implicitly makes the assumption that the electric field only propagates parallel to the optical axis. Consequently, in this step the image formation process is treated as fully coherent, since the illumination cone would be infinitesimal (NAill = 0).

4.2 Comparison with experiment

Finally, we compare the computational methods with experiments. The experiments were performed at the TXM-U41 beamline at HZB Berlin [2]. As test objects we used the silica-rich frustules of diatoms, since they have the size of a typical mammalian cell as well as a rich 3D structure with high spatial frequencies. The frustules were prepared from a sample of mixed freshwater diatoms and placed on ordinary TEM grids. The XRM imaging was performed at 510 eV (λ = 2.4 nm) with a zone plate with 40 nm outermost zone width, yielding a numerical aperture NAzp = 0.03 for the zone plate. The DOF was thus 2.7 µm, while the size of the object was 14 µm.

The numerical aperture of the illumination is NAill = 0.02, resulting in a clearly partially coherent imaging case since the coherence parameter m = NAill/NAzp = 0.67 is well below the limit m = 1 where the imaging can be considered incoherent. To allow for a proper comparison with simulations, the actual intensity distribution in the hollow-cone illumination was measured. Figure 3 shows the measured brightness from the HZB-TXM capillary condenser.


Fig. 3 Measured brightness after the capillary condenser and the central stop at the XRM at HZB. The areas outside of the dashed line were outside the field of view during the measurement and their contents have been extrapolated. θx and θy are the angles the illumination makes with the optical axis in the x- and y-direction.


Figure 4(a) shows images of two overlapping frustules with different defocus (−5, 0, and +5 µm). The effect of a limited DOF in combination with the partially coherent illumination is obvious, especially at the pores which exhibit significant fringing.


Fig. 4 (a) Experimentally acquired images of two overlapping diatom frustules at defocus positions −5, 0 and +5 µm. The red rectangles indicate the pore which is studied in greater detail in Fig. 5. (b) Renderings of a phantom designed to have similar coarse and fine structure and material composition as part of the frustule-pair used in the experiments. (c) Images from simulations of the image formation using our 3D wave-propagation model.


The second step was to develop a phantom similar to the frustule-pair that could be used in the computational models. For this purpose we recorded a full tomographic data set at z = 0 defocus of the frustule-pair and performed a tomographic reconstruction using weighted back-projection assuming there was no DOF problem. This reconstructed volume was combined with a priori knowledge about diatom frustules and electron-microscopy images of detailed substructures in frustules of the same species to define a similar (in structure and material composition) 3D phantom. Figure 4(b) shows the result. The 3D phantom was then used as an object in our wave-propagation model, with microscope parameters identical to the values used in the experiment and an illumination equal to the measured brightness (Fig. 3). Figure 4(c) shows the resulting images from this simulation at defocus positions −5, 0, and +5 µm.

The qualitative agreement between the experimental images of Fig. 4(a) and the corresponding simulation of Fig. 4(c) is excellent. In the experimental data as well as in the simulated data different parts of the object move in and out of focus at different defocus positions. This is consistent with the 2.7 µm DOF. Furthermore we observe considerable fringing at the parts of the object that are out-of-focus. This is especially visible at the high-spatial-frequency structures like the pores.

In Fig. 5 we make a detailed comparison between the experimentally observed intensity pattern from a 120 nm pore and the results from the different computational models that allow 3D modelling. Figure 5(a) shows the detailed experimental focus series of the 120-nm-diameter pore which is marked by rectangles in Fig. 4(a). Figure 5(b) shows the result of our present 3D wave-propagation model. It yields a reasonably similar intensity pattern even at this very detailed level. For comparison we include results from the PSF-enhanced method [16] (Fig. 5(c)) as well as the ideal projection method (Fig. 5(d)). It is clear that ideal projections (Fig. 5(d)) provide images very far from the experiment. The focus series from the PSF-enhanced method [16] in Fig. 5(c) improves the situation but still falls short of a proper description of the fringing.


Fig. 5 Focus series of a pore in the diatom frustule from the experimental data (a) and our 3D wave-propagation method’s simulation of the same pore (b). (c) shows the simulated focus series of the pore using the PSF-enhanced method and (d) shows it using ideal projections. Note that the defocus is relative to the diatom center and not the pore, which is why the more focused images are found at the more negative defocus positions.


5. Conclusion

We have presented a 3D simulation model based on wave-propagation for the image formation in soft x-ray microscopes operating, e.g., in the water window. The key difficulties are the limited depth of focus and the partially coherent illumination of these microscopes. Our method is benchmarked against previous image-formation models by comparative simulations on similar phantoms. Furthermore, the method is compared with actual experimental images of diatom frustules measured at the HZB-TXM.

The 3D wave-propagation model shows excellent agreement with the previous 2D wave-propagation model when the object has 2D geometry, indicating that the computationally efficient Runge–Kutta algorithm implemented here performs as well as the implicit method used before. However, the discrepancy compared with a simulation performed on a 3D object demonstrates the necessity of a model operating in 3D. When comparing the results from the different simulation methods with the actual experimental XRM images, our method produced the most accurate predictions of the image formation. For clarity we have used strongly scattering objects in this demonstration.

The 3D wave-propagation method presented here provides an improved and more complete understanding of the image formation in transmission soft x-ray microscopes compared with previous models. This shows promise for several important applications. Parallel modelling and experiments will allow for reduced ambiguity in the interpretation of experimental images. In addition, by modelling an experiment before the actual experiment, data acquisition parameters and illumination properties can be chosen for optimal results. In the future, we plan to include the model in an iterative 3D tomographic reconstruction algorithm to improve the results of such 3D imaging.

Appendix

Here we extend the theoretical model leading to Eq. (9) to also include partially coherent illumination of the condenser. For this general treatment of partial coherence we follow the theory of [8]. Under the assumption that the time delay between interfering beams is small, the mutual intensity J(P1,P2) and the complex degree of coherence µ(P1,P2) are defined as

$$ J(P_1,P_2)=\big\langle E(P_1,t)\,E^{*}(P_2,t)\big\rangle, \tag{10} $$
$$ \mu(P_1,P_2)=\frac{J(P_1,P_2)}{\sqrt{I(P_1)}\sqrt{I(P_2)}}, \tag{11} $$
where P1 and P2 are arbitrary points in space, ⟨·⟩ denotes the time average, * denotes the complex conjugate, E(P,t) is the complex representation of the electric field at P at instant t, and I(P) is the average intensity at P.

The general expression for the propagation of the mutual intensity through an optical system from the surface 𝒜 to the surface ℬ is [8]

$$ J(Q_1,Q_2)=\int_{\mathcal{A}}\int_{\mathcal{A}} J(P_1,P_2)\,K(P_1,Q_1)\,K^{*}(P_2,Q_2)\,dP_1\,dP_2, \tag{12} $$
where Q1 and Q2 are points on ℬ, P1 and P2 are points on 𝒜, and K(P,Q) is the transmission function from P to Q (P∈𝒜, Q∈ℬ), defined as the complex disturbance at Q due to a single monochromatic point source of unit strength and zero phase at P.

Noting that I(Q) = J(Q,Q) and using the definition of the complex degree of coherence, the intensity at the surface ℬ is inferred from Eq. (12) as

$$ I(Q)=\int_{\mathcal{A}}\int_{\mathcal{A}} \mu(P_1,P_2)\left[\sqrt{I(P_1)}\,K(P_1,Q)\right]\left[\sqrt{I(P_2)}\,K(P_2,Q)\right]^{*}dP_1\,dP_2. \tag{13} $$

In our model 𝒜 is the surface of the condenser and ℬ is the surface of the detector. The illumination consists of a discrete set of points I(P) = ∑k I(Pk)δ(P − Pk) at the condenser, and the transmission function is given by K(Pk,Q) = ED,k(Q)/√I(Pk). Thus, the integral in Eq. (13) is transformed into the sum

$$ I(Q)=\sum_k\sum_h \mu(P_k,P_h)\,E_{D,k}(Q)\,E^{*}_{D,h}(Q). \tag{14} $$

Depending on the illumination system, the complex degree of coherence µ(Pk,Ph) may be experimentally measured or theoretically calculated from the source parameters. However, it should be noted that in microscopes where the coherently illuminated area on the condenser is significant, it is common practice to reduce the degree of coherence using techniques such as a moving diffuser or a wobbling condenser [24]. In such cases µ(Pk,Ph) should be taken as the effective complex degree of coherence.

A special case occurs when the coherence length is shorter than the sampling distance at the condenser, i.e., µ(Pk,Ph) = δkh. Equation (14) then reduces to the sum in Eq. (9) used in this work.
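
Explicitly, inserting µ(Pk,Ph) = δkh into Eq. (14) and using ED,k = E0k Kk from Eq. (7) with I0k = |E0k|² gives

$$ I(Q)=\sum_k\sum_h \delta_{kh}\,E_{D,k}(Q)\,E^{*}_{D,h}(Q)=\sum_k\big|E_{D,k}(Q)\big|^{2}=\sum_k I_{0k}\,\big|K_k(Q)\big|^{2}, $$

which is exactly Eq. (9).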

Acknowledgments

We thank S. Werner, K. Henzler, S. Rehbein, and G. Schneider of HZB for discussions and/or beam-line support. We thank HZB for the allocation of synchrotron radiation beam time. This work was supported by the Swedish Research Council Röntgen Ångström program.

References and links

1. A. Sakdinawat and D. Attwood, “Nanoscale X-ray imaging,” Nat. Photonics 4(12), 840–848 (2010).

2. HZB Transmission X-ray Microscope, http://www.helmholtz-berlin.de/

3. National Center for X-ray Tomography, http://ncxt.lbl.gov/

4. H. M. Hertz, O. von Hofsten, M. Bertilson, U. Vogt, A. Holmberg, J. Reinspach, D. Martz, M. Selin, A. E. Christakou, J. Jerlström-Hultqvist, and S. Svärd, “Laboratory cryo soft X-ray microscopy,” J. Struct. Biol. 177(2), 267–272 (2012).

5. M. Uchida, G. McDermott, M. Wetzler, M. A. Le Gros, M. Myllys, C. Knoechel, A. E. Barron, and C. A. Larabell, “Soft X-ray tomography of phenotypic switching and the cellular response to antifungal peptoids in Candida albicans,” Proc. Natl. Acad. Sci. U.S.A. 106(46), 19375–19380 (2009).

6. G. Schneider, P. Guttmann, S. Heim, S. Rehbein, F. Mueller, K. Nagashima, J. B. Heymann, W. G. Müller, and J. G. McNally, “Three-dimensional cellular ultrastructure resolved by X-ray microscopy,” Nat. Methods 7(12), 985–987 (2010).

7. D. Attwood, Soft X-Rays and Extreme Ultraviolet Radiation: Principles and Applications (Cambridge University, 2007), Chap. 9.

8. M. Born and E. Wolf, Principles of Optics, 6th ed. (Pergamon Press Ltd., 1980), Chap. 10.

9. S. Rehbein, P. Guttmann, S. Werner, and G. Schneider, “Characterization of the resolving power and contrast transfer function of a transmission X-ray microscope with partially coherent illumination,” Opt. Express 20(6), 5830–5839 (2012).

10. G. Schneider, “Cryo X-ray microscopy with high spatial resolution in amplitude and phase contrast,” Ultramicroscopy 75(2), 85–104 (1998).

11. O. von Hofsten, P. A. Takman, and U. Vogt, “Simulation of partially coherent image formation in a compact soft X-ray microscope,” Ultramicroscopy 107(8), 604–609 (2007).

12. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2(2), 121–127 (1985).

13. C. Knöchel, “Anwendung und Anpassung tomographischer Verfahren in der Röntgenmikroskopie,” Ph.D. Thesis (Georg-August-Universität, Göttingen, 2005).

14. S. Trattner, M. Feigin, H. Greenspan, and N. Sochen, “Validity criterion for the Born approximation convergence in microscopy imaging,” J. Opt. Soc. Am. A 26(5), 1147–1156 (2009).

15. M. Bertilson, O. von Hofsten, H. M. Hertz, and U. Vogt, “Numerical model for tomographic image formation in transmission x-ray microscopy,” Opt. Express 19(12), 11578–11583 (2011).

16. J. Oton, C. O. Sorzano, E. Pereiro, J. Cuenca-Alba, R. Navarro, J. M. Carazo, and R. Marabini, “Image formation in cellular X-ray microscopy,” J. Struct. Biol. 178(1), 29–37 (2012).

17. Y. V. Kopylov, A. V. Popov, and A. V. Vinogradov, “Application of the Parabolic Wave-Equation to X-Ray-Diffraction Optics,” Opt. Commun. 118(5-6), 619–636 (1995).

18. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).

19. G. R. Hadley, “Transparent boundary condition for beam propagation,” Opt. Lett. 16(9), 624–626 (1991).

20. J. C. Butcher, Numerical Methods for Ordinary Differential Equations, 2nd ed. (Wiley, 2008), Chap. 3.

21. G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. (Johns Hopkins University, 1996), Chap. 4.

22. A. N. Kurokhtin and A. V. Popov, “Simulation of high-resolution x-ray zone plates,” J. Opt. Soc. Am. A 19(2), 315–324 (2002).

23. P. A. C. Takman, H. Stollberg, G. A. Johansson, A. Holmberg, M. Lindblom, and H. M. Hertz, “High-resolution compact X-ray microscopy,” J. Microsc. 226(2), 175–181 (2007).

24. J. W. Goodman, Speckle Phenomena in Optics (Roberts & Company, 2007), Chap. 5.
