## Abstract

Ptychographic techniques are currently the subject of increasing scientific interest due to their capability to retrieve the complex transmission function of an object at very high resolution. However, they impose a substantial burden in terms of acquisition time and dimension of the scanned area, which limits the range of samples that can be studied. We have developed a new method that combines the ptychographic approach in one direction with Fresnel propagation in the other by employing a strongly asymmetric probe. This enables scanning the sample in one direction only, substantially reducing exposure times while covering a large field of view. This approach sacrifices ptychography–related resolution in one direction, but removes any limitation on the probe dimension in the direction orthogonal to the scanning, enabling the scan of relatively large objects without compromising exposure times.

© 2014 Optical Society of America

## 1. Introduction

In x–ray imaging, the term phase retrieval indicates all those techniques aimed at reconstructing the phase of a complex electromagnetic wave impinging on a detector from the measured intensity [1]. Once the phase has been retrieved, it is possible to back propagate the wave to the sample plane to obtain the complex transmission function of the sample, which describes the absorption and the phase shift it introduced on the x–ray beam.

Coherent Diffraction Imaging (CDI) enables solving the phase problem in the far–field regime by illuminating the sample with highly coherent radiation. CDI algorithms allow the retrieval of the phase from a single diffraction pattern, provided that some conditions are met: namely, the sample [1, 2] or the illuminating beam [3] must be isolated and small enough to avoid the undersampling problem at the detector plane. Moreover, some degree of *a priori* knowledge must normally be available, such as, for example, the support of the isolated sample or beam. These limitations can be overcome in ptychography by acquiring several diffraction patterns of different and partially overlapping regions of the sample; all collected patterns are then combined by means of a suitable reconstruction algorithm [4, 5]. The main advantage of ptychography is that it enables imaging extended objects at high resolution without requiring any *a priori* information on the sample or the beam. Its main drawback is that its use is impractical for large fields of view, as it requires a two–dimensional scan of the object with a relatively small beam, which can lead to lengthy acquisitions. Different techniques have been developed to solve the phase problem for large fields of view, typically operating in the near–field and Fresnel regimes. In some cases, these approaches require approximations that are not always trivially satisfied such as, for example, the need for a weakly scattering sample or short propagation distances to linearize the equations governing the free space propagation of x–rays [6], or the assumption of a homogeneous sample [7]. In other cases, these approximations can be relaxed, provided that several intensity patterns are acquired at different distances from the sample [8, 9]. 
Ptychography is normally implemented in the far–field regime, which allows the achievement of very high and virtually noise–limited resolution; more recently, an extension to the near field regime has been proposed [10], in which, however, the resolution is limited by the (demagnified) pixel size.

We propose here a new approach which makes use of a strongly asymmetric and inhomogeneous probe (Fig. 1) and implements it into an iterative ptychographic algorithm. The desired field of view is covered by scanning the sample in one direction only, enabling substantially simplified and faster acquisitions. The main idea is to adopt different approaches to retrieve the complex transmission function in the two directions, which is enabled by the use of a strongly asymmetric probe and an “intermediate” propagation regime. Exploitation of the same “intermediate” regime was proposed by other researchers [11–13], whose approaches, however, required the desired field of view to be fully illuminated by the probe. In our approach the object is placed sufficiently far from the detector to allow the use of ptychographic methods in the scanning direction, while appropriately modified near–field approaches enable phase retrieval also in the other one. The inhomogeneity of the probe introduces enough diversity in the diffraction patterns to solve the phase problem through a ptychographic approach, even though only a one dimensional scan is performed. The method does not impose any requirements on the sample transmission function (e.g. it can be applied to a strongly absorbing and/or scattering sample), can be applied to relatively large fields of view, and provides, in principle, noise–limited resolution in the scanning direction (and pixel size–limited resolution in the orthogonal one). This method should find the same range of applications as other phase contrast imaging techniques (i.e. in–line holography, edge illumination, grating interferometry, analyser–based imaging), while offering enhanced resolution in the scan direction.
In particular, applications in fields where small features need to be detected in a relatively large sample (biological tissue, composite materials, etc) should benefit from the possibility of imaging a large field of view, with increased resolution/sensitivity in one direction, and within reasonable total exposure times.

In the following, we will refer to our approach as “1D–PIE”. After briefly discussing its theoretical background, we will show results obtained by applying the method to simulated and experimental datasets, followed by a discussion on spatial resolution and, finally, on strategies to further improve the method.

## 2. Theory

Figure 1 shows a schematic representation of a typical experimental setup for 1D–PIE. An x–ray beam, asymmetrically shaped by one or more optical elements, passes through the sample and propagates to the detector, where its intensity is recorded. The mathematical description of the interaction of x–rays with matter, and their propagation in free space, can be found in several text books [14, 15]; we will therefore report here just some of the key features.

Let $O(x, y)$ and $B(x, y)$ be the sample complex transmission function and the complex amplitude of the beam impinging on the sample, respectively. The wave exiting from the sample can be expressed as:

$$\psi(x,y) = B(x,y)\,O(x,y) \tag{1}$$

The wave at the detector plane is obtained by propagating $\psi$ in free space:

$$\mathcal{P}_{xy}[\psi](x,y) = \psi(x,y) \ast H(x,y) \tag{2}$$

$\mathcal{P}_{xy}$ is the operator describing the propagation, $\ast$ denotes two–dimensional convolution, and $H$ is the Fresnel propagator:

$$H(x,y) = \frac{e^{ikz_p}}{i\lambda z_p}\,\exp\!\left[\frac{ik}{2z_p}\left(x^2+y^2\right)\right] \tag{3}$$

$z_p$ is the distance between the sample and the detector, $\lambda$ is the x–ray wavelength and $k = 2\pi/\lambda$. Neglecting the constant factor, Eq. (3) can be factorized in the product of two one dimensional functions:

$$H(x,y) \propto h_p(x)\,h_p(y), \qquad h_p(r) = \exp\!\left(\frac{ikr^2}{2z_p}\right) \tag{4}$$

The propagation operator can therefore be split into two one dimensional operators applied in sequence:

$$\mathcal{P}_{xy}[f] = \mathcal{P}_x\!\left[\mathcal{P}_y[f]\right] \tag{5}$$

where $f$ is a generic function of two variables and $\mathcal{P}_x$ is defined as:

$$\mathcal{P}_x[f](x,y) = \int f(x',y)\,h_p(x-x')\,dx' \tag{6}$$

with an analogous definition for $\mathcal{P}_y$. The integral in Eq. (6) can be written in several mathematically equivalent forms, but, since we will eventually have to deal with a discrete data matrix (as acquired by the detector pixels), it is important to choose one that enables avoiding numerical problems (such as for example undersampling). In particular, along the lines discussed in reference [16], we choose the following, different expressions for $\mathcal{P}_x$ and $\mathcal{P}_y$:

$$\mathcal{P}_x[f](x,y) = h_p(x)\,\mathcal{F}_x\!\left[f(x',y)\,h_p(x')\right]\!\left(\frac{x}{\lambda z_p}\right) \tag{7}$$

$$\mathcal{P}_y[f](x,y) = \mathcal{F}_y^{-1}\!\left[\mathcal{F}_y[f]\,\hat{h}_p\right](x,y) \tag{8}$$

$\mathcal{F}_i$ and ${\mathcal{F}}_{i}^{-1}$ indicate the one dimensional Fourier and inverse Fourier transforms with respect to the $i$ coordinate, respectively, and $\hat{h}_p$ is the Fourier transform of $h_p$. The distinction between the description of the propagation effects in the $x$ and $y$ directions comes from the extremely asymmetric shape of the x–ray probe. In the scanning direction ($x$), the beam is very narrow, resulting in a small field of view (≈10 *μ*m) that enables preventing sampling problems on the detector plane when Eq. (7) is implemented numerically. In the $y$ direction, instead, the field of view is much larger (≈1 mm or more), and a different numerical implementation is needed. Referring to the Fresnel number $N_F = W^2/(\lambda z_p)$, where $W$ is the lateral extent of the illumination [10], we found that, in the experimental conditions described in the following, $N_F \approx 1$ in the $x$ direction, and $N_F \approx 10^4$ in the $y$ direction. Equation (7) is, actually, a generalization to the Fresnel regime ($N_F \approx 1$) of the well–known equation $\mathcal{P}[f] = \mathcal{F}[f](x/(\lambda z_p))$ used to describe x–ray diffraction; in the far field condition, in fact, $h_p(r) \approx 1$ and the propagated field is proportional to the Fourier transform of the initial field. Equation (8), instead, uses the “angular spectrum propagation” method, which solves the convolution product in Eq. (2) in the Fourier space by means of the convolution theorem, and is numerically well–behaved in the near field regime ($N_F \gg 1$).

The 1D–PIE uses a modified version of the ePIE algorithm [5], with the key difference lying in the description of the forward propagation through Eqs. (5), (7) and (8). Moreover, at each iteration, the intensity of the propagated reconstructed beam is set equal to a flat field image (i.e. an intensity pattern acquired without the sample). This last feature helps decoupling the contribution of the beam and the sample to the diffraction pattern, and it also avoids possible twin solutions consisting, for example, of the sum of two opposite phase gradients in the sample and beam complex functions.
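As an illustration of the two different numerical implementations, the propagators of Eqs. (7) and (8) can be sketched in numpy as follows. This is a minimal sketch, not the authors' code: the function names, the sampling conventions and the centring via `fftshift` are our own choices, and constant prefactors are neglected, as in the text.

```python
import numpy as np

def p_x(field, dx, wavelength, z):
    """One dimensional Fresnel propagation along x via the single-FT
    form of Eq. (7). `field` is a 2D complex array (y, x); the transform
    acts on the last axis. The output is sampled at x_out = lambda*z*f_x."""
    k = 2 * np.pi / wavelength
    n = field.shape[-1]
    x = (np.arange(n) - n // 2) * dx                 # source-plane coordinate x'
    h_in = np.exp(1j * k * x**2 / (2 * z))           # quadratic phase h_p(x')
    spectrum = np.fft.fftshift(
        np.fft.fft(np.fft.ifftshift(field * h_in, axes=-1)), axes=-1)
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))    # spatial frequencies
    x_out = wavelength * z * fx                      # rescaled detector coordinate
    h_out = np.exp(1j * k * x_out**2 / (2 * z))      # quadratic phase h_p(x)
    return h_out * spectrum

def p_y(field, dy, wavelength, z):
    """Angular-spectrum propagation along y (Eq. (8)): the convolution of
    Eq. (2) is solved in Fourier space via the convolution theorem."""
    n = field.shape[0]
    fy = np.fft.fftfreq(n, d=dy)
    h_hat = np.exp(-1j * np.pi * wavelength * z * fy**2)  # FT of h_p(y)
    return np.fft.ifft(np.fft.fft(field, axis=0) * h_hat[:, None], axis=0)
```

Applying `p_y` leaves the sampling grid unchanged, as appropriate for the near field ($N_F \gg 1$), while `p_x` returns the field on the rescaled coordinate $x = \lambda z_p f_x$, which is what keeps it well sampled in the $N_F \approx 1$ regime.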

We monitor the convergence of the algorithm [5] in the simulated case using the normalized RMS error between the reconstructed and real sample transfer functions:

$$\varepsilon_n = \sqrt{\frac{\sum_{x,y}\left|O_n(x,y) - O(x,y)\right|^2}{\sum_{x,y}\left|O(x,y)\right|^2}} \tag{9}$$

$O_n$ is the reconstructed sample transfer function at the $n$–th iteration. In the case of experimental data the real sample transfer function $O$ is unknown, and the error between the measured and reconstructed intensities on the detector plane was computed as:

$$E_n = \frac{\sum_j \sum_{x,y}\left[\sqrt{I_j(x,y)} - \sqrt{I_j^{(n)}(x,y)}\right]^2}{\sum_j \sum_{x,y} I_j(x,y)} \tag{10}$$

$I_j$ is the measured intensity, $I_j^{(n)}$ is the corresponding intensity reconstructed at the $n$–th iteration, while the index $j$ runs over all the relative positions between the probe and the sample.
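The two error metrics can be sketched as follows; this is a minimal numpy sketch in which the amplitude-difference form of the intensity error follows common ePIE practice [5] and is an assumption here.

```python
import numpy as np

def rms_object_error(o_n, o_true):
    """Normalized RMS error between the reconstructed and true sample
    transfer functions, used for the simulated datasets."""
    return np.sqrt(np.sum(np.abs(o_n - o_true) ** 2) /
                   np.sum(np.abs(o_true) ** 2))

def intensity_error(measured, reconstructed):
    """Error between measured and reconstructed detector intensities,
    summed over all probe positions j (amplitude-difference form).
    Both arguments are iterables of 2D intensity arrays, one per position."""
    num = sum(np.sum((np.sqrt(i_m) - np.sqrt(i_r)) ** 2)
              for i_m, i_r in zip(measured, reconstructed))
    den = sum(np.sum(i_m) for i_m in measured)
    return num / den
```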

## 3. Simulations

The simulated system presents the same features as the experimental setup described in the following section: x–ray energy of 9.7 keV, sample to detector distance of 58 cm and detector pixel size of 0.8 *μ*m. Figures 2(a) and 2(b) show the sample amplitude and phase maps used in the simulations, with amplitude and phase values lying in the intervals [0.6, 1] and [−1, 0] rad, respectively. Figures 2(c) and 2(d) show, instead, the amplitudes and the phases of the simulated probe, respectively. The phase is constant, while, for a closer representation of the actual experimental conditions, where an imperfect slit was used, the amplitude assumes the shape distribution that would be caused by an inhomogeneous horizontal slit, i.e. a degree of inhomogeneity in the beam is created through a random variation, along the horizontal direction, of the vertical size and central position of the slit in the intervals 15±5 *μ*m and 1.25±1.25 *μ*m, respectively. 136 scans were simulated with a 2.3 *μ*m scanning step, resulting in an overlap between two subsequent probe positions of about 80%. It is important to note that there is no theoretical limitation for the horizontal extension of the beam, the only drawback being the increased computational time required to run the corresponding simulation. The presence of noise is included in the simulation in the following way: a constant offset of about 1/50 of the maximum recorded intensity value, simulating the detector dark signal (i.e. the signal recorded when the source is off), is added to the theoretical diffraction patterns, and a 1–10% Gaussian noise is then generated. On top of this, the constant offset (which in the real case is measured experimentally) is then subtracted, and the negative intensity values are set to 0. The chosen offset and noise levels resemble the ones obtained in experimental data.
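The inhomogeneous-slit probe and the noise model described above can be sketched as follows. This is a hypothetical numpy implementation: the grid sizes, the random seed, and the choice of referring the Gaussian noise level to the maximum recorded intensity are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
pix = 0.8e-6                       # detector pixel size [m]
ny, nx = 256, 512                  # probe grid (vertical, horizontal)

# Column-wise random vertical slit: size 15±5 um, centre 1.25±1.25 um,
# varying along the horizontal direction to make the probe inhomogeneous.
y = (np.arange(ny) - ny // 2) * pix
sizes = rng.uniform(10e-6, 20e-6, nx)
centres = rng.uniform(0.0, 2.5e-6, nx)
probe_amp = np.zeros((ny, nx))
for ix in range(nx):
    probe_amp[:, ix] = (np.abs(y - centres[ix]) < sizes[ix] / 2).astype(float)

def add_noise(i_true, rel_noise=0.01, rng=rng):
    """Noise model: add a dark-signal offset (~1/50 of the maximum),
    add Gaussian noise (1-10% of the maximum), subtract the offset
    and clip negative intensities to zero."""
    offset = i_true.max() / 50.0
    noisy = i_true + offset
    noisy = noisy + rng.normal(0.0, rel_noise * i_true.max(), i_true.shape)
    noisy -= offset
    return np.clip(noisy, 0.0, None)
```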

Figures 2(e)–2(n) show the results of the 1D–PIE reconstruction algorithm in the presence of 1% (Figs. 2(e)–2(h)) and 10% (Figs. 2(i)–2(n)) Gaussian noise, respectively. In both cases, the algorithm converges to the right solution, despite the use of a wide laminar beam and just a one–dimensional scan; the latter case, in particular, demonstrates the substantial robustness of the method against increasing noise levels, although these will inevitably lead to noisier reconstructed images. We found that introducing a degree of inhomogeneity in the probe is of primary importance for this method. Previous findings indicate that a “structured” illumination can improve the overall performance of ptychographic algorithms [17], and that this structuring of the probe becomes even more important when ptychography is applied in the near field regime [10], as in this case the illumination extends over most of the reconstructed field of view and its structure becomes the only source of diversity in the dataset. An additional advantage for the 1D–PIE brought by structuring the beam is that it helps coupling the effects of propagation in the vertical and horizontal directions, so that a strong degree of diversity is introduced in the diffraction pattern also in the horizontal direction by the vertical scan procedure. Figure 3 shows the evolution of the RMS error $\varepsilon_n$ over 1000 iterations of the algorithm for the two simulations previously described. The error continuously decreases with the number of iterations in both cases, reaching smaller values for lower noise level, as expected.

A closer analysis of the reconstructed images of the sample shows the presence of some artifacts. In particular, vertical stripes are visible, especially in the amplitude image in the 10% noise case (Fig. 2(i)), and the value of the phase shift is not fully retrieved in the dark, bottom left part of the image (Figs. 2(f) and 2(l)). Similar considerations apply to the experimental images presented below (see Figs. 5(a) and 5(b), near the boundaries of the sample). This is admittedly a possible minor limitation of the current implementation of the method and is currently the subject of further investigations; for example, we have some preliminary evidence that this problem could be at least mitigated by increasing the inhomogeneity of the probe, which could be obtained, for example, by including the presence of a scattering element.

## 4. Experiments

Experimental data were acquired at the coherence branch of I13 at the Diamond Light Source (Didcot, UK). The source size of about 400×13 *μ*m^{2} (horizontal and vertical FWHM, respectively), jointly with the large source–to–sample distance of about 210 m, provides a high degree of coherence at the sample plane [18]. An x–ray energy of 9.7 keV was selected for the experiment using a Si(111) crystal monochromator. A 10 *μ*m slit was used to define the beam in the vertical direction. The sample and the detector were placed 6.3 cm and 64.3 cm downstream of the slit, respectively. A scintillation microscope was used as the detector, consisting of a scintillation screen, an 8× magnifying optics and a PCO Edge sCMOS camera with 2560×2160 (horizontal and vertical, respectively) pixels. The effective pixel size was 0.8 *μ*m.

Our first experimental aim was to demonstrate that the proposed method is capable of retrieving phase values which are quantitatively correct. For this, we used a weakly perturbing sample consisting of a polyetheretherketone (PEEK) monofilament of 160 *μ*m diameter immersed in 0.5 cm of water. Figure 4 shows the comparison between theoretical and experimentally retrieved phase shifts; a good quantitative agreement is obtained. Note that the corresponding sample absorption signal is practically negligible at this energy (≈0.02%). For this sample, 500 scans were performed with a scanning step of 1 *μ*m and a 2 s exposure time for each scan, for a total exposure time of about 17 minutes. The relatively long scan time is primarily due to the high water absorption at this energy (≈94%).

We then tested the method on a more complex biological sample, i.e. the leg of a tiny spider; the reconstructed amplitude and phase maps are shown in Fig. 5, while in Fig. 6 the RMS error $E_n$ is plotted as a function of the iteration number. The results resemble those obtained in the simulations: the error continuously decreases, leading to a high final image quality. The phase image, in particular, presents a high level of detail, and very small features of the spider leg are resolved, as highlighted in the lateral inset in Fig. 5. For this sample, 688 scans were performed with a scanning step of 1.6 *μ*m and a 0.5 s exposure time for each scan, for a total exposure time of about 6 minutes.

For this proof–of–concept experiment, the structuring of the probe was implemented by using the slit imperfections on top of the inhomogeneity in the beam caused by its imperfect optical transfer through the beamline. From simulations, an increase in robustness and image quality, together with a reduction of artifacts, can be predicted when a stronger degree of perturbation is introduced in the probe. We expect that further experiments performed using a more structured illumination will further improve the method's performance (e.g. in terms of image quality and convergence speed). The photon statistics were limited in this case, resulting in relatively high levels of noise (≈5–10%). While on the one hand this is encouraging in terms of low dose imaging, it also leaves substantial room for improvement, e.g. on samples where dose is not important but speed is. An interesting possibility, in this respect, is offered by beam focusing in the vertical direction, which would significantly increase the number of detected photons, thus leading to even shorter acquisition times.

## 5. Spatial resolution

The resolution achievable with the 1D–PIE is different along the x (scanning) and y directions. In the x direction, it is limited by the maximum scattering angle for which diffraction data are collected above the noise level, as for any other CDI technique. In the y direction, where a near–field approach is used, the resolution is limited by the (demagnified) pixel size. While in the experimental case it is not straightforward to measure the achieved resolution, this can be more easily done for the simulated case. In particular, we compared the Fourier transforms of the simulated sample transfer function with the reconstructed one for both the 1% and the 10% noise cases, using the Fourier ring correlation (FRC) criterion [19]. To distinguish between the x and y directions, we selected two angular regions of the Fourier space along the x and y directions (Fig. 7(a)). The FRC was then computed in the first region (upper and lower parts of the Fourier space) to estimate the resolution in the x direction, and in the second region (left and right–hand parts) for the resolution in the y direction. We arbitrarily chose an angular width of 30° for the two regions: a larger angle would include a greater part of the Fourier spectrum, in which, however, the contribution of the x and y directions would be largely mixed. The maximum frequency for which the sample is reconstructed correctly was found using the 2*σ* criterion [19] (Fig. 7), and the results are shown in Table 1. This result shows that the 1D–PIE has the potential to offer an enhanced resolution, and therefore increased sensitivity to small details, in the x direction.
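A sector-restricted FRC of this kind can be sketched as follows; this is a minimal numpy sketch in which the ring binning and the form of the 2*σ* curve follow common practice after [19] and are assumptions here.

```python
import numpy as np

def sector_frc(img1, img2, axis='x', half_angle_deg=15, n_rings=32):
    """Fourier ring correlation restricted to an angular sector of the
    Fourier plane (total width 2*half_angle_deg), so that the x and y
    directions can be probed separately. Returns (ring frequencies,
    FRC values, 2-sigma threshold curve)."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    yy, xx = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(yy / ny, xx / nx)               # normalized radial frequency
    theta = np.degrees(np.arctan2(yy, xx))       # angle in the Fourier plane
    if axis == 'x':                              # sector along the k_x axis
        sector = (np.abs(theta) < half_angle_deg) | (np.abs(theta) > 180 - half_angle_deg)
    else:                                        # sector along the k_y axis
        sector = np.abs(np.abs(theta) - 90) < half_angle_deg
    freqs, frc, thr = [], [], []
    bins = np.linspace(0.0, 0.5, n_rings + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        ring = sector & (r >= lo) & (r < hi)
        n = ring.sum()
        if n == 0:
            continue
        num = np.real(np.sum(f1[ring] * np.conj(f2[ring])))
        den = np.sqrt(np.sum(np.abs(f1[ring]) ** 2) * np.sum(np.abs(f2[ring]) ** 2))
        freqs.append(0.5 * (lo + hi))
        frc.append(num / den if den > 0 else 0.0)
        thr.append(2.0 / np.sqrt(n / 2.0))       # 2-sigma significance curve [19]
    return np.array(freqs), np.array(frc), np.array(thr)
```

The resolution along each direction is then read off as the highest ring frequency at which the FRC curve stays above the 2*σ* threshold.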

A similar approach could be used to estimate the resolution of the experimental data, when two independent reconstructions are available. For the presented experimental results, however, only one dataset, and thus one reconstruction, was available. Nonetheless, a first estimation of the experimental resolution can be obtained by analysing the Fourier spectrum, and its noise level, of the spider leg transmission function (Fig. 5) in the same x and y regions indicated in Fig. 7(a). In the high frequency parts of the spectrum, in fact, noise dominates: we can therefore use these parts of the spectrum to estimate the mean noise value $\bar{n}$ and its standard deviation $\sigma_n$ in the x and y regions of the spectrum. Assuming that all the frequency components above $\bar{n} + \sigma_n$ are correctly reconstructed, we can estimate the resolution along the x and y directions to be approximately 1.2 *μ*m and 2.0 *μ*m, respectively. This is done by comparing the radial integrations of the Fourier spectrum (Fig. 8, solid lines) in the x and y regions with the relative thresholds (Fig. 8, dashed lines) $\bar{n} + \sigma_n$, calculated in the same regions. Note that the values of $\sigma_n$ are calculated from the two–dimensional Fourier spectrum, and thus they appear to be higher than the noise levels of the one–dimensional Fourier profiles in Fig. 8; this, however, only depends on the radial integration process, which reduces the noise level of the resulting one–dimensional profiles. Note, also, that the thresholds for the two regions are different, since the noise level is higher in the y region than in the x region. Using a more conservative threshold of $\bar{n} + 2\sigma_n$, we obtain a resolution of 1.5 *μ*m and 2.6 *μ*m for the x and y directions, respectively. Future experiments are planned, which will involve the use of a resolution test object to accurately measure the experimental resolution.
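This noise-threshold procedure can be sketched as follows. The code is a hypothetical numpy implementation: the sector geometry, the ring binning and the fraction of the spectrum used to estimate the noise floor are our own assumptions, and since the text does not state the convention for converting a cutoff frequency into a resolution, the function returns the cutoff frequency only.

```python
import numpy as np

def cutoff_frequency(image, pixel_size, axis='x', half_angle_deg=15,
                     noise_fraction=0.2, n_sigma=1):
    """Highest frequency at which the sector-integrated Fourier spectrum
    stays above the noise threshold n_bar + n_sigma*sigma_n. The noise
    statistics are taken from the outermost `noise_fraction` of the radial
    profile, where noise dominates. Returns the cutoff in cycles/metre."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    ny, nx = image.shape
    yy, xx = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(yy / ny, xx / nx)               # normalized radial frequency
    theta = np.degrees(np.arctan2(yy, xx))
    if axis == 'x':
        sector = (np.abs(theta) < half_angle_deg) | (np.abs(theta) > 180 - half_angle_deg)
    else:
        sector = np.abs(np.abs(theta) - 90) < half_angle_deg
    # radial integration of the sector spectrum
    prof, freqs = [], []
    bins = np.linspace(0.0, 0.5, 65)
    for lo, hi in zip(bins[:-1], bins[1:]):
        ring = sector & (r >= lo) & (r < hi)
        if ring.sum():
            prof.append(f[ring].mean())
            freqs.append(0.5 * (lo + hi))
    prof, freqs = np.array(prof), np.array(freqs)
    # noise floor n_bar + n_sigma*sigma_n from the high-frequency tail
    tail = prof[int(len(prof) * (1 - noise_fraction)):]
    threshold = tail.mean() + n_sigma * tail.std()
    above = freqs[prof > threshold]
    f_cut = above.max() if above.size else 0.0
    return f_cut / pixel_size
```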

## 6. Conclusions

We developed a new method for solving the phase problem by using a ptychographic dataset acquired by performing a one dimensional scan of the sample with a laminar beam. The technique has been applied to simulated data, showing excellent robustness against increasing noise levels, and to experimental data, confirming the method's ability to correctly reconstruct the quantitative values of amplitude and phase for large fields of view, although the accuracy can be locally limited by the presence of artifacts. With this preliminary study we have not accurately investigated the resolution limits that could be experimentally reached with the proposed method; the aim thus far was to demonstrate effective quantitative retrieval of the phase of the sample transfer function without any *a priori* assumptions. That said, our simulated results indicate that sub–pixel resolution should indeed be achievable, and this aspect will be explored further in the next steps of this research. Further improvements are expected from future developments, which could include, for example, further beam structuring through phase plates or scattering elements, and beam focusing for increased acquisition speed and/or data statistics.

## Acknowledgments

This work is funded by the EPSRC (Grants EP/I021884/1 and EP/I022562/1). P.C.D. and M.E. are supported by Marie Curie Career Integration Grants within the Seventh Framework Programme of the European Union, PCIG12-GA-2012-333990/334056.

## References and links

**1. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982).

**2. **V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A **20**, 40–55 (2003).

**3. **B. Abbey, K. A. Nugent, G. J. Williams, J. N. Clark, A. G. Peele, M. A. Pfeifer, M. de Jonge, and I. McNulty, “Keyhole coherent diffractive imaging,” Nat. Phys. **4**, 394–398 (2008).

**4. **P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy **109**, 338–343 (2009).

**5. **A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm,” Ultramicroscopy **109**, 1256–1262 (2009).

**6. **T. E. Gureyev, A. Pogany, D. M. Paganin, and S. W. Wilkins, “Linear algorithms for phase retrieval in the Fresnel region,” Opt. Commun. **231**, 53–70 (2004).

**7. **D. Paganin, S. C. Mayo, T. E. Gureyev, P. R. Miller, and S. W. Wilkins, “Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object,” J. Microsc. **206**, 33–40 (2002).

**8. **M. Krenkel, M. Bartels, and T. Salditt, “Transport of intensity phase reconstruction to solve the twin image problem in holographic x–ray imaging,” Opt. Express **21**, 2220–2235 (2013).

**9. **J. Hagemann, A. L. Robisch, D. R. Luke, C. Homann, T. Hohage, P. Cloetens, H. Suhonen, and T. Salditt, “Reconstruction of wave front and object for inline holography from a set of detection planes,” Opt. Express **22**, 11552–11569 (2014).

**10. **M. Stockmar, P. Cloetens, I. Zanette, B. Enders, M. Dierolf, F. Pfeiffer, and P. Thibault, “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. **3**, 1927 (2013).

**11. **R. Mokso, P. Cloetens, E. Maire, W. Ludwig, and J.-Y. Buffière, “Nanoscale zoom tomography with hard x rays using Kirkpatrick–Baez optics,” Appl. Phys. Lett. **90**, 144104 (2007).

**12. **K. Giewekemeyer, S. P. Krüger, S. Kalbfleisch, M. Bartels, C. Beta, and T. Salditt, “X–ray propagation microscopy of biological cells using waveguides as a quasipoint source,” Phys. Rev. A **83**, 023804 (2011).

**13. **C. Olendrowitz, M. Bartels, M. Krenkel, A. Beerlink, R. Mokso, M. Sprung, and T. Salditt, “Phase–contrast x–ray imaging and tomography of the nematode Caenorhabditis elegans,” Phys. Med. Biol. **57**, 5309–5323 (2012).

**14. **J. W. Goodman, *Introduction to Fourier Optics* (McGraw–Hill, 1996).

**15. **D. M. Paganin, *Coherent X–Ray Optics* (Oxford University Press, 2006).

**16. **J. D. Schmidt, *Numerical Simulation of Optical Wave Propagation with Examples in MATLAB* (SPIE Press, 2010).

**17. **M. Guizar-Sicairos, M. Holler, A. Diaz, J. Vila-Comamala, O. Bunk, and A. Menzel, “Role of the illumination spatial-frequency spectrum for ptychography,” Phys. Rev. B **86**, 100103(R) (2012).

**18. **C. Rau, U. Wagner, Z. Pesic, and A. De Fanis, “Coherent imaging at the Diamond beamline I13,” Phys. Status Solidi A **208**, 2522–2525 (2011).

**19. **N. Banterle, K. Huy Bui, E. A. Lemke, and M. Beck, “Fourier ring correlation as a resolution criterion for super–resolution microscopy,” J. Struct. Biol. **183**, 363–367 (2013).