Optica Publishing Group

3D single-shot ptychography with highly tilted illuminations

Open Access

Abstract

A method based on highly tilted illumination and non-paraxial iterative computation is proposed to improve the image quality of single-shot 3D ptychography. A thick sample is illuminated with a cluster of laser beams separated by angles large enough to record each diffraction pattern distinctly in a single exposure. The 3D structure of the thick sample is accurately reconstructed from the recorded diffraction patterns using a multi-slice algorithm modified to handle non-paraxial illumination. A sufficient number of recorded diffraction patterns with noticeably low crosstalk enhances the fidelity of reconstruction significantly over single-shot 3D ptychography methods based on paraxial illumination. Experimental observations guided by the results of numerical simulations show the feasibility of the proposed method.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) imaging is an interdisciplinary and longstanding topic in optics [1–6]; however, realizing high-resolution imaging of large samples at high speed remains a challenging problem. Typical 3D imaging methods, including computed tomography (CT) [7,8], magnetic resonance imaging (MRI) [9] and optical coherence tomography (OCT) [10], can provide details on the 3D absorption or reflection properties of thick samples and are utilized in many applications in clinical diagnosis and biological research. However, if samples are highly transparent, the contrast of the generated images is too low to reveal valuable details, which limits the use of these imaging techniques. Researchers have developed several methods for measuring the refractive index of optical elements, bio-samples and aqueous solutions that resemble phase objects [11–16]. These methods use holography [12], the transport of intensity equation (TIE) [13] and scanning ptychography [14–16] to reconstruct the 3D refractive index by rotating the sample through 180° for data acquisition and then applying the Radon transform [17] to achieve a 3D reconstruction. Such reconstructions have been realized with x-rays and visible light, showing robust capability in generating high-quality 3D images of bone, optical fibers, stepped microstructures, etc. Since the principle adopted by these methods is much the same as that of conventional CT, the sample has to be positioned at multiple orientations during data acquisition, requiring a complex optical setup.

The technique of the 3D ptychographic iterative engine (3PIE) [18,19] demonstrated depth-resolved imaging of thick samples by numerically splitting a thick sample into discrete layers along the optical axis and reconstructing the complex transmission function of each layer, using the same optical setup and data acquisition procedure as common scanning ptychography. Fourier ptychography [20], which realizes the ptychographic iterative engine in the Fourier domain, is also capable of iteratively reconstructing microscopic 3D images of thick samples at multiple depths using a multi-slice coherent model. Although both 3PIE and Fourier PIE have advantages such as simple optical alignment, high signal-to-noise ratio and reduced data acquisition time, dynamic events cannot be captured, as multiple frames of diffraction patterns and extended data acquisition time are required. 3D single-shot ptychography [21] is a promising technique to realize 3D imaging of a thick sample by illuminating it with a cluster of laser beams propagating in different directions. This allows multiple non-overlapping diffraction patterns to be recorded simultaneously in a single detector exposure. Since ptychography is mathematically a kind of gradient-search algorithm [22], it requires an adequate number of iterative computations to lower reconstruction errors and ensure fast convergence. In single-shot ptychography, the number of core iterations is determined by the number of recorded sub-diffraction patterns. Currently, single-shot 3D ptychography adopts the paraxial approximation and employs common Fresnel diffraction in its iterative computations. This restricts the propagation angle of the outermost illuminations to a few degrees with respect to the optical axis. Consequently, only a very limited number of distinct diffraction patterns can be recorded by the detector, and the reconstruction results in poor-quality images.
Alternatively, illumination with highly tilted laser beams would generate a sufficient number of sub-diffraction patterns while minimizing crosstalk between neighboring patterns, as successfully demonstrated for imaging 2D samples with improved image quality [23]. While the same technique could be extended to the 3D scenario, the absence of a fully developed non-paraxial approach to address the resolving capability and attainable spatial resolution, as well as the unsteady response of sample transmission to large incident angles, are some of the challenges to be addressed in realizing 3D single-shot ptychography.

This paper proposes 3D single-shot ptychography using highly tilted illuminations and a modified non-paraxial multi-slice iterative reconstruction algorithm [23]. The calibration of optical parameters and the reconstruction procedure are illustrated in detail. The resolving capability of the suggested method is studied systematically, taking into account diffraction effects inside the thick sample. The feasibility of the proposed method is verified in both numerical simulations and proof-of-principle experiments.

2. Theoretical analysis

The setup for 3D single-shot ptychography using highly tilted illuminations is schematically shown in Fig. 1(a). Multiple laser beams with widely spread propagation directions are used to illuminate a thick sample, with the outermost laser beam making a large angle with respect to the optical axis. The thick sample can be approximated as a series of two-dimensional slices ${S_n}(x,y),\textrm{ }n = 1 \cdots N$ with a spacing of d along the optical axis.


Fig. 1. Schematic of 3D single shot ptychography with highly tilted illuminations.


The light transmitted by each slice is the product of its transmission function ${t_n}(x,y)$ and the respective illumination ${P_n}(x,y)$, and the propagation of light between two successive slices is regarded as propagation in free space [15,16]. Adjacent laser beams partially overlap on the front surface ${S_1}(x,y)$ of the thick sample and thus fulfill the requirement of the common PIE algorithm. The tilted illumination incident on the front surface of the sample can be written as

$$P(x,y)\textrm{ = }P^{\prime}(x,y){e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}$$
where $\alpha$ and $\beta$ are the propagation angles with respect to the x- and y-axes, respectively, and $P^{\prime}(x,y)$ represents the illumination without tilt. The light transmitted by the first slice can be represented as
$$\begin{aligned} {T_1}(x,y) &= P^{\prime}(x,y){t_1}(x,y){e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}\\ &= {T_1}^{\prime}(x,y){e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}} \end{aligned}$$
where ${t_1}(x,y)$ is the transmission function of the first slice. The light arriving at the second slice is obtained by numerical propagation of the highly tilted illumination using a formula based on the coherent transfer function (CTF) as
$${P_2}(x,y) = {F^{ - 1}}[F[{T_1}(x,y)]{H_d}({f_x},{f_y})]$$
where ${H_d}({f_x},{f_y}) = {e^{ - i2\pi d\sqrt {{\lambda ^{ - 2}} - {f_x}^2 - {f_y}^2} }}$ and ${P_2}(x,y)$ is the illumination on the second slice of the sample. With further simplification,
$$\begin{aligned} {P_2}({x,y} )&= {F^{ - 1}}[{{\tilde{T}}_1}^{\prime}({f_x} - \frac{{\cos \alpha }}{\lambda },{f_y} - \frac{{\cos \beta }}{\lambda }){H_d}({f_x},{f_y})]\\ \textrm{ } &= {F^{ - 1}}[{{\tilde{T}}_1}^{\prime}({f_x},{f_y}){H_d}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]{e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}\\ \textrm{ } &= {P_2}^{\prime}(x,y){e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}} \end{aligned}. $$

Thus, the illumination on each slice and the diffraction patterns on the detector carry the same phase ramp ${e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}$, and sampling intervals smaller than ${\lambda / {(2\cos \alpha )}}$ and ${\lambda / {(2\cos \beta )}}$ are therefore required along the x- and y-axes, respectively. For $\alpha \textrm{ = }{70^ \circ }$ the required sampling interval should be smaller than 0.925 µm ($\lambda \textrm{ = }632.8nm$), but the smallest pixel of currently available commercial CCDs is still larger than 1.5 µm. It is possible to relax the required pixel size to ${{N\lambda } / {(2\cos \alpha )}}$ and ${{N\lambda } / {(2\cos \beta )}}$ by magnifying both the sample and the diffraction patterns N times with a lens before recording. However, this only works well for paraxial illumination, since the lens acts as a low-pass filter and collects only a small portion of the highly diffracted light when the sample is illuminated non-paraxially. Moreover, diffraction patterns magnified by a lens would reduce the number of sub-diffraction patterns recordable in a single shot. Furthermore, high-quality lenses are not available for shorter wavelengths, including synchrotron radiation and high-energy electron beams.
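The slice-to-slice propagation of Eq. (3) is an angular-spectrum computation that is straightforward to implement numerically. The following sketch (function name and grid parameters are our own, not from the paper) applies the transfer function ${H_d}$ with the sign convention of Eq. (3), assuming a square grid sampled finely enough to satisfy the interval derived above:

```python
import numpy as np

def ctf_propagate(field, wavelength, dx, d):
    """Propagate a complex field over a distance d (Eq. (3)).

    field      : square 2D complex array, sampled at pitch dx [m]
    wavelength : in metres; d may be negative for back-propagation
    Uses H_d(fx, fy) = exp(-i*2*pi*d*sqrt(1/lambda^2 - fx^2 - fy^2)),
    zeroing evanescent components where the square root turns imaginary.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = wavelength**-2 - FX**2 - FY**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(-2j * np.pi * d * kz), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because ${H_d}$ has unit modulus over the propagating band, the energy of a band-limited field is conserved, which provides a quick sanity check of any implementation.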

Using a Taylor series expansion, the phase of ${H_d}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })$ can be expressed as

$$\begin{aligned} &\textrm{ 2}\pi d{\lambda ^{\textrm{ - }1}}\sqrt {1 - {{(\lambda {f_x} + \cos \alpha )}^2} - {{(\lambda {f_y} + \cos \beta )}^2}} \\ &= 2\pi d{\lambda ^{ - 1}}\{ 1 - \frac{1}{2}[{(\lambda {f_x} + \cos \alpha )^2} + {(\lambda {f_y} + \cos \beta )^2}] - \frac{1}{8}{[{{(\lambda {f_x} + \cos \alpha )}^2} + {{(\lambda {f_y} + \cos \beta )}^2}]^2} + \cdots \} \end{aligned}. $$

Under the paraxial approximation, both $\cos \alpha$ and $\cos \beta$ are close to zero, and the third- and higher-order terms can be neglected. The phase term in Eq. (5) can then be approximated as $2\pi d{\lambda ^{ - 1}}\{ 1 - [{(\lambda {f_x} + \cos \alpha )^2} + {(\lambda {f_y} + \cos \beta )^2}]/2\}$, which indicates that a change of $\alpha$ and $\beta$ only shifts the diffraction pattern without modifying its structure. However, this is not the case for non-paraxial laser beams: the higher-order terms in Eq. (5) cannot be neglected, and any change of $\alpha$ and $\beta$ modifies both the position and the structure of the computed diffracted light. The presence of the two linear terms $2\pi d\cos \alpha {f_x}$ and $2\pi d\cos \beta {f_y}$ in the Taylor expansion of ${H_d}({f_x},{f_y})$ in Eq. (5) requires that the sampling intervals in the computation be smaller than ${1 / {(2d\cos \alpha )}}$ and ${1 / {(2d\cos \beta )}}$ for ${f_x}$ and ${f_y}$, respectively. On leaving the last slice of the sample, the light further propagates a distance D of several centimeters before reaching the detector. A simple calculation with $D\textrm{ = 5cm}$ and $\alpha \textrm{ = }\beta \textrm{ = }{70^ \circ }$ shows a required sampling interval of 0.029 mm−1 along ${f_x}$ and ${f_y}$. This corresponds to a field size of 34.4 mm for computing $\tilde{T}({f_x},{f_y})$ from $t(x,y)$ with the FFT (fast Fourier transform) and consequently demands a huge computational memory, as the matrix size becomes as large as $37278\textrm{ }(34.4mm/0.925\mu m\textrm{ = }37278)$ per axis. The common PIE algorithm used in single-shot 3PIE cannot handle such computational loads arising from the sampling requirements of the phase, which prevents accurate 3PIE reconstructions.
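The sampling budget quoted above can be reproduced with a few lines of arithmetic. The numbers below follow the paper's example values ($\lambda$ = 632.8 nm, $\alpha$ = 70°, D = 5 cm) and are only meant as a back-of-envelope check:

```python
import numpy as np

lam = 632.8e-9                 # wavelength [m]
alpha = np.deg2rad(70.0)       # tilt angle with respect to the x-axis
D = 50e-3                      # last-slice-to-detector distance [m]

# real-space pitch forced by the ramp exp(i*2*pi*cos(alpha)*x/lam)
dx_max = lam / (2 * np.cos(alpha))        # ~0.925 um

# frequency pitch forced by the linear term 2*pi*D*cos(alpha)*fx in H_D
df_max = 1 / (2 * D * np.cos(alpha))      # ~0.029 mm^-1

field_size = 1 / df_max                   # ~34 mm field needed for the FFT
n_samples = field_size / dx_max           # ~3.7e4 samples per axis

print(dx_max, df_max, field_size, n_samples)
```

The last figure, tens of thousands of samples per axis, is what makes a direct application of the common PIE propagation impractical for highly tilted beams.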

2.1 Basic principle

The above analysis shows that the two phase terms, ${e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}$ in Eq. (4) and ${e^{i2\pi d(\cos \alpha {f_x} + \cos \beta {f_y})}}$ of ${H_d}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })$ in Eq. (5), are the two main obstacles to realizing single-shot 3PIE with highly tilted illuminations. By defining

$${H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })\textrm{ = }{H_d}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda }){e^{\textrm{ - }i2\pi d(\cos \alpha {f_x} + \cos \beta {f_y})}}. $$

${P_2}(x,y)$ in Eq. (4) can be rewritten as

$$\begin{aligned} {P_2}(x,y) &= {F^{ - 1}}[{{\tilde{T}}_1}^{\prime}({f_x},{f_y}){H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda }){e^{i2\pi d(\cos \alpha {f_x} + \cos \beta {f_y})}}]{e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}\\ \textrm{ } &= \{ {F^{ - 1}}[{{\tilde{T}}_1}^{\prime}({f_x},{f_y}){H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })] \otimes {F^{ - 1}}[{e^{i2\pi d(\cos \alpha {f_x} + \cos \beta {f_y})}}]\} {e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}\\ \textrm{ } &= [{P_2}^{\prime\prime}(x,y) \otimes \delta (x + d\cos \alpha ,y + d\cos \beta )]{e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}\\ \textrm{ } &= {P_2}^{\prime\prime}(x + d\cos \alpha ,y + d\cos \beta ){e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}} \end{aligned}$$
where ⊗ denotes the convolution operation. Thus, the computation of ${F^{ - 1}}[{\tilde{T}_1}^{\prime}({{f_x},{f_y}} ){H_d}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]$ in Eq. (4) can be replaced by computing ${F^{ - 1}}[{\tilde{T}_1}^{\prime}({f_x},{f_y}){H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]$ and shifting the computed result by distances of $d\cos \alpha$ and $d\cos \beta$ along the x- and y-axes, respectively. Since the linear phase term ${e^{i2\pi d(\cos \alpha {f_x} + \cos \beta {f_y})}}$ is eliminated in ${H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })$, the computation of ${F^{ - 1}}[{\tilde{T}_1}^{\prime}({f_x},{f_y}){H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]$ easily resolves the phase ramp of the coherent transfer function and consequently eliminates the difficulty in computing the illumination on the ${2^{nd}}$ layer. At the same time, the spatial phase ramp ${e^{i2\pi (\frac{{\cos \alpha }}{\lambda }x + \frac{{\cos \beta }}{\lambda }y)}}$ in Eqs. (2), (4) and (7) does not take part in the computation of the illumination on the next slice. Numerical propagation of non-paraxial light through each slice of the thick sample is thus realized accurately by computing the diffraction of the corresponding un-tilted light with a modified coherent transfer function and properly shifting the computed result along both axes.
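A minimal numerical sketch of this propagation trick follows; it is our own rendering of Eqs. (6)–(7), with the function name and the integer-pixel shift as simplifying assumptions (sub-pixel shifts would require a Fourier-domain shift instead of `np.roll`). It propagates the un-tilted field with the linear-phase-free transfer function ${H_d}^{\prime}$ and then applies the lateral shift of $d\cos\alpha$, $d\cos\beta$ in real space:

```python
import numpy as np

def tilted_slice_propagate(field, wavelength, dx, d, cos_a, cos_b):
    """Propagate the un-tilted transmitted field T' to the next slice.

    Implements Eq. (7): multiply the spectrum by H'_d (the transfer
    function H_d evaluated at the shifted frequencies with its linear
    phase term removed, Eq. (6)), then shift the result by
    (d*cos_a, d*cos_b) in real space. The overall ramp
    exp(i*2*pi*(cos_a*x + cos_b*y)/lam) is deliberately kept out of
    the computation, as in the text.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # H_d at the shifted frequencies (fx + cos_a/lam, fy + cos_b/lam)
    arg = wavelength**-2 - (FX + cos_a / wavelength)**2 \
                         - (FY + cos_b / wavelength)**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    Hd = np.where(arg > 0, np.exp(-2j * np.pi * d * kz), 0)
    # remove the linear term -> H'_d of Eq. (6)
    Hp = Hd * np.exp(-2j * np.pi * d * (cos_a * FX + cos_b * FY))
    out = np.fft.ifft2(np.fft.fft2(field) * Hp)
    # real-space shift: P(x, y) = P''(x + d*cos_a, y + d*cos_b)
    sx = int(round(d * cos_a / dx))
    sy = int(round(d * cos_b / dx))
    return np.roll(out, shift=(-sx, -sy), axis=(0, 1))
```

With `cos_a = cos_b = 0` this reduces to the ordinary angular-spectrum propagation, which makes the modification easy to verify against a paraxial reference.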

2.2 Measurement of $\cos \alpha$ and $\cos \beta$

Since ${F^{ - 1}}[{\tilde{T}_1}^{\prime}({f_x},{f_y}){H_d}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]$ must be shifted by distances of $d\cos \alpha$ and $d\cos \beta$ along the x- and y-axes, respectively, the numerical computation of the tilted illumination incident on each sample slice with Eq. (7) requires prior knowledge of $\cos \alpha$ and $\cos \beta$, which can be obtained from a common scanning ePIE measurement illustrated in Fig. 1(b). A thin sample $S(x,y)$ is illuminated with the laser beam cluster ${P_{mn}}(x,y)$, and a series of diffraction pattern arrays ${ {I_{mn}^k(x,y)} |_{k = 1:K,m = 1:M,n = 1:N}}$ is recorded by scanning the sample $S(x,y)$ over many positions. $I_{mn}^k(x,y)$ denotes the diffraction intensity recorded for the $m{n^{th}}$ illuminating beam while the object is at the ${k^{th}}$ position. Following the common scanning PIE algorithm, both $\cos \alpha$ and $\cos \beta$ of the $m{n^{th}}$ illuminating beam can be computed as follows:

  • 1. Extract the sub-diffraction patterns corresponding to $(m,n) = (0,0)$ from each diffraction pattern array in the recorded series and use them in an iterative ePIE computation to obtain both the complex image $S(x,y)$ of the sample and the complex amplitude ${P_{00}}(x,y)$ of the ${00^{\textrm{th}}}$ laser beam.
  • 2. Measure the central coordinate $({x_{mn}},{y_{mn}})$ of the $m{n^{\textrm{th}}}$ diffraction pattern in the first recorded diffraction pattern array $I_{mn}^1(x,y)$, then extract the $m{n^{\textrm{th}}}$ diffraction pattern from all recorded arrays ${ {I_{mn}^k(x,y)} |_{k = 1 \cdots K}}$ and shift them to the center of the computing matrix by moving them distances of $- {x_{mn}}$ and $- {y_{mn}}$ along the x- and y-axes to get a new diffraction pattern sequence ${ {I{{^{\prime k}}_{mn}}(x,y)} |_{k = 1 \cdots K}}$.
  • 3. Use ${ {I{{^{\prime k}}_{mn}}(x,y)} |_{k = 1 \cdots K}}$ and the known $S(x,y)$ in an iterative ePIE computation to obtain ${P_{mn}}^{\prime}(x,y)$. In case ${P_{mn}}^{\prime}(x,y)$ still has a visible phase ramp, return to step (2) and correct the shift slightly to $- ({x_{mn}} + \Delta x)$ and $- ({y_{mn}} + \Delta y)$ to update ${ {I{{^{\prime k}}_{mn}}(x,y)} |_{k = 1 \cdots K}}$. This is repeated until $P{^{\prime}_{mn}}$ shows no visible phase ramp.
  • 4. Calculate $\cos \alpha$ and $\cos \beta$ of the $m{n^{\textrm{th}}}$ illuminating beam as $\cos \alpha = {{({x_{mn}} + \Delta x)} / {\sqrt {{{({x_{mn}} + \Delta x)}^2} + {{({y_{mn}} + \Delta y)}^2} + {D^2}} }}$ and $\cos \beta = {{({y_{mn}} + \Delta y)} / {\sqrt {{{({x_{mn}} + \Delta x)}^2} + {{({y_{mn}} + \Delta y)}^2} + {D^2}} }}$.
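Step (4) is a purely geometric conversion from the refined pattern center to direction cosines. A sketch follows, assuming the beam crosses the optical axis at the sample plane and using a hypothetical helper name of our own:

```python
import numpy as np

def direction_cosines(xc, yc, D):
    """Direction cosines of the mn-th beam from the refined centre
    (xc, yc) = (x_mn + dx, y_mn + dy) of its sub-pattern on the
    detector, a distance D downstream of the sample plane.
    cos(alpha) and cos(beta) are taken with respect to the x- and
    y-axes, matching the convention of Eq. (1)."""
    r = np.sqrt(xc**2 + yc**2 + D**2)   # path length to the pattern centre
    return xc / r, yc / r
```

For example, a pattern centered 30 mm off-axis along x with D = 40 mm gives $\cos\alpha$ = 0.6 and $\cos\beta$ = 0.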

2.3 Reconstruction algorithm

The three-dimensional structure of the sample is computed from the recorded diffraction pattern array as follows.

  • 1. Extract the sub-diffraction patterns from the recorded diffraction pattern array ${ {{I_{mn}}(x,y)} |_{m = 1:M,n = 1:N}}$ and shift them by $- ({x_{mn}} + \Delta x)$ and $- ({y_{mn}} + \Delta y)$ in the x and y directions. This creates a sequence of diffraction patterns located at the center of the computing matrix. The iterative computation starts with an initial guess for the transmission functions of the discrete slices ${ {{S_{{Z_i}}}(x,y)} |_{i = 1 \cdots L}}$ at different depths, where ${Z_i}$ is the axial coordinate of the ${i^{th}}$ slice.
  • 2. Compute the transmitted light field from the first slice as $T_{mn}^1(x,y)\textrm{ = }P{{^{\prime 1}}_{mn}}(x,y){S_{{Z_1}}}(x,y)$ and propagate $T_{mn}^1(x,y)$ over a distance d to get the light arriving at the ${2^{\textrm{nd}}}$ slice as $P{{^{\prime 2}}_{mn}}(x,y) = {F^{ - 1}}[F(T_{mn}^1(x,y)){H_d}^{\prime}({f_x} + \cos \alpha /\lambda ,{f_y} + \cos \beta /\lambda )]$. Shift $P{{^{\prime 2}}_{mn}}(x,y)$ by distances of $d\cos \alpha$ and $d\cos \beta$ in the x and y directions to get $P{{^{\prime 2}}_{mn}}(x + d\cos \alpha ,y + d\cos \beta )$.
  • 3. Sequentially compute the illuminating light ${ {P{{^{\prime k}}_{mn}}(x,y)} |_{k = 1:K}}$ and the transmitted light ${ {T{{^{\prime k}}_{mn}}(x,y)} |_{k = 1:K}}$ for the remaining slices under the $m{n^{\textrm{th}}}$ un-tilted illuminating beam, as well as the light field ${U_{mn}}(x,y)\textrm{ = }|{\; {U_{mn}}(x,y)} |{e^{i\varphi (x,y)}}$ arriving at the detector, with the same method as described in step (2), and replace the modulus of ${U_{mn}}(x,y)$ with the recorded data as $U{^{\prime}_{mn}}(x,y) = \sqrt {{I_{mn}}(x,y)} {e^{i\varphi (x,y)}}$.
  • 4. Back-propagate $U{^{\prime}_{mn}}(x,y)$ to the plane of the last slice to get the improved transmitted light $T{{^{\prime\prime K}}_{mn}}(x,y) = {F^{ - 1}}[F(U{^{\prime}_{mn}}(x,y)){H_{ - d}}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]$ and update the transmission function of the last slice and its incident illumination as
    $$\begin{array}{l} {S_{{Z_K}}}(x,y) = {S_{{Z_K}}}(x,y) + \frac{{T{{^{\prime\prime K}}_{mn}}(x,y) - T{{^{\prime K}}_{mn}}(x,y)}}{{{{|{P{{^{\prime K}}_{mn}}(x,y)} |}^2} + \delta }}{[P{{^{\prime K}}_{mn}}(x,y)]^ \ast }\\ P{{^{\prime K}}_{mn}}(x,y) = P{{^{\prime K}}_{mn}}(x,y) + \frac{{T{{^{\prime\prime K}}_{mn}}(x,y) - T{{^{\prime K}}_{mn}}(x,y)}}{{{{|{{S_{{Z_K}}}(x,y)} |}^2} + \delta }}{[{S_{{Z_K}}}(x,y)]^ \ast } \end{array}. $$
  • 5. Shift $P{{^{\prime K}}_{mn}}(x,y)$ by $- d\cos \alpha$ and $- d\cos \beta$ in the x and y directions to get $P{{^{\prime K}}_{mn}}(x - d\cos \alpha \; ,y - d\cos \beta )$ and propagate it to the plane of the ${(K - 1)^{th}}$ slice to get the corresponding improved transmitted light $T{{^{\prime\prime K - 1}}_{mn}}(x,y) = {F^{ - 1}}[F(P{{^{\prime K}}_{mn}}(x - d\cos \alpha \; ,y - d\cos \beta )){H_{ - d}}^{\prime}({f_x} + \frac{{\cos \alpha }}{\lambda },{f_y} + \frac{{\cos \beta }}{\lambda })]$, and then update the transmission function ${S_{{Z_{K - 1}}}}(x,y)$ of this slice and its illuminating light $P{{^{\prime K - 1}}_{mn}}(x,y)$ as in step (4).
  • 6. Repeat step (5) to update the transmission function of each slice and its corresponding $m{n^{\textrm{th}}}$ illuminating light, except for the illumination on the first slice, since it is known.
  • 7. Repeat steps (2) to (6) until all illuminating laser beams have been processed.
  • 8. Compute the convergence $\varepsilon$ of the above iterative computation with Eq. (9). If $\varepsilon$ is smaller than a given value ${\varepsilon _0}$, the iterative computation stops; otherwise, jump to step (2) to start another round of the iteration.
    $$\varepsilon \textrm{ = }\frac{{{\Sigma _{m,n}}|{{{|{{U_{mn}}(x,y)} |}^2} - {I_{mn}}(x,y)} |}}{{{\Sigma _{mn}}{I_{mn}}(x,y)}}. $$
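The stopping criterion of Eq. (9) compares the modelled intensities with the recorded ones; a direct transcription in Python (the array names are ours) is:

```python
import numpy as np

def convergence_error(U_fields, I_patterns):
    """Eq. (9): summed absolute mismatch between the modelled
    intensities |U_mn|^2 and the recorded intensities I_mn,
    normalized by the total recorded intensity."""
    num = sum(np.abs(np.abs(U)**2 - I).sum()
              for U, I in zip(U_fields, I_patterns))
    den = sum(I.sum() for I in I_patterns)
    return num / den
```

The iteration terminates once this value falls below the preset threshold ${\varepsilon _0}$.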

In experiments, there are inevitable errors in measuring $\cos \alpha$ and $\cos \beta$ owing to the uncertainties in ${x_{mn}} + \Delta x$, ${y_{mn}} + \Delta y$ and D. The measured $\cos \alpha ^{\prime}$ and $\cos \beta ^{\prime}$ can be written as

$$\cos \alpha ^{\prime} = \cos \alpha + {e_\alpha },\;\;\cos \beta ^{\prime} = \cos \beta + {e_\beta }.$$

The transmitted light of the first slice can be written as

$$\begin{aligned} {T_1}(x,y) &= P^{\prime}(x,y){t_1}(x,y){e^{i2\pi (\frac{{\cos \alpha ^{\prime}}}{\lambda }x + \frac{{\cos \beta ^{\prime}}}{\lambda }y)}}{e^{\textrm{ - }i2\pi (\frac{{{e_\alpha }}}{\lambda }x + \frac{{{e_\beta }}}{\lambda }y)}}\\ \textrm{ } &= P^{\prime}(x,y){e^{\textrm{ - }i2\pi (\frac{{{e_\alpha }}}{\lambda }x + \frac{{{e_\beta }}}{\lambda }y)}}{t_1}({x,y} ){e^{i2\pi (\frac{{\cos \alpha ^{\prime}}}{\lambda }x + \frac{{\cos \beta ^{\prime}}}{\lambda }y)}}\\ \textrm{ } &= {P_T}^{\prime}(x,y){t_1}(x,y){e^{i2\pi (\frac{{\cos \alpha ^{\prime}}}{\lambda }x + \frac{{\cos \beta ^{\prime}}}{\lambda }y)}} \end{aligned}. $$

In Eq. (11), the measurement errors in $\cos \alpha$ and $\cos \beta$ add an additional phase ramp ${e^{\textrm{ - }i2\pi (\frac{{{e_\alpha }}}{\lambda }x + \frac{{{e_\beta }}}{\lambda }y)}}$ to the un-tilted illumination $P^{\prime}(x,y)$, generating a slightly tilted illumination ${P_T}^{\prime}(x,y)$. Thus, by replacing $P^{\prime}(x,y)$ with ${P_T}^{\prime}(x,y)$, and $\cos \alpha$ and $\cos \beta$ with $\cos \alpha ^{\prime}$ and $\cos \beta ^{\prime}$, the above equations remain valid for all computations, since the angular errors $\alpha ^{\prime} - \alpha$ and $\beta ^{\prime} - \beta$ are fully compensated by the additional phase ramp of ${P_T}^{\prime}(x,y)$, which is generated automatically in the iterative ePIE computation. In other words, measurement errors in $\cos \alpha$ and $\cos \beta$ do not directly influence the accuracy of the final reconstruction. However, when the measurement error is significant, the residual phase ramp in ${P_T}^{\prime}(x,y)$ becomes more pronounced, and the computed illumination on the sample slices gradually drifts away from the center of the computing matrix as the light propagates. This results in obvious aliasing errors at the edge of the computing matrix, especially in computing the light incident on the last several slices, and eventually degrades the reconstruction quality. Although this can be avoided with a large enough computing matrix, doing so slows down the computation significantly.

2.4 Resolving capability

Similar to other coherent diffraction imaging techniques, PIE follows a gradient-search algorithm [22]; hence, an analytic solution for the object structure $O(x,y,z)$ is not possible from the recorded diffraction intensity $I(x,y)$ alone, since the phase information is lost during data acquisition. However, since the phase $\varphi (x,y)$ of each sub-diffraction pattern is iteratively retrieved with high accuracy, we can propagate $\sqrt {I(x,y)} {e^{i\varphi (x,y)}}$ to the plane just behind the sample to get the transmitted light $u(x,y)$. From the Fourier transform of the transmitted light, $\widetilde U({k_x},{k_y}) = {\widetilde U_i}({k_x},{k_y}) + {\widetilde U_s}({k_x},{k_y})$, where ${\widetilde U_i}({k_x},{k_y})$ and ${\widetilde U_s}({k_x},{k_y})$ are the Fourier transforms of the incident and scattered light, respectively, we can analyze the resolving capability of the suggested method qualitatively with the same approach used in the analysis of diffraction tomography with ptychography [24]. The 3D Fourier transform of the scattering potential of the sample can be written as [25,26]

$$\widetilde V({K_x},{K_y},{K_z})\textrm{ = }\frac{{i{k_z}}}{\pi }{\widetilde U_s}({k_x},{k_y};z = 0)$$
where $\widetilde V({K_x},{K_y},{K_z})$ is the 3D Fourier transform of the scattering potential $V(x,y,z)$ of the object under observation and $({K_x},{K_y},{K_z})$ is the vector of the scattered light relative to the incident vector $({k_{x0}},{k_{y0}},{k_{z0}})$; that is, ${K_x}\textrm{ = }{k_x} - {k_{x0}}$, ${K_y}\textrm{ = }{k_y} - {k_{y0}}$ and ${K_z}\textrm{ = }{k_z} - {k_{z0}}$. Figure 2(a) shows the scattering vectors on the 2D Ewald sphere relative to incident beams with different vectors ${{\textbf k}_1}$, ${{\textbf k}_2}$, ${{\textbf k}_3}$, ${{\textbf k}_4}$, ${{\textbf k}_5}$, ${{\textbf k}_6}$ and ${{\textbf k}_7}$. Under the assumption of a weakly scattering sample, the ${\widetilde U_s}({k_x},{k_y};z = 0)$ of the different illuminating beams are distinct from each other on the Ewald sphere and are shown as broken arcs in Fig. 2(a). By shifting ${\widetilde U_s}({k_x},{k_y};z = 0)$ relative to the corresponding incident vectors $({k_{x0}},{k_{y0}},{k_{z0}})$, we obtain $\widetilde V({K_x},{K_y},{K_z})$, shown as colored cross arcs in Fig. 2(a). $V(x,y,z)$ can then be reconstructed by an inverse Fourier transform of $\widetilde V({K_x},{K_y},{K_z})$. Since the ${K_x}{K_z}$ space is only sparsely occupied by the obtained $\widetilde V({K_x},{K_y},{K_z})$ in Fig. 2, the resolving capability of the proposed method should be lower than that of 3D scanning ptychography, where $\widetilde V({K_x},{K_y},{K_z})$ can fill the whole ${K_x}{K_z}$ space; the main advantage of the proposed method lies in its capability of imaging 3D dynamic samples.


Fig. 2. (a) the vectors of scattered light in 2D Ewald sphere, (b) $\widetilde V({{K_x},{K_y},{K_z}} )$ with three paraxial illuminating beams, (c) $\widetilde V({{K_x},{K_y},{K_z}} )$ with non-paraxial illuminating beams, (d) $\widetilde V({{K_x},{K_y},{K_z}} )$ with several non-paraxial illuminating beams.


Figures 2(b)–2(d) present enlarged views of $\widetilde V({K_x},{K_y},{K_z})$ where three, five and seven illuminating beams are used for imaging, respectively. We find that the obtained $\widetilde V({K_x},{K_y},{K_z})$ does not completely occupy the whole ${K_x}{K_z}$ space; instead, it partially fills only the two sectors highlighted in light purple. Using more non-paraxial laser beams occupies more area within the purple sectors, and accordingly the reconstruction quality is improved by using highly tilted laser beams for illumination. The maximum height $\Delta k_z^{\max }$ of the purple sectors is ${{[\sin ({\alpha _n} + \theta ) - \sin ({\alpha _n} - \theta )]} / \lambda } = {{2\sin \theta \cos {\alpha _n}} / \lambda }$, where ${\alpha _n}$ is the angle of the outermost illuminating beam with respect to the x-axis, and $\theta$ is the maximum angle of the scattered light with respect to the illuminating beam. The theoretically highest axial resolution $\Delta z$ reachable with the obtained $V(x,y,z)$ is then $\Delta z = {\lambda / {(2\sin \theta \cos {\alpha _n})}}$. Since ${\alpha _n}$ of a highly tilted laser beam is much smaller than that of a paraxial laser beam, the axial resolution $\Delta z$ can be remarkably improved by using non-paraxial beams for illumination. On the other hand, the maximum width $\Delta {k_x}$ of $\widetilde V({K_x},{K_y},{K_z})$ in Fig. 2 is determined by the angular difference between ${{\textbf k}_1}$ and ${{\textbf k}_2}$ (or ${{\textbf k}_3}$) as $\Delta {k_x} = {{\sin {\alpha _2}} / \lambda }$, resulting in a transverse resolution $\Delta x = {\lambda / {\sin {\alpha _2}}} = {{D\lambda } / L}$, where D is the distance of the sample from the detector, and L is the diameter of each sub-diffraction pattern. Thus, while the use of non-paraxial beams for illumination obviously improves the axial resolution, it does not improve the transverse resolution.
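These two resolution formulas are easy to evaluate for representative numbers; the angles and geometry below are illustrative assumptions of ours, not values reported for the paper's experiment:

```python
import numpy as np

lam = 632.8e-9                  # wavelength [m]
alpha_n = np.deg2rad(70.0)      # outermost beam angle w.r.t. the x-axis (assumed)
theta = np.deg2rad(5.0)         # maximum scattering angle (assumed)

# axial resolution: dz = lam / (2*sin(theta)*cos(alpha_n))
dz = lam / (2 * np.sin(theta) * np.cos(alpha_n))

# transverse resolution: dx = D*lam/L for a sub-pattern of diameter L
# recorded at distance D (assumed geometry)
D, L = 25e-3, 2e-3
dx = D * lam / L

print(dz, dx)   # both in metres
```

For these assumed values the axial resolution is on the order of ten micrometers, an order of magnitude coarser than the transverse one, consistent with the sparse filling of the ${K_x}{K_z}$ space discussed above.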

Since $\widetilde V({K_x},{K_y},{K_z})$ is only available on the colored cross arcs in Fig. 2(a), we can write it in a discrete form as

$$\widetilde V({K_x},{K_y},{K_z})\textrm{ = }\sum\nolimits_{n = 1}^N {V({x_n},{y_n},{z_n}){e^{ - i({K_x}{x_n} + {K_y}{y_n} + {K_z}{z_n})}}}. $$

According to the principles of linear equation sets, if the number of available samples of $\widetilde V({K_x},{K_y},{K_z})$ is M, which is determined by the total length of all the cross arcs in Fig. 2(a), the number of values of $V(x,y,z)$ that can be computed with Eq. (13) cannot exceed M. Accordingly, the largest sample volume that can be imaged with the proposed method is ${V_{\max }} = M\Delta {x^2}\Delta z$, where $\Delta z = {\lambda / {(2\sin \theta \cos {\alpha _N})}}$ and $\Delta x = {\lambda / {\sin {\alpha _2}}}$. In other words, for the proposed method to work, the interval between neighboring slices should be larger than ${\lambda / {(2\sin \theta \cos {\alpha _N})}}$, the finest resolvable detail is larger than ${\lambda / {\sin {\alpha _2}}}$, and the transverse size ${S_{xy}}$ and thickness ${D_z}$ of the sample should fulfill the requirement $({S_{xy}}{D_z})/(\Delta {x^2}\Delta z) < M$.

It is worth pointing out that the spatial resolution and sample size discussed above are obtained by assuming that the phase $\varphi (x,y)$ of the diffraction intensity $I(x,y)$ has been faithfully reconstructed. In practice, the accuracy of the reconstructed $\varphi (x,y)$ can be seriously influenced by various experimental factors, and accordingly the attained resolution and permitted sample size are lower than the corresponding theoretical values.

3. Numerical simulation

A numerical simulation is carried out to verify the feasibility of the proposed method. Two discrete slices at a spacing of 500 µm are chosen as the sample, and their transmission modulus and phase distributions are shown in Fig. 3. An array of 7×7 laser beams with a wavelength of 632.8 nm is used to illuminate the sample. An angular separation of 4° is maintained between the propagation directions of adjacent beams, as shown in Fig. 1(b). The laser beams cross each other and pass through a pinhole of diameter 1.8 mm placed at (0, 0, z); the pinhole limits the size of the illuminated area on the sample surface. The first slice is 4 mm from the pinhole, and a CCD is placed 25 mm from the second slice. Adjacent illuminations on the first slice have an overlap of about 80.32%.

Figure 3 also shows the modulus and phase of selected laser beams $P{^{\prime}_{mn}}({x,y} )$ on the plane of the first sample slice and the respective diffraction patterns. Since the propagation angle of the outermost laser beams is about 16° with respect to the central beam (4,4), the outer diffraction patterns are slightly oval, which is characteristic of non-paraxial beams.

Fig. 3. (a) Amplitude and (b) phase of first slice, and (c) amplitude and (d) phase of second slice of the sample that are used in simulation. (e) Modulus and (f) phase of selected illumination beams on the plane of first sample slice, and (g) corresponding diffraction patterns formed on detector.

Sub-diffraction patterns are clipped out and moved to the center of the computational matrix to iteratively reconstruct the two sample slices using the method proposed above. The reconstructed modulus and phase of the two slices are shown in Fig. 4, and Fig. 4(e) shows the residual reconstruction error, defined by Eq. (14), as a function of the number of iterations. The error falls below 0.88% after 1000 iterations. With a computational matrix of 1650×1650, completing 1000 iterations took 6704.51 seconds.

$$E\textrm{rror} = \frac{{{\sum _{mn}}[{|{{S_1}({x_m},{y_n}) - {S_1}^{\prime}({x_m},{y_n})} |+ |{{S_2}({x_m},{y_n}) - {S_2}^{\prime}({x_m},{y_n})} |} ]}}{{{\sum _{mn}}[{|{{S_1}({x_m},{y_n})} |+ |{{S_2}({x_m},{y_n})} |} ]}}. $$
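Eq. (14) is straightforward to implement; the sketch below assumes the true and reconstructed slices are stored as complex NumPy arrays (the array names are illustrative):

```python
import numpy as np

def reconstruction_error(S1, S1_rec, S2, S2_rec):
    """Residual error of Eq. (14): total modulus of the slice-wise
    differences, normalized by the total modulus of the true slices."""
    num = np.abs(S1 - S1_rec).sum() + np.abs(S2 - S2_rec).sum()
    den = np.abs(S1).sum() + np.abs(S2).sum()
    return num / den
```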

To illustrate the resolving capability and noise immunity of the proposed method, another set of simulations was carried out by adding the major noise sources, namely dark noise, shot noise and the point-spread effect of finite pixels, to the recorded diffraction patterns. For ease of analysis, two resolution targets with the transmissions shown in Figs. 5(a) and 5(b) are used as samples, keeping the other parameters the same as in the previous simulation. Since the maximum intensity recorded with the CCD shutter closed in our experiments is 3.0 counts, a random matrix with values in the range [0, 3] is used to simulate the dark noise. Poisson noise is added to the diffraction intensities as shot noise. Since the pixel size of the detector used in the experiment is $9 \times 9\textrm{ }\mu {m^2}$, the point-spread effect of the detector is simulated by convolving the diffraction intensity with a $9 \times 9\textrm{ }\mu {m^2}$ square and then sampling at an interval of 9 µm along the x and y axes. Figure 5(c) shows some of the generated diffraction patterns, and Figs. 5(d) and 5(e) show the two reconstructed images.
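The three noise sources can be sketched as follows. The ideal intensity is assumed to lie on a fine simulation grid, with `bin_px` fine-grid samples per detector pixel (the value 9 is our assumption); the block average followed by the pixel-pitch stride then mimics the box convolution and sampling described above. The 3.0-count dark level follows the text.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_detector_noise(I, dark_max=3.0, bin_px=9):
    """Apply dark noise, shot noise and pixel integration to an ideal
    diffraction intensity I (2D float array on a fine grid)."""
    # Dark noise: uniform random counts in [0, dark_max].
    noisy = I + rng.uniform(0.0, dark_max, size=I.shape)
    # Shot noise: Poisson statistics on the counts.
    noisy = rng.poisson(noisy).astype(float)
    # Finite pixel size: average over bin_px x bin_px blocks, i.e. a
    # box convolution followed by sampling at the pixel pitch.
    h = I.shape[0] // bin_px * bin_px
    w = I.shape[1] // bin_px * bin_px
    blocks = noisy[:h, :w].reshape(h // bin_px, bin_px,
                                   w // bin_px, bin_px)
    return blocks.mean(axis=(1, 3))
```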

Fig. 4. (a) and (c) show the reconstructed modulus, and (b) and (d) the respective phase of each sample slice. (e) Reconstruction error versus number of iterations.

Fig. 5. (a), (b) Phase of the two resolution targets, (c) corresponding diffraction patterns formed on the detector, (d), (e) reconstructed phase of the two resolution targets, (f) reconstruction error versus number of iterations.

To clearly show the influence of the added noise on the reconstructed images, residual errors as a function of the number of iterations are computed with Eq. (14) and plotted in Fig. 5(f) as curves of different colors. The red curve shows the residual error with no added noise, which approaches 5‰ after 1000 iterations. The blue dotted curve and cyan dashed curve show the residual reconstruction errors with dark noise and shot noise, respectively; they approach about 8‰ and 5‰ after 1000 iterations. With the effect of finite pixels taken into consideration, the residual reconstruction error, shown as the black curve, approaches about 10‰ after 1000 iterations. Thus the finite pixel size of the detector degrades the reconstruction quality more than dark noise or shot noise do. The reason is that PIE is a gradient-search algorithm whose reconstruction is a maximum-likelihood estimate of the real sample, so the effect of added random noise is averaged out over the iterative computation.

4. Experiments

The optical setup adopted to demonstrate the feasibility of the proposed method is shown in Fig. 6. Light from a He-Ne laser is split into 24 beams by a fiber beam splitter to form a 4×6 laser beam array on a hemispherical shell of radius 150 mm. The illumination beams cross each other at a pinhole of radius 1.25 mm placed at the center of the hemisphere, which limits the size of the beam on the sample. Adjacent sub-beams make an angle of about 4.1°. A CCD (AVT Pike F1100B) with 4008×2672 pixels of 9 µm size is placed 68.5 mm away from the pinhole. The optical setup in Fig. 6 is aligned by placing the sample at the center of curvature of the spherical surface. First, the sample is illuminated by the (0,0)th laser beam to record the central diffraction pattern at the center of the detector, and then by the (0,1)th laser beam to record another sub-diffraction pattern; the incident angle of the second beam is tuned until the two diffraction patterns are fully distinct on the detector. This procedure is repeated for each laser beam. The size of each sub-diffraction pattern can also be adjusted by changing the diameter of the pinhole; larger sub-diffraction patterns are obtained by increasing its diameter.
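As a rough consistency check on this geometry, the sketch below estimates the center-to-center spacing of neighboring sub-diffraction disks on the detector and compares it with the pinhole diameter. It takes the disk diameter to be the pinhole diameter and ignores diffraction spreading, so it is only an approximation.

```python
import math

def disk_separation(dtheta_deg, D, disk_diam):
    """Gap between neighbouring sub-diffraction disks on the detector.

    dtheta_deg: angle between adjacent beams (degrees);
    D: pinhole-to-detector distance; disk_diam: approximate disk
    diameter. A positive gap means the disks are recorded distinctly."""
    spacing = D * math.tan(math.radians(dtheta_deg))
    return spacing - disk_diam

# Values from the text: 4.1 deg between beams, detector 68.5 mm from
# the pinhole, pinhole diameter 2.5 mm (radius 1.25 mm).
gap_mm = disk_separation(4.1, 68.5, 2.5)
```

With the quoted parameters the disks are separated by a gap of roughly 2.4 mm, consistent with the clearly isolated patterns reported below.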

Fig. 6. (a) Setup of 3D single-shot ptychography with highly tilted illuminations; (b), (c) and (d) microscopic images of three samples obtained with a 10× objective. S1 is a sample of Pachira macrocarpa stem, S2 of pumpkin stem, and S0 a phase resolution target.

A fixed biological slide of a Pachira macrocarpa stem is used as a scanning sample to compute the propagation angles $\alpha$ and $\beta$ of each laser beam and the corresponding un-tilted complex amplitude $P^{\prime}{^1}_{mn}(x,y)$ using the ePIE approach outlined in Section 2.2. The slide is scanned to many positions to record many diffraction pattern arrays, which are then separated and shifted to the center of the computational matrix to form a series of diffraction patterns corresponding to each illumination beam.

Figure 7(a) shows one of the recorded diffraction pattern arrays. The illumination $P^{\prime}{^1}_{00}(x,y)$ of the central laser beam and the transmission function $S(x,y)$ of the sample are computed first, followed by the illuminations $P^{\prime}{^1}_{mn}(x,y)$ of the other tilted beams together with their propagation angles ${\alpha _{mn}}$ and ${\beta _{mn}}$. The computed phase and modulus of the illumination beam array are shown in Fig. 7(b), where brightness indicates the phase and color the modulus. The images are also labelled with the computed angles.

Fig. 7. (a) Diffraction pattern array recorded by CCD and (b) retrieved complex amplitude distribution of illumination beam array where brightness indicates the phase and the color, intensity. Angles are labeled on the image.

The 3D imaging capability of the proposed method is demonstrated using two biological samples (stem sections of Pachira macrocarpa and pumpkin) placed with a spacing of 2.9 mm in the optical setup shown in Fig. 6, with the first sample 4.6 mm away from the pinhole. Figure 8(a) shows the recorded diffraction pattern array, and Figs. 8(b)–8(e) show the modulus and phase of the two samples, reconstructed iteratively using the computed illumination beams and corresponding propagation angles shown in Fig. 7(b). The clearly distinguishable cells in the reconstruction show that the highly tilted illumination beams reduce the crosstalk between neighboring diffraction disks, generating enough clearly isolated sub-diffraction patterns, with many more high-frequency components, to allow a faithful reconstruction.

Fig. 8. (a) Recorded diffraction pattern array, (b) reconstructed amplitude and (c) phase distribution of the first sample, (d) reconstructed amplitude and (e) phase distribution of the second sample.

Images of two phase resolution targets are reconstructed with the same optical parameters as in the above experiments to examine the spatial resolution of the proposed method. The experimental results are shown in Fig. 9: the image of the resolution target closer to the CCD has a resolution of 11.0 µm, and the image of the other target, slightly farther from the CCD, has a resolution of 12.4 µm, while their theoretical resolutions calculated with ${{\lambda D} / L}$ are 8.58 µm and 8.99 µm, respectively, where D is the distance from the sample to the detector and L is the diameter of the sub-diffraction disks.
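The quoted theoretical values follow directly from λD/L. In the minimal sketch below, the disk diameter L is an illustrative number back-solved to roughly reproduce the quoted 8.58 µm, since L is not stated explicitly in this section.

```python
def theoretical_resolution(wavelength, D, L):
    """Diffraction-limited transverse resolution lambda * D / L.
    All three lengths must be given in the same unit."""
    return wavelength * D / L

# Illustrative values, all in mm: 632.8 nm light, first target about
# 63.9 mm from the detector, assumed disk diameter of about 4.7 mm.
res_mm = theoretical_resolution(632.8e-6, 63.9, 4.7)  # ~8.6e-3 mm
```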

Fig. 9. (a) Recorded diffraction pattern array obtained with the phase resolution targets. Reconstructed phase distribution of (b) the first target and (c) the second target; (d) horizontal intensity distribution along lines drawn on the first resolution target, and (e) along the vertical direction on the second resolution target.

This experimental result demonstrates the capability of the proposed single-shot method to obtain high quality 3D images using highly tilted illumination beams. The reason why the transverse resolution obtained in this experiment is noticeably lower than the theoretically computed value ${{\lambda D} / L}$ can be explained using the observations made in Fig. 2. The theoretical transverse resolution $\Delta x$ and axial resolution $\Delta z$ are the highest possible values, reached only when $\widetilde V({K_x},{K_y},{K_z})$ fully occupies the K-space indicated by the light green rectangle in Figs. 2(b)–2(d). However, the obtained $\widetilde V({K_x},{K_y},{K_z})$ only partially covers the ${K_x}{K_z}$ space, in the two sectors indicated in light purple. It is therefore reasonable that the resolution reached in the experiment is noticeably lower than ${{\lambda D} / L}$.

The experimental results presented in Fig. 8 and Fig. 9 indicate a clear improvement over methods based on paraxial approaches [21]. While methods based on the paraxial approximation reached a resolution of about 100 µm (the diameter of a human hair) in the reported literature, the spatial resolution obtained with the proposed method is about 10 µm.

5. Conclusion

A single-shot 3D ptychographic imaging method utilizing highly tilted illumination beams is proposed, and its feasibility is demonstrated through simulations and experiments. By illuminating a thick sample with a cluster of laser beams whose propagation directions are separated by sufficiently large angles, enough clearly isolated diffraction patterns can be recorded in a single detector exposure, and a high-resolution 3D image can be reconstructed by using a modified coherent transfer function to numerically propagate non-paraxial light in the iterative computation. It is demonstrated that highly tilted illuminations reduce crosstalk between neighboring diffraction disks and capture more of the high-frequency components necessary for a faithful reconstruction. Since the experimentally obtained resolution values are quite close to the calculated ones, the proposed 3D imaging method is well suited to studying 3D distributions of laser plasma, internal stresses, temperature fields, flames, and other weakly scattering specimens whose dynamics cannot be captured by most common imaging methods.

Funding

National Natural Science Foundation of China (11875308, 61827816, 61905261); Scientific and Innovative Action Plan of Shanghai (19142202600); Scientific Instrument Developing Project of the Chinese Academy of Sciences (YJKYYQ20180024); Strategic Priority Research Program of Chinese Academy of Sciences (XDA25020306).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic X-ray computed tomography at the nanoscale,” Nature 467(7314), 436–439 (2010). [CrossRef]  

2. N. C. Pegard, H. Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016). [CrossRef]  

3. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

4. R. Horisaki, K. Fujii, and J. Tanida, “Diffusion-based single-shot diffraction tomography,” Opt. Lett. 44(8), 1964–1967 (2019). [CrossRef]  

5. W. Zhang, B. Yu, D. Y. Lin, H. H. Yu, S. W. Li, and J. L. Qu, “Optimized approach for optical sectioning enhancement in multifocal structured illumination microscopy,” Opt. Express 28(8), 10919–10927 (2020). [CrossRef]  

6. G. W. Li, W. Q. Yang, Y. M. Bian, H. C. Wang, and G. H. Situ, “Single-shot three-dimensional imaging with a scattering layer [Invited],” Appl. Opt. 60(10), B32–B37 (2021). [CrossRef]  

7. Z. Yousaf, P. J. Withers, and P. Potluri, “Compaction, nesting and image based permeability analysis of multi-layer dry preforms by computed tomography (CT),” Compos. Struct. 263, 113676 (2021). [CrossRef]  

8. M. Cellina, D. Gibelli, C. Floridi, A. Cappella, G. Oliva, C. Dolci, S. Giulia, and C. Sforza, “Changes of intrathoracic trachea with respiration in children: A metrical assessment based on 3D CT models,” Clin. Imag. 74, 10–14 (2021). [CrossRef]  

9. M. Engel, L. Kasper, B. Wilm, B. Dietrich, L. Vionnet, F. Hennel, J. Reber, and K. P. Pruessmann, “T-Hex: Tilted hexagonal grids for rapid 3D imaging,” Magn. Reson. Med. 85(5), 2507–2523 (2021). [CrossRef]  

10. J. Kweon, S. J. Kang, Y. H. Kim, J. G. Lee, S. Han, H. Ha, D. H. Yang, J. W. Kang, T. H. Lim, O. Kwon, J. M. Ahn, P. H. Lee, D. W. Park, S. W. Lee, C. W. Lee, S. W. Park, and S. J. Park, “Impact of coronary lumen reconstruction on the estimation of endothelial shear stress: in vivo comparison of three-dimensional quantitative coronary angiography and three-dimensional fusion combining optical coherent tomography,” Eur. Heart J. – Card. Img. 19(10), 1134–1141 (2018). [CrossRef]  

11. H. Ozturk, H. Yan, Y. He, M. Y. Ge, Z. H. Dong, M. F. Lin, E. Nazaretski, I. K. Robinson, Y. S. Chu, and X. J. Huang, “Multi-slice ptychography with large numerical aperture multilayer Laue lenses,” Optica 5(5), 601–607 (2018). [CrossRef]  

12. Y. Sando, K. Satoh, D. Barada, and T. Yatagai, “Real-time interactive holographic 3D display with a 360° horizontal viewing zone,” Appl. Opt. 58(34), G1–G5 (2019). [CrossRef]  

13. Y. F. Wen and A. Asundi, “3D profile measurement for stepped microstructures using region-based transport of intensity equation,” Meas. Sci. Technol. 30(2), 025202 (2019). [CrossRef]  

14. E. H. R. Tsai, I. Usov, A. Diaz, A. Menzel, and M. Guizar-Sicairos, “X-ray ptychography with extended depth of field,” Opt. Express 24(25), 29089–29108 (2016). [CrossRef]  

15. M. Kahnt, J. Becher, D. Bruckner, Y. Fam, T. Sheppard, T. Weissenberger, F. Wittwer, J. D. Grunwaldt, W. Schwieger, and C. G. Schroer, “Coupled ptychography and tomography algorithm improves reconstruction of experimental data,” Optica 6(10), 1282–1289 (2019). [CrossRef]  

16. P. Li and A. Maiden, “Multi-slice ptychographic tomography,” Sci. Rep. 8(1), 2049 (2018). [CrossRef]  

17. X. G. Yang, M. Kahnt, D. Bruckner, A. Schropp, Y. Fam, J. Becher, J. D. Grunwaldt, T. L. Sheppard, and C. G. Schroer, “Tomographic reconstruction with a generative adversarial network,” J. Synchrotron Radiat. 27(2), 486–493 (2020). [CrossRef]  

18. T. M. Godden, R. Suman, M. J. Humphry, J. M. Rodenburg, and A. M. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22(10), 12513–12523 (2014). [CrossRef]  

19. A. Maiden, M. Humphry, and J. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]  

20. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

21. D. Goldberger, J. Barolak, C. G. Durfee, and D. E. Adams, “Three-dimensional single-shot ptychography,” Opt. Express 28(13), 18887–18898 (2020). [CrossRef]  

22. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]  

23. C. C. Chang, X. C. Pan, H. Tao, C. Liu, S. P. Veetil, and J. Q. Zhu, “Single-shot ptychography with highly tilted illuminations,” Opt. Express 28(19), 28441–28451 (2020). [CrossRef]  

24. R. Horstmeyer, J. Chung, X. Z. Ou, G. A. Zheng, and C. H. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

25. Y. J. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17(1), 266–277 (2009). [CrossRef]  

26. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Optics Communications 1(4), 153–156 (1969). [CrossRef]  




