
Optical ptychography for biomedical imaging: recent progress and future directions [Invited]

Open Access

Abstract

Ptychography is an enabling microscopy technique for both fundamental and applied sciences. In the past decade, it has become an indispensable imaging tool in most X-ray synchrotrons and national laboratories worldwide. However, ptychography’s limited resolution and throughput in the visible light regime have prevented its wide adoption in biomedical research. Recent developments in this technique have resolved these issues and offer turnkey solutions for high-throughput optical imaging with minimal hardware modifications. The demonstrated imaging throughput is now greater than that of a high-end whole slide scanner. In this review, we discuss the basic principle of ptychography and summarize the main milestones of its development. Different ptychographic implementations are categorized into four groups based on their lensless/lens-based configurations and coded-illumination/coded-detection operations. We also highlight the related biomedical applications, including digital pathology, drug screening, urinalysis, blood analysis, cytometric analysis, rare cell screening, cell culture monitoring, cell and tissue imaging in 2D and 3D, and polarimetric analysis, among others. Ptychography for high-throughput optical imaging, currently in its early stages, will continue to improve in performance and expand in its applications. We conclude this review article by pointing out several directions for its future development.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

9 February 2023: A minor correction was made to a reference.

1. Introduction

Conventional light detectors can only measure intensity variations of the incoming light wave. Phase information that characterizes the optical delay is lost during the data acquisition process. The loss of phase information is termed ‘the phase problem’. It was first noted in the field of crystallography: the real-space crystal structure can be determined if the phase of the diffraction pattern can be recovered in reciprocal space [1]. Regaining the phase information of a complex field typically involves interferometric measurements with a known reference wave, as in holography.

In 1969, Hoppe proposed the original concept of ptychography in a three-part paper series, aiming to solve the phase problem encountered in electron crystallography [2]. The name ‘ptychography’ is pronounced tie-KOH-gra-fee, where the ‘p’ is silent. The word derives from the Greek ‘ptycho’, meaning ‘fold’; the corresponding German word for folding, ‘Faltung’, also denotes convolution. By translating a confined coherent probe beam on a crystalline object, Hoppe aspired to extract the phase of the Bragg peaks in reciprocal space. If successful, the crystal structure in real space could then be determined through Fourier synthesis. In the original concept, the object-probe multiplication in real space is modeled by a convolution between the Bragg peaks and the probe’s Fourier spectrum in reciprocal space. Convolution is thus a key aspect of this technique, justifying its name.

In 2004, Faulkner and Rodenburg adopted the iterative phase retrieval framework [3] for ptychographic reconstruction, thereby bringing the technique to its modern form [4]. Although the original concept was developed to solve the phase problem in crystallography, the modern form of this technique is equally applicable to non-crystalline structures. The experimental procedure is similar to Hoppe’s original idea: The object is laterally translated across a confined probe beam in real space, and the corresponding diffraction patterns are acquired in reciprocal space without using any lens. However, different from the original concept, the reconstruction process iteratively imposes two different sets of constraints. In real space, the confined probe beam limits the physical extent of the object for each measurement, serving as the compact support constraint. In reciprocal space, the diffraction measurements are enforced for the estimated solution, serving as the Fourier magnitude constraints. The iterative reconstruction process effectively looks for an object estimate that satisfies both constraints, in a way similar to the Gerchberg-Saxton algorithm where the phase is recovered from intensity measurements at different planes [5].

With the help of the iterative phase retrieval framework, ptychography has since proliferated and evolved into an enabling microscopy technique with imaging applications in different fields. Figure 1 shows the number of related publications since its conception in 1969. We also highlight several milestones of its development in this figure.


Fig. 1. Development of the ptychography technique. The number of ptychography-related publications has grown exponentially since the adoption of the iterative phase retrieval framework for reconstruction. Several milestones of its development are highlighted.


The key advantages of the technique can be summarized as follows: First, it does not need a stable reference beam as required in holography. A low-coherence light source can be used for sample illumination. This advantage is an important consideration for coherent X-ray imaging, where the coherence of the light source is often poor compared to that of a laser. Second, ptychography lifts the isolated-object requirement of conventional coherent diffraction imaging (CDI). The spatially confined probe beam naturally imposes the compact support constraint for the phase retrieval process, and ptychography can routinely image contiguously connected samples over an extended area via object translation. Third, the lensless operation of this technique makes it appealing for high-resolution imaging in the X-ray and extreme ultraviolet (EUV) regimes, where lenses are costly and challenging to manufacture [6–8]. The recovered phase further allows high-contrast visualization of the object and provides quantitative morphological measurements. Fourth, the richness of the ptychographic dataset contains information on both the object and different system components in the setup. For example, the captured dataset can be used to jointly recover the object and the probe beam [9–11]. Similarly, the dataset can characterize and computationally compensate for the pupil aberration of a lens [12,13]. Multi-slice modeling of the object allows recovery of 3D volumetric information [14–17]. State mixture modeling [18] allows removal of the effect of partial coherence in a light source [18,19] and enables multiplexed imaging at different spectral channels [20–22]. The diffraction data beyond the detector size limit can also be recovered for super-resolution imaging [23]. Lastly, ptychography has been recognized as a dose-efficient imaging technique for both X-ray [24] and electron microscopy [25,26].

Ptychography’s unique advantages have rapidly attracted attention from different research communities. In the past decade, it has become an indispensable imaging tool at most X-ray synchrotrons and national laboratories worldwide [27]. For electron microscopy, recent developments have also pushed the imaging resolution to the record-breaking deep sub-angstrom limit [28]. In the visible light regime, however, ptychography must compete with well-optimized optical imaging systems. Conventional lensless ptychography has limited resolution and throughput compared to those of regular light microscopy, which has prevented its widespread adoption in biomedical research. Recent developments of Fourier ptychography [29] and coded ptychography [30,31] address these issues in the visible light regime. They overcome the intrinsic tradeoff between imaging resolution and field of view, allowing researchers to have the best of both worlds. The imaging throughput can now be greater than that of high-end whole slide scanners [30,32,33], offering unique solutions for various biomedical applications.

In this review article, we will orient our discussions around optical ptychography and its biomedical imaging applications. In Section 2, we will first review and discuss the imaging models and operations of four representative ptychographic implementations. In Section 3, we will survey and discuss different ptychographic implementations based on their lensless / lens-based configurations and coded-illumination / coded-detection operations. In Section 4, we will discuss different software implementations, including the conventional phase retrieval algorithms, system corrections and extensions, and neural network-based implementations. In Section 5, we will highlight the related biomedical imaging applications, including digital pathology, drug screening, urinalysis, blood analysis, cytometric analysis, rare cell screening, cell culture monitoring, cell and tissue imaging in 2D and 3D, and polarimetric analysis, among others. In Section 6, we will conclude this review article by pointing out several promising directions for future development.

We note that the field of ptychography has rapidly progressed in recent years. This review article only covers a small fraction of its developments. We encourage interested readers to visit the following relevant resources for more information: A recent comprehensive book chapter by Rodenburg and Maiden [34], an excellent introduction article by Guizar-Sicairos and Thibault [35], recent review papers on X-ray ptychography [27], Fourier ptychography [36–38], and EUV ptychography [39], surveys on ptychography-related phase retrieval algorithms [40–43], and an open-source MATLAB application [44].

2. Concepts and operations of representative ptychographic schemes

In this section, we will discuss the imaging models and operations of four representative ptychographic schemes, namely conventional ptychography [4], lensless coded ptychography [30,31], Fourier ptychography [29], and lens-based ptychographic structured modulation [45]. We choose these four implementations based on their lensless / lens-based configurations and coded-illumination / coded-detection operations. In the following, we will use the coordinates $({x,y} )$ to denote the real space and $({{k_x},{k_y}} )$ to denote the reciprocal space (i.e., the Fourier space) of an imaging system. A Fourier transform can convert the complex-valued light field from the real space $({x,y} )$ to the reciprocal (Fourier) space $({{k_x},{k_y}} )$.

2.1 Conventional ptychography: lensless configuration with coded illumination

Figure 2(a) shows the imaging model and operation of conventional ptychography [4]. In this scheme, a spatially confined aperture limits the extent of the illumination probe beam $P({x,y} )$, and the image sensor is placed at the far field for data acquisition. In its operation, the complex-valued object $O({x,y} )$ is mechanically translated to different positions (${x_i}$, ${y_i}$) in real space. The resulting product between the object and the probe beam propagates to the far field via a Fourier transform. As such, the diffraction measurement can be obtained in reciprocal space with the coordinates $({{k_x},{k_y}} )$. The resulting dataset ${I_i}({{k_x},{k_y}} )\,({i = 1,2,3 \ldots } )$, termed the ptychogram, is a collection of diffraction measurements for all translated positions of the object.
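This forward model (object translation, probe multiplication, far-field propagation, intensity detection) can be sketched in a few lines of NumPy. The array sizes, circular probe shape, and scan positions below are illustrative assumptions rather than values from any specific experiment.

```python
import numpy as np

def ptychogram(obj, probe, positions):
    """Simulate the coded-illumination forward model: at each scan position,
    the confined probe multiplies the object, and the far-field intensity
    (squared modulus of the Fourier transform) is recorded."""
    n = probe.shape[0]
    frames = []
    for (r, c) in positions:
        exit_wave = obj[r:r + n, c:c + n] * probe            # object-probe product
        frames.append(np.abs(np.fft.fftshift(np.fft.fft2(exit_wave))) ** 2)
    return frames

# Illustrative example: a 64x64 phase object scanned with a 32x32 circular probe
rng = np.random.default_rng(0)
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
yy, xx = np.mgrid[-16:16, -16:16]
probe = (xx**2 + yy**2 < 12**2).astype(complex)
frames = ptychogram(obj, probe, [(0, 0), (8, 8), (16, 16)])  # overlapping positions
```

Note that adjacent scan positions overlap; as discussed below, this redundancy is what makes the phase retrieval well posed.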


Fig. 2. Imaging models and operations of four ptychographic schemes chosen based on their lensless / lens-based configuration and coded-illumination / coded-detection operations. (a) Conventional ptychography: lensless configuration with coded illumination. $O({x,y} )$ denotes the complex object, (${x_i}$, ${y_i}$) denotes the translational shift of the object in real space, $Probe({x,y} )$ denotes the spatially confined probe beam, ‘FT’ denotes Fourier transform, and ‘·’ denotes point-wise multiplication. (b) Coded ptychography: lensless configuration with coded detection. $W({x^{\prime},y^{\prime}} )$ denotes the object exit wavefront at the coded surface plane, $CS({x^{\prime},y^{\prime}} )$ denotes the transmission profile of the coded surface, and ‘Propd’ denotes free-space propagation for a distance d. (c) Fourier ptychography: lens-based configuration with coded illumination. $({{k_{xi}},{k_{yi}}} )$ denotes the incident wavevector of the ith LED element, ‘*’ denotes the convolution operation, ‘PSFobj’ denotes the point spread function of the objective lens. (d) Ptychographic structured modulation: lens-based configuration with coded detection. $D({x^{\prime},y^{\prime}} )$ denotes the transmission profile of the diffuser placed between the object and the objective lens.


The reconstruction process of conventional ptychography is shown in Fig. 2(a). It is performed by iteratively imposing two different sets of constraints: The first constraint is the support constraint for each measurement in real space. It is imposed by setting the signals outside the probe beam area to zero while keeping the signals inside unchanged [46]. The second constraint is the Fourier magnitude constraint in reciprocal space. It is implemented by replacing the modulus of the estimated pattern with the measurement while keeping the phase unchanged. The iterative process converges to an object estimate with both intensity and phase, as shown in the right panel of Fig. 2(a).
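As a concrete illustration, the two alternating constraints can be written as a minimal iterative solver with a known, fixed probe. This is a sketch of the principle with an ePIE-style object update, not a reproduction of any particular published algorithm; real solvers also refine the probe, handle noise, and use sub-pixel positions.

```python
import numpy as np

def reconstruct(frames, probe, positions, shape, n_iter=50):
    """Alternate between the real-space support constraint (multiplication by
    the known probe) and the Fourier magnitude constraint (replace the
    estimated modulus with the measurement, keep the phase), then apply an
    ePIE-style object update."""
    obj = np.ones(shape, dtype=complex)                        # initial object guess
    n = probe.shape[0]
    w = np.conj(probe) / (np.abs(probe).max() ** 2 + 1e-12)    # update weight
    for _ in range(n_iter):
        for frame, (r, c) in zip(frames, positions):
            patch = obj[r:r + n, c:c + n]
            psi = patch * probe                                # support constraint
            Psi = np.fft.fftshift(np.fft.fft2(psi))
            Psi = np.sqrt(frame) * np.exp(1j * np.angle(Psi))  # magnitude constraint
            psi_new = np.fft.ifft2(np.fft.ifftshift(Psi))
            obj[r:r + n, c:c + n] = patch + w * (psi_new - psi)  # ePIE update
    return obj
```

With sufficiently overlapped positions, the data misfit between the estimated and measured diffraction magnitudes drops rapidly over the iterations.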

One key consideration for conventional ptychography is the overlapped area of illumination during the object translation process (the overlapped circular region in the middle panel of Fig. 2(a)). The overlapped area allows the object to be visited multiple times in the acquisition process. The resulting additional information can resolve the ambiguities of the phase retrieval process. If no overlap is imposed, the recovery process will be carried out independently for each acquisition, leading to the usual ambiguities inherent to the phase problem [47].

In summary, conventional ptychography does not use any lens, and the operation is based on coded illumination of the spatially confined probe beam. There are many variants of this scheme, including the near-field implementation by placing the object closer to the detector [48], the tomographic implementation by rotating the object for 3D volumetric imaging [49–52], the reflective Bragg ptychography [53–58], and more. We will discuss them in Section 3.1.

2.2 Coded ptychography: lensless configuration with coded detection

Figure 2(b) shows the imaging model and operation of coded ptychography [30,31]. The system configuration is shown in the left panel of Fig. 2(b): The specimen is placed at the object plane $({x,y} )$, the coded surface is placed at the modulation plane $({x^{\prime},y^{\prime}} )$, and the detector is placed at the image plane $({x^{\prime\prime},y^{\prime\prime}} )$. In the image formation process, the light waves propagate for a distance ${d_1}$ from the object plane $({x,y} )$ to the modulation plane $({x^{\prime},y^{\prime}} )$, and a distance ${d_2}$ from the modulation plane $({x^{\prime},y^{\prime}} )$ to the image plane $({x^{\prime\prime},y^{\prime\prime}} )$. The coded surface placed at the modulation plane can redirect large-angle diffracted light waves into smaller angles that can be detected by the underlying pixel array. Consequently, previously inaccessible high-resolution object details can now be acquired with the sensor pixel array. The operation of this coded surface is similar to that of structured illumination microscopy [59], where the high-frequency object information is down-modulated into the low-frequency passband of the system for detection.

To prepare the coded surface, one can etch micron-sized phase scatterers on the image sensor’s coverglass and then print sub-wavelength absorbers on the etched surface [30]. The left panel of Fig. 2(b) shows an alternative approach where a drop of blood is directly smeared on the image sensor’s coverglass and fixed with alcohol [60,61]. The rich spatial features of the coded surface make it an effective high-resolution lens with a theoretically unlimited field of view (the coded layer can be made of any size). It can unlock an optical space with spatial extent $({x,y} )$ and spatial frequency content $({{k_x},{k_y}} )$ that is inaccessible using conventional lens-based optics [30].

In a typical implementation of coded ptychography, a fiber-coupled laser beam is used to illuminate the entire object over an extended area. By translating the object (or the integrated coded sensor) to different lateral positions, the system records a set of intensity images ${I_i}({x\mathrm{^{\prime\prime}},y\mathrm{^{\prime\prime}}} )$ ($i = 1,2, \ldots $) for reconstruction. In the forward imaging model of Fig. 2(b), we use ‘downsample’ to denote the down-sampling process of the pixel array. To achieve high-resolution reconstruction, the spatial and angular responses of individual pixels need to be considered in the imaging model [30]. The spatial response characterizes the pixel sensitivity at different regions of the sensing area (pixel sensitivity is often higher at the central region than at the edge). The angular response characterizes the pixel readout with respect to different incident angles. Following the iterative phase retrieval process, one can recover the complex object exit wavefront $W({x^{\prime},y^{\prime}} )$ at the coded surface plane. This recovered wavefront can then be digitally propagated back to the object plane to obtain the high-resolution object profile $O({x,y} )$.
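The forward model in Fig. 2(b) (free-space propagation, multiplication by the coded surface, a second propagation, and pixel downsampling) can be sketched as follows. Angular spectrum propagation stands in for ‘Prop_d’, simple pixel binning stands in for ‘downsample’, and the spatial and angular pixel responses discussed above are omitted for simplicity; all grid parameters are illustrative assumptions.

```python
import numpy as np

def propagate(field, wavelength, dx, distance):
    """'Prop_d' in the model above: angular spectrum free-space propagation.
    field is a complex square array; dx is the pixel pitch in meters."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)                    # spatial frequencies (cycles/m)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * distance), 0)  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def coded_forward(obj, coded_surface, wavelength, dx, d1, d2, bin_factor=2):
    """Object -> coded surface -> sensor, with pixel binning standing in for
    'downsample'. Idealized pixel response (no spatial/angular variation)."""
    w = propagate(obj, wavelength, dx, d1)               # exit wave W at coded plane
    sensor = propagate(w * coded_surface, wavelength, dx, d2)
    intensity = np.abs(sensor) ** 2
    m = intensity.shape[0] // bin_factor
    return intensity[:m * bin_factor, :m * bin_factor] \
        .reshape(m, bin_factor, m, bin_factor).sum(axis=(1, 3))
```

The same `propagate` routine, run with a negative distance, performs the digital back-propagation from the coded surface plane to the object plane mentioned above.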

In coded ptychography, the coded surface on the image sensor serves as an effective ptychographic probe beam (the coded surface multiplies the object wavefront in the imaging model of Fig. 2(b)). We assume the transmission profile remains unchanged for diffracted waves with different incident angles. The transmission matrix [62] of the coded layer can be approximated as a diagonal matrix. This assumption is valid when the coded surface is a thin layer on the sensor. For a thick coded surface, one may need to measure the full transmission matrix to characterize its modulation property.

The coded ptychography scheme has several unique advantages among different ptychographic implementations: First, the modulation effect of the coded surface enables high-resolution imaging without using any optical lens. The current setup can resolve the 308-nm linewidth on a resolution target. The best-demonstrated resolution corresponds to a numerical aperture (NA) of ∼0.8, among the highest in lensless imaging demonstrations. The resolution can be further improved by integrating angle-varied illumination for aperture synthesis [29].

Second, the illumination beam covers the entire object over a large field of view. The scanning step size is on the micron level between adjacent acquisitions. As a result, one can continuously acquire images at the full camera frame rate during the scanning process, enabling whole slide imaging of various bio-specimens at high speed. It has been shown that gigapixel high-resolution images with a 240 mm2 effective field of view can be acquired in 15 seconds [30]. The corresponding imaging throughput is comparable to or higher than that of the fastest whole slide scanner [63].

Third, the spatially confined (structured) probe beam in conventional ptychography may vary between experiments. For example, it is challenging to place different specimens at exactly the same position relative to the probe beam, and the probe profile itself may drift from one experiment to the next. In the phase retrieval process, it is therefore often required to jointly recover both the object and the probe beam. However, in some cases, the object and the probe beam can never be completely and unambiguously separated from one another, especially when both contain slow-varying phase features with many 2π wraps [30]. In coded ptychography, the coded layer serves as the effective probe beam for the imaging process. This effective probe beam is hardcoded into the imaging system and stays unchanged between different experiments. It can quantitatively recover the slow-varying phase profiles (with many 2π wraps) of different samples, including optical prisms and lenses [30], bacterial colonies [64], urine crystals [61], and unstained thyroid smears obtained from fine needle aspiration [60].

Fourth, the small distance between the object and the coded sensor allows the direct recovery of the object’s positional shift based on the raw diffraction measurements. As a result, coded ptychography allows open-loop optical acquisition without requiring any positional feedback from the mechanical stage. A coded ptychography platform can be built with low-cost stepper motors or even a Blu-ray drive [61]. In contrast, precise positional feedback is often an important consideration for conventional ptychography.

Fifth, rapid autofocusing is a challenge for conventional lens-based whole slide imaging systems. Common whole slide scanners often generate a focus map prior to the image acquisition process [63]. With coded ptychography, the recovered wavefront can be propagated back to any axial plane post-measurement. Focusing is no longer required in the image acquisition process. Post-acquisition autofocusing can be performed by maximizing the phase contrast of the recovered images [60].
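As a sketch of this post-acquisition autofocusing idea, one can propagate the recovered wavefront to a set of candidate axial planes and keep the plane that maximizes a phase-contrast score. The propagation routine, grid parameters, and the specific score used here (standard deviation of the recovered phase) are illustrative assumptions; the cited work may use a different contrast metric.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Angular spectrum free-space propagation (evanescent waves dropped)."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def autofocus(wavefront, wavelength, dx, z_candidates):
    """Digitally refocus by scanning candidate planes and maximizing a
    simple phase-contrast score (std of the recovered phase)."""
    scores = [np.std(np.angle(propagate(wavefront, wavelength, dx, z)))
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```

In practice, a coarse scan over z followed by a fine scan around the best candidate keeps the search inexpensive.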

Lastly, the recovered object wavefront in coded ptychography depends solely on how the complex wavefront exits the sample and not on how it enters. Therefore, the sample thickness becomes irrelevant in modeling the image formation process and this scheme can image objects with arbitrary thicknesses. In contrast, coded-illumination approaches (including conventional ptychography and Fourier ptychography) have requirements for object thickness [6,37]. Multi-slice or diffraction tomography approaches may be needed for modeling and imaging thick specimens [14,65].

In summary, coded ptychography does not use any lens, and the operation is based on coded detection with the structured surface on the image sensor. There are many variants of this scheme, including the rotational implementation by placing the object on a spinning disk [61], an on-chip optofluidic implementation by translating the object through a microfluidic channel [66], and a synthetic aperture implementation by translating the coded sensor at the far field [67], among others. We will discuss them in Section 3.2.

2.3 Fourier ptychography: lens-based configuration with coded illumination

Figure 2(c) shows the imaging model and operation of Fourier ptychographic microscopy (FPM) [29]. Unlike the two lensless schemes discussed above, FPM is a lens-based implementation built using a regular microscope platform. The left panel of Fig. 2(c) shows a typical FPM setup: A programmable LED array is used for angle-varied illumination, and a low-NA objective lens is used for image acquisition. In this setup, the specimen is placed at the object plane $({x,y} )$, the pupil aperture is located at the Fourier plane $({{k_x},{k_y}} )$, and the detector is placed at the image plane $({x,y} )$. In the image formation process, the microscope objective lens performs a Fourier transform to convert the light waves from the object plane $({x,y} )$ to the aperture plane $({{k_x},{k_y}} )$. The tube lens then performs a second Fourier transform to convert the light waves from the aperture plane $({{k_x},{k_y}} )$ to the image plane $({x,y} )$ [37].

In the operation of FPM, the programmable LED array sequentially illuminates the object from different incident angles and the FPM system records the corresponding low-resolution intensity images ${I_i}({x,y} )$ ($i = 1,2, \ldots $). If the object is a 2D thin section, changing the illumination wavevector $({k_{xi}},\; {k_{yi}})$ in real space effectively translates the object spectrum in reciprocal (Fourier) space:

$$FT\{ O(x,y)\cdot e^{i{k_{xi}}x}e^{i{k_{yi}}y} \} = \hat{O}({k_x} - {k_{xi}},{k_y} - {k_{yi}}), $$
where $FT$ denotes the Fourier transform operation and $\hat{O}({{k_x},{k_y}} )$ denotes the object spectrum. Therefore, the imaging model in Fig. 2(c) can be re-written as
$${I_i}(x,y) = {|{F{T^{ - 1}}\{ \hat{O}({k_x} - {k_{xi}},{k_y} - {k_{yi}})\cdot Pupil({k_x},{k_y}) \}} |^2}.$$

In this model, we can see that the pupil aperture of the microscope system serves as the effective ptychographic probe beam for the object spectrum. The translational shift of the object spectrum is determined by the ${i^{th}}$ LED element’s illumination wavevector (${k_{xi}},{k_{yi}}$). As such, each captured raw image corresponds to a circular aperture region centered at position (${k_{xi}},{k_{yi}}$) in the Fourier space, as shown in the middle panel of Fig. 2(c).
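The imaging model above can be sketched directly in NumPy: each illumination angle shifts the object spectrum, the pupil low-pass filters it, and the camera records intensity. For simplicity this sketch uses integer-pixel spectrum shifts and a single shared grid; a real FPM reconstruction recovers a high-resolution spectrum on a larger grid than each raw capture.

```python
import numpy as np

def fpm_forward(obj, pupil, k_shifts):
    """Simulate FPM low-resolution captures: shift the object spectrum by
    (k_xi, k_yi) (in frequency-grid pixels), apply the pupil as the confined
    'probe' in Fourier space, and record intensity at the image plane."""
    O_hat = np.fft.fftshift(np.fft.fft2(obj))       # object spectrum
    images = []
    for (sr, sc) in k_shifts:
        shifted = np.roll(O_hat, shift=(sr, sc), axis=(0, 1))
        images.append(np.abs(np.fft.ifft2(np.fft.ifftshift(shifted * pupil))) ** 2)
    return images
```

With a zero shift and an all-pass pupil, the model reduces to recording the object’s intensity, which makes the sketch easy to sanity-check.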

While conventional ptychography stitches measurements in real space to expand the imaging field of view, the reconstruction process of FPM stitches measurements in reciprocal (Fourier) space to expand the spatial-frequency passband. Therefore, the synthesized object spectrum $\hat{O}({{k_x},{k_y}} )$ in FPM can generate a bandwidth that far exceeds the original NA of the microscope platform, as shown in the right panel of Fig. 2(c). Once the object spectrum is recovered from the phase retrieval process, the high-resolution object image $O({x,y} )$ can be obtained by performing an inverse Fourier transform. The resolution of the recovered object image is no longer limited by the NA of the employed objective lens. Instead, it is determined by the synthesized bandwidth of the object spectrum: The larger the incident angle, the higher the resolution. Meanwhile, the recovered object image retains the low-NA objective’s large field of view, thus having a high resolution and a large field of view simultaneously.

Compared with conventional ptychography, FPM swaps real space and reciprocal space using a microscopy system [29,68]. With conventional ptychography, the confined probe beam serves as the finite support constraint in real space. With FPM, the confined pupil aperture serves as the finite support constraint in Fourier space. Both techniques share the same imaging model and require aperture overlap between adjacent acquisitions to resolve the phase ambiguity of the recovery process. Similarly, both techniques are based on coded-illumination operations and have requirements on the object thickness. However, the aperture synthesizing process in FPM provides a straightforward solution for resolution improvement. By utilizing a high-resolution objective lens with an NA of 0.95, FPM can synthesize an NA of 1.9, which is close to the maximum possible synthetic NA of 2 in free space [69]. The lens elements in FPM can also compensate for the chromatic dispersion of light at different wavelengths, thereby leading to less stringent requirements on the temporal coherence of the light source. Broadband LED sources can be used in FPM for sample illumination. With conventional ptychography and other lensless ptychographic implementations, signals at different wavelengths will be dispersed to different axial planes. Thus, laser sources are often preferred in these setups. Other differences between FPM and conventional ptychography include the dynamic range of the detector, real-space sampling versus Fourier-space sampling, and the initialization strategies [37].

Compared to coded ptychography, FPM replaces the coded layer with the pupil aperture at the Fourier space. The two free-space propagation processes in coded ptychography are also replaced by two Fourier transforms in FPM. Both techniques can perform large-field-of-view and high-resolution microscopy imaging. With coded ptychography, the phase retrieval process recovers the object wavefront at the coded surface plane. The wavefront is then propagated back to the object plane to obtain the final object recovery. With FPM, the process first recovers the object spectrum in the Fourier space, and the object image can then be obtained via an inverse Fourier transform. One key distinction between coded ptychography and FPM is the sample thickness requirement. As a coded-illumination technique, the imaging model of FPM depends on how the incident beam enters the sample. Thick objects must be modeled as multiple layers or a 3D scattering potential. In contrast, the imaging model of coded ptychography depends solely on how the complex wavefront exits the sample. Therefore, coded ptychography can image 3D objects with arbitrary thickness. Another distinction between coded ptychography and FPM is the angularly- and spatially-varying properties of the effective ptychographic probe beams. With coded ptychography, the thin coded surface on the image sensor serves as the effective ptychographic probe beam. The transmission profile of this coded surface is assumed to be angle invariant. With FPM, the pupil aperture serves as the effective ptychographic probe beam, and it varies for different spatial locations of the imaging field of view. For best results, the spatially varying property of the pupil needs to be modeled in FPM or recovered in a calibration experiment [12,13,70].

In summary, FPM is a lens-based implementation, and its operation is based on coded illumination with a programmable LED array. There are also many variants of this scheme, including Fourier ptychographic diffraction tomography [65,71], reflective implementations [72–76], single-shot implementations [77–79], annular illumination [78,80,81], and beam steering implementations [82–84]. We will discuss them in Section 3.3.

2.4 Ptychographic structured modulation: lens-based configuration with coded detection

Figure 2(d) shows the imaging model and operation of the ptychographic structured modulation scheme [45]. In this scheme, a thin diffuser is placed in between the object and the objective lens. The left panel of Fig. 2(d) shows the system configuration: The sample is placed at the object plane $({x,y} )$, the diffusing layer is placed at the modulation plane $({x^{\prime},y^{\prime}} )$, and the microscope system relays the image plane to the object plane $({x,y} )$.

In a typical ptychographic structured modulation implementation, a plane wave is used to illuminate the entire object over an extended area. By translating the diffuser (or the object) to different lateral positions, the system records a set of intensity images ${I_i}({x,y} )$ ($i = 1,2, \ldots $) using the microscope system. Following the iterative phase retrieval process, the complex object exit wavefront $W({x^{\prime},y^{\prime}} )$ is recovered at the diffuser plane. This recovered wavefront can then be digitally propagated back to the object plane to obtain the high-resolution object profile $O({x,y} )$. As in coded ptychography, the diffuser in this scheme serves as a computational lens for modulating the object’s diffracted waves. With the diffuser, the otherwise inaccessible high-resolution object information can be encoded into the system. This scheme can thus be viewed as a lens-based implementation of coded ptychography.

Compared to FPM, this scheme performs coded detection for super-resolution imaging. The multiplication process between the object and the tilted plane wave in FPM becomes a multiplication process between the object exit wavefront $W({x^{\prime},y^{\prime}} )$ and the diffuser profile $D({x^{\prime},y^{\prime}} )$ in ptychographic structured modulation. As a result, this scheme converts the thin object requirement in FPM to a thin diffuser requirement. Once the object wavefront is recovered, it can be digitally propagated to any plane along the optical axis for post-measurement refocusing. Similar to FPM, this scheme can also bypass the resolution limit set by the NA of the objective lens. To this end, Song et al. demonstrated a 4.5-fold resolution gain using a low-NA objective lens [45]. A low-cost microscope add-on module can also be made by attaching the diffuser to a vibration holder. By applying voltages to two vibrational motors, users can introduce random positional shifts to the diffuser, and these shifts can be recovered via a correlation analysis process [85].
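One simple form of such correlation analysis is phase correlation, sketched below for integer-pixel circular shifts between two captured images. This is an illustrative stand-in, not necessarily the exact method of Ref. [85]; subpixel variants interpolate around the correlation peak.

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the integer-pixel circular shift d such that b = roll(a, d),
    by whitening the cross-power spectrum and locating the resulting peak."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12                       # keep only the phase
    corr = np.abs(np.fft.ifft2(R))               # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the grid midpoint to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because the spectrum is whitened, the estimate is robust to slowly varying intensity differences between the two captures.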

In summary, ptychographic structured modulation is a lens-based implementation, and its operation is based on coded detection using a diffusing layer. It can be viewed as a lens-based version of coded ptychography or a coded-detection version of FPM. There are several variants of this scheme, including configurations that translate the diffuser at the image plane [86] or the aperture plane [87] of the microscope platform, an aperture-scanning Fourier ptychography scheme [88], among others. We will further discuss these variants in Section 3.4.

3. Survey on different ptychographic implementations

With the four representative schemes discussed in the previous section, we further categorize different ptychographic implementations into four groups based on their lensless / lens-based configurations and coded-illumination / coded-detection operations in Table 1. Figure 3 shows representative schematics of these four groups, whose operations will be discussed in Sections 3.1–3.4. The key considerations for different hardware implementations will be summarized in Section 3.5.


Fig. 3. Different ptychographic implementations are categorized into four groups based on their lensless / lens-based configuration and coded-illumination / coded-detection operations.



Table 1. Different hardware implementations of ptychography

3.1 Lensless implementations via coded illumination

The scheme of conventional ptychography is reproduced in Fig. 3(a1). Its first demonstration attracted significant attention from the X-ray imaging community, as nanometer-scale resolution can be achieved using coherent X-ray sources [6,8]. By integrating the concept of conventional ptychography with computed tomography (CT), it is also possible to perform 3D imaging of thick samples using coherent X-ray sources [49–52]. As shown in Fig. 3(a2), a thick 3D object is rotated to different angles in the experiment. For each angle, a lateral x-y scan of the object produces one ptychographic reconstruction. With different orientation angles, one can obtain multiple ptychographic reconstructions, which can then be used to recover the 3D volumetric information in a way similar to a CT scan. This ptycho-tomography scheme has demonstrated great success in imaging different specimens, from bone to silicon chips, with impressive 3D resolution at the nanometer scale.

Conventional ptychography can also be implemented in a reflection configuration. In the visible light regime, the phase of the reflected light can be used to recover the surface topography of an object, with a sensitivity comparable to that of white light interferometry [89]. In the X-ray regime, one prominent example is Bragg ptychography (Fig. 3(a3)) [54]. In this scheme, an X-ray beam is focused on a crystalline sample and the reflected light is acquired using a detector placed in the far field. This configuration can be used to image the strain of an epitaxial layer on a silicon-on-insulator device and map the 3D strain of semiconductors [55,56]. Likewise, the reflection configuration can also be implemented in the EUV regime, where the object surface structure can be recovered with high resolution [7,90,91].

Another notable development in this group is near-field ptychography, demonstrated by Stockmar et al. in 2013 [48] (also referred to as Fresnel ptychography [34]). As shown in Fig. 3(a4), this scheme places the object closer to the detector. An extended structured beam also replaces the original spatially confined beam for object illumination. As a result, this scheme generally produces a larger field of view and requires fewer measurements for the phase retrieval process [48]. Additionally, since the entire image sensor is evenly illuminated, this scheme does not require the high-dynamic-range detection used in conventional ptychography. In the visible light regime, Zhang et al. demonstrated a field-portable near-field ptychography platform for high-resolution on-chip imaging [94]. As shown in Fig. 3(a5), this platform places a diffuser next to a laser diode to generate an extended structured illumination beam on the object. A low-cost galvo scanner then steers the structured beam to slightly tilted incident angles. These tilted incident angles result in lateral translations of the structured probe beam at the object plane. Thus, the object translation process in conventional ptychography can be implemented by an efficient angle-steering process in this platform. Additionally, this platform’s pixel super-resolution model bypasses the resolution limit set by the detector pixel size [94].

Lastly, the implementation of conventional ptychography is not limited to using 2D image sensors. For example, Li et al. demonstrated the use of a single-pixel detector for data acquisition [92]. In this single-pixel implementation, a sequence of binary modulation patterns was projected onto a 2D object and the DC component of the diffracted wavefront was acquired using a single-pixel detector. The recorded signals were then used to recover the object’s intensity and phase information. The use of a single-pixel detector enables ptychographic imaging in the THz frequency range and in other regimes where 2D detector arrays would otherwise be expensive or unavailable.
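Each single-pixel reading in the scheme above is simply the DC term of the diffracted wavefront, i.e., the squared magnitude of the sum of the modulated object field. A toy numerical sketch of this measurement model (the object, the pattern count, and the sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Unknown complex-valued object (intensity and phase), purely illustrative
obj = rng.random((n, n)) * np.exp(1j * 2 * np.pi * rng.random((n, n)))

# A sequence of binary modulation patterns projected onto the object
patterns = rng.integers(0, 2, size=(32, n, n))

# The single-pixel detector records only the DC component of each diffracted
# wavefront: |F{P_i * O}(0, 0)|^2 = |sum of the modulated field|^2
signals = np.array([np.abs((p * obj).sum())**2 for p in patterns])
```

The reconstruction then inverts this set of scalar measurements to recover both the intensity and phase of `obj`, which is what [92] demonstrates with an iterative algorithm.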

3.2 Lensless implementations via coded detection

Instead of using an aperture to generate a spatially confined probe beam for object illumination, one can place the same confined aperture at the detection path to modulate the object’s diffracted wavefront, as shown in Fig. 3(b1). By translating the aperture [103] (or the object [104]) to different lateral positions for image acquisition, the object wavefront can be recovered at the aperture plane and then propagated back to the object plane. Other implementations of this aperture modulation scheme include 1D scanning of a diffuser at the aperture plane [105] and programmable aperture control using a spatial light modulator [106].

The left panel of Fig. 3(b2) reproduces the coded ptychography setup where a spatially extended coded surface is attached to the image sensor for wavefront modulation. In addition to the coded ptychography scheme discussed in Section 2.2, the coded surface layer can be scanned to different lateral positions, and the corresponding images can be acquired for ptychographic reconstruction. To this end, Jiang et al. demonstrated a near-field blind ptychographic modulation scheme to image different types of bio-specimens, including transparent and stained tissue sections, a thick biological sample, and in vitro cell cultures [31].

Instead of performing lateral translation in coded ptychography, one can also place the object on a spinning disc of a Blu-ray drive for image acquisition, as shown in the left panel of Fig. 3(b2). The laser beam can be obtained by coupling the light from the Blu-ray drive’s laser diode to an optical fiber. In this respect, Jiang et al. modified a Blu-ray drive for large-scale, high-resolution ptychographic imaging and demonstrated the device’s capacity for different biomedical applications, including live bacterial culture monitoring, high-throughput urinalysis, and blood-cell analysis [61]. By integrating the temporal correlation constraint for phase retrieval, a compact cell culture platform has also been developed for antimicrobial drug screening and quantitatively tracking bacterial growth from single cells to micro-colonies [64].

The coded ptychography approach can be combined with a wavelength multiplexing strategy for spectral imaging [18,20]. For example, Song et al. reported an angle-tilted, wavelength-multiplexed coded ptychography setup for multispectral lensless on-chip imaging [21]. As shown in the right panel of Fig. 3(b2), the platform places a prism in the illumination path to disperse light waves at different wavelengths to different incident angles. As such, the coded surface profiles become uncorrelated at different wavelengths, breaking the ambiguities in mixed-state ptychographic reconstruction. More recently, a handheld ptychographic whole slide scanner has also been developed to perform high-throughput color imaging of tissue sections [60]. This platform can acquire gigapixel images over a 14 mm by 11 mm area in ∼70 seconds. The recovered phase can then be used to visualize the 3D height map of unstained bio-specimens such as thyroid smears obtained via fine needle aspiration.

Figure 3(b3) shows another notable development of the coded ptychography scheme, termed ‘optofluidic ptychography’ [66]. In this approach, a microfluidic channel is attached to the top surface of a coverslip, and a layer of microbeads coats the bottom surface of the same coverslip. The device utilizes microfluidic flow to deliver specimens across the channel, and the microbead layer modulates the object’s diffracted waves reaching the coverslip. By automatically tracking the object’s motion in the microfluidic channel, one can recover high-resolution object images from the diffraction measurements. This ptychographic implementation complements the miniaturization provided by microfluidics and allows the integration of ptychography into various lab-on-a-chip devices [66].

Another important development in this group is the synthetic aperture ptychography approach shown in Fig. 3(b4) [67]. In this scheme, an object is illuminated with an extended plane wave, and a coded image sensor is translated at the far field for data acquisition. The coded sensor translation process can effectively synthesize the object wavefront over a large area at the aperture plane. By propagating this wavefront back to the object plane, one can simultaneously widen the field of view in real space and expand the Fourier bandwidth in reciprocal space. Both the transmission and reflection configurations have been demonstrated for this approach [67]. A 20-mm aperture was synthesized using a 5-mm coded sensor, achieving a 4-fold gain in resolution and a 16-fold gain in the field of view. If the image sensor does not have the coded layer on top of it, the sensor translation process will simply produce one diffraction measurement with a large field of view. The loss of phase information in this single-intensity measurement prevents the recovery of the object information.
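The reported gains follow directly from the geometry: resolution scales with the synthesized aperture width, while the field of view scales with its area. A quick arithmetic check using the numbers above:

```python
# Synthetic aperture ptychography gains (numbers from the demonstration in [67]):
# a 5-mm coded sensor is translated to synthesize a 20-mm aperture.
sensor_width_mm = 5.0
synthetic_width_mm = 20.0

resolution_gain = synthetic_width_mm / sensor_width_mm   # linear scaling: 4-fold
fov_gain = (synthetic_width_mm / sensor_width_mm) ** 2   # area scaling: 16-fold
```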

3.3 Lens-based implementations via coded illumination

Ptychographic imaging can be implemented with a lens-based microscope setup. The first example of this group of configurations is the single-shot ptychography scheme reported by Sidorenko and Cohen [111]. As shown in Fig. 3(c1), a pinhole array is used to illuminate the object from different incident angles and the object is placed close to the Fourier plane of the lens. As such, different incident angles from the pinhole array generate separated diffraction patterns at the detector plane in Fig. 3(c1). In addition to this pinhole-based scheme, one can also use multiple tilted beams for single-shot ptychography [108–110]. These different single-shot implementations essentially compromise the imaging field of view to achieve single-shot capabilities [37].

The second example of this group is the Fourier ptychography approach discussed in Section 2.3. The left panel of Fig. 3(c2) reproduces the original FPM scheme with a programmable LED matrix [29]. Different strategies can be applied to reduce the number of acquisitions, including LED multiplexing [117119], phase initialization via differential phase contrast recovery [122], non-uniform or content-based sampling in the Fourier domain [120,121], rapid laser beam scanning [8284], annular illumination [78,80,81], among others. By modeling the sample as multiple slices or with diffraction tomography, it is possible to recover the 3D volumetric information of the object [16,17,65,71]. To this end, Zuo et al. utilized both bright-field and darkfield images to recover high-resolution 3D objects in a Fourier ptychographic diffraction tomography platform [71].

Fourier ptychography can also be implemented in a reflection configuration. The right panel of Fig. 3(c2) shows a typical reflective FPM setup where an LED ring is mounted outside the objective lens for sample illumination with large incident angles [72,76]. It is also possible to perform aperture scanning to steer the illuminating beam on a reflective target [73–75]. Recently, Park et al. demonstrated a reflective Fourier ptychography platform using 193-nm deep ultraviolet light.

Figure 3(c3) shows another notable implementation of Fourier ptychography, where the angle-varied plane waves are replaced by translated speckle patterns [137,138]. The speckle pattern can be viewed as an amalgamation of localized phase gradients, each approximating a plane wave with a different angle [37]. Therefore, translating the speckle on the object is equivalent to sequentially illuminating the object with different plane waves.

We note that Fourier ptychography has evolved from a simple microscopy tool to a general technique for different communities [37]. Table 1 covers only a small fraction of its implementations, and we direct interested readers to the recent review articles [36–38].

The concept of ptychography can also be integrated with optical coherence tomography for depth-resolving volumetric imaging. Figure 3(c4) shows the recent implementation of ptychographic optical coherence tomography by Du et al. [142]. This scheme uses a swept-source laser to project a spatially confined probe beam onto the object. For each wavelength of the light source, the system records a ptychographic dataset by translating the object to different positions. In this way, one can obtain many ptychographic reconstructions at different wavelengths. A Fourier transform along the wavelength axis then recovers the depth information of the object. This scheme provides a simple yet robust solution for 3D imaging and can be applied in various biomedical applications.
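The depth-recovery step in this scheme mirrors swept-source OCT: a reflector at depth z imprints a phase ramp exp(2ikz) across wavenumber (the factor 2 reflects the double pass), so a Fourier transform along the wavenumber axis localizes it in depth. A one-pixel toy sketch (the spectral band and the reflector depth are illustrative assumptions, not values from [142]):

```python
import numpy as np

# Toy model: a reflector at depth z0 contributes, at each wavenumber k, a
# recovered field proportional to exp(2j * k * z0)
n_k = 256
k = np.linspace(2 * np.pi / 0.9e-6, 2 * np.pi / 0.8e-6, n_k)  # swept band (illustrative)
z0 = 30e-6
field_vs_k = np.exp(2j * k * z0)          # one spatial pixel across all wavelengths

# Fourier transforming along the wavenumber axis localizes the reflector in depth
depth_profile = np.abs(np.fft.fft(field_vs_k))
dk = k[1] - k[0]
z_axis = np.fft.fftfreq(n_k, d=dk) * np.pi        # cycles per k -> depth (double pass)
z_est = abs(z_axis[int(np.argmax(depth_profile))])
```

In the full scheme this transform is applied pixel-by-pixel to the stack of ptychographic reconstructions, turning a set of 2D complex fields into a depth-resolved volume.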

3.4 Lens-based implementations via coded detection

Coded illumination in FPM generally requires modeling of how the illumination beam interacts with the object. A thin sample assumption is often required to facilitate the point-wise multiplication between the structured beam and the object. To address this issue, one can perform coded detection in lens-based ptychographic implementations.

The first example of this group is the selected area ptychography approach shown in Fig. 3(d1). In this scheme, a microscope is used to relay the object to a magnified image at the image plane. A spatially confined aperture is placed at the image plane to perform coded detection, serving as an effective confined probe beam for the virtual object. Another lens then transforms the virtual object image into reciprocal space for diffraction data acquisition. By translating the object to different lateral positions, the corresponding diffraction measurements can then be used to recover the virtual object at the image plane. This scheme was first demonstrated in an electron microscope [143]. The visible light implementation generated high-contrast quantitative phase images of cell cultures [144]. We also note that a commercial product based on this scheme has been developed by Phasefocus (phasefocus.com).

The second example of this group is camera-scanning Fourier ptychography shown in Fig. 3(d2) [88,145]. In this scheme, the object is placed far away from the camera. As such, light propagation from the object to the camera aperture corresponds to the operation of a Fourier transform. By translating the entire camera to different lateral positions, one can acquire the corresponding images for synthetic aperture ptychographic reconstruction. The aperture size of the camera lens does not limit the final resolution. Instead, the resolution is determined by how far one can translate the camera. This scheme has been demonstrated in both visible light [88,145] and X-ray regimes [147]. More recently, Wang et al. reported an imaging platform built with a 16-camera array [148]. High-resolution synthetic aperture images can be recovered with a single snapshot acquisition.

The third example of this group is aperture-scanning Fourier ptychography shown in Fig. 3(d3). In this approach, the object is illuminated by a fixed extended beam, and an aperture is placed at the Fourier plane of the optical system. By translating the aperture to different lateral positions, one records a set of corresponding low-resolution object images, which are synthesized in the Fourier domain for reconstruction. This approach can be implemented by performing mechanical scanning of a confined aperture [88,149,150] or a diffuser [87]. A spatial light modulator can also be used to perform rapid digital scanning of the aperture [151–153]. One limitation of this approach is that the NA of the first objective lens limits the resolution.

The fourth example of this group is the diffuser modulation scheme for lens-based ptychographic imaging. In the left panel of Fig. 3(d4), a diffuser is placed at the image plane for light modulation, and the detector is placed at a defocused position for diffraction pattern acquisition [86]. By translating the object (or the diffuser) to different positions, one can acquire multiple images for ptychographic reconstruction. Based on this scheme, Zhang et al. demonstrated a microscope add-on that can be attached to the camera port of a microscope for ptychographic imaging [157]. Instead of placing the diffuser at the image plane, one can also place it between the object and the objective lens. As shown in the right panel of Fig. 3(d4), the diffuser in this configuration serves as a thin scattering lens for light wave modulation. The otherwise inaccessible high-resolution object information can thus be modulated by this scattering lens and enters the optical system for detection. The detailed imaging model and related discussions of this scheme can be found in Section 2.4.

3.5 Key considerations for different hardware implementations

In Fig. 4, we show several hardware platforms from the current literature in ptychography. In Fig. 4(a), a commercially available system of selected area ptychography is built using a regular microscope. The operation of this platform relies on a high-resolution objective lens to generate a virtual object image at the image plane. This product also integrates fluorescence imaging capabilities for different biological applications.


Fig. 4. Hardware platforms for different ptychographic implementations. (a) A commercial product based on selected area ptychography (by PhaseFocus). (b) A prototype platform of Fourier ptychography built with a programmable LED matrix [37]. (c) A Fourier ptychographic diffraction tomography platform [71]. (d) A microscope add-on for near-field ptychography [157]. (e)–(g) Fourier ptychography setups built using a smartphone [134], a Raspberry Pi system [131], and a cell phone lens [132]. (h) Lensless on-chip ptychography via rapid galvo mirror scanning [94]. (i) Parallel coded ptychography using an array of coded image sensors [30]. (j) Color-multiplexed ptychographic whole slide scanner [60]. (k) Optofluidic ptychography with a microfluidic chip for sample delivery [66]. (l) Rotational coded ptychography implemented using a blood-coated sensor and a Blu-ray player [61].


Figure 4(b) shows a prototype platform of FPM built with a programmable LED matrix [37]. The LED array is custom-made with a small 2.5 mm pitch. Figure 4(c) shows a Fourier ptychographic diffraction tomography platform, where the captured images are used to update the 3D Ewald sphere in the Fourier space [71]. Figure 4(d) shows a microscope add-on termed ‘ptycho-cam’ for lens-based near-field ptychography [157]. Figures 4(e)–4(g) show three compact FPM platforms built using a smartphone [134], a Raspberry Pi system [131], and a cell phone lens [132].

Figures 4(h)–4(l) show the hardware platforms for lensless ptychographic implementations. In Fig. 4(h), a low-cost galvo scanner is used to project the translated speckle patterns on the object to implement lensless near-field ptychography [94]. Figure 4(i) shows the parallel coded ptychography platform for high-throughput optical imaging [30]. Using a disorder-engineered surface for coded detection (inset of Fig. 4(i)), this setup can resolve the 308-nm linewidth on the resolution target and achieve an NA of ∼0.8, the highest among different lensless ptychographic implementations. Figure 4(j) shows a prototype of a color-multiplexed ptychographic whole slide scanner for digital pathology applications [60]. This platform utilizes one coded image sensor for ptychographic acquisition and one bare image sensor to track the positions of the slide holder. Figure 4(k) shows a prototype platform of optofluidic ptychography, where a microfluidic chip is used for sample delivery [66]. Figure 4(l) shows an implementation of rotational coded ptychography using a modified Blu-ray drive [61]. The laser diode of the Blu-ray drive was used for sample illumination in this platform. The coded surface on the image sensor was made by smearing a drop of blood on the sensor’s coverglass (inset of Fig. 4(l)). The entire device can be placed within an incubator to monitor cell culture growth in a longitudinal study.

In the following, we summarize several key considerations for different hardware implementations in the visible light regime:

  • 1) Resolution. The resolution r of a ptychography system is determined by both the illumination and detection NAs: $r = \lambda / (\mathrm{NA}_{detection} + \mathrm{NA}_{illumination})$, where $\lambda$ denotes the illumination wavelength.

    For lens-based FPM, the resolution can be improved by combining a high-NA objective lens with LED illumination at large incident angles. Currently, the best resolution has been achieved using a 40×, 0.95 NA objective lens with an illumination NA of ∼0.85. The corresponding synthetic NA is ∼1.9 [69], the highest among all ptychographic implementations. For camera-scanning Fourier ptychography, the illumination NA is 0. The spanning angle of the lens aperture does not limit the detection NA; instead, the detection NA is determined by the aperture size synthesized by the camera translation process.

    For lens-based selected area ptychography, the illumination NA is 0, and the NA of the employed objective lens determines the resolution. Similarly, lens-based ptychographic structured modulation has an illumination NA of 0. Its detection NA is determined by the summation of the scattering diffuser’s NA and the objective’s NA. As a result, placing the diffuser between the object and the objective lens increases the effective detection NA of the system.

    For conventional ptychography, the illumination NA is often very small. Therefore, the spanning angle of the detector (i.e., the detection NA) determines the achievable resolution, as shown in Fig. 2(a). It is also possible to use a focused beam or a structured illumination beam to increase the illumination NA [19,23,48].

    For lensless coded ptychography, the illumination NA is 0, and the resolution is determined by the detection NA of the coded image sensor. Currently, the demonstrated detection NA is ∼0.8, which is the highest among different lensless ptychographic implementations. It is also possible to further improve the resolution by increasing the illumination NA for aperture synthesizing, as seen in FPM.

    For lensless synthetic aperture ptychography, the illumination NA is 0. The spanning angle of the lens aperture does not limit the detection NA. Instead, the detection NA is determined by the aperture size synthesized by the coded sensor translation process.
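As a quick numerical illustration of the resolution relation above (the wavelength is an assumption; the NA values echo the examples in the text):

```python
# r = wavelength / (NA_detection + NA_illumination)
wavelength = 532e-9  # green illumination, illustrative choice

# High-resolution FPM: 0.95 NA objective with ~0.85 illumination NA [69];
# the synthetic NA equals the sum of the two
r_fpm = wavelength / (0.95 + 0.85)

# Lensless coded ptychography: illumination NA is 0, detection NA ~0.8 [30]
r_coded = wavelength / (0.8 + 0.0)
```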

  • 2) Field of view and imaging throughput. For lens-based implementations, the employed objective lens determines the maximum imaging field of view. For FPM, a typical 2×, 0.1 NA objective lens has a field of view of ∼1 cm in diameter. To synthesize an NA of 0.5, the acquisition time is on the order of 1 min, which corresponds to an imaging throughput comparable to or lower than that of a whole slide scanner [63]. However, a low-NA objective lens often exhibits severe aberrations at the edge of the field of view [12,13,70]. The aberrations in these regions need to be properly calibrated or measured prior to the imaging process. The reconstruction quality in these regions is generally not as good as that of images captured using a regular high-NA objective lens.

    For selected area ptychography and high-resolution FPM implementations, a 40× objective lens can be used for image acquisition, and the field of view is ∼0.5 mm in diameter. To widen the field of view, one needs to move the sample stage to different spatial positions, stop the stage, and then acquire the corresponding datasets. The image acquisition process cannot be operated at the full camera framerate, and the overall imaging throughput is lower than that of a whole slide scanner.

    For conventional ptychography, the field of view is limited by the confined probe beam for each measurement. In the visible light regime, the size of the confined probe beam typically ranges from several hundred microns to a few millimeters in diameter. The object translation process can widen the field of view to cover arbitrarily large specimens. However, the scanning step size between adjacent acquisitions is often > 100 microns. As a result, one needs to stop the motion of the mechanical stage for data acquisition, and the camera cannot be operated at its full framerate. The resulting imaging throughput is much lower than that of a whole slide scanner.

    For lensless coded ptychography, the field of view is limited by the image sensor size for each measurement, which is ∼40 mm² in current demonstrations [30,60]. The translation operation of the coded sensor can naturally widen the field of view and image large-scale bio-specimens such as the entire 35-mm Petri dish [61,64]. The scanning step size is on the micron level, allowing continuous image acquisition without needing to stop the motion of the mechanical stage. For example, whole slide images of pathology sections (∼15 mm by 15 mm) can be acquired by stitching together six of the detector’s fields of view. The corresponding acquisition time is 1–2 minutes with the camera operating at its full framerate [60], and the imaging throughput is comparable to or higher than that of a whole slide scanner. By using an array of coded sensors, the imaging throughput can even surpass that of the fastest whole slide scanner at a small fraction of the cost [30]. Compared to FPM, coded ptychography has no spatially varying aberrations across the field of view. The reconstruction quality is generally comparable to that obtained with a high-NA objective lens [30].

    For lensless synthetic aperture ptychography, the translation process of the coded sensor can widen the imaging field of view. The scanning step size between adjacent acquisitions is on the millimeter scale. The resulting imaging throughput is much lower than that of a whole slide scanner. However, compared to other lensless ptychographic implementations, the coded sensor translation process can also expand the Fourier bandpass to improve the resolution.

  • 3) Light source. Ptychography is a coherent imaging modality that requires coherent or partially coherent light sources for object illumination. We typically have two options for implementations in the visible light regime: LED and laser. If the detector is placed at the image plane of a lens-based system, the wavelength dispersion can be partially compensated by the lens system; therefore, LED light sources can be used for object illumination. FPM is one example that uses a programmable LED array for angle-varied illumination. The key advantages of LED sources include fewer coherence artifacts and ease of operation for various applications. However, the major challenge of LED sources is their low optical flux, which leads to long exposure times for image acquisition. Thus, the throughput of a typical FPM platform is currently limited by the relatively long exposure time required for darkfield image acquisition.

    If the detector is placed at the defocused plane or the far-field diffraction plane for image acquisition, a laser source is preferred for its monochromatic nature. Another key advantage of a laser source is its high optical flux. For example, a 10-mW fiber-coupled laser can be used for object illumination in coded ptychography. The exposure time is on the sub-millisecond level. This short exposure time can effectively freeze the object motion during the image acquisition process, allowing for continuous image acquisition at the full camera framerate. The demonstrated throughput of coded ptychography is only limited by the data transfer speed of USB cables. On the other hand, the major challenge of a laser source is the coherence artifacts caused by the multiple reflections from different surfaces. The captured images often contain interference patterns that are difficult to model and remove in the reconstruction process. To address this issue, one can reduce the spatial coherence of the laser by using a multi-mode fiber coupled to a mode scrambler.

  • 4) Detector. For lens-based implementations, detectors with a high pixel count are often preferred. One cost-effective option is the Sony IMX 183, a 20-megapixel monochromatic camera with a 2.4-µm pixel size. Other options include the Sony IMX 455 (a 60-megapixel monochromatic camera with a 3.76-µm pixel size) and the Sony IMX 571 (a 26-megapixel monochromatic camera with a 3.76-µm pixel size).

    For lensless implementations, detectors with a small pixel size are often preferred. In coded ptychography setups, current choices include the Sony IMX 226 (a 12-megapixel monochromatic camera with a 1.85-µm pixel size) and the ON Semiconductor MT9J003 (a 10-megapixel monochromatic camera with a 1.67-µm pixel size). Cell phone image sensors with smaller pixel sizes can also be used for this lensless implementation.

  • 5) Sample thickness. Coded-illumination schemes have certain requirements for the sample thickness [6]. For example, a typical FPM setup assumes the sample to be a thin 2D section. For a thick 3D object, changing the illumination angle would modify the object’s spectrum rather than just shifting it in the Fourier space [37]. Multi-slice and diffraction tomography can partially address this problem. In contrast, coded detection schemes have no requirement on the object thickness. The recovered object exit wavefront can be digitally propagated to any plane along the axial direction.
  • 6) Sample focusing. Sample focusing is often needed to achieve the best performance in coded-illumination schemes. For example, samples need to be placed at the proper position of the probe beam in conventional ptychography. In FPM, a defocused pupil can be introduced in the reconstruction process for post-measurement refocusing (termed ‘digital refocusing’ in the original demonstration [29]). However, this solution requires knowledge of the focal position. Searching for the focal positions would be computationally expensive if, for example, the sample is tilted with respect to the objective lens. Furthermore, a recent paper has shown that the refocusing process in FPM cannot be disentangled from the iterative phase retrieval process [158].

    For coded-detection schemes, the sample-focusing process can be disentangled from the iterative phase retrieval process. With the recovered object exit wavefront, one can propagate it back to different axial positions and use a focus metric to locate the best focal position. Intensity-based focus metrics can be used for stained samples while phase-based focus metrics can be used for unstained samples [60].

4. Reconstruction approaches and algorithms

Table 2 summarizes different reconstruction approaches and algorithms for ptychographic imaging. We categorize them into three groups for discussion: Reconstruction algorithms (Section 4.1), approaches for system corrections and extensions (Section 4.2), and neural network-based approaches (Section 4.3).

Tables Icon

Table 2. Reconstruction approaches and algorithms

4.1 Reconstruction algorithms

The Wigner distribution deconvolution was first proposed to solve the ptychographic phase problem with densely sampled data [159]. As a non-iterative approach, it is remarkable that this approach can solve a nonlinear phase problem with linear computations (Fourier transforms). However, a key step in this approach involves filtering out the phase differences between different areas of the diffraction patterns. Therefore, it requires that the object be translated over positions separated by the desired resolution of the reconstruction, leading to a prohibitively large amount of data for any practical imaging application.

In 2004, an iterative phase retrieval algorithm, termed ‘ptychographic iterative engine (PIE)’, was adopted for ptychographic reconstruction. It can recover the complex-valued object from a significantly smaller dataset by iteratively imposing the support constraint and the Fourier magnitude constraint [4]. In 2009, Maiden et al. reported an extension of the original PIE algorithm, termed ‘extended PIE (ePIE)’, to remove the requirement for an accurate model of the illumination function [11]. Due to its simplicity and effectiveness, ePIE became a widely adopted algorithm for ptychographic reconstruction. The PIE algorithm was recently further extended to deliver a speed increase and handle difficult data sets where the original version would have failed [167].
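The ePIE update loop is compact enough to sketch directly. The following is a minimal, didactic NumPy implementation (variable names and default step sizes are our own choices, not those of the original papers): for each scan position it forms the exit wave, imposes the Fourier magnitude constraint, and back-propagates the correction to both the object and the probe.

```python
import numpy as np

def epie(intensities, positions, probe, obj_shape, n_iter=30, alpha=1.0, beta=1.0):
    """Minimal ePIE: jointly recover object and probe from far-field intensities.

    intensities: measured diffraction patterns, one per scan position
    positions:   (row, col) top-left corner of the probe window on the object
    probe:       initial complex probe estimate (p x p array)
    """
    obj = np.ones(obj_shape, dtype=complex)  # flat initial object guess
    p = probe.shape[0]
    for _ in range(n_iter):
        for (r, c), I in zip(positions, intensities):
            patch = obj[r:r + p, c:c + p].copy()
            exit_wave = probe * patch
            Psi = np.fft.fft2(exit_wave)
            # Fourier magnitude constraint: keep the phase, impose measured amplitude
            revised = np.fft.ifft2(np.sqrt(I) * np.exp(1j * np.angle(Psi)))
            diff = revised - exit_wave
            obj[r:r + p, c:c + p] = patch + alpha * np.conj(probe) * diff / np.abs(probe).max() ** 2
            probe = probe + beta * np.conj(patch) * diff / np.abs(patch).max() ** 2
    return obj, probe
```

In practice the scan positions must overlap substantially (roughly 60–80% is typical) for this joint object-probe recovery to be well-posed.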

In parallel with the development of the PIE family, Guizar-Sicairos and Fienup derived the analytical expressions for the gradient of a squared-error metric with respect to the object, illumination probe beam, and translations [9]. A nonlinear optimization process was then developed to jointly update the object and system parameters. Similarly, Thibault et al. used the difference map approach [283] to jointly recover the object and the illumination probe beam [10]. This group of authors also introduced the maximum-likelihood principle to formulate the optimization problem for ptychography [168]. More recently, Odstrcil et al. demonstrated a ptychographic reconstruction scheme using a least-squares maximum-likelihood approach that is based on an optimal decomposition of the exit wave update into several directions [42]. The optimal updating step size was also derived from the optimization model. For FPM, Bian et al. proposed an iterative optimization framework based on the Wirtinger flow algorithm with noise relaxation. This framework can be used for FPM reconstruction without requiring high signal-to-noise ratio (SNR) measurements captured with a long exposure time [180].

To compare the performance of different algorithms, Wen et al. performed a wide-ranging survey that covered the alternating direction method, conjugate gradient, Newton-type optimization, set projection approaches, and the relaxed average alternating reflections method [172]. The convergence performances of common ptychographic algorithms are provided in the book chapter by Rodenburg and Maiden [34]. For FPM, Yeh et al. performed a comprehensive review of first- and second-order optimization approaches [41]. It was shown that the second-order Gauss-Newton method with amplitude-based cost function gave the best results in general.

For biomedical applications, the dataset’s quality is critical for a successful ptychographic reconstruction. If the dataset is free from artifacts and has adequate redundancy, the phase retrieval process is typically well-conditioned. Any iterative algorithm may be used – from alternating projection to other advanced nonlinear algorithms [37].

4.2 System corrections and extensions

The limitation of requiring exact knowledge of the probe function can be partially resolved by the joint object-probe recovery scheme [9–11,193]. In FPM, pupil aberration can be characterized using a calibration target [70]. It can also be jointly recovered in the ptychographic reconstruction process, as in conventional ptychography [12]. However, unlike conventional ptychography, the spatially varying nature of the pupil aberration needs to be considered in FPM. To this end, Song et al. reported a full-field recovery scheme that models the spatially varying pupil with only a dozen parameters [13]. In coded ptychography, the transmission profile of the coded layer can be jointly recovered with the object from a calibration experiment [31]. However, it has been shown that the joint object-probe recovery scheme (i.e., blind ptychography) would fail if the object contains slow-varying phase features [30]. This is due to the intrinsic ambiguity introduced by the slow-varying phase gradient, which cannot be effectively encoded in the intensity measurements (the phase transfer function is close to 0 [284]). To properly recover the coded layer, a blood smear can be utilized as a suitable calibration object owing to its rich spatial features and absence of slow-varying phase features.

To address the issues caused by limited light source coherence and system stability, Thibault et al. demonstrated a general approach to model diffractive imaging systems with low-rank mixed states [18]. Mode decomposition of the probe beam and object can then be performed to address the limited coherence of the light source and the system stability issue. By modeling the object profiles at different wavelengths as different coherent states, Batey et al. demonstrated the reconstruction of a color object under color multiplexed illumination [20]. Similarly, the color multiplexing scheme can be implemented in FPM [118] and coded ptychography [60] for high-throughput digital pathology applications. To model the time-varying illumination probe beam during the data acquisition process, Odstrcil et al. reported the strategy of orthogonal probe relaxation in the reconstruction process [195]. Other notable developments in this direction include the fly-scan scheme by introducing multiple mutually incoherent modes into the illumination probe [205].
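The mixed-state measurement model itself is simple to state: mutually incoherent probe modes add in intensity, not in amplitude. A minimal sketch of this forward model (far-field geometry assumed; not code from the cited work):

```python
import numpy as np

def mixed_state_intensity(probe_modes, obj_patch):
    """Far-field intensity under the mixed-state (low-rank) model:
    mutually incoherent probe modes contribute intensities, not amplitudes."""
    return sum(np.abs(np.fft.fft2(mode * obj_patch)) ** 2 for mode in probe_modes)
```

In reconstruction, each mode then receives its own update while the measured pattern constrains only their incoherent sum.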

To correct the positional errors of conventional ptychography, the positional shifts of the object can be jointly updated with the object and/or the probe beam in the iterative phase retrieval process [9,42,197–200]. To improve the reconstruction quality, Dierolf et al. introduced a non-rectangular scanning route to avoid difficulties with the raster grid ambiguity inherent in ptychographic reconstruction [285]. In FPM, if an LED matrix is used for sample illumination, its sampling positions in the Fourier domain can be precisely calibrated based on the brightfield-to-darkfield transition zone of the captured raw images [37]. If the LED array does not have a well-defined pitch, the illumination angles can be iteratively refined in the reconstruction process [41,211–215], similar to the positional correction process in conventional ptychography. Recently, Bianco et al. demonstrated a multi-look approach for miscalibration-tolerant Fourier ptychographic imaging [224–226]. This approach generates and combines multiple reconstructions of the same set of observables where phase artifacts are largely uncorrelated and, thus, automatically suppress each other.

Ptychographic implementations with coded illumination assume the sample to be a 2D thin section. However, extending the imaging model to handle 3D objects is possible. One notable development is the multi-slice modeling for conventional ptychography [14]. In this approach, the 3D object is represented by a series of thin slices separated by a certain distance. With this strategy, Godden et al. demonstrated the recovery of 3–4 sections of semi-transparent bio-specimens [15]. Li et al. further combined this multi-slice modeling with sample rotation to image thick specimens with an isotropic 3D resolution [93]. The multi-slice modeling can also be adopted in FPM for imaging 3D specimens. To this end, Tian et al. [16] and Li et al. [17] have demonstrated the imaging of multiple slices of transparent objects using a conventional microscope with an LED array.
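A minimal sketch of the multi-slice forward model, assuming a paraxial (Fresnel) propagator between slices; the slice spacing, wavelength, and pixel size are free parameters, and the code is a didactic illustration rather than any cited implementation:

```python
import numpy as np

def fresnel_kernel(n, pixel_size, wavelength, dz):
    """Paraxial angular-spectrum transfer function for propagation over dz."""
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    return np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))

def multislice_exit_wave(illumination, slices, slice_spacing, wavelength, pixel_size):
    """Multi-slice forward model: multiply by each slice's transmission
    function, then propagate to the next slice."""
    n = illumination.shape[0]
    H = fresnel_kernel(n, pixel_size, wavelength, slice_spacing)
    field = illumination
    for k, t in enumerate(slices):
        field = field * t
        if k < len(slices) - 1:  # no propagation after the last slice
            field = np.fft.ifft2(np.fft.fft2(field) * H)
    return field
```

The corresponding reconstruction back-propagates the residual through this same chain to update each slice, which is what removes the thin-sample restriction.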

Another notable extension of ptychography is the integration of diffraction tomography with FPM for 3D imaging [65]. This approach models the object as 3D scattering potentials in the Fourier domain. Each captured FPM image is used to update a spherical cap region of the scattering potential. As such, one can obtain the real-space 3D image from the recovered 3D scattering potential in the Fourier domain. To this end, Zuo et al. demonstrated impressive 3D high-resolution recovery of different bio-specimens [71].

4.3 Neural networks and the related approaches

The developments of neural networks and related approaches can be categorized into four groups, as shown in Fig. 5. The first group, in Fig. 5(a), models the ptychographic forward imaging process using a neural network. This strategy is also termed ‘automatic differentiation’ [252–255,258–261]. In this approach, the derivatives can be efficiently evaluated and the optimization can be performed via the network training process [286]. Unlike data-driven approaches, this process requires no training data for the recovery. For FPM, Jiang et al. utilized this framework to model the complex object as the learnable weights of a convolutional layer and minimize the loss function using different built-in optimizers of a machine learning library, TensorFlow [252]. Additional parameters such as the pupil function and LED positions can also be included in this framework for joint refinement [253–255]. Likewise, Nashed et al. reported an automatic differentiation scheme for conventional ptychography [258]. Kandel et al. reported automatic differentiation schemes for near-field ptychography and multi-angle Bragg ptychography [260].

 figure: Fig. 5.

Fig. 5. Neural networks and related approaches for ptychographic reconstruction. (a) A neural network is used to model the imaging formation process of ptychography (also termed automatic differentiation). The training process recovers the object and other system parameters. (b) The physical model is incorporated into the design of the network. (c) The network takes the raw measurements and outputs reconstructions. (d) The network takes the ptychographic reconstructions and outputs virtual-stained images or images with other improvements.

Download Full Size | PDF

The second group focuses on incorporating the physical model into the design of the network [79,119,262–264]. The training of the network can jointly optimize the physical parameters used in the imaging model, such as the illumination pattern of the LED array in FPM. To this end, Horstmeyer et al. demonstrated the use of a neural network to jointly optimize the LED array illumination to highlight important sample features for the classification task [262]. Kellman et al. demonstrated a framework to create interpretable context-specific illumination patterns for optimized FPM reconstructions [119].

The third group focuses on inferring the high-resolution intensity and/or phase images from low-resolution FPM measurements [269–274]. For example, Nguyen et al. and Zhang et al. demonstrated the use of deep neural networks to produce high-resolution images from FPM raw measurements [269,274]. More recently, Xue et al. demonstrated the use of a Bayesian network to output both the high-resolution phase images and uncertainty maps that quantify the uncertainty of the predictions [272].

Lastly, the recovered images from ptychographic imaging setups can be further improved by neural networks. For example, virtual brightfield and fluorescence staining can be performed on FPM-recovered images without paired training data [277]. Coherent artifacts of FPM reconstructions can also be reduced in this unsupervised image-to-image translation process. Similarly, the recovered images from conventional ptychography and coded ptychography can also be virtually stained using data-driven deep neural networks [30,278].

5. Biomedical applications

5.1 Large field of view, high-resolution microscopy for digital pathology

The utilization of light microscopy in pathology and histology remains the gold standard for diagnosing a large number of diseases, including almost all types of cancer. Whole slide imaging systems can replace conventional light microscopes for quantitative and accelerated histopathological analyses. A key milestone was reached in 2017, when Philips’ whole slide scanner was approved for primary diagnostic use in the United States [287]. The rapid development of artificial intelligence (AI) in medical diagnostics promises further growth of this field in the coming decades [288].

Compared to conventional whole slide imaging systems, ptychographic imaging setups have several advantages for digital pathology. For example, FPM and coded ptychography can rapidly acquire high-resolution, large-field-of-view images of histology sections. They also allow post-acquisition autofocusing and thus avoid the focusing issue that plagues conventional whole slide scanners. Another key advantage of ptychographic implementations is the recovery of phase images that reveal the quantitative morphological features of the tissue sections [30,144,289]. For example, one can plot the height map of a cytology smear to visualize its 3D topographic profile. The phase imaging capability also provides a label-free strategy for inspecting unstained specimens. This is useful for rapid on-site evaluation of samples obtained from fine needle aspiration, where real-time pathology guidance is highly desired.
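Converting a recovered phase map into such a height map only requires an assumed refractive index contrast between the specimen and its surrounding medium. A minimal sketch (the index values in any example are illustrative, and the homogeneous-index assumption is a simplification):

```python
import numpy as np

def phase_to_height(phase, wavelength, n_sample, n_medium):
    """Convert a recovered quantitative phase map (radians) to a height map,
    assuming a homogeneous refractive index for the specimen."""
    return phase * wavelength / (2 * np.pi * (n_sample - n_medium))
```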

Figure 6(a) shows a recovered FPM image of a histology slide [290]. The field of view is the same as that of a 2× objective lens while the synthetic NA is similar to that of a 20× objective lens. The recovered phase can be used to obtain the local scattering and reduced scattering coefficients of the specimen, as shown in the zoomed-in views of Fig. 6(a).

 figure: Fig. 6.

Fig. 6. Digital pathology applications via different ptychographic implementations. (a) The recovered whole slide image by FPM [290]. (b) The recovered monochromatic image via near-field ptychography and the virtually stained image [278]. (c) Virtual staining of a recovered FPM image based on the color transfer strategy [279]. (d) All-in-focus recovered image of a biopsy sample based on the digital refocusing capability of FPM [292]. (e) Whole slide phase image recovered by the lensless ptychographic whole slide scanner [60]. (f) Rapid whole slide imaging using the parallel coded ptychography platform [30]. (f1) The focus map generated by maximizing a focus metric post-measurement. (f2) The recovered whole slide image by coded ptychography. (f3) The ground truth image captured using a regular light microscope. (f4) The difference between (f2) and (f3).

Download Full Size | PDF

An important aspect of digital pathology is to acquire color information of the stained histology sections. One can sequentially illuminate the slides with red, green, and blue light to obtain the color information as in Fig. 6(a). It is also possible to apply image-to-image translation to convert the recovered monochromatic FPM image to the style of regular incoherent microscopy. To this end, Wang et al. adopted a cycle generative adversarial network (cycleGAN) [291] to perform unsupervised virtual staining of FPM images [277]. The coherent artifacts of FPM recovery can also be reduced in this image-to-image translation process [281]. Similarly, one can also apply virtual staining to the recovered images obtained from other ptychographic implementations, such as coded ptychography [30] and near-field ptychography [278]. Figure 6(b1) shows the recovered monochromatic image using a near-field ptychography platform. Figure 6(b2) shows the virtually stained image using the cycleGAN [278]. Figure 6(c) demonstrates another strategy of virtual staining. Instead of using a neural network for image translation, Gao et al. reported a color transfer strategy to virtually stain an FPM image [279]. In this approach, the color texture information was extracted from a captured color image using the low-NA lens in an FPM platform. The texture information was then applied to the high-resolution FPM monochromatic recovery for virtual staining.
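The color-transfer idea can be sketched as follows. This is a hypothetical simplification, not the published pipeline: per-channel chromaticity ratios and nearest-neighbor upsampling stand in for the registration and texture-mapping steps of the actual method [279].

```python
import numpy as np

def color_transfer(mono_hr, color_lr):
    """Apply the chromaticity of a low-resolution color capture to a
    high-resolution monochromatic reconstruction (didactic stand-in)."""
    scale = mono_hr.shape[0] // color_lr.shape[0]
    luminance = color_lr.mean(axis=2, keepdims=True)
    chroma = color_lr / np.maximum(luminance, 1e-6)           # per-channel ratios
    chroma_hr = np.kron(chroma, np.ones((scale, scale, 1)))   # upsample chromaticity
    return chroma_hr * mono_hr[..., None]
```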

An important advantage of FPM is its capability of performing post-acquisition refocusing. Figure 6(d) shows a recovered image of a thick thyroid biopsy smear based on the digital refocusing capability of FPM. The digitally refocused patches of the sample were synthesized into one all-in-focus image in this demonstration [292].

Figures 6(e) and 6(f) demonstrate the applications of coded ptychography for high-throughput digital pathology. Figure 6(e) shows the recovered phase image of an unstained thyroid smear sample [60]. The zoomed-in height map reveals the 3D topographic structure of this thick cytology smear sample. Figure 6(f) shows the recovered whole slide image of a stained histology slide. The focus map in Fig. 6(f1) was generated by maximizing a focus metric post-measurement. Figure 6(f2) shows the recovered gigapixel whole slide image and the zoomed-in view of the slide. Figure 6(f3) shows the ground truth captured using a regular light microscope [30]. The difference between the coded ptychography and the ground truth is shown in Fig. 6(f4).

5.2 High-throughput cytometric analysis and screening

With the recovered gigapixel images of the bio-specimens, ptychography can also find applications in cytometric analysis and screening, where different parameters of the cells can be precisely measured and analyzed. Figure 7(a) shows a recovered gigapixel phase image of a blood smear sample using the rotational coded ptychography platform [61]. The recovered image was used to automatically segment the white blood cells and Trypanosoma brucei parasites. The inset of Fig. 7(a1) shows the locations of white blood cells and parasites, marked with red and blue dots, respectively. Figure 7(a2) shows a zoomed-in view of the gigapixel whole slide image. Figure 7(a3) shows a scatter plot of the cell area versus the average phase for the white blood cells and the parasites. The two different clusters in this figure indicate the two types of cells. Similarly, Fig. 7(b1) shows the recovered phase image of a blood smear using a high-NA FPM system [128]. The recovered phase image can be used to identify the white blood cells as shown in the zoomed-in views of Figs. 7(b2) and 7(b3).

 figure: Fig. 7.

Fig. 7. High-throughput cytometric analysis via different ptychographic implementations. (a) The recovered whole slide phase image of trypanosomes in a blood smear. The image was acquired using rotational coded ptychography with the specimen mounted on the spinning disk of a Blu-ray drive [61]. (b) The high-resolution recovered phase image of a blood smear using FPM [128]. (c) Ki-67 cell analysis based on the recovered images using the lensless ptychographic whole slide scanner [60]. (d) Whole slide intensity and phase images of a blood smear captured using coded ptychography [30]. The zoomed-in views highlight the phase and intensity images of the white blood cells, which can be used for performing high-throughput differential white blood cell counting.

Download Full Size | PDF

In Fig. 7(c), Jiang et al. performed a cytometric analysis of Ki-67 cells using the lensless ptychographic whole slide scanner [60]. The Ki-67 biomarker is a proliferation-associated nuclear protein for labeling dividing cells. The fraction of Ki-67 positive tumor cells is often correlated with the clinical course of cancer. Figure 7(c1) shows the recovered whole slide intensity and phase images of a tissue section labeled with the Ki-67 biomarkers. Figure 7(c2) shows the segmentation results using a deep neural network. Figures 7(c3) and 7(c4) show the zoomed-in views of Figs. 7(c1) and 7(c2). Figure 7(c5) shows the measurement of dry mass and cell area of the segmented Ki-67 positive and negative cells. Histogram analyses of the cell eccentricity, cell area, dry mass, and average phase for both the positive and negative cells are also provided in this figure. Figure 7(d) shows the application of the high-throughput parallel coded ptychography platform for white blood cell counting [30]. Figure 7(d1) shows the recovered whole slide image of a blood smear. Figures 7(d2)-7(d5) show the zoomed-in views of the white blood cells.
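Quantitative phase maps make dry-mass measurements of this kind straightforward: dry mass is proportional to the integrated optical path difference, with a specific refraction increment of roughly 0.2 µm³/pg commonly assumed for cellular protein. A minimal sketch (the units and the choice of α are stated assumptions):

```python
import numpy as np

def dry_mass_pg(phase, wavelength_um, pixel_area_um2, alpha_um3_per_pg=0.2):
    """Dry mass (picograms) from a quantitative phase image (radians) of one
    segmented cell, using the specific refraction increment alpha."""
    opd = phase * wavelength_um / (2 * np.pi)  # optical path difference, in um
    return opd.sum() * pixel_area_um2 / alpha_um3_per_pg
```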

Ptychographic imaging platforms can also be used for high-throughput screening of biological cells, drugs, and other objects. Figure 8(a1) shows the full field-of-view image of the entire microfilter obtained by an FPM platform [293]. The zoomed-in views of Figs. 8(a2)-8(a4) show the circulating tumor cells captured by the microfilter array. Circulating tumor cells are recognized as a candidate biomarker with strong prognostic and predictive potentials in metastatic disease. High-throughput screening of these cells allows for early cancer detection and treatment effectiveness monitoring.

 figure: Fig. 8.

Fig. 8. High-throughput screening via different ptychographic implementations. (a) Large-scale color imaging of the entire microfilter for circulating tumor cell screening [293]. (b) High-throughput urinalysis based on the rotational coded ptychography platform built with a Blu-ray drive [61]. (c) Large-scale bacterial growth monitoring for rapid antimicrobial drug screening [64]. By imposing the temporal correlation constraint in coded ptychography, the imaging platform can achieve a centimeter-scale field of view, a half-pitch resolution of 488 nm, and a temporal resolution of 15-second intervals [64].

Download Full Size | PDF

Figure 8(b) shows the application of the rotational coded ptychography platform for high-throughput urine sediment examination [61]. Inspection of urine sediments is currently performed using a regular light microscope, offering a direct indication of the state of the renal and genitourinary systems. However, the process is labor-intensive and time-consuming, and the imaging area is limited to a small field of view on the urine sediment slide. As a result, the examination is imprecise and highly variable. Lensless coded ptychography can address these issues by imaging the entire urine sediment slide at high speed. Figure 8(b1) shows the recovered phase image of the entire sediment slide. The inset of Fig. 8(b1) shows the locations of calcium oxalate crystals tracked by an automated screening algorithm (crystal locations are labeled by the blue dots). A large number of calcium oxalate crystals often implies kidney disease. Figure 8(b2) shows the recovered phase images of different crystals on the urine sediment slide. The slow-varying phase profiles of these crystals are challenging to obtain using other common lensless techniques, such as lensless in-line holography [294,295], the transport-of-intensity equation approach [284], and blind ptychography [173].

Figure 8(c) shows the application of an integrated coded ptychographic sensor for microbial monitoring and detection [64]. Performing rapid microbial detection can shorten the time required by antimicrobial susceptibility tests. These susceptibility assays determine whether an antibiotic drug is effective in stopping the growth of a specific bacterial strain. If an effective drug can be found in the early stages of the clinical course, proper administration and adherence can avert the development of antibiotic resistance. Current optical detection methods often rely on the overall optical property of microbial cultures. They cannot resolve individual growth events at the single-cell level. As a result, one may need to wait until the bacteria grow to confluence before the results can be interpreted. Coded ptychography can perform lensless cytometric analysis of microbial cultures over a large scale and with high spatiotemporal resolution. In a time-lapse experiment, the microbial cultures often change little between adjacent acquisitions. Therefore, the recovered image from the last time point can be used as the initial guess for the object estimate at the current time point. By imposing this temporal correlation constraint in coded ptychography, Jiang et al. demonstrated an integrated imaging platform achieving a centimeter-scale field of view, a half-pitch resolution of 488 nm, and a temporal resolution of 15-second intervals [64]. Figure 8(c1) shows the recovered large-scale phase image of bacterial cultures on an agar plate. Figure 8(c2) shows the growth of bacterial colonies under different treatment conditions. Figure 8(c3) shows the average dry mass of the bacterial cells measured over time. The minimum inhibitory concentration value was determined to be ∼1 µg/mL from the growth curves. The combination of high phase sensitivity, high spatiotemporal resolution, and large field of view of this platform can facilitate various cell screening and cytometric analysis applications.

5.3 Quantitative cell and tissue imaging in 2D and 3D

Ptychography’s capability of generating quantitative phase contrast can find applications when imaging various biological samples. Figure 9(a) shows the recovered phase images of live cell cultures using a high-speed FPM platform, where initial phase estimates were obtained from the differential phase contrast approach [122]. Similarly, Fig. 9(b) shows the recovered phase images of HeLa cell culture captured using an annular illumination FPM platform [80]. At each time point, four images were captured under four annular illumination angles in this platform. These measurements were then used to recover the phase images of the cell culture. Cell mitotic and apoptotic events can be acquired at a frame rate of 25 Hz [80]. Figure 9(c) shows the recovered phase of live A549 cells using a lens-based selected-area ptychography system [144]. The high-contrast, artifact-free, and focus-free images distinguish the dividing cells from non-dividing cells. In contrast with the images obtained from lens-based implementations, Fig. 9(d) shows the recovered phase image of U87MG cell culture using coded ptychography [31]. This lensless imaging approach offers a centimeter-scale large field of view and sub-cellular resolution for cell culture monitoring.

 figure: Fig. 9.

Fig. 9. 2D live-cell imaging via different ptychographic implementations. (a) The recovered phase images of U2OS cell culture using the in-vitro FPM system [122], where the phase is initialized using the differential-phase-contrast approach. (b) Video-rate phase imaging of HeLa cells using annular-illumination FPM [80], where the slow-varying phase information is effectively converted into intensity variations in the captured images. (c) Phase imaging and cell state identification of A549 cells using a lens-based selected-area ptychography system [144]. The white arrows show a proportion of brighter dividing cells, and the intense lines within the cells mark chromosome alignment prior to cytokinesis. (d) The recovered large-scale phase image of U87MG cell culture obtained by a lensless coded ptychography platform [31].

Download Full Size | PDF

Ptychography can also find applications in 3D imaging of different biological samples. Figure 10(a) shows the recovered 3D volume of a Spirogyra sample using multi-slice ptychography [15]. This approach models the sample as multiple slices separated by a certain distance in the forward imaging process. The difference between the estimated signal and the actual measurement is back-propagated to different object layers to update the corresponding slices. Thus, this approach does not require rotation of the specimen with respect to the illumination source. The same multi-slice modeling can also be implemented in FPM. Figure 10(b) shows the recovered intensity and phase images of a 3D Spirogyra algae sample using multi-slice FPM [16].

 figure: Fig. 10.

Fig. 10. Ptychography for 3D microscopy. (a) The recovered 3D volume of a Spirogyra sample via multi-slice ptychography [15]. (b) The recovered intensity and phase images of a 3D Spirogyra algae sample based on multi-slice FPM [16]. (c) Phase projection and 3D rendering of a 3D sample using multi-slice ptychographic tomography [93]. (d) Wide-field-of-view and high-resolution 3D imaging of a large population of HeLa cells via Fourier ptychographic diffraction tomography [71], where the 3D refractive index of the cell culture is recovered from the FPM intensity measurements. (e) Different projected views of X-ray fluorescence and ptychographic tomography reconstructions [52], where the pyrenoid is located near the top region and the acidocalcisomes near the bottom region. (f) 3D rendering of a reconstructed mouse brain using ptychographic optical coherence tomography [142]. The colorscale represents the logarithm of the normalized reflectivity.

Download Full Size | PDF

In Fig. 10(c), Li et al. demonstrated a ptychographic tomography approach by combining multi-slice modeling with rotation-based tomographic reconstruction [93]. The recovered images in Fig. 10(c) show a glass tube filled with glass beads. This platform can achieve isotropic 3D resolution with a small number of rotational measurements. Multi-slice modeling in this approach improves reconstructions by accounting for propagation effects inside the 3D sample. In Fig. 10(d), Zuo et al. demonstrated the use of Fourier ptychographic diffraction tomography for 3D imaging. In this approach, each captured image is used to update a spherical cap of the Ewald sphere in the 3D Fourier space [65]. Figure 10(d) shows the recovered high-resolution 3D refractive index distribution of a live HeLa cell culture [71]. In Fig. 10(e), Deng et al. performed correlative 3D imaging through a combination of simultaneous cryogenic X-ray ptychographic tomography and X-ray fluorescence microscopy [52]. High-quality 3D maps of the unlabeled cellular structure and elemental distributions within the cell can be clearly visualized from the recovered projections in Fig. 10(e). Lastly, Fig. 10(f) shows the recovered 3D volume of a mouse brain captured with ptychographic optical coherence tomography [142]. The color scale in this figure represents the logarithm of the normalized reflectivity of the brain tissue sample.

5.4 Polarization-sensitive imaging

Polarization techniques can be combined with ptychography to measure the vectorial properties of light or the vectorial transformations imposed on light by bio-specimens. Biomedical polarimetry can provide extra vectorial information through methods compatible with existing optical systems, such as microscopes and endoscopes. In particular, it provides an additional intrinsic contrast mechanism for imaging anisotropic bio-specimens in a label-free manner. In the past decade, it has become a rapidly developing field for biomedical imaging [297].

In Fig. 11(a), Lo et al. performed polarization-sensitive detection using an X-ray ptychography platform [296]. The ptychographic datasets were collected at four different linear polarization states in Fig. 11(a). The corresponding ptychographic reconstructions were then used to generate the polarization-dependent imaging contrast of anisotropic crystalline materials.

 figure: Fig. 11.

Fig. 11. Polarization-sensitive imaging using different ptychographic implementations. (a) The recovered absorption and phase images of coral-skeleton particles via X-ray linear dichroic ptychography [296]. (b) The large-area phase and retardance reconstruction of a thin cardiac tissue section using vectorial FPM [250]. (c) The recovered intensity, phase, and birefringence map of a Tilia stem using polarization-sensitive FPM [251]. (d) Recovered birefringence maps of mouse eye and heart tissue using polarization-sensitive lensless ptychography [102].


In the visible light regime, Ferrand et al. reported the formalism of vectorial ptychography for polarization-sensitive imaging [248]. To retrieve the full anisotropic properties of the sample, the authors proposed a measurement scheme using linearly polarized probes and polarization analyzers. With this vectorial ptychography framework, Ferrand et al. reported the first experimental demonstration of recovering the Jones matrix of an anisotropic specimen [249]. The same vectorial imaging concept can also be adapted for FPM. Figure 11(b) shows the phase and retardance reconstructions of a thin cardiac tissue section using vectorial FPM [250]. In this approach, a generator polarizer is placed between the LED array and the object, and an analyzer polarizer is placed in the detection path. With four generator-analyzer configurations, the authors then recovered the Jones matrix of the object. Similarly, it is possible to use a polarization-sensitive camera (with a polarizer array placed directly on top of the pixels) to acquire four different polarization states simultaneously [251]. With this strategy, Fig. 11(c) shows large-scale, high-resolution intensity, phase, and birefringence maps of a Tilia stem. The same strategy has also been implemented in a lensless ptychography platform [102] built on a mask-modulated configuration with angle-varied illumination [97,100]. Figure 11(d) shows the recovered birefringence maps of mouse eye (left panel) and heart tissue (right panel) using this platform. The overlaid short white line indicates the mean orientation of the optical axis evaluated over a small region.
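To make the four-configuration Jones matrix recovery concrete: with a generator at angle θg and an analyzer at θa, the recovered complex field at each pixel is a(θa)ᵀ J g(θg), so four independent (θg, θa) pairs give a 4 × 4 linear system per pixel. The sketch below is a generic illustration of this principle, not code from the cited works; the angle set and the test sample are assumed for demonstration.

```python
import numpy as np

def jones_from_four(measurements, angles):
    """Recover per-pixel Jones matrices from four complex reconstructions.

    measurements: complex array of shape (4, H, W), one ptychographic
        reconstruction per generator/analyzer configuration.
    angles: four (theta_g, theta_a) tuples in radians.
    Returns a complex array of shape (H, W, 2, 2).
    """
    # Each measured field is a(θa)^T J g(θg)
    #   = cosθa (Jxx cosθg + Jxy sinθg) + sinθa (Jyx cosθg + Jyy sinθg).
    M = np.array([
        [np.cos(ta) * np.cos(tg), np.cos(ta) * np.sin(tg),
         np.sin(ta) * np.cos(tg), np.sin(ta) * np.sin(tg)]
        for tg, ta in angles
    ])
    h, w = measurements.shape[1:]
    # Solve the same 4x4 system for every pixel at once.
    J_flat = np.linalg.solve(M, measurements.reshape(4, -1))
    return J_flat.reshape(4, h, w).transpose(1, 2, 0).reshape(h, w, 2, 2)

# Self-check with a uniform quarter-wave-plate-like sample (assumed example).
angles = [(0, 0), (0, np.pi / 2), (np.pi / 2, 0), (np.pi / 2, np.pi / 2)]
J_true = np.array([[1, 0], [0, 1j]], dtype=complex)
meas = np.stack([
    (np.cos(ta) * (J_true[0, 0] * np.cos(tg) + J_true[0, 1] * np.sin(tg))
     + np.sin(ta) * (J_true[1, 0] * np.cos(tg) + J_true[1, 1] * np.sin(tg)))
    * np.ones((8, 8), complex)
    for tg, ta in angles
])
J_rec = jones_from_four(meas, angles)
```

With the 0°/90° combinations used here, each measurement isolates one Jones component directly; other non-degenerate angle sets simply make the 4 × 4 system less trivial to invert.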

6. Summary and outlook

This review provides an overview of the ptychography technique, with a focus on its optical implementations. In particular, we categorize different implementations into four groups based on their lensless/lens-based configurations and coded-illumination/coded-detection operations. We anticipate that high-throughput optical ptychography, currently in its early stages, will continue to grow and expand in its applications. We identify the following directions for potential further development.

  • 1) Metasurface for coded ptychography. The optimal choice of the coded pattern in coded ptychography remains an exciting research topic. Current implementations using disorder-engineered surfaces [21,30,31,64,66] and blood-coated surfaces [60,61] have achieved some degree of success. However, we envision that better metasurface designs can further improve the resolution [298]. For example, one can tailor the angular scattering profile with subwavelength scatterers to achieve the best resolution performance. Spectral responses can also be integrated into the metasurface design for ptychographic spectroscopy.
  • 2) Synthetic aperture coded ptychography. Currently, the best detection NA of coded ptychography has been achieved without performing aperture synthesis. It is possible to implement angle-varied illumination for coded ptychography to further improve the resolution in a similar way to FPM. We note that angle-varied illumination has been demonstrated in lensless near-field ptychographic setups [97–100]. However, the achieved resolution is relatively low due to the limited temporal coherence of the light sources. To push the resolution limit in coded ptychography, one can steer an extended laser beam for angle-varied illumination at high speed [83].
  • 3) Lensless coded ptychographic tomography. The concept of diffraction tomography can be integrated into coded ptychography for 3D high-resolution imaging, in a similar way to Fourier ptychographic diffraction tomography [65,71]. By illuminating the object from different incident angles, one can acquire a sequence of diffraction measurements, each corresponding to a spherical cap of the Ewald sphere in the Fourier space. Stitching all the spherical caps together recovers the 3D scattering potential of the object. Efforts in this direction are ongoing.
  • 4) Synthetic aperture ptychography for EUV and X-ray imaging. In a typical X-ray or EUV ptychography setup, the object is translated to different lateral positions for image acquisition, and the resolution is largely limited by the spanning angle of the detector. Synthetic aperture ptychography can bypass this limit by translating a coded sensor at the far field (Fig. 3(b4)). The resolution is no longer limited by the spanning angle of the detector; instead, it is determined by how far one translates the coded sensor. As a coded detection scheme, it also lifts the thin-sample requirement of conventional ptychography. For EUV imaging, the penetration depth is often limited to the surface layer; we therefore envision implementing synthetic aperture ptychography in a reflection modality. X-ray imaging, on the other hand, has a long penetration depth, and we envision a transmission-mode implementation.
  • 5) A better light source to improve the imaging throughput. The captured raw images of lensless and lens-based implementations often suffer from the coherent artifacts of laser sources, such as the interference patterns caused by light reflection from multiple surfaces. LED sources can partially address this problem. However, the limited temporal coherence of LEDs often leads to degraded resolution in lensless implementations. In addition, the low optical flux of LEDs further limits the achievable imaging throughput, especially when darkfield images need to be acquired. An ideal source for optical ptychography would be a laser with high temporal coherence but relatively low spatial coherence. Such a source can be realized by coupling a multimode fiber to a mode scrambler. With an optimal coherent light source, we envision a significant reduction in the number of acquisitions in a lensless implementation.
  • 6) Chemical imaging in the near-infrared and infrared regimes. The near-infrared and infrared spectral regions cover the vibrational modes of many molecular bonds. Implementing lens-based or lensless ptychography in these regimes can lead to important applications in label-free chemical imaging. One challenge would be the relatively large pixel size of detectors in these regimes. Sub-pixel modeling [30,246] may be an important consideration in recovering high-resolution images.
  • 7) Imaging model and phase retrieval algorithm. Ptychographic imaging setups can serve as a testbed for different phase retrieval algorithms. The large amount of data acquired for biomedical applications also necessitates the development of memory-efficient optimization algorithms. Recovery guarantees are another important research topic for applied mathematicians. The design of the coded pattern in coded ptychography may provide extra degrees of freedom to tailor the measurement matrix for recovery guarantees [299,300]. For lensless implementations, the pixel down-sampling process needs to be properly modeled in the image formation process. For example, we may need to consider the spatial and angular responses of the pixels for high-resolution imaging [30]. The crosstalk between adjacent pixels may also need to be modeled, especially for image sensors with sub-micron pixel sizes.
  • 8) Education. An FPM experiment can provide hands-on experience with microcontroller programming, image acquisition, and image processing. It also serves as an exemplary demonstration of several important concepts in Fourier optics, including the optical transfer function, NA, Ewald sphere, pupil aberration, and free-space propagation, among others. On the other hand, the DIY nature of the coded ptychography platform allows it to be implemented with minimal resources. As we have demonstrated, one can modify a Blu-ray drive to develop a rotational coded ptychography platform [61]. A high-throughput ptychographic whole slide scanner can be built by modifying a low-cost 3-axis manual stage (XYZ 3-Axis Manual Linear Stage, Amazon, ~$120) [60]. Configuring these setups can serve as excellent projects for lab courses that teach circuits and electronics, optics, biomedical instrumentation, and robotics. It can also be part of a capstone project for secondary, post-secondary, and graduate students.
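The Ewald-cap stitching described in direction 3) above reduces to a gridding operation: for each illumination wave vector k_i, the 2D spectrum of the measured field samples the 3D scattering potential at K = k_s − k_i, where k_s lies on the Ewald sphere of radius 2π/λ. The numpy sketch below computes the 3D Fourier-space coordinates sampled by one measurement; grid size, pixel size, and wavelength are illustrative assumptions.

```python
import numpy as np

def ewald_cap_coords(n, dx, wavelength, illum_dir):
    """3D Fourier-space coordinates sampled by one angle-varied measurement.

    n: lateral grid size; dx: pixel size; illum_dir: unit illumination vector.
    Returns (Kx, Ky, Kz) coordinate arrays and a mask of propagating components.
    """
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz_sq = k**2 - KX**2 - KY**2
    mask = kz_sq > 0                      # keep propagating components only
    KZ = np.sqrt(np.where(mask, kz_sq, 0.0))
    ki = k * np.asarray(illum_dir)        # illumination wave vector
    # Scattering vector K = k_s - k_i lies on a shifted spherical cap.
    return KX - ki[0], KY - ki[1], KZ - ki[2], mask

# Example: normal incidence, 0.5 um wavelength, 1 um pixels (assumed values).
Kx, Ky, Kz, mask = ewald_cap_coords(64, 1.0, 0.5, (0.0, 0.0, 1.0))
```

Each illumination angle fills one such cap in a 3D Fourier grid; accumulating many caps (e.g., by nearest-voxel gridding and averaging the overlaps) and inverse-transforming recovers the 3D scattering potential.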
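For the pixel down-sampling model mentioned in direction 7) above, a common starting point is to treat each physical pixel as averaging an s × s block of the high-resolution intensity; an iterative solver then needs both this forward operator and its adjoint. The sketch below uses that simple box-pixel assumption (the factor s and array sizes are illustrative); a measured spatial and angular pixel response would replace the box kernel in practice.

```python
import numpy as np

def pixel_downsample(intensity, s):
    """Forward model: each sensor pixel averages an s x s block."""
    h, w = intensity.shape
    return intensity.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def pixel_downsample_adjoint(lowres, s):
    """Adjoint of the box-averaging operator: spread each value over its block."""
    return np.repeat(np.repeat(lowres, s, axis=0), s, axis=1) / s**2

# Adjoint check: <A x, y> should equal <x, A^T y> for any x, y.
rng = np.random.default_rng(1)
x = rng.standard_normal((64, 64))
y = rng.standard_normal((16, 16))
lhs = np.sum(pixel_downsample(x, 4) * y)
rhs = np.sum(x * pixel_downsample_adjoint(y, 4))
```

The inner-product check at the end is the standard way to validate a forward/adjoint pair before plugging it into a gradient-based phase retrieval loop.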

Funding

UConn Spark grant; National Science Foundation (2012140).

Acknowledgments

P. S. acknowledges the support of the Thermo Fisher Scientific fellowship.

Disclosures

The authors declare no competing financial interests.

Data availability

No data were generated or analyzed in the presented research.

References

1. D. Sayre, “Some implications of a theorem due to Shannon,” Acta Crystallogr. 5(6), 843 (1952). [CrossRef]  

2. W. Hoppe and G. Strube, “Diffraction in inhomogeneous primary wave fields. 2. Optical experiments for phase determination of lattice interferences,” Acta Crystallogr., Sect. A: Cryst. Phys., Diffr., Theor. Gen. Crystallogr. 25(4), 502–507 (1969). [CrossRef]  

3. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

4. H. M. L. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

5. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

6. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning x-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

7. M. D. Seaberg, B. Zhang, D. F. Gardner, E. R. Shanblatt, M. M. Murnane, H. C. Kapteyn, and D. E. Adams, “Tabletop nanometer extreme ultraviolet imaging in an extended reflection mode using coherent Fresnel ptychography,” Optica 1(1), 39–44 (2014). [CrossRef]  

8. J. M. Rodenburg, A. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98(3), 034801 (2007). [CrossRef]  

9. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]  

10. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]  

11. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

12. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

13. P. Song, S. Jiang, H. Zhang, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). [CrossRef]  

14. A. M. Maiden, M. J. Humphry, and J. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]  

15. T. Godden, R. Suman, M. Humphry, J. Rodenburg, and A. Maiden, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22(10), 12513–12523 (2014). [CrossRef]  

16. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

17. P. Li, D. J. Batey, T. B. Edo, and J. M. Rodenburg, “Separation of three-dimensional scattering effects in tilt-series Fourier ptychography,” Ultramicroscopy 158, 1–7 (2015). [CrossRef]  

18. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013). [CrossRef]  

19. P. Li and A. Maiden, “Lensless LED matrix ptychographic microscope: problems and solutions,” Appl. Opt. 57(8), 1800–1806 (2018). [CrossRef]  

20. D. J. Batey, D. Claus, and J. M. Rodenburg, “Information multiplexing in ptychography,” Ultramicroscopy 138, 13–21 (2014). [CrossRef]  

21. P. Song, R. Wang, J. Zhu, T. Wang, Z. Bian, Z. Zhang, K. Hoshino, M. Murphy, S. Jiang, and C. Guo, “Super-resolved multispectral lensless microscopy via angle-tilted, wavelength-multiplexed ptychographic modulation,” Opt. Lett. 45(13), 3486–3489 (2020). [CrossRef]  

22. Y. Yao, Y. Jiang, J. Klug, Y. Nashed, C. Roehrig, C. Preissner, F. Marin, M. Wojcik, O. Cossairt, and Z. Cai, “Broadband X-ray ptychography using multi-wavelength algorithm,” J. Synchrotron Radiat. 28(1), 309–317 (2021). [CrossRef]  

23. A. M. Maiden, M. J. Humphry, F. Zhang, and J. M. Rodenburg, “Superresolution imaging via ptychography,” J. Opt. Soc. Am. A 28(4), 604–612 (2011). [CrossRef]  

24. K. Wakonig, A. Diaz, A. Bonnin, M. Stampanoni, A. Bergamaschi, J. Ihli, M. Guizar-Sicairos, and A. Menzel, “X-ray Fourier ptychography,” Sci. Adv. 5(2), eaav0282 (2019). [CrossRef]  

25. H. Yang, R. Rutte, L. Jones, M. Simson, R. Sagawa, H. Ryll, M. Huth, T. Pennycook, M. Green, and H. Soltau, “Simultaneous atomic-resolution electron ptychography and Z-contrast imaging of light and heavy elements in complex nanostructures,” Nat. Commun. 7(1), 12532 (2016). [CrossRef]  

26. P. M. Pelz, W. X. Qiu, R. Bücker, G. Kassier, and R. Miller, “Low-dose cryo electron ptychography via non-convex Bayesian optimization,” Sci. Rep. 7(1), 9883 (2017). [CrossRef]  

27. F. Pfeiffer, “X-ray ptychography,” Nat. Photonics 12(1), 9–17 (2018). [CrossRef]  

28. Y. Jiang, Z. Chen, Y. Han, P. Deb, H. Gao, S. Xie, P. Purohit, M. W. Tate, J. Park, and S. M. Gruner, “Electron ptychography of 2D materials to deep sub-ångström resolution,” Nature 559(7714), 343–349 (2018). [CrossRef]  

29. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

30. S. Jiang, C. Guo, P. Song, N. Zhou, Z. Bian, J. Zhu, R. Wang, P. Dong, Z. Zhang, and J. Liao, “Resolution-enhanced parallel coded ptychography for high-throughput optical imaging,” ACS Photonics 8(11), 3261–3271 (2021). [CrossRef]  

31. S. Jiang, J. Zhu, P. Song, C. Guo, Z. Bian, R. Wang, Y. Huang, S. Wang, H. Zhang, and G. Zheng, “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020). [CrossRef]  

32. A. C. Chan, J. Kim, A. Pan, H. Xu, D. Nojima, C. Hale, S. Wang, and C. Yang, “Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes),” Sci. Rep. 9(1), 11114 (2019). [CrossRef]  

33. T. Aidukas, P. C. Konda, and A. R. Harvey, “High-speed multi-objective Fourier ptychographic microscopy,” Opt. Express 30(16), 29189–29205 (2022). [CrossRef]  

34. J. Rodenburg and A. Maiden, “Ptychography,” in Springer Handbook of Microscopy (Springer, 2019), pp. 819–904.

35. M. Guizar-Sicairos and P. Thibault, “Ptychography: A solution to the phase problem,” Phys. Today 74(9), 42–48 (2021). [CrossRef]  

36. P. C. Konda, L. Loetgering, K. C. Zhou, S. Xu, A. R. Harvey, and R. Horstmeyer, “Fourier ptychography: current applications and future promises,” Opt. Express 28(7), 9603–9630 (2020). [CrossRef]  

37. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

38. A. Pan, C. Zuo, and B. Yao, “High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine,” Rep. Prog. Phys. 83(9), 096101 (2020). [CrossRef]  

39. L. Loetgering, S. Witte, and J. Rothhardt, “Advances in laboratory-scale ptychography using high harmonic sources,” Opt. Express 30(3), 4133–4164 (2022). [CrossRef]  

40. J. Qian, C. Yang, A. Schirotzek, F. Maia, and S. Marchesini, “Efficient algorithms for ptychographic phase retrieval,” Inverse Problems and Applications, Contemp. Math 615, 261–279 (2014). [CrossRef]  

41. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]  

42. M. Odstrčil, A. Menzel, and M. Guizar-Sicairos, “Iterative least-squares solver for generalized maximum-likelihood ptychography,” Opt. Express 26(3), 3108–3123 (2018). [CrossRef]  

43. H. Chang, L. Yang, and S. Marchesini, “Fast iterative algorithms for blind phase retrieval: a survey,” in Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, K. Chen, ed. (Springer, 2021). [CrossRef]  

44. M. Rogalski, P. Zdańkowski, and M. Trusiak, “FPM app: an open-source MATLAB application for simple and intuitive Fourier ptychographic reconstruction,” Bioinformatics 37(20), 3695–3696 (2021). [CrossRef]  

45. P. Song, S. Jiang, H. Zhang, Z. Bian, C. Guo, K. Hoshino, and G. Zheng, “Super-resolution microscopy via ptychographic structured modulation of a diffuser,” Opt. Lett. 44(15), 3645–3648 (2019). [CrossRef]  

46. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118–123 (1987). [CrossRef]  

47. J. R. Fienup, “Phase retrieval algorithms: a personal tour,” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]  

48. M. Stockmar, P. Cloetens, I. Zanette, B. Enders, M. Dierolf, F. Pfeiffer, and P. Thibault, “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. 3(1), 1927 (2013). [CrossRef]  

49. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic X-ray computed tomography at the nanoscale,” Nature 467(7314), 436–439 (2010). [CrossRef]  

50. M. Esmaeili, J. B. Fløystad, A. Diaz, K. Høydalsvik, M. Guizar-Sicairos, J. W. Andreasen, and D. W. Breiby, “Ptychographic X-ray tomography of silk fiber hydration,” Macromolecules 46(2), 434–439 (2013). [CrossRef]  

51. M. Holler, A. Diaz, M. Guizar-Sicairos, P. Karvinen, E. Färm, E. Härkönen, M. Ritala, A. Menzel, J. Raabe, and O. Bunk, “X-ray ptychographic computed tomography at 16 nm isotropic 3D resolution,” Sci. Rep. 4(1), 1–5 (2014). [CrossRef]  

52. J. Deng, Y. H. Lo, M. Gallagher-Jones, S. Chen, A. Pryor Jr, Q. Jin, Y. P. Hong, Y. S. Nashed, S. Vogt, and J. Miao, “Correlative 3D x-ray fluorescence and ptychographic tomography of frozen-hydrated green algae,” Sci. Adv. 4(11), eaau4548 (2018). [CrossRef]  

53. P. Godard, G. Carbone, M. Allain, F. Mastropietro, G. Chen, L. Capello, A. Diaz, T. Metzger, J. Stangl, and V. Chamard, “Three-dimensional high-resolution quantitative microscopy of extended crystals,” Nat. Commun. 2(1), 568 (2011). [CrossRef]  

54. S. Hruszkewycz, M. Holt, C. Murray, J. Bruley, J. Holt, A. Tripathi, O. Shpyrko, I. McNulty, M. Highland, and P. Fuoss, “Quantitative nanoscale imaging of lattice distortions in epitaxial semiconductor heterostructures using nanofocused X-ray Bragg projection ptychography,” Nano Lett. 12(10), 5148–5154 (2012). [CrossRef]  

55. V. Chamard, M. Allain, P. Godard, A. Talneau, G. Patriarche, and M. Burghammer, “Strain in a silicon-on-insulator nanostructure revealed by 3D x-ray Bragg ptychography,” Sci. Rep. 5(1), 9827 (2015). [CrossRef]  

56. S. Hruszkewycz, M. Allain, M. Holt, C. Murray, J. Holt, P. Fuoss, and V. Chamard, “High-resolution three-dimensional structural microscopy by single-angle Bragg ptychography,” Nat. Mater. 16(2), 244–251 (2017). [CrossRef]  

57. Y. Takahashi, A. Suzuki, S. Furutaku, K. Yamauchi, Y. Kohmura, and T. Ishikawa, “Bragg x-ray ptychography of a silicon crystal: Visualization of the dislocation strain field and the production of a vortex beam,” Phys. Rev. B 87(12), 121201 (2013). [CrossRef]  

58. M. V. Holt, S. O. Hruszkewycz, C. E. Murray, J. R. Holt, D. M. Paskiewicz, and P. H. Fuoss, “Strain imaging of nanoscale semiconductor heterostructures with X-ray Bragg projection ptychography,” Phys. Rev. Lett. 112(16), 165502 (2014). [CrossRef]  

59. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

60. S. Jiang, C. Guo, P. Song, T. Wang, R. Wang, T. Zhang, Q. Wu, R. Pandey, and G. Zheng, “High-throughput digital pathology via a handheld, multiplexed, and AI-powered ptychographic whole slide scanner,” Lab Chip 22(14), 2657–2670 (2022). [CrossRef]  

61. S. Jiang, C. Guo, T. Wang, J. Liu, P. Song, T. Zhang, R. Wang, B. Feng, and G. Zheng, “Blood-Coated Sensor for High-Throughput Ptychographic Cytometry on a Blu-ray Disc,” ACS Sens. 7(4), 1058–1067 (2022). [CrossRef]  

62. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

63. Z. Bian, C. Guo, S. Jiang, J. Zhu, R. Wang, P. Song, Z. Zhang, K. Hoshino, and G. Zheng, “Autofocusing technologies for whole slide imaging and automated microscopy,” J. Biophotonics 13(12), e202000227 (2020). [CrossRef]  

64. S. Jiang, C. Guo, Z. Bian, R. Wang, J. Zhu, P. Song, P. Hu, D. Hu, Z. Zhang, K. Hoshino, B. Feng, and G. Zheng, “Ptychographic sensor for large-scale lensless microbial monitoring with high spatiotemporal resolution,” Biosens. Bioelectron. 196, 113699 (2022). [CrossRef]  

65. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

66. P. Song, C. Guo, S. Jiang, T. Wang, P. Hu, D. Hu, Z. Zhang, B. Feng, and G. Zheng, “Optofluidic ptychography on a chip,” Lab Chip 21(23), 4549–4556 (2021). [CrossRef]  

67. P. Song, S. Jiang, T. Wang, C. Guo, R. Wang, T. Zhang, and G. Zheng, “Synthetic aperture ptychography: coded sensor translation for joint spatial-Fourier bandwidth expansion,” Photonics Res. 10(7), 1624–1632 (2022). [CrossRef]  

68. R. Horstmeyer and C. Yang, “A phase space model of Fourier ptychographic microscopy,” Opt. Express 22(1), 338–358 (2014). [CrossRef]  

69. M. Liang and C. Yang, “Implementation of free-space Fourier Ptychography with near maximum system numerical aperture,” Opt. Express 30(12), 20321–20332 (2022). [CrossRef]  

70. G. Zheng, X. Ou, R. Horstmeyer, and C. Yang, “Characterization of spatially varying aberrations for wide field-of-view microscopy,” Opt. Express 21(13), 15131–15143 (2013). [CrossRef]  

71. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020). [CrossRef]  

72. K. Guo, S. Dong, and G. Zheng, “Fourier ptychography for brightfield, phase, darkfield, reflective, multi-slice, and fluorescence imaging,” IEEE J. Sel. Top. Quantum Electron. 22(4), 77–88 (2016). [CrossRef]  

73. S. Pacheco, B. Salahieh, T. Milster, J. J. Rodriguez, and R. Liang, “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett. 40(22), 5343–5346 (2015). [CrossRef]  

74. S. Pacheco, G. Zheng, and R. Liang, “Reflective Fourier ptychography,” J. Biomed. Opt. 21(2), 026010 (2016). [CrossRef]  

75. K. S. Park, Y. S. Bae, S.-S. Choi, and M. Y. Sohn, “High numerical aperture reflective deep ultraviolet Fourier ptychographic microscopy for nanofeature imaging,” APL Photonics 7(9), 096105 (2022). [CrossRef]  

76. H. Lee, B. H. Chon, and H. K. Ahn, “Reflective Fourier ptychographic microscopy using a parabolic mirror,” Opt. Express 27(23), 34382–34391 (2019). [CrossRef]  

77. B. Lee, J.-Y. Hong, D. Yoo, J. Cho, Y. Jeong, S. Moon, and B. Lee, “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]  

78. J. Sun, Q. Chen, J. Zhang, Y. Fan, and C. Zuo, “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43(14), 3365–3368 (2018). [CrossRef]  

79. Y. F. Cheng, M. Strachan, Z. Weiss, M. Deb, D. Carone, and V. Ganapati, “Illumination pattern design with deep learning for single-shot Fourier ptychographic microscopy,” Opt. Express 27(2), 644–656 (2019). [CrossRef]  

80. J. Sun, C. Zuo, J. Zhang, Y. Fan, and Q. Chen, “High-speed Fourier ptychographic microscopy based on programmable annular illuminations,” Sci. Rep. 8, 1–12 (2018).

81. Y. Shu, J. Sun, J. Lyu, Y. Fan, N. Zhou, R. Ye, G. Zheng, Q. Chen, and C. Zuo, “Adaptive optical quantitative phase imaging based on annular illumination Fourier ptychographic microscopy,” PhotoniX 3(1), 1–15 (2022). [CrossRef]  

82. C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. So, “Digital micromirror device-based laser-illumination Fourier ptychographic microscopy,” Opt. Express 23(21), 26999–27010 (2015). [CrossRef]  

83. J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier ptychographic microscopy using laser illumination source,” Biomed. Opt. Express 7(11), 4787–4802 (2016). [CrossRef]  

84. X. Tao, J. Zhang, C. Tao, P. Sun, R. Wu, and Z. Zheng, “Tunable-illumination for laser Fourier ptychographic microscopy based on a background noise-reducing system,” Opt. Commun. 468, 125764 (2020). [CrossRef]  

85. Z. Bian, S. Jiang, P. Song, H. Zhang, P. Hoveida, K. Hoshino, and G. Zheng, “Ptychographic modulation engine: a low-cost DIY microscope add-on for coherent super-resolution imaging,” J. Phys. D: Appl. Phys. 53(1), 014005 (2020). [CrossRef]  

86. S. McDermott and A. Maiden, “Near-field ptychographic microscope for quantitative phase imaging,” Opt. Express 26(19), 25471–25480 (2018). [CrossRef]  

87. X. He, Z. Jiang, Y. Kong, S. Wang, and C. Liu, “Fourier ptychography via wavefront modulation with a diffuser,” Opt. Commun. 459, 125057 (2020). [CrossRef]  

88. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, and G. Zheng, “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586–13599 (2014). [CrossRef]  

89. D. Claus, D. Robinson, D. Chetwynd, Y. Shuo, W. Pike, J. D. J. José, and J. Rodenburg, “Dual wavelength optical metrology using ptychography,” J. Opt. 15(3), 035702 (2013). [CrossRef]  

90. B. Zhang, D. F. Gardner, M. D. Seaberg, E. R. Shanblatt, H. C. Kapteyn, M. M. Murnane, and D. E. Adams, “High contrast 3D imaging of surfaces near the wavelength limit using tabletop EUV ptychography,” Ultramicroscopy 158, 98–104 (2015). [CrossRef]  

91. P. Helfenstein, R. Rajeev, I. Mochi, A. Kleibert, C. Vaz, and Y. Ekinci, “Beam drift and partial probe coherence effects in EUV reflective-mode coherent diffractive imaging,” Opt. Express 26(9), 12242–12256 (2018). [CrossRef]  

92. M. Li, L. Bian, G. Zheng, A. Maiden, Y. Liu, Y. Li, J. Suo, Q. Dai, and J. Zhang, “Single-pixel ptychography,” Opt. Lett. 46(7), 1624–1627 (2021). [CrossRef]  

93. P. Li and A. Maiden, “Multi-slice ptychographic tomography,” Sci. Rep. 8(1), 1–10 (2018). [CrossRef]  

94. H. Zhang, Z. Bian, S. Jiang, J. Liu, P. Song, and G. Zheng, “Field-portable quantitative lensless microscopy based on translated speckle illumination and sub-sampled ptychographic phase retrieval,” Opt. Lett. 44(8), 1976–1979 (2019). [CrossRef]  

95. A. Pan and B. Yao, “Three-dimensional space optimization for near-field ptychography,” Opt. Express 27(4), 5433–5446 (2019). [CrossRef]  

96. W. Xu, H. Lin, H. Wang, and F. Zhang, “Super-resolution near-field ptychography,” Opt. Express 28(4), 5164–5178 (2020). [CrossRef]  

97. Z. Zhang, Y. Zhou, S. Jiang, K. Guo, K. Hoshino, J. Zhong, J. Suo, Q. Dai, and G. Zheng, “Invited Article: Mask-modulated lensless imaging with multi-angle illuminations,” APL Photonics 3(6), 060803 (2018). [CrossRef]  

98. Y. Zhou, J. Wu, J. Suo, X. Han, G. Zheng, and Q. Dai, “Single-shot lensless imaging via simultaneous multi-angle LED illumination,” Opt. Express 26(17), 21418–21432 (2018). [CrossRef]  

99. Y. Zhou, X. Hua, Z. Zhang, X. Hu, K. Dixit, J. Zhong, G. Zheng, and X. Cao, “Wirtinger gradient descent optimization for reducing gaussian noise in lensless microscopy,” Opt. Lasers Eng. 134, 106131 (2020). [CrossRef]  

100. C. Lu, Y. Zhou, Y. Guo, S. Jiang, Z. Zhang, G. Zheng, and J. Zhong, “Mask-modulated lensless imaging via translated structured illumination,” Opt. Express 29(8), 12491–12501 (2021). [CrossRef]  

101. Y. Guo, R. Guo, P. Qi, Y. Zhou, Z. Zhang, G. Zheng, and J. Zhong, “Robust multi-angle structured illumination lensless microscopy via illumination angle calibration,” Opt. Lett. 47(7), 1847–1850 (2022). [CrossRef]  

102. J. Kim, S. Song, B. Kim, M. Park, S. J. Oh, D. Kim, B. Cense, Y.-M. Huh, J. Y. Lee, and C. Joo, “Ptychographic lens-less polarization microscopy,” arXiv, arXiv:2209.06305 (2022).

103. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75(4), 043805 (2007). [CrossRef]  

104. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. 35(15), 2585–2587 (2010). [CrossRef]  

105. X. Wen, Y. Geng, X. Zhou, J. Tan, S. Liu, C. Tan, and Z. Liu, “Ptychography imaging by 1-D scanning with a diffuser,” Opt. Express 28(15), 22658–22668 (2020). [CrossRef]  

106. H. Sha, C. He, S. Jiang, P. Song, S. Liu, W. Zou, P. Qin, H. Wang, and Y. Zhang, “Lensless coherent diffraction imaging based on spatial light modulator with unknown modulation curve,” arXiv, arXiv:2204.03947 (2022).

107. S. Jiang, C. Guo, P. Hu, D. Hu, P. Song, T. Wang, Z. Bian, Z. Zhang, and G. Zheng, “High-throughput lensless whole slide imaging via continuous height-varying modulation of a tilted sensor,” Opt. Lett. 46(20), 5212–5215 (2021). [CrossRef]  

108. C. Chang, X. Pan, H. Tao, C. Liu, S. P. Veetil, and J. Zhu, “Single-shot ptychography with highly tilted illuminations,” Opt. Express 28(19), 28441–28451 (2020). [CrossRef]  

109. C. Chang, X. Pan, H. Tao, C. Liu, S. P. Veetil, and J. Zhu, “3D single-shot ptychography with highly tilted illuminations,” Opt. Express 29(19), 30878–30891 (2021). [CrossRef]  

110. X. Pan, C. Liu, and J. Zhu, “Single shot ptychographical iterative engine based on multi-beam illumination,” Appl. Phys. Lett. 103(17), 171105 (2013). [CrossRef]  

111. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]  

112. W. Xu, H. Xu, Y. Luo, T. Li, and Y. Shi, “Optical watermarking based on single-shot-ptychography encoding,” Opt. Express 24(24), 27922–27936 (2016). [CrossRef]  

113. B. K. Chen, P. Sidorenko, O. Lahav, O. Peleg, and O. Cohen, “Multiplexed single-shot ptychography,” Opt. Lett. 43(21), 5379–5382 (2018). [CrossRef]  

114. O. Wengrowicz, O. Peleg, T. Zahavy, B. Loevsky, and O. Cohen, “Deep neural networks in single-shot ptychography,” Opt. Express 28(12), 17511–17520 (2020). [CrossRef]  

115. D. Goldberger, J. Barolak, C. G. Durfee, and D. E. Adams, “Three-dimensional single-shot ptychography,” Opt. Express 28(13), 18887–18898 (2020). [CrossRef]  

116. J. Barolak, D. Goldberger, J. Squier, Y. Bellouard, C. Durfee, and D. Adams, “Wavelength-multiplexed single-shot ptychography,” Ultramicroscopy 233, 113418 (2022). [CrossRef]  

117. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

118. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]  

119. M. Kellman, E. Bostan, M. Chen, and L. Waller, “Data-driven design for fourier ptychographic microscopy,” in 2019 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2019), 1–8.

120. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171–6180 (2015). [CrossRef]  

121. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014). [CrossRef]  

122. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

123. S. Sen, I. Ahmed, B. Aljubran, A. A. Bernussi, and L. G. de Peralta, “Fourier ptychographic microscopy using an infrared-emitting hemispherical digital condenser,” Appl. Opt. 55(23), 6421–6427 (2016). [CrossRef]  

124. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018). [CrossRef]  

125. Z. F. Phillips, R. Eckert, and L. Waller, “Quasi-dome: a self-calibrated high-NA LED illuminator for Fourier ptychography,” in Imaging Systems and Applications, (Optical Society of America, 2017), IW4E.5.

126. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]  

127. M. G. Mayani, K. R. Tekseth, D. W. Breiby, J. Klein, and M. N. Akram, “High-resolution polarization-sensitive Fourier ptychography microscopy using a high numerical aperture dome illuminator,” Opt. Express 30(22), 39891–39903 (2022). [CrossRef]  

128. J. Sun, C. Zuo, L. Zhang, and Q. Chen, “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7(1), 1–11 (2017). [CrossRef]  

129. K. Zhang, X. Lu, X. Chen, R. Zhang, K.-M. Fung, H. Liu, B. Zheng, S. Li, and Y. Qiu, “Using Fourier ptychography microscopy to achieve high-resolution chromosome imaging: an initial evaluation,” J. Biomed. Opt. 27(01), 016504 (2022). [CrossRef]  

130. K. Guo, Z. Bian, S. Dong, P. Nanda, Y. M. Wang, and G. Zheng, “Microscopy illumination engineering using a low-cost liquid crystal display,” Biomed. Opt. Express 6(2), 574–579 (2015). [CrossRef]  

131. T. Aidukas, R. Eckert, A. R. Harvey, L. Waller, and P. C. Konda, “Low-cost, sub-micron resolution, wide-field computational microscopy using opensource hardware,” Sci. Rep. 9(1), 7457 (2019). [CrossRef]  

132. S. Dong, K. Guo, P. Nanda, R. Shiradkar, and G. Zheng, “FPscope: a field-portable high-resolution microscope using a cellphone lens,” Biomed. Opt. Express 5(10), 3305–3310 (2014). [CrossRef]  

133. T. Kamal, L. Yang, and W. M. Lee, “In situ retrieval and correction of aberrations in moldless lenses using Fourier ptychography,” Opt. Express 26(3), 2708–2719 (2018). [CrossRef]  

134. K. C. Lee, K. Lee, J. Jung, S. H. Lee, D. Kim, and S. A. Lee, “A smartphone-based Fourier ptychographic microscope using the display screen for illumination,” ACS Photonics 8(5), 1307–1315 (2021). [CrossRef]  

135. J. Kim, B. M. Henley, C. H. Kim, H. A. Lester, and C. Yang, “Incubator embedded cell culture imaging system (EmSight) based on Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3097–3110 (2016). [CrossRef]  

136. M. Xiang, A. Pan, Y. Zhao, X. Fan, H. Zhao, C. Li, and B. Yao, “Coherent synthetic aperture imaging for visible remote sensing via reflective Fourier ptychography,” Opt. Lett. 46(1), 29–32 (2021). [CrossRef]  

137. H. Zhang, S. Jiang, J. Liao, J. Deng, J. Liu, Y. Zhang, and G. Zheng, “Near-field Fourier ptychography: super-resolution phase retrieval via speckle illumination,” Opt. Express 27(5), 7498–7512 (2019). [CrossRef]  

138. L.-H. Yeh, S. Chowdhury, and L. Waller, “Computational structured illumination for high-content fluorescence and phase microscopy,” Biomed. Opt. Express 10(4), 1978–1998 (2019). [CrossRef]  

139. S. Dong, P. Nanda, K. Guo, J. Liao, and G. Zheng, “Incoherent Fourier ptychographic photography using structured light,” Photonics Res. 3(1), 19–23 (2015). [CrossRef]  

140. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22(17), 20856–20870 (2014). [CrossRef]  

141. K. Guo, Z. Zhang, S. Jiang, J. Liao, J. Zhong, Y. C. Eldar, and G. Zheng, “13-fold resolution gain through turbid layer via translated unknown speckle illumination,” Biomed. Opt. Express 9(1), 260–275 (2018). [CrossRef]  

142. M. Du, L. Loetgering, K. S. Eikema, and S. Witte, “Ptychographic optical coherence tomography,” Opt. Lett. 46(6), 1337–1340 (2021). [CrossRef]  

143. A. Maiden, M. Sarahan, M. Stagg, S. Schramm, and M. Humphry, “Quantitative electron phase imaging with high sensitivity and an unlimited field of view,” Sci. Rep. 5(1), 14690 (2015). [CrossRef]  

144. J. Marrison, L. Räty, P. Marriott, and P. O’Toole, “Ptychography–a label free, high-contrast imaging technique for live cells using quantitative phase information,” Sci. Rep. 3(1), 2369 (2013). [CrossRef]  

145. J. Holloway, Y. Wu, M. K. Sharma, O. Cossairt, and A. Veeraraghavan, “SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography,” Sci. Adv. 3(4), e1602564 (2017). [CrossRef]  

146. J. Wu, F. Yang, and L. Cao, “Resolution enhancement of long-range imaging with sparse apertures,” Opt. Lasers Eng. 155, 107068 (2022). [CrossRef]  

147. C. Detlefs, M. A. Beltran, J.-P. Guigay, and H. Simons, “Translative lens-based full-field coherent X-ray imaging,” J. Synchrotron Radiat. 27(1), 119–126 (2020). [CrossRef]  

148. C. Wang, M. Hu, Y. Takashima, T. J. Schulz, and D. J. Brady, “Snapshot ptychography on array cameras,” Opt. Express 30(2), 2585–2598 (2022). [CrossRef]  

149. G.-J. Choi, J. Lim, S. Jeon, J. Cho, G. Lim, N.-C. Park, and Y.-P. Park, “Dual-wavelength Fourier ptychography using a single LED,” Opt. Lett. 43(15), 3526–3529 (2018). [CrossRef]  

150. C. Shen, A. C. S. Chan, J. Chung, D. E. Williams, A. Hajimiri, and C. Yang, “Computational aberration correction of VIS-NIR multispectral imaging microscopy based on Fourier ptychography,” Opt. Express 27(18), 24923–24937 (2019). [CrossRef]  

151. X. Ou, J. Chung, R. Horstmeyer, and C. Yang, “Aperture scanning Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3140–3150 (2016). [CrossRef]  

152. J. Chung, G. W. Martinez, K. C. Lencioni, S. R. Sadda, and C. Yang, “Computational aberration compensation by coded-aperture-based correction of aberration obtained from optical Fourier coding and blur estimation,” Optica 6(5), 647–661 (2019). [CrossRef]  

153. R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, “Overlapped Fourier coding for optical aberration removal,” Opt. Express 22(20), 24062–24080 (2014). [CrossRef]  

154. M. Xiang, A. Pan, J. Liu, T. Xi, X. Guo, F. Liu, and X. Shao, “Phase diversity-based Fourier ptychography for varying aberration correction,” Front. Phys. 10, 129 (2022). [CrossRef]  

155. X. He, C. Liu, and J. Zhu, “Single-shot Fourier ptychography based on diffractive beam splitting,” Opt. Lett. 43(2), 214–217 (2018). [CrossRef]  

156. X. He, C. Liu, and J. Zhu, “Single-shot aperture-scanning Fourier ptychography,” Opt. Express 26(22), 28187–28196 (2018). [CrossRef]  

157. Y. Zhang, Z. Zhang, and A. Maiden, “Ptycho-cam: a ptychographic phase imaging add-on for optical microscopy,” Appl. Opt. 61(10), 2874–2880 (2022). [CrossRef]  

158. H. Zhou, C. Shen, M. Liang, and C. Yang, “Analysis of postreconstruction digital refocusing in Fourier ptychographic microscopy,” Opt. Eng. 61(07), 073102 (2022). [CrossRef]  

159. J. Rodenburg and R. Bates, “The theory of super-resolution electron microscopy via Wigner-distribution deconvolution,” Phil. Trans. R. Soc. Lond. A 339(1655), 521–553 (1992). [CrossRef]  

160. B. McCallum and J. Rodenburg, “Two-dimensional demonstration of Wigner phase-retrieval microscopy in the STEM configuration,” Ultramicroscopy 45(3-4), 371–380 (1992). [CrossRef]  

161. B. McCallum and J. Rodenburg, “Simultaneous reconstruction of object and aperture functions from multiple far-field intensity measurements,” J. Opt. Soc. Am. A 10(2), 231–239 (1993). [CrossRef]  

162. H. N. Chapman, “Phase-retrieval X-ray microscopy by Wigner-distribution deconvolution,” Ultramicroscopy 66(3-4), 153–172 (1996). [CrossRef]  

163. P. Li, T. B. Edo, and J. M. Rodenburg, “Ptychographic inversion via Wigner distribution deconvolution: noise suppression and probe design,” Ultramicroscopy 147, 106–113 (2014). [CrossRef]  

164. T. Plamann and J. Rodenburg, “Electron ptychography. II. Theory of three-dimensional propagation effects,” Acta Crystallogr., Sect. A: Found. Crystallogr. 54(1), 61–73 (1998). [CrossRef]  

165. T. Plamann and J. Rodenburg, “Double resolution imaging with infinite depth of focus in single lens scanning microscopy,” Optik (Stuttgart) 96, 31–36 (1994).

166. J. M. Rodenburg and H. M. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

167. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

168. P. Thibault and M. Guizar-Sicairos, “Maximum-likelihood refinement for coherent diffractive imaging,” New J. Phys. 14(6), 063004 (2012). [CrossRef]  

169. S. Gravel and V. Elser, “Divide and concur: A general approach to constraint satisfaction,” Phys. Rev. E 78(3), 036706 (2008). [CrossRef]  

170. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Problems 21(1), 37–50 (2005). [CrossRef]  

171. M. Pham, A. Rana, J. Miao, and S. Osher, “Semi-implicit relaxed Douglas-Rachford algorithm (sDR) for ptychography,” Opt. Express 27(22), 31246–31260 (2019). [CrossRef]  

172. Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Problems 28(11), 115010 (2012). [CrossRef]  

173. H. Chang, P. Enfedaque, and S. Marchesini, “Blind ptychographic phase retrieval via convergent alternating direction method of multipliers,” SIAM J. Imaging Sci. 12(1), 153–185 (2019). [CrossRef]  

174. Y. Huang, S. Jiang, R. Wang, P. Song, J. Zhang, G. Zheng, X. Ji, and Y. Zhang, “Ptychography-based high-throughput lensless on-chip microscopy via incremental proximal algorithms,” Opt. Express 29(23), 37892–37906 (2021). [CrossRef]  

175. L. Yang, Z. Liu, G. Zheng, and H. Chang, “Batch-based alternating direction methods of multipliers for Fourier ptychography,” Opt. Express 30(19), 34750–34764 (2022). [CrossRef]  

176. A. Wang, Z. Zhang, S. Wang, A. Pan, C. Ma, and B. Yao, “Fourier ptychographic microscopy via alternating direction method of multipliers,” Cells 11(9), 1512 (2022). [CrossRef]  

177. H. Yan, “Ptychographic phase retrieval by proximal algorithms,” New J. Phys. 22(2), 023035 (2020). [CrossRef]  

178. Y. Sun, S. Xu, Y. Li, L. Tian, B. Wohlberg, and U. S. Kamilov, “Regularized Fourier ptychography using an online plug-and-play algorithm,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2019), 7665–7669.

179. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

180. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

181. L. Bian, J. Suo, J. Chung, X. Ou, C. Yang, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6(1), 27384 (2016). [CrossRef]  

182. E. Bostan, M. Soltanolkotabi, D. Ren, and L. Waller, “Accelerated Wirtinger flow for multiplexed Fourier ptychographic microscopy,” in 2018 25th IEEE International Conference on Image Processing (ICIP), (IEEE, 2018), 3823–3827.

183. J. Liu, Y. Li, W. Wang, J. Tan, and C. Liu, “Accelerated and high-quality Fourier ptychographic method using a double truncated Wirtinger criteria,” Opt. Express 26(20), 26556–26565 (2018). [CrossRef]  

184. R. Xu, M. Soltanolkotabi, J. P. Haldar, W. Unglaub, J. Zusman, A. F. J. Levi, and R. M. Leahy, “Accelerated Wirtinger flow: A fast algorithm for ptychography,” arXiv, arXiv:1806.05546 (2018).

185. S. Chen, T. Xu, J. Zhang, X. Wang, and Y. Zhang, “Optimized denoising method for Fourier ptychographic microscopy based on Wirtinger flow,” IEEE Photonics J. 11(1), 1–14 (2019). [CrossRef]  

186. R. Horstmeyer, R. Y. Chen, X. Ou, B. Ames, J. A. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. 17(5), 053044 (2015). [CrossRef]  

187. Y. Zhang, P. Song, J. Zhang, and Q. Dai, “Fourier ptychographic microscopy with sparse representation,” Sci. Rep. 7(1), 1–10 (2017). [CrossRef]  

188. Y. Zhang, Z. Cui, J. Zhang, P. Song, and Q. Dai, “Group-based sparse representation for Fourier ptychography microscopy,” Opt. Commun. 404, 55–61 (2017). [CrossRef]  

189. Y. Zhang, P. Song, and Q. Dai, “Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed Poisson-Gaussian likelihood,” Opt. Express 25(1), 168–179 (2017). [CrossRef]  

190. Y. Fan, J. Sun, Q. Chen, M. Wang, and C. Zuo, “Adaptive denoising method for Fourier ptychographic microscopy,” Opt. Commun. 404, 23–31 (2017). [CrossRef]  

191. H. Chang, P. Enfedaque, J. Zhang, J. Reinhardt, B. Enders, Y.-S. Yu, D. Shapiro, C. G. Schroer, T. Zeng, and S. Marchesini, “Advanced denoising for X-ray ptychography,” Opt. Express 27(8), 10395–10418 (2019). [CrossRef]  

192. G. Jagatap, Z. Chen, S. Nayer, C. Hegde, and N. Vaswani, “Sample efficient Fourier ptychography for structured data,” IEEE Trans. Comput. Imaging 6, 344–357 (2020). [CrossRef]  

193. C. M. Kewish, P. Thibault, M. Dierolf, O. Bunk, A. Menzel, J. Vila-Comamala, K. Jefimovs, and F. Pfeiffer, “Ptychographic characterization of the wavefield in the focus of reflective hard X-ray optics,” Ultramicroscopy 110(4), 325–329 (2010). [CrossRef]  

194. P. Li, T. Edo, D. Batey, J. Rodenburg, and A. Maiden, “Breaking ambiguities in mixed state ptychography,” Opt. Express 24(8), 9038–9052 (2016). [CrossRef]  

195. M. Odstrcil, P. Baksh, S. Boden, R. Card, J. Chad, J. Frey, and W. Brocklesby, “Ptychographic coherent diffractive imaging with orthogonal probe relaxation,” Opt. Express 24(8), 8360–8369 (2016). [CrossRef]  

196. Z. Chen, M. Odstrcil, Y. Jiang, Y. Han, M.-H. Chiu, L.-J. Li, and D. A. Muller, “Mixed-state electron ptychography enables sub-angstrom resolution imaging with picometer precision at low dose,” Nat. Commun. 11(1), 1–10 (2020). [CrossRef]  

197. A. Shenfield and J. M. Rodenburg, “Evolutionary determination of experimental parameters for ptychographical imaging,” J. Appl. Phys. 109(12), 124510 (2011). [CrossRef]  

198. A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]  

199. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]  

200. P. Dwivedi, A. Konijnenberg, S. Pereira, and H. Urbach, “Lateral position correction in ptychography using the gradient of intensity patterns,” Ultramicroscopy 192, 29–36 (2018). [CrossRef]  

201. N. Burdet, X. Shi, D. Parks, J. N. Clark, X. Huang, S. D. Kevan, and I. K. Robinson, “Evaluation of partial coherence correction in X-ray ptychography,” Opt. Express 23(5), 5452–5467 (2015). [CrossRef]  

202. W. Yu, S. Wang, S. Veetil, S. Gao, C. Liu, and J. Zhu, “High-quality image reconstruction method for ptychography with partially coherent illumination,” Phys. Rev. B 93(24), 241105 (2016). [CrossRef]  

203. J. N. Clark, X. Huang, R. J. Harder, and I. K. Robinson, “Continuous scanning mode for ptychography,” Opt. Lett. 39(20), 6066–6069 (2014). [CrossRef]  

204. P. M. Pelz, M. Guizar-Sicairos, P. Thibault, I. Johnson, M. Holler, and A. Menzel, “On-the-fly scans for X-ray ptychography,” Appl. Phys. Lett. 105(25), 251101 (2014). [CrossRef]  

205. X. Huang, K. Lauer, J. N. Clark, W. Xu, E. Nazaretski, R. Harder, I. K. Robinson, and Y. S. Chu, “Fly-scan ptychography,” Sci. Rep. 5(1), 1–5 (2015). [CrossRef]  

206. J. Deng, Y. S. Nashed, S. Chen, N. W. Phillips, T. Peterka, R. Ross, S. Vogt, C. Jacobsen, and D. J. Vine, “Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging,” Opt. Express 23(5), 5438–5451 (2015). [CrossRef]  

207. M. Odstrčil, M. Holler, and M. Guizar-Sicairos, “Arbitrary-path fly-scan ptychography,” Opt. Express 26(10), 12585–12593 (2018). [CrossRef]  

208. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108(5), 481–487 (2008). [CrossRef]  

209. A. de Beurs, L. Loetgering, M. Herczog, M. Du, K. S. Eikema, and S. Witte, “aPIE: an angle calibration algorithm for reflection ptychography,” Opt. Lett. 47(8), 1949–1952 (2022). [CrossRef]  

210. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

211. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(09), 1 (2017). [CrossRef]  

212. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]  

213. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]  

214. A. Zhou, W. Wang, N. Chen, E. Y. Lam, B. Lee, and G. Situ, “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018). [CrossRef]  

215. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25(23), 28053–28067 (2017). [CrossRef]  

216. A. Schropp, R. Hoppe, V. Meier, J. Patommel, F. Seiboth, H. J. Lee, B. Nagler, E. C. Galtier, B. Arnold, and U. Zastrau, “Full spatial characterization of a nanofocused x-ray free-electron laser beam by ptychographic imaging,” Sci. Rep. 3(1), 1633 (2013). [CrossRef]  

217. C. Zheng, S. Zhang, G. Zhou, Y. Hu, and Q. Hao, “Robust Fourier ptychographic microscopy via a physics-based defocusing strategy for calibrating angle-varied LED illumination,” Biomed. Opt. Express 13(3), 1581–1594 (2022). [CrossRef]  

218. C. Zheng, S. Zhang, D. Yang, G. Zhou, Y. Hu, and Q. Hao, “Robust full-pose-parameter estimation for the LED array in Fourier ptychographic microscopy,” Biomed. Opt. Express 13(8), 4468–4482 (2022). [CrossRef]  

219. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]  

220. L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express 7(11), 4543–4553 (2016). [CrossRef]  

221. A. Maiden, G. Morrison, B. Kaulich, A. Gianoncelli, and J. Rodenburg, “Soft X-ray spectromicroscopy using ptychography with randomly phased illumination,” Nat. Commun. 4(1), 1669 (2013). [CrossRef]  

222. Y. Zhang, A. Pan, M. Lei, and B. Yao, “Data preprocessing methods for robust Fourier ptychographic microscopy,” Opt. Eng. 56(12), 1 (2017). [CrossRef]  

223. A. Pan, C. Zuo, Y. Xie, M. Lei, and B. Yao, “Vignetting effect in Fourier ptychographic microscopy,” Opt. Lasers Eng. 120, 40–48 (2019). [CrossRef]  

224. V. Bianco, B. Mandracchia, J. Běhal, D. Barone, P. Memmolo, and P. Ferraro, “Miscalibration-tolerant Fourier ptychography,” IEEE J. Sel. Top. Quantum Electron. 27(4), 1–17 (2021). [CrossRef]  

225. V. Bianco, M. D. Priscoli, D. Pirone, G. Zanfardino, P. Memmolo, F. Bardozzo, L. Miccio, G. Ciaparrone, P. Ferraro, and R. Tagliaferri, “Deep learning-based, misalignment resilient, real-time Fourier Ptychographic Microscopy reconstruction of biological tissue slides,” IEEE J. Sel. Top. Quantum Electron. 28(4), 1–10 (2022). [CrossRef]  

226. D. Pirone, V. Bianco, M. Valentino, M. Mugnano, V. Pagliarulo, P. Memmolo, L. Miccio, and P. Ferraro, “Fourier ptychographic microscope allows multi-scale monitoring of cells layout onto micropatterned substrates,” Opt. Lasers Eng. 156, 107103 (2022). [CrossRef]  

227. B. Cui, S. Zhang, Y. Wang, Y. Hu, and Q. Hao, “Pose correction scheme for camera-scanning Fourier ptychography based on camera calibration and homography transform,” Opt. Express 30(12), 20697–20711 (2022). [CrossRef]  

228. R. Karl, C. Bevis, R. Lopez-Rios, J. Reichanadter, D. Gardner, C. Porter, E. Shanblatt, M. Tanksalvala, G. F. Mancini, and M. Murnane, “Spatial, spectral, and polarization multiplexed ptychography,” Opt. Express 23(23), 30250–30258 (2015). [CrossRef]  

229. E. H. R. Tsai, I. Usov, A. Diaz, A. Menzel, and M. Guizar-Sicairos, “X-ray ptychography with extended depth of field,” Opt. Express 24(25), 29089–29108 (2016). [CrossRef]  

230. S. Gao, P. Wang, F. Zhang, G. T. Martinez, P. D. Nellist, X. Pan, and A. I. Kirkland, “Electron ptychographic microscopy for three-dimensional imaging,” Nat. Commun. 8(1), 163 (2017). [CrossRef]  

231. S. Chowdhury, M. Chen, R. Eckert, D. Ren, F. Wu, N. Repina, and L. Waller, “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211–1219 (2019). [CrossRef]  

232. A. Suzuki, S. Furutaku, K. Shimomura, K. Yamauchi, Y. Kohmura, T. Ishikawa, and Y. Takahashi, “High-resolution multislice x-ray ptychography of extended thick objects,” Phys. Rev. Lett. 112(5), 053903 (2014). [CrossRef]  

233. R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9(5), 2130–2141 (2018). [CrossRef]  

234. J. Li, A. Matlock, Y. Li, Q. Chen, C. Zuo, and L. Tian, “High-speed in vitro intensity diffraction tomography,” Adv. Photonics 1(06), 1 (2019). [CrossRef]  

235. A. Matlock and L. Tian, “High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography,” Biomed. Opt. Express 10(12), 6432 (2019). [CrossRef]  

236. T.-A. Pham, E. Soubies, A. Goy, J. Lim, F. Soulez, D. Psaltis, and M. Unser, “Versatile reconstruction framework for diffraction tomography with intensity measurements and multiple scattering,” Opt. Express 26(3), 2749–2763 (2018). [CrossRef]  

237. L. Loetgering, M. Du, K. S. E. Eikema, and S. Witte, “zPIE: an autofocusing algorithm for ptychography,” Opt. Lett. 45(7), 2030–2033 (2020). [CrossRef]  

238. B. Enders and P. Thibault, “A computational framework for ptychographic reconstructions,” Proc. R. Soc. London, Ser. A 472(2196), 20160640 (2016). [CrossRef]  

239. Y. S. Nashed, D. J. Vine, T. Peterka, J. Deng, R. Ross, and C. Jacobsen, “Parallel ptychographic reconstruction,” Opt. Express 22(26), 32082–32097 (2014). [CrossRef]  

240. X. Wen, Y. Geng, C. Guo, X. Zhou, J. Tan, S. Liu, C. Tan, and Z. Liu, “A parallel ptychographic iterative engine with a co-start region,” J. Opt. 22(7), 075701 (2020). [CrossRef]  

241. S. Marchesini, H. Krishnan, B. J. Daurer, D. A. Shapiro, T. Perciano, J. A. Sethian, and F. R. Maia, “SHARP: a distributed GPU-based ptychographic solver,” J. Appl. Crystallogr. 49(4), 1245–1252 (2016). [CrossRef]  

242. D. F. Gardner, S. Divitt, and A. T. Watnik, “Ptychographic imaging of incoherently illuminated extended objects using speckle correlations,” Appl. Opt. 58(13), 3564–3569 (2019). [CrossRef]  

243. G. Li, W. Yang, H. Wang, and G. Situ, “Image transmission through scattering media using ptychographic iterative engine,” Appl. Sci. 9(5), 849 (2019). [CrossRef]  

244. M. Rosenfeld, G. Weinberg, D. Doktofsky, Y. Li, L. Tian, and O. Katz, “Acousto-optic ptychography,” Optica 8(6), 936–943 (2021). [CrossRef]  

245. F. Wittwer, J. Hagemann, D. Brückner, S. Flenner, and C. G. Schroer, “Phase retrieval framework for direct reconstruction of the projected refractive index applied to ptychography and holography,” Optica 9(3), 295–302 (2022). [CrossRef]  

246. D. Batey, T. Edo, C. Rau, U. Wagner, Z. Pešić, T. Waigh, and J. Rodenburg, “Reciprocal-space up-sampling from real-space oversampling in x-ray ptychography,” Phys. Rev. A 89(4), 043812 (2014). [CrossRef]  

247. G. Zhou, S. Zhang, Y. Hu, and Q. Hao, “Adaptive high-dynamic-range Fourier ptychography microscopy data acquisition with a red-green-blue camera,” Opt. Lett. 45(17), 4956–4959 (2020). [CrossRef]  

248. P. Ferrand, M. Allain, and V. Chamard, “Ptychography in anisotropic media,” Opt. Lett. 40(22), 5144–5147 (2015). [CrossRef]  

249. P. Ferrand, A. Baroni, M. Allain, and V. Chamard, “Quantitative imaging of anisotropic material properties with vectorial ptychography,” Opt. Lett. 43(4), 763–766 (2018). [CrossRef]  

250. X. Dai, S. Xu, X. Yang, K. C. Zhou, C. Glass, P. C. Konda, and R. Horstmeyer, “Quantitative Jones matrix imaging using vectorial Fourier ptychography,” Biomed. Opt. Express 13(3), 1457–1470 (2022). [CrossRef]  

251. S. Song, J. Kim, S. Hur, J. Song, and C. Joo, “Large-area, high-resolution birefringence imaging with polarization-sensitive Fourier ptychographic microscopy,” ACS Photonics 8(1), 158–165 (2021). [CrossRef]  

252. S. Jiang, K. Guo, J. Liao, and G. Zheng, “Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

253. M. Sun, X. Chen, Y. Zhu, D. Li, Q. Mu, and L. Xuan, “Neural network model combined with pupil recovery for Fourier ptychographic microscopy,” Opt. Express 27(17), 24161–24174 (2019). [CrossRef]  

254. J. Zhang, X. Tao, L. Yang, R. Wu, P. Sun, C. Wang, and Z. Zheng, “Forward imaging neural network with correction of positional misalignment for Fourier ptychographic microscopy,” Opt. Express 28(16), 23164–23175 (2020). [CrossRef]  

255. Y. Zhang, Y. Liu, S. Jiang, K. Dixit, P. Song, X. Zhang, X. Ji, and X. Li, “Neural network model assisted Fourier ptychography with Zernike aberration recovery and total variation constraint,” J. Biomed. Opt. 26(03), 036502 (2021). [CrossRef]  

256. D. Yang, S. Zhang, C. Zheng, G. Zhou, L. Cao, Y. Hu, and Q. Hao, “Fourier ptychography multi-parameter neural network with composite physical priori optimization,” Biomed. Opt. Express 13(5), 2739–2753 (2022). [CrossRef]  

257. Q. Chen, D. Huang, and R. Chen, “Fourier ptychographic microscopy with untrained deep neural network priors,” Opt. Express 30(22), 39597–39612 (2022). [CrossRef]  

258. Y. S. Nashed, T. Peterka, J. Deng, and C. Jacobsen, “Distributed automatic differentiation for ptychography,” Procedia Comput. Sci. 108, 404–414 (2017). [CrossRef]  

259. S. Ghosh, Y. S. Nashed, O. Cossairt, and A. Katsaggelos, “ADP: Automatic differentiation ptychography,” in 2018 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2018), 1–10.

260. S. Kandel, S. Maddali, M. Allain, S. O. Hruszkewycz, C. Jacobsen, and Y. S. Nashed, “Using automatic differentiation as a general framework for ptychographic reconstruction,” Opt. Express 27(13), 18653–18672 (2019). [CrossRef]  

261. M. Du, S. Kandel, J. Deng, X. Huang, A. Demortiere, T. T. Nguyen, R. Tucoulou, V. De Andrade, Q. Jin, and C. Jacobsen, “Adorym: A multi-platform generic X-ray image reconstruction framework based on automatic differentiation,” Opt. Express 29(7), 10000–10035 (2021). [CrossRef]  

262. R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, “Convolutional neural networks that teach microscopes how to image,” arXiv, arXiv:1709.07223 (2017).

263. M. R. Kellman, E. Bostan, N. A. Repina, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” IEEE Trans. Comput. Imaging 5(3), 344–353 (2019). [CrossRef]  

264. A. Muthumbi, A. Chaware, K. Kim, K. C. Zhou, P. C. Konda, R. Chen, B. Judkewitz, A. Erdmann, B. Kappes, and R. Horstmeyer, “Learned sensing: jointly optimized microscope hardware for accurate image classification,” Biomed. Opt. Express 10(12), 6351–6369 (2019). [CrossRef]  

265. K. Kim, P. C. Konda, C. L. Cooke, R. Appel, and R. Horstmeyer, “Multi-element microscope optimization by a learned sensing network with composite physical layers,” Opt. Lett. 45(20), 5684–5687 (2020). [CrossRef]  

266. M. Kellman, K. Zhang, E. Markley, J. Tamir, E. Bostan, M. Lustig, and L. Waller, “Memory-efficient learning for large-scale computational imaging,” IEEE Trans. Comput. Imaging 6, 1403–1414 (2020). [CrossRef]  

267. F. Guzzi, G. Kourousias, A. Gianoncelli, F. Billè, and S. Carrato, “A parameter refinement method for Ptychography based on Deep Learning concepts,” Condens. Matter 6(4), 36 (2021). [CrossRef]  

268. F. Ströhl, S. Jadhav, B. S. Ahluwalia, K. Agarwal, and D. K. Prasad, “Object detection neural network improves Fourier ptychography reconstruction,” Opt. Express 28(25), 37199–37208 (2020). [CrossRef]  

269. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

270. L. Boominathan, M. Maniparambil, H. Gupta, R. Baburajan, and K. Mitra, “Phase retrieval for Fourier Ptychography under varying amount of measurements,” arXiv, arXiv:1805.03593 (2018).

271. A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “PtychNet: CNN based Fourier ptychography,” in 2017 IEEE International Conference on Image Processing (ICIP), (IEEE, 2017), 1712–1716.

272. Y. Xue, S. Cheng, Y. Li, and L. Tian, “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019). [CrossRef]  

273. F. Shamshad, F. Abbas, and A. Ahmed, “Deep ptych: Subsampled fourier ptychography using generative priors,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2019), 7720–7724.

274. J. Zhang, T. Xu, Z. Shen, Y. Qiao, and Y. Zhang, “Fourier ptychographic microscopy reconstruction with multiscale deep residual network,” Opt. Express 27(6), 8612–8625 (2019). [CrossRef]  

275. M. J. Cherukara, T. Zhou, Y. Nashed, P. Enfedaque, A. Hexemer, R. J. Harder, and M. V. Holt, “AI-enabled high-resolution scanning coherent diffraction imaging,” Appl. Phys. Lett. 117(4), 044103 (2020). [CrossRef]  

276. Z. Zhang, T. Wang, S. Feng, Y. Yang, C. Lai, X. Li, L. Shao, and X. Jiang, “Sparse phase retrieval using a physics-informed neural network for Fourier ptychographic microscopy,” Opt. Lett. 47(19), 4909–4912 (2022). [CrossRef]  

277. R. Wang, P. Song, S. Jiang, C. Yan, J. Zhu, C. Guo, Z. Bian, T. Wang, and G. Zheng, “Virtual brightfield and fluorescence staining for Fourier ptychography via unsupervised deep learning,” Opt. Lett. 45(19), 5405–5408 (2020). [CrossRef]  

278. Y. Bian, Y. Jiang, J. Wang, S. Yang, W. Deng, X. Yang, R. Shen, H. Shen, and C. Kuang, “Deep learning colorful ptychographic iterative engine lens-less diffraction microscopy,” Opt. Lasers Eng. 150, 106843 (2022). [CrossRef]  

279. Y. Gao, J. Chen, A. Wang, A. Pan, C. Ma, and B. Yao, “High-throughput fast full-color digital pathology based on Fourier ptychographic microscopy via color transfer,” Sci. China Phys. Mech. Astron. 64(11), 114211 (2021). [CrossRef]  

280. J. Chen, A. Wang, A. Pan, G. Zheng, C. Ma, and B. Yao, “Rapid full-color Fourier ptychographic microscopy via spatially filtered color transfer,” Photonics Res. 10(10), 2410–2421 (2022). [CrossRef]  

281. C. Guo, S. Jiang, L. Yang, P. Song, T. Wang, X. Shao, Z. Zhang, M. Murphy, and G. Zheng, “Deep learning-enabled whole slide imaging (DeepWSI): oil-immersion quality using dry objectives, longer depth of field, higher system throughput, and better functionality,” Opt. Express 29(24), 39669–39684 (2021). [CrossRef]  

282. X. Wang, T. Xu, J. Zhang, S. Chen, and Y. Zhang, “SO-YOLO based WBC detection with Fourier ptychographic microscopy,” IEEE Access 6, 51566–51576 (2018). [CrossRef]  

283. V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A 20(1), 40–55 (2003). [CrossRef]  

284. C. Zuo, J. Li, J. Sun, Y. Fan, J. Zhang, L. Lu, R. Zhang, B. Wang, L. Huang, and Q. Chen, “Transport of intensity equation: a tutorial,” Opt. Lasers Eng. 135, 106187 (2020). [CrossRef]  

285. M. Dierolf, P. Thibault, A. Menzel, C. M. Kewish, K. Jefimovs, I. Schlichting, K. Von Koenig, O. Bunk, and F. Pfeiffer, “Ptychographic coherent diffractive imaging of weakly scattering specimens,” New J. Phys. 12(3), 035017 (2010). [CrossRef]  

286. A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, “Automatic differentiation in machine learning: a survey,” J. Mach. Learn. Res. 18, 5595–5637 (2017). [CrossRef]  

287. E. Abels and L. Pantanowitz, “Current state of the regulatory trajectory for whole slide imaging devices in the USA,” J. Pathol. Inform. 8(1), 23 (2017). [CrossRef]  

288. M. K. K. Niazi, A. V. Parwani, and M. N. Gurcan, “Digital pathology and artificial intelligence,” Lancet Oncol. 20(5), e253–e261 (2019). [CrossRef]  

289. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]  

290. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). [CrossRef]  

291. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, (2017), 2223–2232.

292. M. Liang, C. Bernadt, S. B. J. Wong, C. Choi, R. Cote, and C. Yang, “All-in-focus fine needle aspiration biopsy imaging based on Fourier ptychographic microscopy,” J. Pathol. Inform. 13, 100119 (2022). [CrossRef]  

293. A. J. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. J. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). [CrossRef]  

294. W. Xu, M. Jericho, I. Meinertzhagen, and H. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. 98(20), 11301–11305 (2001). [CrossRef]  

295. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). [CrossRef]  

296. Y. H. Lo, J. Zhou, A. Rana, D. Morrill, C. Gentry, B. Enders, Y.-S. Yu, C.-Y. Sun, D. A. Shapiro, and R. W. Falcone, “X-ray linear dichroic ptychography,” Proc. Natl. Acad. Sci. 118(3), e2019068118 (2021). [CrossRef]  

297. C. He, H. He, J. Chang, B. Chen, H. Ma, and M. J. Booth, “Polarisation optics for biomedical and clinical applications: a review,” Light: Sci. Appl. 10(1), 1–20 (2021). [CrossRef]  

298. M. Jang, Y. Horie, A. Shibukawa, J. Brake, Y. Liu, S. M. Kamali, A. Arbabi, H. Ruan, A. Faraon, and C. Yang, “Wavefront shaping with disorder-engineered metasurfaces,” Nat. Photonics 12(2), 84–90 (2018). [CrossRef]  

299. E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39(2), 277–299 (2015). [CrossRef]  

300. D. Gross, F. Krahmer, and R. Kueng, “Improved recovery guarantees for phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 42(1), 37–64 (2017). [CrossRef]  

Data availability

No data were generated or analyzed in the presented research.

Figures (11)

Fig. 1. Development of the ptychography technique. The number of ptychography-related publications has grown exponentially since the adoption of the iterative phase retrieval framework for reconstruction. Several milestones in its development are highlighted.
Fig. 2. Imaging models and operations of four ptychographic schemes chosen based on their lensless / lens-based configuration and coded-illumination / coded-detection operations. (a) Conventional ptychography: lensless configuration with coded illumination. $O({x,y} )$ denotes the complex object, (${x_i}$, ${y_i}$) denotes the translational shift of the object in real space, $Probe({x,y} )$ denotes the spatially confined probe beam, ‘FT’ denotes Fourier transform, and ‘·’ denotes point-wise multiplication. (b) Coded ptychography: lensless configuration with coded detection. $W({x^{\prime},y^{\prime}} )$ denotes the object exit wavefront at the coded surface plane, $CS({x^{\prime},y^{\prime}} )$ denotes the transmission profile of the coded surface, and ‘Propd’ denotes free-space propagation for a distance d. (c) Fourier ptychography: lens-based configuration with coded illumination. $({{k_{xi}},{k_{yi}}} )$ denotes the incident wavevector of the ith LED element, ‘*’ denotes the convolution operation, ‘PSFobj’ denotes the point spread function of the objective lens. (d) Ptychographic structured modulation: lens-based configuration with coded detection. $D({x^{\prime},y^{\prime}} )$ denotes the transmission profile of the diffuser placed between the object and the objective lens.
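The conventional-ptychography forward model in Fig. 2(a) can be sketched in a few lines of NumPy: the confined probe multiplies a translated object, the exit wave is Fourier transformed, and only the intensity is recorded. All values below (64 × 64 grid, 12-pixel probe radius, 8-pixel scan step) are illustrative choices, not parameters from any referenced system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Complex object O(x, y): random amplitude and phase.
obj = rng.random((n, n)) * np.exp(1j * 2 * np.pi * rng.random((n, n)))

# Spatially confined probe: a circular aperture of 12-pixel radius.
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
probe = (xx**2 + yy**2 < 12**2).astype(complex)

def diffraction_intensity(obj, probe, shift):
    """Far-field intensity for one scan position (x_i, y_i)."""
    shifted = np.roll(obj, shift, axis=(0, 1))           # object translation
    exit_wave = probe * shifted                          # point-wise multiplication
    far_field = np.fft.fftshift(np.fft.fft2(exit_wave))  # 'FT' in Fig. 2(a)
    return np.abs(far_field) ** 2                        # detector records intensity only

# Scan on an 8-pixel grid so adjacent illuminated regions overlap.
patterns = [diffraction_intensity(obj, probe, (sy, sx))
            for sy in range(0, n, 8) for sx in range(0, n, 8)]
print(len(patterns), patterns[0].shape)
```

The 8-pixel step is smaller than the probe diameter, producing the overlapping illuminated regions that the iterative phase-retrieval reconstruction relies on.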
Fig. 3. Different ptychographic implementations are categorized into four groups based on their lensless / lens-based configuration and coded-illumination / coded-detection operations.
Fig. 4. Hardware platforms for different ptychographic implementations. (a) A commercial product based on selected area ptychography (by PhaseFocus). (b) A prototype platform of Fourier ptychography built with a programmable LED matrix [37]. (c) A Fourier ptychographic diffraction tomography platform [71]. (d) A microscope add-on for near-field ptychography [157]. (e)–(g) Fourier ptychography setups built using a smartphone [134], a Raspberry Pi system [131], and a cell phone lens [132], respectively. (h) Lensless on-chip ptychography via rapid galvo mirror scanning [94]. (i) Parallel coded ptychography using an array of coded image sensors [30]. (j) Color-multiplexed ptychographic whole slide scanner [60]. (k) Optofluidic ptychography with a microfluidic chip for sample delivery [66]. (l) Rotational coded ptychography implemented using a blood-coated sensor and a Blu-ray player [61].
Fig. 5. Neural networks and related approaches for ptychographic reconstruction. (a) A neural network is used to model the imaging formation process of ptychography (also termed automatic differentiation). The training process recovers the object and other system parameters. (b) The physical model is incorporated into the design of the network. (c) The network takes the raw measurements and outputs reconstructions. (d) The network takes the ptychographic reconstructions and outputs virtual-stained images or images with other improvements.
Fig. 6. Digital pathology applications via different ptychographic implementations. (a) The whole slide image recovered by FPM [290]. (b) The monochromatic image recovered via near-field ptychography and the corresponding virtually stained image [278]. (c) Virtual staining of a recovered FPM image based on the color transfer strategy [279]. (d) All-in-focus image of a biopsy sample recovered using the digital refocusing capability of FPM [292]. (e) Whole slide phase image recovered by the lensless ptychographic whole slide scanner [60]. (f) Rapid whole slide imaging using the parallel coded ptychography platform [30]. (f1) The focus map generated by maximizing a focus metric post-measurement. (f2) The whole slide image recovered by coded ptychography. (f3) The ground truth image captured using a regular light microscope. (f4) The difference between (f2) and (f3).
Fig. 7. High-throughput cytometric analysis via different ptychographic implementations. (a) The recovered whole slide phase image of trypanosomes in a blood smear. The image was acquired using rotational coded ptychography with the specimen mounted on the spinning disk of a Blu-ray drive [61]. (b) The high-resolution recovered phase image of a blood smear using FPM [128]. (c) Ki-67 cell analysis based on the recovered images using the lensless ptychographic whole slide scanner [60]. (d) Whole slide intensity and phase images of a blood smear captured using coded ptychography [30]. The zoomed-in views highlight the phase and intensity images of the white blood cells, which can be used for performing high-throughput differential white blood cell counting.
Fig. 8. High-throughput screening via different ptychographic implementations. (a) Large-scale color imaging of the entire microfilter for circulating tumor cell screening [293]. (b) High-throughput urinalysis based on the rotational coded ptychography platform built with a Blu-ray drive [61]. (c) Large-scale bacterial growth monitoring for rapid antimicrobial drug screening [64]. By imposing the temporal correlation constraint in coded ptychography, the imaging platform can achieve a centimeter-scale field of view, a half-pitch resolution of 488 nm, and a temporal resolution of 15 seconds [64].
Fig. 9. 2D live-cell imaging via different ptychographic implementations. (a) The recovered phase images of U2OS cell culture using the in-vitro FPM system [122], where the phase is initialized using the differential-phase-contrast approach. (b) Video-rate phase imaging of HeLa cells using annular-illumination FPM [80], where the slow-varying phase information is effectively converted into intensity variations in the captured images. (c) Phase imaging and cell state identification of A549 cells using a lens-based selected-area ptychography system [144]. The white arrows show a proportion of brighter dividing cells, and the intense lines within the cells mark chromosome alignment prior to cytokinesis. (d) The recovered large-scale phase image of U87MG cell culture obtained by a lensless coded ptychography platform [31].
Fig. 10. Ptychography for 3D microscopy. (a) The recovered 3D volume of a Spirogyra sample via multi-slice ptychography [15]. (b) The recovered intensity and phase images of a 3D Spirogyra algae sample based on multi-slice FPM [16]. (c) Phase projection and 3D rendering of a 3D sample using multi-slice ptychographic tomography [93]. (d) Wide-field-of-view and high-resolution 3D imaging of a large population of HeLa cells via Fourier ptychographic diffraction tomography [71], where the 3D refractive index of the cell culture is recovered from the FPM intensity measurements. (e) Different projected views of X-ray fluorescence and ptychographic tomography reconstructions [52], where the pyrenoid is located near the top region and the acidocalcisomes near the bottom region. (f) 3D rendering of a reconstructed mouse brain using ptychographic optical coherence tomography [142]. The color scale represents the logarithm of the normalized reflectivity.
Fig. 11. Polarization-sensitive imaging using different ptychographic implementations. (a) The recovered absorption and phase images of coral-skeleton particles via X-ray linear dichroic ptychography [296]. (b) The large-area phase and retardance reconstruction of a thin cardiac tissue section using vectorial FPM [250]. (c) The recovered intensity, phase, and birefringence map of a Tilia stem using polarization-sensitive FPM [251]. (d) Recovered birefringence maps of mouse eye and heart tissue using polarization-sensitive lensless ptychography [102].

Tables (2)

Table 1. Different hardware implementations of ptychography

Table 2. Reconstruction approaches and algorithms

Equations (2)

$$FT\left\{ O({x,y} )\,{e^{i{k_{xi}}x}}{e^{i{k_{yi}}y}} \right\} = \hat{O}({{k_x} - {k_{xi}},{k_y} - {k_{yi}}} ),$$
$$I_i({x,y} )= {\left| {F{T^{ - 1}}\left\{ \hat{O}({{k_x} - {k_{xi}},{k_y} - {k_{yi}}} )\cdot Pupil({{k_x},{k_y}} ) \right\}} \right|^2}$$
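The two equations can be exercised directly in NumPy: tilted illumination with wavevector $({{k_{xi}},{k_{yi}}} )$ shifts the object spectrum $\hat{O}$, the pupil low-pass filters it, and the camera records the squared magnitude. The grid size, pupil radius, and spectrum shift below are arbitrary illustrative values, not parameters of any referenced system:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# Complex object O(x, y) and its spectrum \hat{O}(kx, ky).
obj = rng.random((n, n)) * np.exp(1j * 2 * np.pi * rng.random((n, n)))
O_hat = np.fft.fftshift(np.fft.fft2(obj))

# Circular pupil of the objective lens in the Fourier plane.
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (kx**2 + ky**2 < 20**2).astype(float)

def fpm_intensity(O_hat, pupil, k_shift):
    """Forward model of the two equations: the tilted plane wave
    e^{i kxi x} e^{i kyi y} shifts the spectrum by (kxi, kyi); the
    pupil filters it; the detector records |FT^{-1}{...}|^2."""
    shifted = np.roll(O_hat, k_shift, axis=(0, 1))            # \hat{O}(kx - kxi, ky - kyi)
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))   # FT^{-1}{ \hat{O} . Pupil }
    return np.abs(field) ** 2                                 # I_i(x, y)

# One low-resolution measurement under one LED illumination angle.
I = fpm_intensity(O_hat, pupil, (10, -5))
print(I.shape)
```

Cycling `k_shift` over the LED array and keeping the pupil passbands partially overlapping in Fourier space yields the redundant dataset from which FPM recovers both the high-resolution spectrum and the pupil function.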