## Abstract

Fourier ptychography captures intensity images with varying source patterns (illumination angles) in order to computationally reconstruct large space-bandwidth-product images. Accurate knowledge of the illumination angles is necessary for good image quality; hence, calibration methods are crucial, despite often being impractical or slow. Here, we propose a fast, robust, and accurate self-calibration algorithm that uses only experimentally collected data and general knowledge of the illumination setup. First, our algorithm makes a fast direct estimate of the brightfield illumination angles based on image processing. Then, a more computationally intensive spectral correlation method is used inside the iterative solver to further refine the angle estimates of both brightfield and darkfield images. We demonstrate our method for correcting large and small misalignment artifacts in 2D and 3D Fourier ptychography with different source types: an LED array, a galvo-steered laser, and a high-NA quasi-dome LED illuminator.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. INTRODUCTION

Computational imaging leverages the power of optical hardware and computational algorithms to reconstruct images from indirect measurements. In optical microscopy, programmable sources have been used for computational illumination techniques, including multicontrast [1,2], quantitative phase [3–6], and super-resolution [3,7–10]. Implementation is simple, requiring only an inexpensive source attachment for a commercial microscope. However, these methods are also sensitive to experimental misalignment errors and can suffer severe artifacts due to model mismatch. Extensive system calibration is needed to ensure that the inverse algorithm is consistent with the experimental setup, which can be time- and labor-intensive. This often requires significant user expertise, making the setup less accessible to reproduction by nonexperts and undermining the simplicity of the scheme. Further, precalibration methods are not robust to changes in the system (e.g., bumping the setup, changing objectives, sample-induced aberrations) and require precise knowledge of a ground-truth test object.

Algorithmic self-calibration methods [11–25] eliminate the need for pre-calibration and precise test objects by making calibration part of the inverse problem. These methods jointly solve two inverse problems: one for the reconstructed image of the object and the other for the calibration parameters. By recovering system calibration information directly from captured data, the system becomes robust to dynamic changes in the system.

Here, we focus on *illumination angle* self-calibration for
Fourier ptychographic microscopy (FPM) [3]. FPM is a coherent computational imaging method that
reconstructs high-resolution amplitude and phase across a wide
field-of-view (FoV) from intensity images captured with a low-resolution
objective lens and a dynamically coded illumination source. Images
captured with different illumination angles are combined computationally
in an iterative phase retrieval algorithm that constrains the measured
intensity in the image domain and pupil support in the Fourier domain.
This algorithm can be described as stitching together different sections
of Fourier space (synthetic aperture imaging [26,27]) coupled
with iterative phase retrieval. FPM has enabled fast *in
vitro* capture via multiplexing [9,10], fluorescence imaging
[25], and 3D microscopy [28,29]. It requires significant redundancy (pupil overlap) in the
data set [8,30], making it suitable for joint estimation
self-calibration.

Self-calibration routines have previously been developed to solve for pupil aberrations [11–13], illumination angles [14–18], LED intensity [19], sample motion [20], and autofocusing [21] in FPM. The state-of-the-art self-calibration method for illumination angles is simulated annealing [14,15], a joint estimation solution that (under proper initialization) removes LED misalignment artifacts that usually manifest as low-frequency noise. Unfortunately, because the simulated annealing procedure operates inside the FPM algorithm iterative loop, it slows the runtime of the solver by an order of magnitude or more. For 3D FPM (which is particularly sensitive to angle calibration [28]), the computational costs become infeasible.

Moreover, most self-calibration algorithms require a relatively close initial guess for the calibration parameters. This is especially true when the problem is nonconvex or if multiple calibration variables are to be solved for (e.g., object, pupil, and angles of illumination). Of the relevant calibration variables for FPM, illumination angles are the most prone to error, due to shifts or rotations of the LED array [31], source instabilities [22,32], nonplanar illuminator arrangements [33–36], or sample-induced aberrations [37,38]. Sample-induced aberrations can also change the effective illumination angles dynamically, such as when the sample is in a moving aqueous solution.

We propose here a two-pronged angle self-calibration method that uses both
preprocessing (*brightfield calibration*) and iterative
joint estimation (*spectral correlation calibration*) that
is quicker and more robust to system changes than state-of-the-art
calibration methods. A circle-finding step prior to the FPM solver
accurately identifies the angles of illumination in the brightfield (BF)
region. A transformation between the expected and BF calibrated angles
extrapolates the correction to illuminations in the darkfield (DF) region.
Then, a local grid-search-based algorithm inside the FPM solver further
refines the angle estimates, with an optional prior based on the
illuminator geometry (Fig. 1). Our method is object-independent, robust to coherent noise,
and time-efficient, adding only seconds to the processing time. We
demonstrate on-line angle calibration for 2D and 3D FPM with three
different source types: an LED array, a galvanometer-steered laser, and a
high-NA (max ${\mathrm{NA}}_{\text{illum}}=0.98$) quasi-dome illuminator [36].

## 2. METHODS

The image formation process for a thin sample under off-axis spatially coherent plane wave illumination can be described by

$${I}_{i}(\mathbf{r})={\left|{\mathcal{F}}^{-1}\left\{\tilde{O}(\mathbf{k}-{\mathbf{k}}_{i})P(\mathbf{k})\right\}(\mathbf{r})\right|}^{2},$$

where $O(\mathbf{r})$ is the sample's complex transmittance with Fourier spectrum $\tilde{O}(\mathbf{k})$, $P(\mathbf{k})$ is the pupil function, ${\mathbf{k}}_{i}$ is the spatial frequency of the $i$th illumination, and ${I}_{i}(\mathbf{r})$ is the measured intensity, with Fourier transform ${\tilde{I}}_{i}$.

Our algorithm relies on analysis of the raw intensity Fourier transform to recover illumination angles. Fourier domain analysis of intensity images has been used previously to deduce aberrations [39] and determine the center of diffraction patterns [40,41] for system calibration. We show here that the individual Fourier spectra can be used to accurately determine illumination angles in the brightfield and darkfield regimes.
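As a concrete reference for this image formation process, the intensity formed under one tilted illumination can be sketched numerically. This is a minimal simulation under assumed conventions (centered spectra, pixel-unit `k_shift`); the `fpm_forward` helper, array sizes, and pupil radius are illustrative, not the paper's implementation:

```python
import numpy as np

def fpm_forward(obj, pupil, k_shift):
    """Simulate one FPM intensity image: a tilted plane wave shifts the
    object spectrum by k_shift (in pixels) before the pupil low-pass
    filters it; the camera records the squared field magnitude."""
    O_hat = np.fft.fftshift(np.fft.fft2(obj))          # centered object spectrum
    shifted = np.roll(O_hat, shift=k_shift, axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2

# Toy example: 64x64 random-phase object, circular pupil of radius 12 pixels
rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (xx ** 2 + yy ** 2 <= 12 ** 2).astype(float)
obj = np.exp(1j * 2 * np.pi * rng.random((n, n)))
img = fpm_forward(obj, pupil, k_shift=(5, 3))
```

Because the intensity is real-valued, its Fourier transform $\tilde{I}_i$ contains the symmetric pair of pupil circles the calibration below searches for.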

#### A. Brightfield Calibration

Locating the center of the circles in the amplitude of a Fourier
spectrum is an image processing problem. Previous work in finding
circles in images uses the Hough transform, which relies on an
accurate edge detector as an initial step [42,43]. In
practice, however, we find that edge detectors do not function well on
our data sets due to speckle noise, making the Hough transform an
unreliable tool for our purpose. Therefore, we propose a new method
that we call *circular edge detection*.

Intuitively, circular edge detection can be understood as performing
edge detection (i.e., calculating image gradients) along a circular
arc around a candidate center point in $k$-space (the Fourier domain). To first
approximation, we assume $|{\tilde{I}}_{i}|$ is a binary function that is 1 inside the
two circles and 0 everywhere else. Our goal is to find the strong
binary edge in order to locate the circle center. We need only
consider one of the circles because the intensity image is
real-valued, and so its Fourier transform is symmetric. Based on
knowledge of the setup, we *expect* the illumination
spatial frequency (and circle center) for spectrum
${\tilde{I}}_{i}$ to be at ${\mathbf{k}}_{i,0}=({k}_{x,i,0},{k}_{y,i,0})$ (polar coordinates
${\mathbf{k}}_{i,0}=({d}_{i,0},{\theta}_{i,0})$) [Fig. 2(a)]. If this is the *correct*
center ${\mathbf{k}}_{i}^{\prime}$, we expect there to be a sharp drop in
$|{\tilde{I}}_{i}|$ at radius $R$ along any radial line
$f(r,{\varphi}_{n})$ out from ${\mathbf{k}}_{i}^{\prime}$ [Fig. 2(b)]. This amplitude edge will appear as a peak at
$r=R$ in the first derivative of each radial
line with respect to $r$, ${f}^{\prime}(r,{\varphi}_{n})$ [Fig. 2(d)]. Here, $(r,{\varphi}_{n})$ are the polar coordinates of the radial
line with respect to the center ${\mathbf{k}}_{i}$, considering the
$n$th of $N$ radial lines.

We identify the correct ${\mathbf{k}}_{i}^{\prime}$ by evaluating the summation of the first derivative around the circular arc at $r=R$ from several candidate ${\mathbf{k}}_{i}=({d}_{i},{\theta}_{i})$ positions:

$${E}_{1}({\mathbf{k}}_{i})=\sum _{n=1}^{N}{f}^{\prime}(R,{\varphi}_{n}).$$

The radial derivative peaks align at $r=R$ *only* at the correct center ${\mathbf{k}}_{i}^{\prime}$ [Fig. 2(d)], creating a peak in ${E}_{1}$ [Fig. 2(e)]. This is analogous to applying a classic edge filter in the radial direction from a candidate center and accumulating the gradient values at radius $R$.

In order to bring our data closer to our binary image approximation, we divide out the average spectrum ${\mathrm{mean}}_{i}(|{\tilde{I}}_{i}|)$ across all $i$ spectra. Because the object remains constant across images while the angles of illumination change, the average spectrum is similar in form to the object’s auto-correlation spectrum, with a sharp central peak decaying toward higher frequencies. The resulting normalized spectra contain near-constant circles on top of background from higher-order terms. We then convolve with a Gaussian blur kernel with standard deviation $\sigma $ to remove speckle noise (Algorithm 1.1-2). Experimentally, we choose $\sigma =2$ pixels, which balances blurring speckle noise and maintaining the circular edge. Under this model, the radial line $f(r,{\varphi}_{n})$ from our correct center ${\mathbf{k}}_{i}^{\prime}$ can be modeled near the circular edge as a binary step function convolved with a Gaussian:

$$f(r,{\varphi}_{n})\approx (u*{g}_{\sigma})(r),\qquad u(r)=\begin{cases}1, & r\le R\\ 0, & r>R\end{cases},\qquad {g}_{\sigma}(r)=\frac{1}{\sqrt{2\pi}\sigma}{e}^{-{r}^{2}/(2{\sigma}^{2})}.$$

We find that using information from both the first *and* second derivatives increases our accuracy and robustness to noise across a wide variety of data sets. We therefore also calculate a second-derivative metric, ${E}_{2}$, from ${f}^{\prime\prime}(r,{\varphi}_{n})$ around the arc.

We find the candidate center that maximizes *both* ${E}_{1}$ and ${E}_{2}$ [Figs. 2(e) and 2(f)], then use a least-squares error metric to determine the final calibrated ${\mathbf{k}}_{i}^{\prime}$ (Algorithm 1.5-8). In practice, we also only consider the nonoverlapping portion of the circle’s edge, bounding $\varphi $.
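The ${E}_{1}$ accumulation can be sketched in a few lines of code. This is an illustrative implementation under simplifying assumptions (nearest-neighbor ray sampling, a synthetic binary test circle, a small window around $r=R$ to absorb rounding); it is not the paper's Algorithm 1:

```python
import numpy as np

def radial_edge_metric(spec_abs, center, R, n_lines=64):
    """E1-style circular edge metric: accumulate the radial first-derivative
    peak near r = R around a candidate circle center (nearest-neighbor
    sampling of N radial lines for simplicity)."""
    rs = np.arange(R + 5)
    cy, cx = center
    e1 = 0.0
    for phi in np.linspace(0, 2 * np.pi, n_lines, endpoint=False):
        ys = np.clip(np.round(cy + rs * np.sin(phi)).astype(int), 0, spec_abs.shape[0] - 1)
        xs = np.clip(np.round(cx + rs * np.cos(phi)).astype(int), 0, spec_abs.shape[1] - 1)
        f = spec_abs[ys, xs]          # radial line f(r, phi_n)
        fp = -np.diff(f)              # amplitude drops at the edge, so negate
        e1 += fp[R - 2:R + 3].max()   # strongest gradient in a window around r = R
    return e1

# The metric peaks when the candidate center matches the true circle center:
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 70) ** 2 <= 20 ** 2] = 1.0  # binary circle, R = 20
scores = {c: radial_edge_metric(img, (64, c), R=20) for c in (66, 70, 74)}
```

For the binary test circle, candidates offset from the true center score markedly lower because only a fraction of the radial lines cross the edge near $r=R$.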

Until now, we have assumed that the precise radius $R$ of the pupil is known. However, in pixel units, $R$ depends on the pixel size of the sensor, ${p}_{s}$, and the system magnification, mag:

$$R=\frac{{\mathrm{NA}}_{\text{obj}}}{\lambda}\cdot \frac{{p}_{s}M}{\text{mag}},$$

as well as ${\mathrm{NA}}_{\text{obj}}$ and $\lambda $, where ${\tilde{I}}_{i}$ has dimensions $M\times M$. Given that $\text{mag}$ and ${\mathrm{NA}}_{\text{obj}}$ are often imprecisely known but are unchanged across all images, we calibrate the radius by finding the ${R}^{\prime}$ that gives the maximum gradient peak ${E}_{1}$ across multiple images before calibrating ${\mathbf{k}}_{i}^{\prime}$ (Algorithm 1.3). A random subset of images may be used to decrease computation time.

Finally, once all images are calibrated, we want to remove outliers and extrapolate the correction to the darkfield images. Outliers occur due to (1) little high-frequency image content and therefore no defined circular edge, (2) strong background, or (3) shifts such that the conjugate circle center $-{\mathbf{k}}_{i}$ is identified as ${\mathbf{k}}_{i}^{\prime}$. In these cases, we cannot recover the correct center from a single image and must rely on the overall calibrated change in the illuminator’s position. We find outliers based on an illuminator-specific transformation $A$ (e.g., rigid motion) between the expected initial guess of circle centers ${\mathbf{k}}_{i,0}$ (e.g., the LED array map) and the calibrated centers ${\mathbf{k}}_{i}^{\prime}$ using a RANSAC-based method [44]. This transformation is used to correct outliers and darkfield images (Algorithm 1.9-12), serving as an initialization for our spectral correlation (SC) method.
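The outlier rejection and extrapolation step can be sketched as a generic RANSAC affine fit between the expected and calibrated centers. This is a simplified stand-in for the absolute-orientation routine of [44]; the toy LED grid, inlier tolerance, and iteration count are assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine map A such that dst ~ A @ [src; 1]."""
    X = np.hstack([src, np.ones((len(src), 1))])
    B, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return B.T  # 2x3: [linear part | translation]

def ransac_affine(src, dst, n_iter=200, tol=0.5, rng=None):
    """RANSAC sketch: sample minimal sets of 3 correspondences, keep the
    affine model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Toy check: expected LED grid vs. shifted + scaled "calibrated" centers,
# with two simulated failed brightfield fits as outliers
src = np.array([[i, j] for i in range(5) for j in range(5)], float)
dst = 1.1 * src + np.array([2.0, -1.0])
dst[3] += 8.0
dst[17] -= 6.0
A, inl = ransac_affine(src, dst, rng=0)
```

The recovered transformation then replaces the outlier centers and maps the expected darkfield positions, exactly as the brightfield-calibrated ones are mapped.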

#### B. Spectral Correlation Calibration

While the brightfield (BF) calibration method localizes illumination angles using intrinsic contrast from each measurement, this contrast is not present in high-angle (darkfield) measurements [Fig. 1(b)]. Therefore, we additionally solve a more general joint estimation problem to refine the initialization provided by BF calibration, where the object $O(\mathbf{r})$, pupil $P(\mathbf{k})$, and illumination angles ${\mathbf{k}}_{\mathbf{i}}$ are optimized within the FPM algorithm. At each inner iteration, we estimate the $i$th illumination angle by minimizing the FPM objective function with respect to illumination angle [Fig. 3(a)]. This step finds the relative $k$-space location of the current spectrum ${\tilde{I}}_{i}$ relative to the overall object, providing an estimate ${\mathbf{k}}_{i}^{(m)}$ relative to the other illuminator angles ${\mathbf{k}}_{j}^{(m)}$, $j\ne i$. We call this the spectral correlation method because this optimization implicitly finds ${\mathbf{k}}_{i}^{(m)}$, which best aligns the $i$th spectrum with the estimated object spectrum $\tilde{O}{(\mathbf{k})}^{(m)}$.

Unlike previous methods [14,15], we constrain ${\mathbf{k}}_{i}$ to exist on the $k$-space grid defined by our image sampling. Our $k$-space resolution is band-limited by the size of the image patch, $\mathbf{s}=({s}_{x},{s}_{y})$, across which the illumination can be assumed coherent. This coherent area size is determined by the van Cittert–Zernike theorem, which can be simplified [45] to show that the coherence length ${l}_{c}$ of illumination with mean source wavelength $\overline{\lambda}$ produced by a source of size $\rho $ at a distance $R$ is ${l}_{c}=0.61R\overline{\lambda}/\rho $. For example, a 300 μm wide LED placed 50 mm above the sample with $\overline{\lambda}=530\text{\hspace{0.17em}}\mathrm{nm}$ gives ${l}_{c}=53.8\text{\hspace{0.17em}}\mathrm{\mu m}$, which provides an upper bound on the size of image patch used in the FPM reconstruction, $({s}_{x},{s}_{y})\le {l}_{c}$. This limitation imposes a minimum resolvable discretization of illumination angles $\mathrm{\Delta}\mathbf{k}=\frac{2}{\mathbf{s}}$ due to the Nyquist criterion. Because we cannot resolve finer angle changes, we need only perform a local grid search over integer multiples of $\mathrm{\Delta}\mathbf{k}$, which makes our joint estimation SC method much faster than previous methods.
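The coherence-length bound can be checked directly with the worked numbers above (a one-off calculation, not library code):

```python
# Van Cittert-Zernike coherence length l_c = 0.61 * R * lambda / rho,
# with the paper's example numbers: 300 um LED, 50 mm away, 530 nm light.
wavelength = 530e-9    # mean source wavelength (m)
source_size = 300e-6   # LED emitter width rho (m)
distance = 50e-3       # LED-to-sample distance R (m)

l_c = 0.61 * distance * wavelength / source_size  # coherence length (m)
print(f"l_c = {l_c * 1e6:.1f} um")  # ~54 um: upper bound on the patch size s
```

This bound on the patch size $(s_x, s_y)$ is what fixes the $k$-space step $\mathrm{\Delta}\mathbf{k}$, and hence the grid over which the SC search operates.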

SC calibration is cast as an iterative optimization of discrete perturbations of the estimated angle using a local grid search. At each FPM iteration, we solve for the optimal perturbation of illumination angle ${\mathbf{k}}_{i}^{(m)}$ over integer multiples $\mathbf{n}=({n}_{x},{n}_{y})$ of $k$-space resolution-limited steps $\mathrm{\Delta}\mathbf{k}$ such that the updated illumination position ${\mathbf{k}}_{i}^{(m+1)}={\mathbf{k}}_{i}^{(m)}+\mathbf{n}\cdot \mathrm{\Delta}\mathbf{k}$ minimizes the $\ell 2$ distance between the object and illumination angle estimates and measurements:

$${\mathbf{k}}_{i}^{(m+1)}=\underset{{\mathbf{k}}_{i}^{(m)}+\mathbf{n}\cdot \mathrm{\Delta}\mathbf{k}}{\mathrm{arg\,min}}\sum _{\mathbf{r}}{\left|\sqrt{{I}_{i}(\mathbf{r})}-\left|{\mathcal{F}}^{-1}\left\{{\tilde{O}}^{(m)}(\mathbf{k}-{\mathbf{k}}_{i}^{(m)}-\mathbf{n}\cdot \mathrm{\Delta}\mathbf{k})\,{P}^{(m)}(\mathbf{k})\right\}(\mathbf{r})\right|\right|}^{2}.$$

The choice of $\mathbf{n}=({n}_{x},{n}_{y})$ to search can be tuned to match the problem. In most experimental cases, we find that searching the immediate neighborhood of the current estimate (${n}_{x},{n}_{y}\in \{-1,0,1\}$) gives a good balance between speed and accuracy when paired with the close initialization from our BF calibration. A larger search range (e.g., ${n}_{x},{n}_{y}\in \{-2,-1,0,1,2\}$) may be required in the presence of noise or without a close initialization, but the number of points searched grows with the square of the search range, slowing the algorithm considerably.
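One refinement step of this local grid search can be sketched as follows. The `sc_refine` helper, toy arrays, and pixel-unit step `dk` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sc_refine(k_i, meas_amp, obj_hat, pupil, dk=1, search=1):
    """One SC refinement step (sketch): test integer k-space perturbations
    n*dk around the current angle estimate and keep the one whose forward
    prediction best matches the measured amplitude sqrt(I_i) in l2."""
    best = (np.inf, k_i)
    for ny in range(-search, search + 1):
        for nx in range(-search, search + 1):
            k = (k_i[0] + ny * dk, k_i[1] + nx * dk)
            shifted = np.roll(obj_hat, shift=k, axis=(0, 1))
            pred = np.abs(np.fft.ifft2(np.fft.ifftshift(shifted * pupil)))
            err = np.sum((pred - meas_amp) ** 2)
            if err < best[0]:
                best = (err, k)
    return best[1]

# Toy refinement: true k-space shift (4, 2) pixels, initial guess off by one
rng = np.random.default_rng(1)
n = 32
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (xx ** 2 + yy ** 2 <= 8 ** 2).astype(float)
obj_hat = np.fft.fftshift(np.fft.fft2(rng.standard_normal((n, n))))
meas = np.abs(np.fft.ifft2(np.fft.ifftshift(np.roll(obj_hat, (4, 2), axis=(0, 1)) * pupil)))
k_new = sc_refine((3, 2), meas, obj_hat, pupil)
```

Because only $(2\cdot\text{search}+1)^2$ candidates are evaluated per image, the cost per iteration stays small compared with continuous-perturbation methods.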

Including prior information about the design of the illumination source can make our calibration problem more well-posed. For example, we can include knowledge that an LED array is a rigid, planar illuminator in our initial guess of the illumination angle map, ${\mathbf{k}}_{i,0}$. By forcing the current estimates ${\mathbf{k}}_{i}^{(m)}$ to fit a transformation of this initial angle map at the end of each FPM subiteration, we can use this knowledge to regularize our optimization [Fig. 3(a)]. The transformation model used depends on the specific illuminator. For example, our quasi-dome LED array is composed of five circuit boards with precise LED positioning within each board but variable board position relative to each other. Thus, imposing an affine transformation from the angle map of each board to the current estimates ${\mathbf{k}}_{i}^{(m)}$ significantly reduces the problem dimensionality and mitigates noise across LEDs, making the reconstruction more stable.
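The geometric prior can be sketched as a least-squares projection of the per-LED estimates onto an affine transform of the designed angle map (one such fit per quasi-dome board). The toy grid, rotation, and noise level here are assumptions for illustration:

```python
import numpy as np

def regularize_to_map(k_est, k_design):
    """Project per-LED angle estimates onto the best-fit affine transform
    of the designed angle map: fit the affine map by least squares, then
    replace each estimate with its affine-consistent value."""
    X = np.hstack([k_design, np.ones((len(k_design), 1))])
    B, *_ = np.linalg.lstsq(X, k_est, rcond=None)
    return X @ B

# Noisy estimates of a rotated + shifted design grid collapse back toward
# the rigid grid, mitigating per-LED noise
rng = np.random.default_rng(0)
design = np.array([[i, j] for i in range(4) for j in range(4)], float)
th = np.deg2rad(3.0)
Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
true = design @ Rm.T + np.array([0.5, -0.2])
est = true + 0.05 * rng.standard_normal((16, 2))
reg = regularize_to_map(est, design)
```

Fitting a handful of transform parameters instead of two coordinates per LED is what reduces the problem dimensionality and stabilizes the reconstruction.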

## 3. RESULTS

#### A. Planar LED Array

We first show experimental results from a conventional LED array illumination system with $10\times $ 0.25 NA and $4\times $ 0.1 NA objective lenses at $\lambda =514\text{\hspace{0.17em}}\mathrm{nm}$ and ${\mathrm{NA}}_{\text{illum}}\le 0.455$ (Fig. 4). We compare reconstructions with simulated annealing, our BF preprocessing alone, and our combined BF + SC calibration method. All methods were run in conjunction with EPRY pupil reconstruction [12]. We include results with and without the SC calibration to illustrate that the BF calibration is sufficient to correct for most misalignment of the LED array, because we can accurately extrapolate LED positions to the darkfield region when the LEDs fall on a planar grid. However, when using a low-NA objective (${\mathrm{NA}}_{\text{obj}}\le 0.1$), as in Fig. 4(d), the SC method becomes necessary because the BF calibration is only able to use nine images (compared with 69 brightfield images with a $10\times $, 0.25 NA objective, as in Figs. 4(a)–4(c)).

Our method is object-independent and can be used for phase and amplitude targets as well as biological samples. All methods reconstruct similar-quality results for the well-aligned LED array with the USAF resolution target [Fig. 4(a)]. To simulate an aqueous sample, we place a drop of oil on top of the resolution target. The drop causes uneven changes in the illumination, giving low-frequency artifacts in the uncalibrated and simulated annealing cases, which are corrected by our method [Fig. 4(b)]. Our method is also able to recover a computationally imposed misalignment (5° rotation, 0.02 NA shift, and $1.1\times $ scaling) of well-aligned LED array data for a cheek cell [Fig. 4(c)] and gives a good reconstruction of an experimentally misaligned LED array for a phase Siemens star (Benchmark Technologies, Inc.) [Fig. 4(d)]. In contrast with simulated annealing, which on average takes $26\times $ as long to process as FPM without calibration, our brightfield calibration adds only 24 s of processing time, and the combined calibration takes only roughly $2.25\times $ as long as no calibration.

#### B. Steered Laser

Laser illumination can be used instead of LED arrays to increase the coherence and light efficiency of FPM [32,33]. In practice, laser systems are typically less rigidly aligned than LED arrays, making them more difficult to calibrate. We constructed a laser-based FPM system using a dual-axis galvanometer to steer a 532 nm, 5 mW laser, which is focused on the sample by large condenser lenses [Fig. 5(a)]. This laser illumination system allows finer, more agile illumination control than an LED array as well as higher light throughput. However, the laser illumination angle varies from the expected value due to offsets in the dual-axis galvanometer mirrors, relay lens aberrations, and mirror position misestimations when run at high speeds. Our method can correct for these problems in a fraction of the time of the previous methods [Fig. 5(b)].

#### C. Quasi-Dome

Because the FPM resolution limit is set by ${\mathrm{NA}}_{\text{obj}}+{\mathrm{NA}}_{\text{illum}}$, high-NA illuminators are needed for large space-bandwidth product imaging [36,46]. To achieve high-angle illumination with sufficient signal-to-noise ratio in the darkfield region, the illuminators should be more dome-like rather than planar [34]. We previously developed a novel programmable quasi-dome array made of five separate planar LED arrays that can illuminate up to 0.98 NA [36] with discrete control of the RGB LEDs ($\overline{\lambda}=[475\text{\hspace{0.17em}}\mathrm{nm},530\text{\hspace{0.17em}}\mathrm{nm},630\text{\hspace{0.17em}}\mathrm{nm}]$). It can be easily attached to most commercial inverted microscopes [Fig. 5(c)].

As with conventional LED arrays, we assume that the LEDs on each board are rigidly placed as designed. However, each circuit board may have some relative shift, tilt, or rotation because the final mating of the five boards is performed by hand. LEDs with high-angle incidence are more difficult to calibrate and more likely to suffer from misestimation due to the dome geometry, so the theoretical reconstruction NA would be nearly impossible to reach without self-calibration. Using our method, we obtain the theoretical resolution limit available to the quasi-dome [Fig. 5(d)]. The SC calibration is especially important in the quasi-dome case because it usually has many darkfield LEDs.

#### D. 3D FPM

Calibration is particularly important for 3D FPM. Even small changes in angle become large when they are propagated to large defocus depths, leading to reduced resolution and reconstruction artifacts [22,28]. For example, using a well-aligned LED array, [28] was unable to reconstruct high-resolution features of a resolution target defocused beyond 30 μm due to angle misestimation; using the same data set, our method allows us to reconstruct high-resolution features of the target even when it is 70 μm off-focus (Fig. 6).

Because iterative angle estimation (including our SC calibration) increases the computational cost of 3D FPM prohibitively, we use BF calibration only. While we do not attain the theoretical resolution limit at all depths, we offer a significant reconstruction improvement. Our calibration only slightly changes the angles of illumination [Fig. 6(c)], highlighting that small angular changes have a large effect on 3D reconstructions. Experimental resolution was determined by the resolvable bars on the USAF resolution target in Fig. 6(c), where we declare a feature “resolved” when there is a $>20\%$ dip between ${I}_{\mathrm{max}}$ and ${I}_{\mathrm{min}}$.
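The resolution criterion above amounts to a one-line check on a line profile through the bars (a minimal sketch; `is_resolved` and the sample profiles are illustrative):

```python
import numpy as np

def is_resolved(profile, dip=0.2):
    """A bar group is declared resolved when the intensity dips more than
    20% between I_max and I_min along the line profile through the bars."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / i_max > dip

resolved = is_resolved(np.array([1.0, 0.7, 1.0]))     # 30% dip
unresolved = is_resolved(np.array([1.0, 0.9, 0.95]))  # 10% dip
```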

## 4. DISCUSSION

Our calibration method offers significant gains in speed and robustness as compared with previous methods. BF calibration enables these capabilities by obtaining a good calibration that needs to be calculated only once in preprocessing, reducing computation. Because an estimate of a global shift in the illuminator based only on the brightfield images provides such a close initialization for the rest of the angles, we can use a quicker, easier joint estimation in our SC calibration than would be otherwise possible. Jointly, these two methods work together to create fast and accurate reconstructions.

3D FPM algorithms are slowed untenably by iterative calibration methods, which require the computationally expensive 3D forward model to be evaluated multiple times during each iteration. Because 3D FPM also relies on precise illumination angles to obtain a good reconstruction, accurate reconstruction of large volumes with 3D FPM has previously been difficult. However, because BF calibration occurs outside the 3D FPM algorithm, we can now correct for the angle misestimations that have degraded these reconstructions in the past, allowing 3D FPM to be applied to larger volumes.

We analyze the robustness of our method to illumination changes by simulating an object illuminated by a grid of LEDs with ${\mathrm{NA}}_{\text{illum}}<0.41$, with LEDs spaced at 0.041NA intervals. We define the system to have $\lambda =532\text{\hspace{0.17em}}\mathrm{nm}$, with a $10\times $, 0.25 NA objective, a $2\times $ system magnification, and a camera with 6.5 μm pixels. While the actual illumination angles in the simulated data remain fixed, we perturb the expected angle of illumination in typical misalignment patterns for LED arrays: rotation, shift, and scale (analogous to LED array distance from sample). We then calibrate the unperturbed data with the perturbed expected angles of illumination as our initial guess.

Our method recovers the actual illumination angles with errors less than 0.005 NA for rotations of $-45\xb0$ to 45° [Fig. 7(a)]; shifts of $-0.1$ to 0.1 NA, or approximately a displacement of $\pm 2$ LEDs [Fig. 7(b)]; and scalings of $0.5\times $ to $1.75\times $ (or LED array height between 40–140 cm if the actual LED array height is 70 cm) [Fig. 7(c)]. In these ranges, the average error is 0.0024 NA, less than the $k$-space resolution of 0.0032 NA. Hence, our calibrated angles are close to the actual angles even when the input expected angles are extremely far off. This result demonstrates that our method is robust to most misalignments in the illumination scheme.

## 5. CONCLUSION

We have presented a novel two-part calibration method for recovering the illumination angles of a computational illumination system for Fourier ptychography. We have demonstrated how this self-calibrating method makes Fourier ptychographic microscopes more robust to system changes and sample-induced aberrations. The method also makes it possible to use high-angle illuminators, such as the quasi-dome, and nonrigid illuminators, such as laser-based systems, to their full potential. Our preprocessing brightfield calibration further enables 3D multislice Fourier ptychography to reconstruct high-resolution features across larger volumes than previously possible. These gains were all made with minimal additional computation, especially when compared with current state-of-the-art methods. Efficient self-calibrating methods such as these are important to make computational imaging methods more robust and available for broad use. Open source code is available at www.laurawaller.com/opensource.

## Funding

National Science Foundation (NSF) (DGE 1106400); Gordon and Betty Moore Foundation (GBMF4562); David and Lucile Packard Foundation; Office of Naval Research (ONR) (N00014-17-1-2401).

## REFERENCES

**1. **G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing
and dark-field imaging by using a simple LED
array,” Opt. Lett. **36**, 3987–3989
(2011). [CrossRef]

**2. **Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield,
darkfield, and phase contrast imaging in a light-emitting diode array
microscope,” J. Biomed. Opt. **19**, 106002 (2014). [CrossRef]

**3. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field,
high-resolution Fourier ptychographic
microscopy,” Nat. Photonics **7**, 739–745
(2013). [CrossRef]

**4. **L. Tian, J. Wang, and L. Waller, “3D differential
phase-contrast microscopy with computational illumination using an LED
array,” Opt. Lett. **39**, 1326–1329
(2014). [CrossRef]

**5. **L. Tian and L. Waller, “Quantitative
differential phase contrast imaging in an LED array
microscope,” Opt. Express **23**, 11394–11403
(2015). [CrossRef]

**6. **M. Chen, L. Tian, and L. Waller, “3D differential phase
contrast microscopy,” Biomed. Opt.
Express **7**,
3940–3950
(2016). [CrossRef]

**7. **X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase
imaging via Fourier ptychographic microscopy,”
Opt. Lett. **38**,
4845–4848
(2013). [CrossRef]

**8. **S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled
Fourier ptychography,” Opt.
Express **22**,
5455–5464
(2014). [CrossRef]

**9. **L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded
illumination for Fourier ptychography with an LED array
microscope,” Biomed. Opt.
Express **5**,
2376–2389
(2014). [CrossRef]

**10. **L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational
illumination for high-speed in vitro Fourier ptychographic
microscopy,” Optica **2**, 904–911
(2015). [CrossRef]

**11. **P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in
ptychographic coherent diffractive imaging,”
Ultramicroscopy **109**,
338–343
(2009). [CrossRef]

**12. **X. Ou, G. Zheng, and C. Yang, “Embedded pupil function
recovery for Fourier ptychographic microscopy,”
Opt. Express **22**,
4960–4972
(2014). [CrossRef]

**13. **R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, “Overlapped Fourier
coding for optical aberration removal,”
Opt. Express **22**,
24062–24080
(2014). [CrossRef]

**14. **L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness
of Fourier ptychography phase retrieval
algorithms,” Opt. Express **23**, 33214–33240
(2015). [CrossRef]

**15. **J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional
misalignment correction method for Fourier ptychographic
microscopy,” Biomed. Opt.
Express **7**,
1336–1350
(2016). [CrossRef]

**16. **J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust
frequency domain position compensation strategy for Fourier
ptychographic microscopy,” Opt.
Express **25**,
28053–28067
(2017). [CrossRef]

**17. **A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm
to correct positioning errors in ptychography,”
Ultramicroscopy **120**,
64–72 (2012). [CrossRef]

**18. **F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position
determination in ptychographic coherent diffraction
imaging,” Opt. Express **21**, 13592–13606
(2013). [CrossRef]

**19. **Z. Bian, S. Dong, and G. Zheng, “Adaptive system
correction for robust Fourier ptychographic
imaging,” Opt. Express **21**, 32400–32410
(2013). [CrossRef]

**20. **L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected
Fourier ptychography,” Biomed. Opt.
Express **7**,
4543–4553
(2016). [CrossRef]

**21. **J. Dou, Z. Gao, J. Ma, C. Yuan, Z. Yang, and L. Wang, “Iterative autofocusing
strategy for axial distance error correction in
ptychography,” Opt. Lasers
Eng. **98**,
56–61 (2017). [CrossRef]

**22. **R. Eckert, L. Tian, and L. Waller, “Algorithmic
self-calibration of illumination angles in Fourier ptychographic
microscopy,” in *Imaging and Applied
Optics* (Optical Society of
America, 2016), paper CT2D.3.

**23. **G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging
through volumetric scattering,” Sci.
Rep. **6**, 33946
(2016). [CrossRef]

**24. **A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration
method for Fourier ptychographic microscopy,”
J. Biomed. Opt. **22**,
096005 (2017). [CrossRef]

**25. **J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view
fluorescence image deconvolution with aberration-estimation from
Fourier ptychography,” Biomed. Opt.
Express **7**,
352–368
(2016). [CrossRef]

**26. **T. M. Turpin, L. H. Gesell, J. Lapides, and C. H. Price, “Theory of the synthetic aperture microscope,” Proc. SPIE **2566**, 1–11 (1995).

**27. **J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital
holographic microscopy with a wide field of view based on a synthetic
aperture technique and use of linear CCD
scanning,” Appl. Opt. **47**, 5654–5659
(2008). [CrossRef]

**28. **L. Tian and L. Waller, “3D intensity and phase
imaging from light field measurements in an LED array
microscope,” Optica **2**, 104–111
(2015). [CrossRef]

**29. **R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography
with Fourier ptychography,”
Optica **3**,
827–835
(2016). [CrossRef]

**30. **J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for
Fourier ptychographic microscopy in object space and frequency
space,” Opt. Express **24**, 15765–15781
(2016). [CrossRef]

**31. **K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of
sampling pattern and the design of Fourier ptychographic
illuminator,” Opt. Express **23**, 6171–6180
(2015). [CrossRef]

**32. **C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. C. So, “Digital micromirror
device-based laser-illumination Fourier ptychographic
microscopy,” Opt. Express **23**, 26999–27010
(2015). [CrossRef]

**33. **J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier
ptychographic microscopy using laser illumination
source,” Biomed. Opt. Express **7**, 4787–4802
(2016). [CrossRef]

**34. **Z. F. Phillips, M. V. D’Ambrosio, L. Tian, J. J. Rulison, H. S. Patel, N. Sadras, A. V. Gande, N. A. Switz, D. A. Fletcher, and L. Waller, “Multi-contrast imaging
and digital refocusing on a mobile microscope with a domed LED
array,” PLoS ONE **10**, e0124938
(2015). [CrossRef]

**35. **S. Sen, I. Ahmed, B. Aljubran, A. A. Bernussi, and L. G. de Peralta, “Fourier ptychographic
microscopy using an infrared-emitting hemispherical digital
condenser,” Appl. Opt. **55**, 6421–6427
(2016). [CrossRef]

**36. **Z. Phillips, R. Eckert, and L. Waller, “Quasi-dome: A
self-calibrated high-NA LED illuminator for Fourier
ptychography,” in *Imaging and Applied
Optics* (Optical Society of
America, 2017), paper IW4E.5.

**37. **S. Hell, G. Reiner, C. Cremer, and E. H. K. Stelzer, “Aberrations in confocal
fluorescence microscopy induced by mismatches in refractive
index,” J. Microsc. **169**, 391–405
(1993). [CrossRef]

**38. **S. Kang, P. Kang, S. Jeong, Y. Kwon, T. D. Yang, J. H. Hong, M. Kim, K.-D. Song, J. H. Park, J. H. Lee, M. J. Kim, K. H. Kim, and W. Choi, “High-resolution
adaptive optical imaging within thick scattering media using
closed-loop accumulation of single scattering,”
Nat. Commun. **8**, 2157
(2017). [CrossRef]

**39. **A. Shanker, A. Wojdyla, G. Gunjala, J. Dong, M. Benk, A. Neureuther, K. Goldberg, and L. Waller, “Off-axis aberration
estimation in an EUV microscope using natural
speckle,” in *Imaging and Applied
Optics* (Optical Society of
America, 2016), paper ITh1F.2.

**40. **C. Dammer, P. Leleux, D. Villers, and M. Dosire, “Use of the Hough
transform to determine the center of digitized x-ray diffraction
patterns,” Nucl. Instrum. Methods Phys.
Res. Sect. B **132**,
214–220
(1997). [CrossRef]

**41. **J. Cauchie, V. Fiolet, and D. Villers, “Optimization of an
Hough transform algorithm for the search of a
center,” Pattern Recognit. **41**, 567–574
(2008). [CrossRef]

**42. **H. K. Yuen, J. Princen, J. Illingworth, and J. Kittler, “A comparative study of
Hough transform methods for circle finding,” in
Proceedings of the 5th Alvey Vision Conference,
Reading, August 31,
1989,
pp. 169–174.

**43. **E. Davies, *Machine Vision: Theory, Algorithms and
Practicalities*, 3rd ed.
(Morgan Kauffmann,
2004).

**44. **M. Jacobson, “Absolute orientation
MATLAB package,” in *MATLAB Central File
Exchange* (2015).

**45. **M. Born and E. Wolf, *Principles of Optics: Electromagnetic
Theory of Propagation, Interference and Diffraction of Light*,
7th ed. (Cambridge
University, 1999).

**46. **J. Sun, C. Zuo, L. Zhang, and Q. Chen, “Resolution-enhanced
Fourier ptychographic microscopy based on high-numerical-aperture
illuminations,” Sci. Rep. **7**, 1187 (2017). [CrossRef]