Fourier ptychography captures intensity images with varying source patterns (illumination angles) in order to computationally reconstruct large space-bandwidth-product images. Accurate knowledge of the illumination angles is necessary for good image quality; hence, calibration is crucial, yet existing methods are often impractical or slow. Here, we propose a fast, robust, and accurate self-calibration algorithm that uses only experimentally collected data and general knowledge of the illumination setup. First, our algorithm makes a fast direct estimate of the brightfield illumination angles based on image processing. Then, a more computationally intensive spectral correlation method is used inside the iterative solver to further refine the angle estimates of both brightfield and darkfield images. We demonstrate our method for correcting large and small misalignment artifacts in 2D and 3D Fourier ptychography with different source types: an LED array, a galvo-steered laser, and a high-NA quasi-dome LED illuminator.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Computational imaging leverages the power of optical hardware and computational algorithms to reconstruct images from indirect measurements. In optical microscopy, programmable sources have been used for computational illumination techniques, including multicontrast [1,2], quantitative phase [3–6], and super-resolution [3,7–10]. Implementation is simple, requiring only an inexpensive source attachment for a commercial microscope. However, these methods are also sensitive to experimental misalignment errors and can suffer severe artifacts due to model mismatch. Extensive system calibration is needed to ensure that the inverse algorithm is consistent with the experimental setup, which can be time- and labor-intensive. This often requires significant user expertise, making the setup less accessible to reproduction by nonexperts and undermining the simplicity of the scheme. Further, precalibration methods are not robust to changes in the system (e.g., bumping the setup, changing objectives, sample-induced aberrations) and require precise knowledge of a ground-truth test object.
Algorithmic self-calibration methods [11–25] eliminate the need for pre-calibration and precise test objects by making calibration part of the inverse problem. These methods jointly solve two inverse problems: one for the reconstructed image of the object and the other for the calibration parameters. By recovering calibration information directly from captured data, the reconstruction becomes robust to dynamic changes in the system.
Here, we focus on illumination angle self-calibration for Fourier ptychographic microscopy (FPM). FPM is a coherent computational imaging method that reconstructs high-resolution amplitude and phase across a wide field-of-view (FoV) from intensity images captured with a low-resolution objective lens and a dynamically coded illumination source. Images captured with different illumination angles are combined computationally in an iterative phase retrieval algorithm that constrains the measured intensity in the image domain and pupil support in the Fourier domain. This algorithm can be described as stitching together different sections of Fourier space (synthetic aperture imaging [26,27]) coupled with iterative phase retrieval. FPM has enabled fast in vitro capture via multiplexing [9,10], fluorescence imaging, and 3D microscopy [28,29]. It requires significant redundancy (pupil overlap) in the data set [8,30], making it suitable for joint estimation self-calibration.
Self-calibration routines have previously been developed to solve for pupil aberrations [11–13], illumination angles [14–18], LED intensity, sample motion, and autofocusing in FPM. The state-of-the-art self-calibration method for illumination angles is simulated annealing [14,15], a joint estimation solution that (under proper initialization) removes LED misalignment artifacts that usually manifest as low-frequency noise. Unfortunately, because the simulated annealing procedure operates inside the FPM algorithm iterative loop, it slows the runtime of the solver by an order of magnitude or more. For 3D FPM (which is particularly sensitive to angle calibration), the computational costs become infeasible.
Moreover, most self-calibration algorithms require a relatively close initial guess for the calibration parameters. This is especially true when the problem is nonconvex or if multiple calibration variables are to be solved for (e.g., object, pupil, and angles of illumination). Of the relevant calibration variables for FPM, illumination angles are the most prone to error, due to shifts or rotations of the LED array, source instabilities [22,32], nonplanar illuminator arrangements [33–36], or sample-induced aberrations [37,38]. Sample-induced aberrations can also change the effective illumination angles dynamically, such as when the sample is in a moving aqueous solution.
We propose here a two-pronged angle self-calibration method that uses both preprocessing (brightfield calibration) and iterative joint estimation (spectral correlation calibration) and is quicker and more robust to system changes than state-of-the-art calibration methods. A circle-finding step prior to the FPM solver accurately identifies the angles of illumination in the brightfield (BF) region. A transformation between the expected and BF calibrated angles extrapolates the correction to illuminations in the darkfield (DF) region. Then, a local grid-search-based algorithm inside the FPM solver further refines the angle estimates, with an optional prior based on the illuminator geometry (Fig. 1). Our method is object-independent, robust to coherent noise, and time-efficient, adding only seconds to the processing time. We demonstrate online angle calibration for 2D and 3D FPM with three different source types: an LED array, a galvanometer-steered laser, and a high-NA (up to 0.98 NA) quasi-dome illuminator.
The image formation process for a thin sample under off-axis spatially coherent plane-wave illumination with spatial frequency $\mathbf{k}_n$ can be described by

$$I_n(\mathbf{r}) = \left|\mathcal{F}^{-1}\left\{\tilde{O}(\mathbf{k}-\mathbf{k}_n)\,P(\mathbf{k})\right\}\right|^2, \qquad (1)$$

where $\tilde{O}$ is the Fourier spectrum of the object's transmission function and $P$ is the pupil function of the objective. For brightfield images, the unscattered light (DC term) passes through the pupil, and its interference with the scattered light produces two overlapping circles in the amplitude of the intensity spectrum $|\tilde{I}_n(\mathbf{k})|$, each with the radius of the pupil and centered at $\pm\mathbf{k}_n$. Hence, we can calibrate the illumination angle by finding these circle centers. For darkfield images, the DC term falls outside the pupil, and so we do not observe clearly defined circles in $|\tilde{I}_n(\mathbf{k})|$ [Fig. 1(b)], making calibration more complicated.
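The image formation model above can be sketched numerically. The following toy example (assumed parameters, NumPy only; not the paper's code) generates a simulated brightfield measurement and its intensity spectrum, whose amplitude carries the circle structure used for calibration:

```python
import numpy as np

# Sketch of the FPM forward model: the intensity under tilted plane-wave
# illumination k_n is the inverse Fourier transform of the shifted object
# spectrum, low-pass filtered by the pupil, then detected as |.|^2.
N = 128                       # image patch size in pixels (assumed)
rng = np.random.default_rng(0)
obj = 1.0 + 0.1 * rng.standard_normal((N, N))    # weak-scattering toy object

kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
R = 0.15                      # pupil radius in normalized frequency (assumed)
pupil = (kx**2 + ky**2 <= R**2).astype(float)

k_n = (10, 4)                 # illumination spatial frequency, k-space pixels
O_shift = np.roll(np.fft.fft2(obj), shift=k_n, axis=(0, 1))
intensity = np.abs(np.fft.ifft2(O_shift * pupil))**2

# For a brightfield image (|k_n| inside the pupil), |F{I_n}| is dominated
# by the DC term, with two pupil circles centered at +/- k_n around it.
spectrum = np.abs(np.fft.fft2(intensity))
```

Because the intensity is real and nonnegative, its spectrum is symmetric and its DC component dominates, which is why only one of the two circles needs to be located.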
Our algorithm relies on analysis of the raw intensity Fourier transform to recover illumination angles. Fourier domain analysis of intensity images has been used previously to deduce aberrations and determine the center of diffraction patterns [40,41] for system calibration. We show here that the individual Fourier spectra can be used to accurately determine illumination angles in the brightfield and darkfield regimes.
A. Brightfield Calibration
Locating the center of the circles in the amplitude of a Fourier spectrum is an image processing problem. Previous work in finding circles in images uses the Hough transform, which relies on an accurate edge detector as an initial step [42,43]. In practice, however, we find that edge detectors do not function well on our data sets due to speckle noise, making the Hough transform an unreliable tool for our purpose. Therefore, we propose a new method that we call circular edge detection.
Intuitively, circular edge detection can be understood as performing edge detection (i.e., calculating image gradients) along a circular arc around a candidate center point in $k$-space (the Fourier domain). To first approximation, we assume the spectrum amplitude $|\tilde{I}_n(\mathbf{k})|$ is a binary function that is 1 inside the two circles and 0 everywhere else. Our goal is to find this strong binary edge in order to locate the circle center. We need only consider one of the circles because the intensity image is real-valued, and so its Fourier transform is symmetric. Based on knowledge of the setup, we expect the illumination spatial frequency (and circle center) for the $n$th spectrum to be at $\mathbf{k}_n$ [Fig. 2(a)]. If this is the correct center, we expect there to be a sharp drop in $|\tilde{I}_n|$ at radius $R$ (the pupil radius) along any radial line out from $\mathbf{k}_n$ [Fig. 2(b)]. This amplitude edge will appear as a peak at $r = R$ in the first derivative of each radial line with respect to $r$ [Fig. 2(d)]. Here, $(r, \theta_j)$ are the polar coordinates of the radial line with respect to the center $\mathbf{k}_n$, considering the $j$th of $N_\theta$ radial lines.
We identify the correct center by evaluating the summation of the first derivative around the circular arc at $r = R$ for several candidate positions $\mathbf{c}$ [Fig. 2(c)]:

$$E(\mathbf{c}) = -\sum_{j=1}^{N_\theta} \frac{\partial}{\partial r}\left|\tilde{I}_n(r, \theta_j; \mathbf{c})\right|\Big|_{r=R}. \qquad (2)$$

The drop in amplitude at $r = R$ is steepest along every radial line only at the correct center [Fig. 2(d)], creating a peak in $E(\mathbf{c})$ [Fig. 2(e)]. This is analogous to applying a classic edge filter in the radial direction from a candidate center and accumulating the gradient values at radius $R$.
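A minimal numerical sketch of this circular edge metric follows. The nearest-neighbor sampling and two-point derivative stencil are simplifications for illustration, not the paper's implementation, and the binary disk stands in for a normalized spectrum:

```python
import numpy as np

def circular_edge_metric(spec_amp, center, R, n_theta=64):
    """Accumulate the amplitude drop across radius R along n_theta radial
    lines around a candidate circle center (sketch of the edge metric)."""
    H, W = spec_amp.shape
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    total = 0.0
    for th in thetas:
        # sample just inside and just outside the candidate circular edge
        yi = int(round(center[0] + (R - 1.0) * np.sin(th))) % H
        xi = int(round(center[1] + (R - 1.0) * np.cos(th))) % W
        yo = int(round(center[0] + (R + 1.0) * np.sin(th))) % H
        xo = int(round(center[1] + (R + 1.0) * np.cos(th))) % W
        # inside-minus-outside: positive when amplitude drops across the edge
        total += spec_amp[yi, xi] - spec_amp[yo, xo]
    return total

# Toy check: a binary disk of radius 20 centered at (40, 50)
yy, xx = np.mgrid[0:96, 0:96]
disk = (((yy - 40)**2 + (xx - 50)**2) <= 20**2).astype(float)
scores = {(cy, cx): circular_edge_metric(disk, (cy, cx), 20)
          for cy in range(35, 46) for cx in range(45, 56)}
best = max(scores, key=scores.get)
```

The metric peaks only where every radial line crosses the edge at the same radius, which is what makes it robust to the speckle that defeats conventional edge detectors.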
In order to bring our data closer to our binary image approximation, we divide out the average spectrum across all spectra. Because the object remains constant across images while the angles of illumination change, the average spectrum is similar in form to the object's autocorrelation spectrum, with a sharp central peak decaying toward higher frequencies. The resulting normalized spectra contain near-constant circles on top of background from higher-order terms. We then convolve with a Gaussian blur kernel of standard deviation $\sigma$ to remove speckle noise (Algorithm 1.1-2). Experimentally, we choose $\sigma$ to balance blurring speckle noise against maintaining the circular edge. Under this model, the radial line from the correct center can be modeled near the circular edge as a binary step function convolved with a Gaussian:

$$\left|\tilde{I}_n(r, \theta_j)\right| \approx \frac{1}{2}\left[1 - \mathrm{erf}\left(\frac{r - R}{\sqrt{2}\,\sigma}\right)\right]. \qquad (3)$$

We identify candidate centers that occur near the peak of $E(\mathbf{c})$ [Figs. 2(e) and 2(f)], then use a least-squares error metric to determine the final calibrated center (Algorithm 1.5-8). In practice, we also only consider the nonoverlapping portion of the circle's edge, bounding the range of $\theta_j$.
Until now, we have assumed that the precise radius of the pupil, $R$, is known. However, in pixel units, $R$ is dependent on the pixel size of the sensor, $\Delta x$, the system magnification, mag, and the patch size in pixels, $N$:

$$R = \frac{\mathrm{NA}_{\mathrm{obj}}}{\lambda}\, N\, \frac{\Delta x}{\mathrm{mag}},$$

since the $k$-space pixel size for an $N$-pixel patch with effective pixel size $\Delta x/\mathrm{mag}$ is $\mathrm{mag}/(N \Delta x)$.
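As a worked example of this radius calculation (the numerical values here are assumptions for illustration, except the 6.5 μm pixel size mentioned in the simulations section; only the formula comes from the text):

```python
# Pupil radius in Fourier-space pixels.
NA_obj = 0.25             # objective numerical aperture
wavelength = 0.532e-6     # m, assumed green illumination
pixel_size = 6.5e-6       # m, sensor pixel
mag = 10.0                # assumed system magnification
N = 512                   # image patch side length in pixels (assumed)

# k-space pixel size is 1 / (N * effective pixel size); the pupil cutoff
# frequency is NA_obj / wavelength, so the radius in k-space pixels is:
k_pixel = 1.0 / (N * pixel_size / mag)      # cycles/m per k-space pixel
R_pixels = (NA_obj / wavelength) / k_pixel  # ~156 pixels for these values
```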
Finally, once all brightfield images are calibrated, we want to remove outliers and extrapolate the correction to the darkfield images. Outliers occur due to (1) little high-frequency image content and therefore no defined circular edge, (2) strong background, or (3) shifts such that the conjugate circle (centered at $-\mathbf{k}_n$) is identified instead. In these cases, we cannot recover the correct center from a single image and must rely on the overall calibrated change in the illuminator's position. We find outliers based on an illuminator-specific transformation (e.g., rigid motion) between the expected initial guess of circle centers (e.g., the LED array map) and the calibrated centers using a RANSAC-based method. This transformation is used to correct outliers and darkfield images (Algorithm 1.9-12), serving as an initialization for our spectral correlation (SC) method.
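The outlier-rejection step can be sketched as follows. This RANSAC-flavored fit of a similarity transform (rotation, scale, and shift, encoded as complex arithmetic) is an illustrative simplification under assumed thresholds, not the paper's exact implementation:

```python
import numpy as np

def fit_similarity(expected, measured, n_iter=50, thresh=0.01, seed=0):
    """Fit k = a*k0 + b (a: rotation+scale, b: shift) mapping expected
    circle centers k0 to measured centers k, rejecting outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(expected), dtype=bool)
    best_ab = (1.0 + 0j, 0j)
    for _ in range(n_iter):
        i, j = rng.choice(len(expected), size=2, replace=False)
        denom = expected[i] - expected[j]
        if denom == 0:
            continue
        a = (measured[i] - measured[j]) / denom   # rotation + scale
        b = measured[i] - a * expected[i]         # shift
        resid = np.abs(a * expected + b - measured)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_ab = inliers, (a, b)
    a, b = best_ab
    # outliers (and, by extension, darkfield angles) get the fitted map
    corrected = a * expected + b
    return corrected, best_inliers

# Toy data: a 3-degree rotation plus a small shift, with one outlier
expected = (np.arange(9) - 4) * 0.05 + 0.02j * (np.arange(9) - 4)
true_a = np.exp(1j * np.deg2rad(3.0))
measured = true_a * expected + (0.003 + 0.001j)
measured[5] += 0.1                                # simulated outlier
corrected, inliers = fit_similarity(expected, measured)
```

The same fitted transform that flags outliers is then applied to every darkfield angle, which is what makes the extrapolation possible without darkfield circle contrast.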
B. Spectral Correlation Calibration
While the brightfield (BF) calibration method localizes illumination angles using intrinsic contrast in each measurement, this contrast is not present in high-angle (darkfield) measurements [Fig. 1(b)]. Therefore, we additionally solve a more general joint estimation problem to refine the initialization provided by BF calibration, in which the object $O$, pupil $P$, and illumination angles $\mathbf{k}_n$ are optimized within the FPM algorithm. At each inner iteration, we estimate the $n$th illumination angle by minimizing the FPM objective function with respect to illumination angle [Fig. 3(a)]. This step finds the $k$-space location of the current spectrum relative to the overall object, providing an estimate of $\mathbf{k}_n$ relative to the other illumination angles $\mathbf{k}_m$, $m \neq n$. We call this the spectral correlation method because this optimization implicitly finds the $\mathbf{k}_n$ that best aligns the $n$th spectrum with the estimated object spectrum $\tilde{O}$.
Unlike previous methods [14,15], we constrain $\mathbf{k}_n$ to exist on the $k$-space grid defined by our image sampling. Our $k$-space resolution is band-limited by the size of the image patch, $L$, across which the illumination can be assumed coherent. This coherent area size is determined by the van Cittert–Zernike theorem, which can be simplified to show that the coherence length of illumination with mean source wavelength $\bar{\lambda}$ produced by a source of size $D$ at a distance $z$ is $\ell_c = \bar{\lambda} z / D$. For example, a 300 μm wide LED placed 50 mm above the sample under green illumination gives an $\ell_c$ on the order of 100 μm, which provides an upper bound on the size of the image patch used in the FPM reconstruction, $L \leq \ell_c$. This limitation imposes a minimum resolvable discretization of illumination angles, $\Delta k = 1/L$, due to the Nyquist criterion. Because we cannot resolve finer angle changes, we need only perform a local grid search over integer multiples of $\Delta k$, which makes our joint estimation SC method much faster than previous methods.
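The patch-size bound can be computed directly; here the mean wavelength is an assumed value, and the simplified coherence-length expression $\ell_c = \bar{\lambda} z / D$ is the one reconstructed above (approximate forms of the van Cittert–Zernike result differ by small constant factors):

```python
# Coherence-length bound on the FPM patch size (simplified van Cittert-
# Zernike estimate; wavelength is an assumed value).
wavelength = 0.53e-6   # m, assumed mean LED wavelength
source_size = 300e-6   # m, LED emitter width
distance = 50e-3       # m, LED-to-sample distance

coherence_length = wavelength * distance / source_size  # ~88 um here
# The patch size L is bounded by the coherence length, which in turn sets
# the finest resolvable k-space step (Nyquist): delta_k = 1 / L.
L_max = coherence_length
delta_k = 1.0 / L_max  # cycles per meter
```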
SC calibration is cast as an iterative optimization of discrete perturbations of the estimated angle using a local grid search. At each FPM iteration, we solve for the optimal perturbation of the illumination angle over integer multiples of the $k$-space resolution-limited step, $\Delta k$, such that the updated illumination position minimizes the distance between the measurements and the current object and illumination angle estimates:

$$\Delta\mathbf{k}_n^{\ast} = \underset{\Delta\mathbf{k}_n}{\arg\min} \sum_{\mathbf{r}} \left( \sqrt{I_n(\mathbf{r})} - \left|\mathcal{F}^{-1}\left\{\tilde{O}(\mathbf{k}-\mathbf{k}_n-\Delta\mathbf{k}_n)\,P(\mathbf{k})\right\}\right| \right)^2. \qquad (4)$$
The choice of search range can be tuned to match the problem. In most experimental cases, we find that a search of the immediate neighborhood of the current estimate (one $\Delta k$ step in each direction) gives a good balance between speed and performance when paired with the close initialization from our BF calibration. A larger search range may be required in the presence of noise or without a close initialization, but the number of points searched grows with the square of the search range, causing the algorithm to slow considerably.
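The grid-search step above can be sketched as follows; the amplitude data-fidelity comparison mirrors the objective reconstructed in the text, while the toy object, pupil, and one-pixel offset are assumptions for illustration:

```python
import numpy as np

def sc_angle_refine(measured_amp, O_hat, pupil, k_est, search=1):
    """One spectral-correlation refinement step (sketch): try integer
    k-space shifts around the current estimate and keep the shift whose
    simulated amplitude best matches the measured amplitude."""
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shift = (k_est[0] + dy, k_est[1] + dx)
            model = np.abs(np.fft.ifft2(
                np.roll(O_hat, shift=shift, axis=(0, 1)) * pupil))
            err = np.sum((model - measured_amp)**2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return (k_est[0] + best_shift[0], k_est[1] + best_shift[1])

# Toy check: the measurement is generated one k-space pixel off from k_est
N = 64
rng = np.random.default_rng(1)
O_hat = np.fft.fft2(1.0 + 0.2 * rng.standard_normal((N, N)))
kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
pupil = (kx**2 + ky**2 <= 0.2**2).astype(float)
true_k = (5, -3)
measured_amp = np.abs(np.fft.ifft2(np.roll(O_hat, true_k, axis=(0, 1)) * pupil))
k_refined = sc_angle_refine(measured_amp, O_hat, pupil, k_est=(4, -3), search=1)
```

With `search=1` this evaluates a 3 × 3 neighborhood; the quadratic growth in candidates with larger `search` is why the close BF initialization matters.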
Including prior information about the design of the illumination source can make our calibration problem more well-posed. For example, we can include knowledge that an LED array is a rigid, planar illuminator in our initial guess of the illumination angle map. By forcing the current estimates to fit a transformation of this initial angle map at the end of each FPM subiteration, we can use this knowledge to regularize our optimization [Fig. 3(a)]. The transformation model used depends on the specific illuminator. For example, our quasi-dome LED array is composed of five circuit boards with precise LED positioning within each board but variable board position relative to each other. Thus, imposing an affine transformation from the angle map of each board to the current estimates significantly reduces the problem dimensionality and mitigates noise across LEDs, making the reconstruction more stable.
A. Planar LED Array
We first show experimental results from a conventional LED array illumination system with 0.25 NA and 0.1 NA objective lenses (Fig. 4). We compare reconstructions with simulated annealing, our BF preprocessing alone, and our combined BF + SC calibration method. All methods were run in conjunction with EPRY pupil reconstruction. We include results with and without the SC calibration to illustrate that the BF calibration alone is sufficient to correct for most misalignment of the LED array, because we can accurately extrapolate LED positions to the darkfield region when the LEDs fall on a planar grid. However, when using the low-NA objective (0.1 NA), as in Fig. 4(d), the SC method becomes necessary because the BF calibration is only able to use nine images (compared with 69 brightfield images with the 0.25 NA objective, as in Figs. 4(a)–4(c)).
Our method is object-independent and can be used for phase and amplitude targets as well as biological samples. All methods reconstruct similar-quality results for the well-aligned LED array with the USAF resolution target [Fig. 4(a)]. To simulate an aqueous sample, we place a drop of oil on top of the resolution target. The drop causes uneven changes in the illumination, giving low-frequency artifacts in the uncalibrated and simulated annealing cases, which are corrected by our method [Fig. 4(b)]. Our method is also able to recover a computationally imposed misalignment (5° rotation, 0.02 NA shift, and scaling) applied to well-aligned LED array data for a cheek cell [Fig. 4(c)], and gives a good reconstruction from an experimentally misaligned LED array for a phase Siemens star (Benchmark Technologies, Inc.) [Fig. 4(d)]. In contrast with simulated annealing, which slows processing by an order of magnitude or more relative to FPM without calibration, our brightfield calibration only takes an additional 24 s of processing time, and the combined calibration takes roughly as long as no calibration.
B. Steered Laser
Laser illumination can be used instead of LED arrays to increase the coherence and light efficiency of FPM [32,33]. In practice, laser systems are typically less rigidly aligned than LED arrays, making them more difficult to calibrate. We constructed a laser-based FPM system using a dual-axis galvanometer to steer a 532 nm, 5 mW laser, which is focused on the sample by large condenser lenses [Fig. 5(a)]. This laser illumination system allows finer, more agile illumination control than an LED array as well as higher light throughput. However, the laser illumination angle varies from the expected value due to offsets in the dual-axis galvanometer mirrors, relay lens aberrations, and mirror position misestimations when run at high speeds. Our method can correct for these problems in a fraction of the time of the previous methods [Fig. 5(b)].
C. Quasi-Dome Illuminator
Because the FPM resolution limit is set by the sum of the illumination and objective NAs, high-NA illuminators are needed for large space-bandwidth-product imaging [36,46]. To achieve high-angle illumination with sufficient signal-to-noise ratio in the darkfield region, the illuminator should be dome-like rather than planar. We previously developed a programmable quasi-dome array made of five separate planar LED arrays that can illuminate up to 0.98 NA with discrete control of the RGB LEDs. It can be easily attached to most commercial inverted microscopes [Fig. 5(c)].
As with conventional LED arrays, we assume that the LEDs on each board are rigidly placed as designed. However, each circuit board may have some relative shift, tilt, or rotation because the final mating of the five boards is performed by hand. LEDs with high-angle incidence are more difficult to calibrate and more likely to suffer from misestimation due to the dome geometry, so the theoretical reconstruction NA would be nearly impossible to reach without self-calibration. Using our method, we obtain the theoretical resolution limit available to the quasi-dome [Fig. 5(d)]. The SC calibration is especially important in the quasi-dome case because it usually has many darkfield LEDs.
D. 3D FPM
Calibration is particularly important for 3D FPM. Even small changes in angle become large when they are propagated to large defocus depths, leading to reduced resolution and reconstruction artifacts [22,28]. For example, using a well-aligned LED array, previous work was unable to reconstruct high-resolution features of a resolution target defocused beyond 30 μm due to angle misestimation; using the same data set, our method allows us to reconstruct high-resolution features of the target even when it is 70 μm off-focus (Fig. 6).
Because iterative angle estimation (including our SC calibration) increases the computational complexity of 3D FPM infeasibly, we use BF calibration only. While we do not attain the theoretical resolution limit at all depths, we offer significant reconstruction improvement. Our calibration only slightly changes the angles of illumination [Fig. 6(c)], highlighting that small angular changes have a large effect on 3D reconstructions. Experimental resolution was determined by the resolvable bars on the USAF resolution target in Fig. 6(c), where we declare a feature "resolved" when there is a dip in intensity between adjacent bars.
Our calibration method offers significant gains in speed and robustness as compared with previous methods. BF calibration enables these capabilities by obtaining a good calibration that needs to be calculated only once in preprocessing, reducing computation. Because an estimate of a global shift in the illuminator based only on the brightfield images provides such a close initialization for the rest of the angles, we can use a quicker, simpler joint estimation in our SC calibration than would otherwise be possible. These two methods work together to create fast and accurate reconstructions.
3D FPM algorithms are slowed by an untenable amount by iterative calibration methods, which require the complicated 3D forward model to be calculated multiple times during each iteration. Because 3D FPM also relies on precise illumination angles, it has previously been difficult to obtain accurate reconstructions of large volumes. However, because BF calibration occurs outside the 3D FPM algorithm, we can now correct for the angle misestimations that have degraded these reconstructions in the past, allowing 3D FPM to be applied to larger volumes.
We analyze the robustness of our method to illumination changes by simulating an object illuminated by a grid of LEDs spaced at 0.041 NA intervals. The simulated system has a 0.25 NA objective and a camera with 6.5 μm pixels. While the actual illumination angles in the simulated data remain fixed, we perturb the expected angles of illumination in typical misalignment patterns for LED arrays: rotation, shift, and scale (analogous to LED array distance from the sample). We then calibrate the unperturbed data with the perturbed expected angles of illumination as our initial guess.
Our method recovers the actual illumination angles with errors less than 0.005 NA for rotations of up to 45° [Fig. 7(a)]; shifts of up to 0.1 NA, approximately a 2.5-LED displacement [Fig. 7(b)]; and scalings corresponding to LED array heights between 40 and 140 cm when the actual height is 70 cm [Fig. 7(c)]. In these ranges, the average error is 0.0024 NA, less than the $k$-space resolution of 0.0032 NA. Hence, our calibrated angles are close to the actual angles even when the input expected angles are far off. This result demonstrates that our method is robust to most misalignments in the illumination scheme.
We have presented a novel two-part calibration method for recovering the illumination angles of a computational illumination system for Fourier ptychography. We have demonstrated how this self-calibrating method makes Fourier ptychographic microscopes more robust to system changes and sample-induced aberrations. The method also makes it possible to use high-angle illuminators, such as the quasi-dome, and nonrigid illuminators, such as laser-based systems, to their full potential. Our preprocessing brightfield calibration further enables 3D multislice Fourier ptychography to reconstruct high-resolution features across larger volumes than previously possible. These gains were all made with minimal additional computation, especially when compared with current state-of-the-art methods. Efficient self-calibrating methods such as these are important to make computational imaging methods more robust and available for broad use. Open source code is available at www.laurawaller.com/opensource.
National Science Foundation (NSF) (DGE 1106400); Gordon and Betty Moore Foundation (GBMF4562); David and Lucile Packard Foundation; Office of Naval Research (ONR) (N00014-17-1-2401).
1. G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Opt. Lett. 36, 3987–3989 (2011). [CrossRef]
2. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. 19, 106002 (2014). [CrossRef]
3. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]
4. L. Tian, J. Wang, and L. Waller, “3D differential phase-contrast microscopy with computational illumination using an LED array,” Opt. Lett. 39, 1326–1329 (2014). [CrossRef]
5. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23, 11394–11403 (2015). [CrossRef]
6. M. Chen, L. Tian, and L. Waller, “3D differential phase contrast microscopy,” Biomed. Opt. Express 7, 3940–3950 (2016). [CrossRef]
7. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38, 4845–4848 (2013). [CrossRef]
8. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22, 5455–5464 (2014). [CrossRef]
9. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]
10. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015). [CrossRef]
11. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109, 338–343 (2009). [CrossRef]
12. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). [CrossRef]
13. R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, “Overlapped Fourier coding for optical aberration removal,” Opt. Express 22, 24062–24080 (2014). [CrossRef]
14. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015). [CrossRef]
15. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7, 1336–1350 (2016). [CrossRef]
16. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25, 28053–28067 (2017). [CrossRef]
17. A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]
18. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21, 13592–13606 (2013). [CrossRef]
19. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21, 32400–32410 (2013). [CrossRef]
20. L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express 7, 4543–4553 (2016). [CrossRef]
21. J. Dou, Z. Gao, J. Ma, C. Yuan, Z. Yang, and L. Wang, “Iterative autofocusing strategy for axial distance error correction in ptychography,” Opt. Lasers Eng. 98, 56–61 (2017). [CrossRef]
22. R. Eckert, L. Tian, and L. Waller, “Algorithmic self-calibration of illumination angles in Fourier ptychographic microscopy,” in Imaging and Applied Optics (Optical Society of America, 2016), paper CT2D.3.
23. G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016). [CrossRef]
24. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22, 096005 (2017). [CrossRef]
25. J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express 7, 352–368 (2016). [CrossRef]
26. T. M. Turpin, L. H. Gesell, J. Lapides, and C. H. Price, "Theory of the synthetic aperture microscope," Proc. SPIE 2566, 1–11 (1995).
27. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47, 5654–5659 (2008). [CrossRef]
28. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]
29. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). [CrossRef]
30. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24, 15765–15781 (2016). [CrossRef]
31. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23, 6171–6180 (2015). [CrossRef]
32. C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. C. So, “Digital micromirror device-based laser-illumination Fourier ptychographic microscopy,” Opt. Express 23, 26999–27010 (2015). [CrossRef]
33. J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier ptychographic microscopy using laser illumination source,” Biomed. Opt. Express 7, 4787–4802 (2016). [CrossRef]
34. Z. F. Phillips, M. V. D’Ambrosio, L. Tian, J. J. Rulison, H. S. Patel, N. Sadras, A. V. Gande, N. A. Switz, D. A. Fletcher, and L. Waller, “Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array,” PLoS ONE 10, e0124938 (2015). [CrossRef]
35. S. Sen, I. Ahmed, B. Aljubran, A. A. Bernussi, and L. G. de Peralta, “Fourier ptychographic microscopy using an infrared-emitting hemispherical digital condenser,” Appl. Opt. 55, 6421–6427 (2016). [CrossRef]
36. Z. Phillips, R. Eckert, and L. Waller, “Quasi-dome: A self-calibrated high-NA LED illuminator for Fourier ptychography,” in Imaging and Applied Optics (Optical Society of America, 2017), paper IW4E.5.
37. S. Hell, G. Reiner, C. Cremer, and E. H. K. Stelzer, “Aberrations in confocal fluorescence microscopy induced by mismatches in refractive index,” J. Microsc. 169, 391–405 (1993). [CrossRef]
38. S. Kang, P. Kang, S. Jeong, Y. Kwon, T. D. Yang, J. H. Hong, M. Kim, K.-D. Song, J. H. Park, J. H. Lee, M. J. Kim, K. H. Kim, and W. Choi, “High-resolution adaptive optical imaging within thick scattering media using closed-loop accumulation of single scattering,” Nat. Commun. 8, 2157 (2017). [CrossRef]
39. A. Shanker, A. Wojdyla, G. Gunjala, J. Dong, M. Benk, A. Neureuther, K. Goldberg, and L. Waller, “Off-axis aberration estimation in an EUV microscope using natural speckle,” in Imaging and Applied Optics (Optical Society of America, 2016), paper ITh1F.2.
40. C. Dammer, P. Leleux, D. Villers, and M. Dosire, “Use of the Hough transform to determine the center of digitized x-ray diffraction patterns,” Nucl. Instrum. Methods Phys. Res. Sect. B 132, 214–220 (1997). [CrossRef]
41. J. Cauchie, V. Fiolet, and D. Villers, “Optimization of an Hough transform algorithm for the search of a center,” Pattern Recognit. 41, 567–574 (2008). [CrossRef]
42. H. K. Yuen, J. Princen, J. Illingworth, and J. Kittler, “A comparative study of Hough transform methods for circle finding,” in Proceedings of the 5th Alvey Vision Conference, Reading, August 31, 1989, pp. 169–174.
43. E. Davies, Machine Vision: Theory, Algorithms and Practicalities, 3rd ed. (Morgan Kauffmann, 2004).
44. M. Jacobson, “Absolute orientation MATLAB package,” in MATLAB Central File Exchange (2015).
45. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. (Cambridge University, 1999).
46. J. Sun, C. Zuo, L. Zhang, and Q. Chen, “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7, 1187 (2017). [CrossRef]