
Apodized coherent transfer function constraint for partially coherent Fourier ptychographic microscopy

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a recently developed computational microscopy approach that produces wide field-of-view (FOV), high-resolution (HR) intensity and phase images of the sample. Inspired by the ideas of synthetic aperture and phase retrieval, FPM iteratively stitches multiple low-resolution (LR) images acquired under variable illumination angles in Fourier space to reconstruct an HR complex image. Typically, FPM with LED-array illumination is approximated as a coherent imaging process, and the coherent transfer function (CTF) is imposed as a support constraint in Fourier space. However, a millimeter-scale LED is ill-suited to be treated as a coherent light source, and this inappropriate approximation degrades the quality of the reconstructed image. In this paper, we analyze the coherence of an FPM system and propose a novel constraint in Fourier space termed the apodized CTF (AC) constraint. Results on both simulated and experimentally captured data show that this new constraint is more stable and robust than the CTF. The approach also relaxes the coherence requirement of the illumination. In addition, it is simple, requires no additional computation, and is easy to embed in almost all reconstruction algorithms proposed so far.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In microscopy, a large space-bandwidth product (SBP) is highly desirable in biomedical applications such as digital pathology. However, it is difficult to increase the SBP due to the trade-off between resolution and field-of-view (FOV). To address this problem, Zheng et al. proposed a novel computational imaging method termed Fourier ptychographic microscopy (FPM), which provides both a large FOV and high-resolution (HR) imaging [1–3]. The FPM platform simply replaces the illumination module of a microscope with an LED array light source. When a single LED illuminates the sample at a non-normal incidence, the spectrum of the sample shifts in proportion to the angle of illumination. As a result, the information collected by the camera differs from the normal-illumination case. By lighting up the LEDs in sequence, multiple images corresponding to different sub-regions of the sample’s Fourier spectrum are captured by the camera. By iteratively stitching these images in Fourier space with a phase retrieval method, the information beyond the cut-off frequency of the objective lens is recovered. Therefore, FPM can overcome the resolution limit of the employed objective lens, and an HR complex image is finally reconstructed [1].

Fourier ptychography presents enormous potential in many applications. The technique shares its roots with aperture synthesis [4–6] and phase retrieval [7,8], and introduces the ‘angular diversity’ concept to simultaneously expand the Fourier passband and recover the ‘lost’ phase information [9]. Dong et al. proposed a state-multiplexed FP recovery routine to decompose the incoherent mixture of the FP acquisitions and recover a high-resolution sample image; a color-multiplexed image was also obtained by simultaneously turning on R/G/B LEDs during data acquisition [10]. Ou et al. used an objective lens (40 ×, 0.75 NA) to synthesize a system with a numerical aperture (NA) of 1.45 [11]. So far, FPM has been successfully applied in digital pathology [12–14]. FPM was also applied to quantitative phase imaging of unstained live samples in vitro, achieving 0.8 NA resolution across a 4 × FOV with subsecond capture time [15]. Pacheco et al. designed a reflective FPM system with great prospects for applications in endoscopy, biomedical imaging, surface metrology, and industrial inspection [16,17]. Chung et al. utilized FPM to estimate the microscope aberrations in fluorescence imaging for wide-FOV, aberration-free fluorescence imaging [18]. A single-shot phase retrieval method via FPM was devised by Lee et al. to obtain a lateral resolution of 3.10 μm over a FOV of 0.34 mm2 [19]. The concept of FPM has also been adopted in HR lensless imaging [20].

In order to exceed the limitations of the original FPM technique, several studies have been reported to further improve its performance. Bian et al. used an optimization process to correct for illumination intensity fluctuations, pupil aberrations and several system parameters [21]. Ou et al. proposed an embedded pupil function recovery (EPRY) algorithm to determine the pupil function of the FPM system [22]. Tian et al. demonstrated a multiplexed-illumination strategy to accelerate image acquisition and chose a Gauss-Newton algorithm for the reconstruction [23]. Yeh et al. tested multiple amplitude-based and intensity-based algorithms and showed that the amplitude-based Newton’s method provides better reconstruction performance, although it takes longer to run [24]. Guo et al. investigated the effects of scanning sequences and sampling patterns on the FP approach and showed that a non-uniform sampling overlap leads to a better FP reconstruction than the original LED array illuminator [25]. Spatial and spectral data redundancy requirements have been discussed for FPM [26,27]. Chung et al. designed a laser-illumination FPM platform that can capture 96 raw FPM input images in 0.96 seconds [28]. Zhou et al. reported a wavelength multiplexing strategy to speed up the acquisition process of FPM [29]. To address LED positional misalignment, several algorithms have been developed to estimate and compensate for the positional error [30–33]. A motion-corrected algorithm was also proposed to correct unknown sample motion [34]. Horstmeyer et al. formulated a convex program for the FPM reconstruction, which can always obtain a globally optimal solution [35]. When the illumination angle is larger than the objective NA, dark-field images with low signal-to-noise ratio (SNR) are captured. In practice, the conventional FPM algorithm is sensitive to this noise, and a few modified algorithms have been reported to address it [36–40]. Hou et al. proposed an adaptive background interference removal method that offers faster convergence and higher reconstruction accuracy than the conventional FPM model [41].

Based on the above, we conclude that a robust algorithm is urgently required for practical FPM applications. Typically, the LED used as the illumination source in an FPM platform is considered a point source, and the FPM system is treated as a coherent imaging system [1,10,11,17]. Therefore, the coherent transfer function (CTF) of the objective lens is imposed as the support constraint in Fourier space. However, the millimeter-scale LEDs used in some FPM systems [11,19,23,30] are ill-suited to be treated as coherent light sources in both the spatial and temporal domains, because they behave in a partially coherent way [10]. As a result, the reconstruction results of the conventional FPM algorithm are significantly degraded. Moreover, light reflected between the LED board and the object makes this worse. Ou et al. proposed the EPRY algorithm to correct the aberrations (the phase of the pupil function) in the FPM imaging system, but did not seriously consider the amplitude part of the pupil function [22]. In this paper, we analyze the coherence of the FPM system and propose a novel constraint, termed the apodized CTF (AC) constraint, for the FPM reconstruction procedure. This approach is validated to be more robust than the CTF constraint and relaxes the coherence requirement of the illumination.

In summary, the AC constraint proposed for FPM is significant for its practical application. The AC approach offers two practical contributions. First, the AC constraint more appropriately models the capturing process in actual imaging systems, so better results are obtained in practical applications. Second, the AC constraint relaxes the coherence requirement of the illumination, so LEDs of larger size can be used to accelerate image acquisition. We believe that this approach can significantly promote the practical applications of FPM.

The remainder of this paper is organized as follows. Section 2 reviews the image formation and optimization algorithm and analyzes the spatial and temporal coherence of the FPM system; we then introduce the AC approach to better fit the partially coherent condition and improve the stability and robustness of the FPM reconstruction. Section 3 investigates the effectiveness of the AC approach for different LED sizes through simulations. Experimental results are presented in Section 4, and the paper ends with a discussion and conclusions in Section 5.

2. Principles

2.1. Fourier ptychographic image formation

Before introducing the principle of the apodized CTF constraint, it is worthwhile to briefly review the image formation of FPM. A typical FPM experimental platform is shown in Fig. 1, in which an LED array is used to illuminate the sample. The LEDs, each providing a different incident angle, are lit up sequentially to capture multiple images corresponding to different parts of the sample’s spatial spectrum. In general, a single LED is assumed to be a point source, and FPM is considered a coherent imaging system.

Fig. 1 Schematic of a conventional FPM experimental platform.

Consider a relatively thin sample o(r), where r = (x, y) denotes the two-dimensional (2D) spatial coordinates in the sample plane; its spectrum in the frequency domain is O(u), where u = (kx, ky) denotes the 2D spatial-frequency coordinates. When the ith LED illuminates the sample, under the assumption that the light incident on the sample is a plane wave, the transmitted light field can be expressed as o(r)exp(j2πui·r), where j is the imaginary unit and ui is the illumination vector of the ith LED. This phase shift in the space domain introduces a linear translation in the frequency domain, so the transmitted field can also be written as O(u − ui). The light field is then low-pass filtered by the aperture of the objective lens at the pupil plane. This process can be represented as P(u)O(u − ui), where P(u) is the pupil function, equal to one inside a circle of radius NA/λ and zero outside. Finally, the light field is Fourier transformed as it passes through the tube lens to the camera. Since the camera can only record intensity, we denote each intensity image by:

$$I_i(\mathbf{r}) = \left| \mathcal{F}^{-1}\{ P(\mathbf{u})\, O(\mathbf{u} - \mathbf{u}_i) \} \right|^2, \tag{1}$$
where $\mathcal{F}^{-1}$ is the inverse Fourier transform operator. To solve for O(u), most current algorithms can be derived by formulating FPM as the following optimization problem:
$$\min_{O(\mathbf{u})} \sum_i \sum_{\mathbf{r}} \left| \sqrt{I_i(\mathbf{r})} - \left| \mathcal{F}^{-1}\{ P(\mathbf{u})\, O(\mathbf{u} - \mathbf{u}_i) \} \right| \right|^2, \tag{2}$$
where the actual captured images Ii(r) are used as object constraints in the space domain, and the pupil function P(u) (i.e. the coherent transfer function) is imposed as the support constraint in the frequency domain.
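To make the forward model concrete, the sketch below simulates Eq. (1) for a single LED with NumPy. The grid construction, the pixel-level handling of the illumination shift, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def coherent_transfer_function(n, na, wavelength, dk):
    """Binary CTF on an n x n frequency grid with sampling step dk:
    one inside the radius NA/lambda, zero outside."""
    k = (np.arange(n) - n // 2) * dk
    kx, ky = np.meshgrid(k, k)
    return (np.sqrt(kx**2 + ky**2) < na / wavelength).astype(float)

def simulate_lr_image(hr_object, ctf, shift_px):
    """I_i(r) = |F^{-1}{ P(u) O(u - u_i) }|^2 for one LED: crop the sub-spectrum
    of the HR object centred at the illumination shift and low-pass it with the CTF."""
    n_hr = hr_object.shape[0]
    n_lr = ctf.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(hr_object))
    cy = n_hr // 2 + shift_px[0]   # spectrum centre displaced by the
    cx = n_hr // 2 + shift_px[1]   # illumination vector u_i (in pixels)
    sub = spectrum[cy - n_lr // 2: cy + n_lr // 2,
                   cx - n_lr // 2: cx + n_lr // 2] * ctf
    return np.abs(np.fft.ifft2(np.fft.ifftshift(sub))) ** 2
```

Looping this routine over the LED illumination vectors produces the LR image stack that the optimization of Eq. (2) consumes.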

2.2. Coherence analysis of LED-illumination FPM

In a coherent imaging system, the illumination comes from a point source and the phasor amplitudes of the light waves vary in unison at all spatial points. However, an LED, a quasi-monochromatic source several millimeters in size, is ill-suited to be approximated as a coherent light source. In the following paragraphs, we analyze the temporal and spatial coherence properties of FPM.

First, we analyze the temporal coherence of FPM. As mentioned in [11], researchers usually approximate a single LED as a coherent source, which reduces the full description of each LED illumination to a single coherent field via the cross-spectral density (CSD) function. To justify this approximation, the quasi-monochromatic criterion (λ/Δλ > M, where λ is the center wavelength, Δλ is the spectral bandwidth and M is the number of sensor pixels along one axis) should ideally be fulfilled [42]. Therefore, highly temporally coherent LEDs are required. Normal SMD 3528 LEDs with a spectral range of 620–630 nm, as used in [10,13,14,35], cannot satisfy this criterion. If the criterion is not met, light at different wavelengths may reduce the contrast of the raw images, because the central location of the sub-region of the sample spatial spectrum corresponding to an LED is determined by the wavelength and the spatial location of the LED. Consequently, the image in the camera plane is a sum of contributions from multiple parts of the sample spatial spectrum at slightly different locations corresponding to different wavelengths.

Second, we analyze the spatial coherence of FPM. When the illumination source is an extended incoherent source, it is possible to specify the conditions under which the imaging system substantially behaves as a coherent or an incoherent system [43]. Let θs denote the effective angular diameter of the incoherent source that illuminates the sample, θp the angular diameter of the entrance pupil of the imaging system (i.e., set by the NA), and θo the angular diameter of the angular spectrum of the object, with all angles measured from the object plane. The system behaves as a coherent system provided that

$$\theta_s \ll \theta_p, \tag{3}$$

and as an incoherent system when

$$\theta_s \geq \theta_o + \theta_p. \tag{4}$$

For conditions between these extremes, such as FPM systems in which a single LED is a few millimeters in size and the distance between the LED array and the sample is 80–100 mm, the system behaves as a partially coherent system.
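To make the three regimes concrete, the short check below plugs in the geometry later adopted in the simulations (a 3 mm LED, 80 mm LED-to-sample distance, 0.1 NA objective). The small-angle treatment, the pupil-angle formula and the threshold used for "much less than" are illustrative choices, not values from the paper.

```python
import numpy as np

# Geometry taken from the simulation parameters in Section 3.
led_size_mm = 3.0
led_distance_mm = 80.0
objective_na = 0.1

theta_s = led_size_mm / led_distance_mm   # angular diameter of the source (small-angle)
theta_p = 2.0 * np.arcsin(objective_na)   # angular diameter of the entrance pupil
theta_o = theta_p                         # assume the object spectrum fills the passband

if theta_s < 0.1 * theta_p:               # "<<" taken as one order of magnitude (arbitrary cut)
    regime = "effectively coherent (Eq. (3))"
elif theta_s >= theta_o + theta_p:
    regime = "effectively incoherent (Eq. (4))"
else:
    regime = "partially coherent"
print(f"theta_s = {theta_s:.3f} rad, theta_p = {theta_p:.3f} rad -> {regime}")
```

For these numbers θs ≈ 0.038 rad and θp ≈ 0.20 rad, so neither limit applies and the partially coherent regime is reached.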

In summary, it is not appropriate to treat a conventional LED-illumination FPM system as a fully coherent imaging system and to apply the coherent transfer function (CTF) as the filter on the sample’s frequency spectrum. If the CTF is used as the constraint in the optimization of Eq. (2), the FPM reconstruction results will be significantly degraded. However, the conventional FPM algorithm does not take this into account.

2.3. Apodized CTF as a constraint

An extended light source can be treated as a set of point sources at slightly different locations that are mutually incoherent. We simulate the FPM imaging process through the image formation described in Section 2.1. A 3-millimeter LED is divided into 900 point sources with a gap of 0.1 mm between adjacent point sources. Note that a 0.1-millimeter source satisfies the condition of Eq. (3). We use the ‘Lena’ and ‘Aerial’ images (512 × 512 pixels) from the USC-SIPI image database [44] as the ground truth HR amplitude (see Fig. 2(a)) and phase map, respectively. The pixel number of the LR images is set to one-fourth of that of the HR image along both dimensions. We synthesize LR images with the LED treated both as a point source and as an extended source. In the first case, the captured LR images are synthesized based on Eq. (1). In the second case, 900 point sources with slightly different incident angles illuminate the sample simultaneously, so the LR intensity images are obtained by averaging the 900 coherent intensity images:

Fig. 2 (a) The ‘Lena’ image as the ground truth HR amplitude map. (b) and (c) The 3D view of the CTF and the apodized CTF (α = 0.37), respectively. (d) The LR amplitude image illuminated by an extended source. (e) The LR amplitude image illuminated by a point source. (f) The LR image filtered by the apodized CTF shown in Fig. 2(c).

$$I_i = \frac{1}{900} \sum_{p=1}^{900} \left| \mathcal{F}^{-1}\{ P(\mathbf{u})\, O(\mathbf{u} - \mathbf{u}_{i,p}) \} \right|^2. \tag{5}$$
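A minimal sketch of this incoherent superposition, reusing simulate_lr_image from the sketch in Section 2.1; the interface (a list of per-sub-source pixel shifts supplied by the caller) is an illustrative choice.

```python
import numpy as np

def extended_source_lr_image(hr_object, ctf, base_shift_px, sub_shifts_px):
    """Average the coherent images of the point sources making up one extended LED:
    I_i = (1/N) * sum_p |F^{-1}{ P(u) O(u - u_{i,p}) }|^2  (Eq. (5), N = 900 for a 3 mm LED)."""
    images = [simulate_lr_image(hr_object, ctf,
                                (base_shift_px[0] + dy, base_shift_px[1] + dx))
              for dy, dx in sub_shifts_px]   # one coherent image per sub-source
    return np.mean(images, axis=0)           # incoherent (intensity) average
```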

The LR amplitude images of a normally incident LED for both cases are shown in Figs. 2(d) and 2(e), respectively. Due to the sharp discontinuity of the CTF, Fig. 2(e) presents pronounced ‘ringing’ next to the sharp edges of the sample, known as the ringing effect [43]. In contrast, the LR image under the extended source is less affected by ringing; compared with the point-source case, the extended source smooths the LR image.

To reduce the strength of side lobes or side rings, methods known as apodization have been developed [43]. Apodization is a filtering technique that ‘softens’ the edges of the aperture by introducing an attenuating mask (see Fig. 2(c)). Diffraction by an abrupt aperture can be regarded as arising from edge waves originating around the rim of the aperture. By softening the edge, the origin of these diffracted waves is spread over a broader area around the pupil boundary, thereby suppressing the ringing effects caused by edge waves with a highly localized origin.

In the partially coherent case, when an extended light source illuminates the sample, the edge of the transfer function is smoothed, corresponding to a convolution of the CTF with the angular extent of the extended source. In the extreme case where the extended source is so large that its angular diameter exceeds the NA of the objective lens, the optical transfer function (OTF) is formed by the autocorrelation of the CTF. Inspired by apodization, we propose a novel filter, termed the apodized CTF (AC), to better fit the FPM imaging model. We assume that the smoothing caused by the extended source can also be approximated by introducing attenuation in the CTF (see Fig. 2(c)). In this filter, the value of the CTF is linearly reduced from the center to the edge to reduce the ringing effect. The attenuated CTF can be represented as:

$$A(\mathbf{u}) = 1 - \alpha \frac{|\mathbf{u}|}{NA/\lambda}, \qquad |\mathbf{u}| < \frac{NA}{\lambda}, \tag{6}$$
where α is an attenuation parameter between 0 and 1 and NA/λ is the cut-off frequency of the objective lens. An LR image filtered by the AC with α = 0.37 is shown in Fig. 2(f). The mean square error (MSE) and structural similarity index (SSIM) [45] between Fig. 2(e) and Fig. 2(d) are 0.0095 and 0.9567, respectively, while the MSE and SSIM between Fig. 2(f) and Fig. 2(d) are 0.0092 and 0.9886. The LR image in Fig. 2(f) is less affected by the ringing effect and is more similar to the extended-source image. Therefore, by introducing attenuation in the high-frequency part of the CTF, we can better approximate the transfer function modified by an extended light source and mitigate the ringing effect. This demonstrates that the AC is a more accurate filter for the LED-illumination FPM imaging system.
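A direct implementation of Eq. (6) is straightforward; the sketch below mirrors the frequency-grid conventions of the CTF sketch in Section 2.1, and those grid parameters are illustrative.

```python
import numpy as np

def apodized_ctf(n, na, wavelength, dk, alpha):
    """A(u) = 1 - alpha * |u| / (NA/lambda) inside the pupil, zero outside (Eq. (6))."""
    k = (np.arange(n) - n // 2) * dk
    kx, ky = np.meshgrid(k, k)
    k_r = np.sqrt(kx**2 + ky**2)
    cutoff = na / wavelength
    return (1.0 - alpha * k_r / cutoff) * (k_r < cutoff)
```

Setting α = 0 recovers the binary CTF, while α = 1 drives the filter linearly to zero at the cut-off frequency.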

Embedding the AC constraint in the objective function of Eq. (2) turns it into:

$$\min_{O(\mathbf{u})} \sum_i \sum_{\mathbf{r}} \left| \sqrt{I_i(\mathbf{r})} - \left| \mathcal{F}^{-1}\{ A(\mathbf{u})\, O(\mathbf{u} - \mathbf{u}_i) \} \right| \right|^2. \tag{7}$$

In this way, the reconstruction process converges more robustly, and no additional computations are introduced into the reconstruction.
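As an illustration of how the AC constraint slots into an existing solver, the sketch below shows one ePIE-style sub-iteration of Eq. (7) with A(u) in place of the binary pupil. This is a generic update written for illustration under those assumptions, not the specific update rule of AP, EPRY, or the adaptive step-size algorithm.

```python
import numpy as np

def ac_subiteration(spectrum, ac, measured_intensity, cy, cx, step=1.0):
    """One sub-iteration of Eq. (7): filter the shifted sub-spectrum with A(u),
    enforce the measured amplitude in real space, and write the correction back."""
    n_lr = ac.shape[0]
    rows = slice(cy - n_lr // 2, cy + n_lr // 2)
    cols = slice(cx - n_lr // 2, cx + n_lr // 2)
    sub = spectrum[rows, cols]
    lr_field = np.fft.ifft2(np.fft.ifftshift(ac * sub))
    # keep the estimated phase, enforce the measured amplitude
    lr_field = np.sqrt(measured_intensity) * np.exp(1j * np.angle(lr_field))
    updated = np.fft.fftshift(np.fft.fft2(lr_field))
    # ePIE-style correction weighted by the (real, non-binary) AC filter
    spectrum[rows, cols] = sub + step * ac * (updated - ac * sub) / (ac.max() ** 2 + 1e-12)
    return spectrum
```

Because A(u) enters only as a fixed, precomputed mask, the per-iteration cost is the same as with the binary CTF, consistent with the claim that no additional computation is required.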

3. Simulations

We first validate the effectiveness of the AC constraint approach via simulations. To model an FPM platform realistically, the simulation parameters are chosen as follows: the wavelength of the incident illumination is 631 nm; the magnification and NA of the objective lens are 4 × and 0.1, respectively; the pixel size on the object plane is 2 μm; the distance between the LED plane and the sample plane is 80 mm; the interval between adjacent LEDs is 5 mm; and a 15 × 15 LED array is used to provide a synthetic NA of ~0.5. Different LED sizes (1 × 1 mm, 2 × 2 mm, 3 × 3 mm and 4 × 4 mm) are simulated based on Eq. (5) by dividing each LED into 0.1 mm point sources, so that each point source satisfies Eq. (3). The ‘Lena’ and ‘Aerial’ images (512 × 512 pixels) from the USC-SIPI image database [44] are used as the ground truth HR amplitude and phase map, respectively, and the pixel number of the LR images is set to one-fourth of that of the HR image along both dimensions.
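As a quick arithmetic check of the quoted synthetic NA, the outermost on-axis LED of the 15 × 15 array sits 7 × 5 mm = 35 mm off axis at the stated 80 mm working distance:

```python
import numpy as np

objective_na = 0.1
led_pitch_mm, led_distance_mm = 5.0, 80.0
max_offset_mm = 7 * led_pitch_mm                        # outermost on-axis LED of a 15 x 15 grid
illumination_na = np.sin(np.arctan(max_offset_mm / led_distance_mm))
synthetic_na = objective_na + illumination_na           # objective NA + illumination NA
print(f"synthetic NA ~ {synthetic_na:.2f}")             # ~ 0.50
```

The quoted ~0.5 appears to correspond to this on-axis extreme; the corner LEDs of the array would give a somewhat larger value.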

During the iterative reconstruction, 20 iterations of the AP algorithm [1] are applied to ensure convergence. The reconstruction accuracy is measured by the MSE and the SSIM. The MSE evaluates the absolute difference between the reconstructed HR image and the ground truth HR image:

$$\mathrm{MSE} = \frac{1}{mn} \sum_{m} \sum_{n} \big( c(\mathbf{r}) - o(\mathbf{r}) \big)^2, \tag{8}$$
where m and n are the pixel numbers of the HR image along the two dimensions, c(r) is the reconstructed HR image, and o(r) is the ground truth HR image. The SSIM measures the spatial structural closeness between two images, which is more consistent with human perception than the MSE. The SSIM is a perception-based measurement that assesses image degradation through the perceived change in structural information, and it incorporates important perceptual phenomena, including luminance masking and contrast masking terms:
$$\mathrm{SSIM}(c, o) = \frac{(2\mu_c \mu_o + C_1)(2\sigma_{co} + C_2)}{(\mu_c^2 + \mu_o^2 + C_1)(\sigma_c^2 + \sigma_o^2 + C_2)}, \tag{9}$$
where μc and μo are the averages of c(r) and o(r), σc² and σo² are their variances, σco is their covariance, and C1 and C2 are two constants that stabilize the division when the denominator is small. The SSIM value ranges from 0 to 1, and a higher value means the two images share more structural information.
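Both metrics are available off the shelf; a minimal sketch using NumPy and scikit-image follows (the wrapper function names are illustrative, and data_range must be supplied explicitly for floating-point images):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(recon, truth):
    """Eq. (8): mean squared error over all m x n pixels."""
    return np.mean((recon - truth) ** 2)

def ssim(recon, truth):
    """Eq. (9) as implemented in scikit-image."""
    return structural_similarity(recon, truth, data_range=truth.max() - truth.min())
```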

We simulate the reconstruction process with different AC constraints based on Eq. (6) to find the best-fitting α. When α equals zero, the CTF is unattenuated, whereas when α equals 1, the CTF is attenuated to 0 at the cut-off frequency, which makes the AC form a triangle in its transverse cross-section. Note that we multiply the reconstructed HR image by $\sum_{\mathbf{u}} A(\mathbf{u}) / \sum_{\mathbf{u}} P(\mathbf{u})$ before calculating the MSE and SSIM, because the attenuated CTF inflates the values of the reconstructed HR image. Figure 3 depicts the MSE and SSIM curves versus α for different LED sizes. As we can see, the CTF (i.e., α = 0) is not the best constraint, and larger LEDs exhibit a larger best-fitting α. When the LED size is 4 mm and α = 0.44 is used as the attenuation parameter, the MSE is 0.001591 and the SSIM is 0.8134; compared with the unattenuated CTF case, in which the MSE is 0.00411 and the SSIM is 0.7805, the AC shows better convergence.

Fig. 3 (a) The MSE curves versus α for different sizes of LEDs. (b) The SSIM curves versus α for different sizes of LEDs.

Figure 4 shows the HR images reconstructed using the CTF and the AC as the constraint when the LED size is 3 mm. The ground truth HR amplitude and phase images are shown in Figs. 4(a) and 4(d). Compared with the CTF case shown in Figs. 4(b) and 4(e), the HR images reconstructed with the AC constraint show less contamination (see Figs. 4(c) and 4(f)).

Fig. 4 (a) and (d) The ground truth HR amplitude and phase images, respectively. (b) and (e) The reconstructed HR amplitude and phase images using the CTF constraint, respectively. (c) and (f) The reconstructed HR amplitude and phase images using the AC constraint, respectively.

4. Experiments

To evaluate the effectiveness of the AC constraint approach, an FPM setup is built to capture real LR images for reconstruction. The experimental setup follows the simulation parameters above, except that the size of the LED is 2 × 2 mm. A multi-exposure acquisition scheme is typically adopted to capture high dynamic range (HDR) images for both bright-field and dark-field illumination to obtain robust results [1,26]. However, to demonstrate the robustness of the AC constraint approach, we capture each LR image only once with the same exposure time. The AC constraint is embedded into three state-of-the-art reconstruction algorithms, i.e., AP [1], EPRY [22] and the adaptive step-size algorithm [37], to verify its universality.

4.1 Experiments on a USAF resolution target

First, we capture 225 LR images of the USAF resolution target corresponding to the 15 × 15 LEDs. The finest feature (group 9, element 3) has a spatial frequency of 645.1 line pairs per millimeter (i.e., a line width of 0.775 μm). Figure 5(a) shows the raw LR image under normal illumination, a 128 × 128-pixel region of interest is shown in Fig. 5(b), and Fig. 5(c) presents a close-up of groups 8 and 9.

Fig. 5 (a) The LR image of the USAF 1951 resolution target under normal illumination captured by the camera. (b) The region of interest of (a). (c) The close-up of (b). (d)-(f) The reconstructed HR amplitude images of different algorithms using the CTF constraint. (g)-(i) The close-ups of Figs. 5(d)-5(f). (j)-(l) The reconstructed HR amplitude images of different algorithms using the AC constraint. (m)-(o) The close-ups of Figs. 5(j)-5(l).

Figures 5(d)-5(i) depict the reconstruction results of the different algorithms (AP, EPRY and adaptive step-size) with the CTF constraint, and the reconstruction results of the same algorithms using the AC constraint are shown in Figs. 5(j)-5(o). The AC constraint is clearly more effective than the CTF constraint in both the amplitude images and the close-ups; in particular, the features of group 8, element 3 in the close-ups are not well resolved with the CTF constraint.

4.2 Experiments on a pathological section

Finally, the proposed AC constraint approach is tested on a pathological section of a tuberculosis granuloma, in which the phase part of the complex field usually contains valuable information. Figure 6(a) presents the full FOV of the specimen and Fig. 6(b) shows the corresponding magnified region of interest. An HR image of the same region captured with a 40 ×, 0.75 NA objective is shown in Fig. 6(c) for comparison.

Fig. 6 (a) The LR image of a pathological section under normal illumination captured by the camera. (b) The region of interest of 128 × 128 pixels. (c) The same region captured by a 40 ×, 0.75 NA objective. (d)-(f) The reconstructed HR amplitude images of different algorithms using the CTF constraint. (g)-(i) The reconstructed HR phase images of different algorithms using the CTF constraint. (j)-(l) The reconstructed HR amplitude images of different algorithms using the AC constraint. (m)-(o) The reconstructed HR phase images of different algorithms using the AC constraint.

Figures 6(d)-6(i) show the reconstruction results of the different algorithms (AP, EPRY and adaptive step-size) using the CTF constraint, and the reconstruction results of the corresponding algorithms using the AC constraint are presented in Figs. 6(j)-6(o). Figures 6(d)-6(f) show that the CTF constraint-based amplitude images are contaminated by varying amounts of spurious black spots (see the red circles), suggesting that these reconstruction algorithms become trapped in local optima. In addition, sharp fissures appear in the corresponding phase images (Figs. 6(g)-6(i)), which makes it difficult to distinguish the real phase information of the specimen; we also note that the fissures coincide with the locations of the black spots in the corresponding amplitude images. In contrast, clean HR amplitude and phase images without black spots or fissures are obtained with the AC constraint (Figs. 6(j)-6(o)), and their quality is comparable to that of the 40 × objective for practical applications. We therefore conclude that the AC constraint outperforms the CTF constraint in both the amplitude and phase images.

5. Discussions and conclusions

In this work, we analyzed the coherence of conventional Fourier ptychography. An LED-illumination FPM platform behaves as a partially coherent system. Therefore, it is inappropriate to treat a conventional LED-illumination FPM system as a fully coherent imaging system and to assume that the sample’s frequency spectrum is filtered by the coherent transfer function (CTF). Using the CTF as the constraint in the optimization leads to convergence to local optima and hence decreases the quality of the reconstructed HR image. Inspired by apodization, we proposed a novel apodized CTF (AC) constraint and applied it to the FPM reconstruction procedure to better fit the imaging model. Both simulations and experimental results demonstrate that the AC constraint outperforms the conventional CTF constraint with higher stability and robustness. The AC constraint not only avoids becoming trapped in local optima and produces reconstructions of higher quality, but also relaxes the coherence requirement of FPM. In other words, the AC constraint makes it possible to use large-size LEDs as the illumination of an FPM platform. Currently, FPM is typically limited by the image acquisition time [23,26,28,29] due to the low SNR of dark-field images; this process could be accelerated by adopting large-size LEDs with higher luminous intensities. Last but not least, the AC constraint approach is simple, requires no additional computation, and is easy to embed in almost all reconstruction algorithms proposed so far. Once a given type of LED is adopted in the experimental platform, the attenuation parameter α is fixed regardless of the sample. We believe that this approach will significantly promote the practical applications of FPM.

Funding

National Natural Science Foundation of China (61405194); Science Foundation of State Key Laboratory of Applied Optics; Science and Technology Development Plan Project of Jilin Province (20170519016JH).

Acknowledgments

This work was supported by the CAS Interdisciplinary Innovation Team; the authors sincerely acknowledge the open-source code provided by Chao Zuo.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]   [PubMed]  

2. G. Zheng, “Breakthroughs in photonics 2013: Fourier ptychographic imaging,” IEEE Photonics J. 6(2), 1 (2014). [CrossRef]  

3. G. Zheng, X. Ou, R. Horstmeyer, J. Chung, and C. Yang, “Fourier ptychographic microscopy: a gigapixel superscope for biomedicine,” Opt. Photonics News 25(4), 26–33 (2014). [CrossRef]  

4. V. Mico, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A 23(12), 3162–3170 (2006). [CrossRef]   [PubMed]  

5. C. Yuan, H. Zhai, and H. Liu, “Angular multiplexing in pulsed digital holography for aperture synthesis,” Opt. Lett. 33(20), 2356–2358 (2008). [CrossRef]   [PubMed]  

6. P. Gao, G. Pedrini, and W. Osten, “Structured illumination for resolution enhancement and autofocusing in digital holographic microscopy,” Opt. Lett. 38(8), 1328–1330 (2013). [CrossRef]   [PubMed]  

7. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237–249 (1972).

8. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]   [PubMed]  

9. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

10. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

11. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]   [PubMed]  

12. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). [CrossRef]   [PubMed]  

13. J. Chung, X. Ou, R. P. Kulkarni, and C. Yang, “Counting white blood cells from a blood smear using Fourier ptychographic microscopy,” PLoS One 10(7), e0133489 (2015). [CrossRef]   [PubMed]  

14. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). [CrossRef]   [PubMed]  

15. L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

16. S. Pacheco, G. Zheng, and R. Liang, “Reflective Fourier ptychography,” J. Biomed. Opt. 21(2), 026010 (2016). [CrossRef]   [PubMed]  

17. S. Pacheco, B. Salahieh, T. Milster, J. J. Rodriguez, and R. Liang, “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett. 40(22), 5343–5346 (2015). [CrossRef]   [PubMed]  

18. J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express 7(2), 352–368 (2016). [CrossRef]   [PubMed]  

19. B. Lee, J. Hong, D. Yoo, J. Cho, Y. Jeong, S. Moon, and B. Lee, “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]  

20. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015). [CrossRef]  

21. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]   [PubMed]  

22. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]   [PubMed]  

23. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

24. L. H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]   [PubMed]  

25. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171–6180 (2015). [CrossRef]   [PubMed]  

26. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]   [PubMed]  

27. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]   [PubMed]  

28. J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier ptychographic microscopy using laser illumination source,” Biomed. Opt. Express 7(11), 4787–4802 (2016). [CrossRef]   [PubMed]  

29. Y. Zhou, J. Wu, Z. Bian, J. Suo, G. Zheng, and Q. Dai, “Fourier ptychographic microscopy using wavelength multiplexing,” J. Biomed. Opt. 22(6), 066006 (2017). [CrossRef]   [PubMed]  

30. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]   [PubMed]  

31. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(9), 1–11 (2017). [CrossRef]   [PubMed]  

32. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25(23), 28053–28067 (2017). [CrossRef]  

33. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]   [PubMed]  

34. L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express 7(11), 4543–4553 (2016). [CrossRef]   [PubMed]  

35. R. Horstmeyer, R. Y. Chen, X. Ou, B. Ames, J. A. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. 17(5), 053044 (2015). [CrossRef]   [PubMed]  

36. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]   [PubMed]  

37. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]   [PubMed]  

38. L. Bian, J. Suo, J. Chung, X. Ou, C. Yang, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6(1), 27384 (2016). [CrossRef]   [PubMed]  

39. Y. Zhang, P. Song, J. Zhang, and Q. Dai, “Fourier ptychographic microscopy with sparse representation,” Sci. Rep. 7(1), 8664 (2017). [CrossRef]   [PubMed]  

40. Y. Zhang, P. Song, and Q. Dai, “Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed Poisson-Gaussian likelihood,” Opt. Express 25(1), 168–179 (2017). [CrossRef]   [PubMed]  

41. L. Hou, H. Wang, M. Sticker, L. Stoppe, J. Wang, and M. Xu, “Adaptive background interference removal for Fourier ptychographic microscopy,” Appl. Opt. 57(7), 1575–1580 (2018). [CrossRef]   [PubMed]  

42. A. W. Lohmann, “Matched filtering with self-luminous objects,” Appl. Opt. 7(3), 561–563 (1968). [CrossRef]   [PubMed]  

43. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

44. University of Southern California, “The USC-SIPI Image Database,” http://sipi.usc.edu./database.

45. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  
