
Nonlinear optimization approach for Fourier ptychographic microscopy


Abstract

Fourier ptychographic microscopy (FPM) was recently proposed as a computational imaging method that bypasses the space-bandwidth product limitation of traditional optical systems. It employs a sequence of low-resolution images captured under angularly varying illumination and applies a phase retrieval algorithm to iteratively reconstruct a wide-field, high-resolution image. In current FPM imaging systems, system uncertainties, such as the pupil aberration of the employed optics, may significantly degrade the quality of the reconstruction. In this paper, we develop and test a nonlinear optimization algorithm that improves the robustness of the FPM imaging system by simultaneously considering the reconstruction and the system imperfections. Analytical expressions for the gradient of a squared-error metric with respect to the object and illumination allow joint optimization of the object and system parameters. The algorithm achieves superior reconstructions when the system parameters are inaccurately known or in the presence of noise, and corrects the pupil aberrations simultaneously. Experiments on both synthetic and real captured data validate the effectiveness of the proposed method.

© 2015 Optical Society of America

1. Introduction

A microscope, an instrument used to see objects that are too small for the naked eye, usually contains one or more lenses and produces an enlarged image of a sample placed in the focal plane. However, limited by the space-bandwidth product (SBP) [1] of the optical system, the traditional optical microscope always forces the user to compromise between a high-resolution image and a wide-field one. Fourier ptychographic microscopy (FPM) [2] was recently proposed as a computational imaging method to enhance the SBP of the optical system via post-processing. In this method, a simple light-emitting diode (LED) matrix illumination module, which provides angularly varying illumination, is added to the traditional optical microscope. The required low-resolution images are captured by sequentially lighting up a single LED and taking a snapshot. A phase retrieval algorithm is then employed to reconstruct a wide-field, high-resolution image. This method is able to transform a conventional optical microscope into a high-resolution (0.78 µm half-pitch resolution, 0.5 NA), wide-FOV (120 mm²) microscope with a final SBP of 0.23 gigapixels.

In the current FPM imaging system, system uncertainties may significantly degrade the reconstruction quality. To exploit the full throughput of the FPM imaging system, Zheng et al. [2] introduced a digital wavefront correction strategy to correct for spatially varying aberrations [3–5]. One drawback of this strategy is that it requires pre-characterization of the spatially varying aberration of the microscopy system, which can be computationally onerous and is sensitive to movement of the elements in the system. Bian et al. [6] put forward an adaptive Fourier ptychographic recovery framework for wavefront correction under the guidance of an image-quality metric. However, this framework includes a global optimization process, which imposes a heavy load on computational resources and is only able to correct a limited number of low-order aberrations in a reasonable amount of time. Ou et al. [7] proposed an embedded pupil function recovery method with reasonable computational cost, based on the extended ptychographic iterative engine [8–10]. However, this method is susceptible to the system initialization and is easily trapped in a locally optimal solution due to the raster grid artefact problem [11].

In this paper, we develop and test a nonlinear optimization algorithm that improves the robustness of the FPM imaging system by simultaneously recovering the object and correcting system imperfections and uncertainties. By defining a squared-error metric for the reconstruction problem, we employ a gradient-descent-based algorithm to jointly optimize over the object and the pupil function, which not only improves the quality of the reconstructed object but also refines the estimate of the pupil function. In this way, an aberration-free reconstruction of the object can be recovered and the pupil aberration of the imaging system can be corrected without a complicated calibration process.

The remainder of this paper is organized as follows. In Section 2, we briefly review the procedure of the conventional FPM method and explain the importance of a precise pupil function. We then introduce a nonlinear factor into the original convergence-related metric and define a squared-error metric for the FPM reconstruction problem. Furthermore, we provide analytic expressions for the gradient of the metric with respect to the object and the pupil function and introduce a conjugate-gradient routine to update the sample function and pupil function iteratively. In Section 3, we verify the effectiveness of the proposed algorithm by simulation. We also introduce a quality metric to quantify the quality of the reconstructions and propose an empirical strategy for choosing an appropriate nonlinear factor. In Section 4, we demonstrate that our method improves the quality of the FPM imaging system and corrects the pupil aberration by applying the proposed method to real captured data. In Section 5, we summarize the present work and briefly outline future work.

2. Theory and method

2.1. Fourier ptychographic microscopy

As described in detail in previous publications [2, 12–16], the FPM method iteratively stitches together a number of variably illuminated, low-resolution intensity images in Fourier space to produce a wide-field, high-resolution complex sample image. Before explaining the procedure of FPM, we note that this method is based upon three assumptions:

  1. The recovery process alternates between the spatial and Fourier domains.
  2. Illuminating a thin sample by an oblique plane wave is equivalent to shifting the center of the sample’s spectrum in the Fourier domain.
  3. The filtering function of the objective lens (that is, coherent optical transfer function) in the Fourier space is a circular pupil.

In the acquisition procedure, sequentially scanning through the LEDs in the array creates angularly varying illumination. The required low-resolution images are captured while the sample is illuminated from different angles, which correspond to shifts of the sample's Fourier spectrum in the pupil plane. Here, we define r = (x,y) as the coordinate in the spatial domain and k = (u,v) as the coordinate in the spatial frequency domain (Fourier domain), and model the acquisition process as a complex multiplication: the exit wave from a thin sample s(r), illuminated by an oblique plane wave (from the nth LED) with wavevector k_n = (u_n,v_n), can be expressed as e(r) = s(r)exp(ik_n · r). The light that propagates to the detector is the convolution of the exit wave with the spatially invariant point spread function p(r) of the microscope system, such that

$$I_n = \left|e(r) \ast p(r)\right|^2 = \left|\mathcal{F}^{-1}\{\mathcal{F}[e(r)]\,\mathcal{F}[p(r)]\}\right|^2 = \left|\mathcal{F}^{-1}\{S(k-k_n)\,P(k)\}\right|^2, \tag{1}$$
where In is the intensity of the image captured under the illumination of the nth LED, S(k) is the Fourier spectrum of the sample and P(k) is the pupil function of the imaging system.
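
For concreteness, the forward model of Eq. (1) can be expressed as a short numerical sketch (our own NumPy illustration; the function name, array shapes, and pixel-shift cropping convention are assumptions of this example, not part of the paper):

```python
import numpy as np

def simulate_low_res_image(sample, pupil, ky, kx):
    """Simulate one captured intensity I_n according to Eq. (1): the sample
    spectrum is shifted by the illumination wavevector, low-pass filtered by
    the pupil, and transformed back to the (smaller) detector grid.

    sample : complex high-resolution object s(r)
    pupil  : complex pupil function P(k), sized like the low-resolution image
    ky, kx : spectrum shift for the nth LED, in pixels of the high-res grid
    """
    S = np.fft.fftshift(np.fft.fft2(sample))             # sample spectrum S(k)
    M, N = pupil.shape                                    # low-resolution image size
    cy, cx = S.shape[0] // 2, S.shape[1] // 2
    # sub-spectrum S(k - k_n) seen through the pupil
    sub = S[cy + ky - M // 2: cy + ky - M // 2 + M,
            cx + kx - N // 2: cx + kx - N // 2 + N]
    field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))  # complex field at the detector
    return np.abs(field) ** 2                             # measured intensity I_n
```

Cropping a pupil-sized window around the shifted centre both applies the low-pass filter and performs the implicit downsampling to the detector grid.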

The goal of the reconstruction algorithm is to recover the functions S(k) and P(k) that satisfy Eq. (1) for all captured images. In the conventional FPM algorithm, we are assumed to have a precise estimate of the pupil function, and therefore the reconstruction problem reduces to finding a sample function S(k) that satisfies Eq. (1). Traditional phase retrieval algorithms [17–21] reconstruct the sample function by an iterative approach. Such an iterative approach relies on a precise estimate of the pupil function, which is difficult to acquire in practice because the pupil aberration of the employed optics is usually unknown. That is, the conventional FPM algorithm only updates the sample function S(k) and neglects the influence of the potential pupil aberration. If the pupil aberration is small enough, the conventional FPM algorithm works well, but under severe pupil aberration the algorithm may yield bad reconstructions.

Before explaining our algorithm in detail, it is necessary to give a brief overview of the reconstruction procedure of the conventional FPM algorithm. During the acquisition procedure, we capture low-resolution images I_n under angularly varying illumination, where n = 1, 2, …, L is the index of the LED and L is the number of LEDs employed in the acquisition. At the beginning, an initial guess of the sample function S_g(k) is provided together with the pupil function P_g(k) to start the algorithm, where the subscript g denotes guess. The notations S_g^(j)(k) and P_g^(j)(k) denote the sample function and pupil function in the jth loop. The first step is an extraction procedure. According to Eq. (1), a sub-region is extracted from the sample function with the pupil function and transformed into the spatial domain to generate the simulated image:

$$\sqrt{I_{s,n}}\,e^{i\varphi_s} = \mathcal{F}^{-1}\{S_g^{(j)}(k-k_n)\,P_g^{(j)}(k)\}. \tag{2}$$

The second step is a replacement procedure. The amplitude of the simulated image is replaced with that of the captured image to obtain the corrected image

$$\sqrt{I_{c,n}}\,e^{i\varphi_c} = \sqrt{I_n}\,\frac{\sqrt{I_{s,n}}\,e^{i\varphi_s}}{\sqrt{I_{s,n}}} = \sqrt{I_n}\,e^{i\varphi_s}, \tag{3}$$
where the subscripts c and s denote corrected and simulated, respectively.

In the third step, which can be described as an update procedure, the corrected image is used to update the sample function. Specifically, the corrected image is transformed into the Fourier domain and used to update the corresponding sub-region of the sample function:

$$S_g^{(j+1)}(k) = \frac{\left[P_g^{(j)}(k+k_n)\right]^*}{\left|P_g^{(j)}(k+k_n)\right|^2_{\max}}\,\mathcal{F}\{\sqrt{I_{c,n}}\,e^{i\varphi_c}\}. \tag{4}$$

It should be noted that the pupil function remains unchanged during the iterations, which can be expressed as

$$P_g^{(j+1)}(k) = P_g^{(j)}(k). \tag{5}$$

In the fourth step, the updated sample function is used as the input of the next extract-replace-update procedure, and steps 1–3 are repeated until all the captured images have been used. Empirically, one pass is not enough for a convergent reconstruction, so in the fifth step, steps 1–4 are repeated until the algorithm reaches convergence.
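
Putting the five steps together, a minimal sketch of the conventional recovery loop could look as follows (our own illustration; the cropping convention matches the forward-model sketch above, and the sub-region update is written in the commonly used difference form, which reduces to the direct replacement of Eq. (4) inside a binary pupil):

```python
import numpy as np

def conventional_fpm(images, S0, pupil, k_shifts, n_iter=20, eps=1e-8):
    """Conventional FPM recovery loop (steps 1-5, Eqs. (2)-(5)).

    images   : list of captured low-resolution intensities I_n
    S0       : initial guess of the high-resolution sample spectrum S_g(k)
    pupil    : pupil-function guess P_g(k), kept fixed as in Eq. (5)
    k_shifts : list of per-LED spectrum shifts (ky, kx) in pixels
    """
    S = S0.copy()
    M, N = pupil.shape
    cy, cx = S.shape[0] // 2, S.shape[1] // 2
    for _ in range(n_iter):                                  # step 5: outer iterations
        for I_n, (ky, kx) in zip(images, k_shifts):          # step 4: loop over LEDs
            ys = slice(cy + ky - M // 2, cy + ky - M // 2 + M)
            xs = slice(cx + kx - N // 2, cx + kx - N // 2 + N)
            sub = S[ys, xs]
            # step 1 (extract): simulated low-resolution field, Eq. (2)
            field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
            # step 2 (replace): keep the phase, impose the measured amplitude, Eq. (3)
            field = np.sqrt(I_n) * field / (np.abs(field) + eps)
            # step 3 (update): write the corrected spectrum back through the pupil, cf. Eq. (4)
            corr = np.fft.fftshift(np.fft.fft2(field))
            S[ys, xs] = sub + np.conj(pupil) / (np.abs(pupil).max() ** 2 + eps) * (corr - sub * pupil)
    return S   # the inverse Fourier transform of S gives the high-resolution complex image
```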

2.2. Nonlinear optimization approach

The limitation of the traditional FPM algorithm is that it relies on a precisely estimated pupil function, and this can degrade reconstruction quality when the system suffers from severe pupil aberration. A better solution to this problem is to jointly optimize over the sample function and the pupil function, which eliminates the effects of pupil aberration.

A proper metric, which describes the difference between the reconstruction and the correct solution, can be of great help in finding a good solution for the FPM recovery routine. However, a typical image-quality metric, such as the sharpness metric [22], can be misled by incorrect modeling. For example, conventional FPM with an incorrect pupil function may produce reconstructions with noise artifacts that nevertheless contain many sharp features and therefore score well on the sharpness metric [6]. Considering the reconstruction procedure of FPM (Eqs. (1) and (2)), a convergence-related metric, which measures how well our reconstruction matches the captured data, is given as follows:

$$C_M = \sum_{n=1}^{L}\sum_{x,y} W_n(x,y)\left(\sqrt{I_{s,n}(x,y)} - \sqrt{I_n(x,y)}\right)^2, \tag{6}$$
where Wn (x,y) is a weighting function that can be used to emphasize regions with high SNR or set to zero for regions with low SNR or where no signal was measured. The weighting function can be used to eliminate the effects of a beam stop or dead detector pixels, if needed. In this paper, we used a uniform unity weighting function for all pixels.

With this metric, we are able to achieve a better FPM reconstruction with a gradient-descent-based algorithm. Mathematically, however, minimizing the metric in Eq. (6) is a necessary but not sufficient condition for the FPM recovery routine. That is, a good FPM reconstruction satisfies the convergence-related metric well, but a solution derived from the metric may still be incorrect for FPM. Inspired by previous works [10, 23–25], we introduce a nonlinear factor to generalize the original convergence-related metric and define a squared-error metric for the FPM recovery routine as follows:

$$\varepsilon = \sum_{n=1}^{L}\sum_{x,y} W_n(x,y)\left\{\left[I_{s,n}(x,y)+\delta\right]^{\gamma} - \left[I_n(x,y)+\delta\right]^{\gamma}\right\}^2, \tag{7}$$
where δ is a small constant that prevents problems in the gradient computation when I_{s,n} and I_n are close to zero, and γ is a real-valued constant that serves as the nonlinear factor.
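
As a small illustration, the contribution of one illumination angle to Eq. (7) can be computed as follows (our own sketch; the function name and default values of δ and W are assumptions of this example):

```python
import numpy as np

def squared_error_metric(I_sim, I_meas, gamma, delta=1e-3, W=None):
    """Squared-error metric of Eq. (7) for one illumination angle.

    I_sim, I_meas : simulated and captured intensities I_{s,n} and I_n
    gamma         : nonlinear factor (gamma = 0.5 recovers the metric of Eq. (6))
    delta         : small constant guarding the gradient near zero intensity
    W             : optional weighting map W_n(x, y); defaults to uniform unity
    """
    if W is None:
        W = np.ones_like(I_meas)
    diff = (I_sim + delta) ** gamma - (I_meas + delta) ** gamma
    return np.sum(W * diff ** 2)
```

Summing this quantity over all L illumination angles gives the full metric ε.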

The squared-error metric in Eq. (7) can be considered a generalized version of the one in Eq. (6), and each choice of γ yields an FPM reconstruction. In particular, when γ = 0.5, the squared-error metric reduces to the convergence-related metric. Mathematically, this generalization expands the solution domain. We therefore employ a gradient-descent-based algorithm to produce reconstructions for different values of γ and then select the best reconstruction among the results. In this way, we may achieve a better reconstruction than the one obtained with the convergence-related metric alone.

To run the gradient-descent-based algorithm, we first need to calculate the gradients of ε. The gradient of ε with respect to the real and imaginary parts of the sample function, S_g^(j)(k) = S_{g,R}^(j)(k) + iS_{g,I}^(j)(k), is obtained by computing

$$\nabla S^{(j)} = \frac{\partial\varepsilon}{\partial S_{g,R}^{(j)}(k)} + i\,\frac{\partial\varepsilon}{\partial S_{g,I}^{(j)}(k)} = 4\sum_{n=1}^{L}\left[P_g^{(j)}(k+k_n)\right]^*\,\mathcal{F}\Big\{W_n\big[(I_{s,n}+\delta)^{\gamma} - (I_n+\delta)^{\gamma}\big]\,\gamma\,(I_{s,n}+\delta)^{\gamma-1}\sqrt{I_{s,n}}\,e^{i\left(\varphi_s - 2\pi\left(\frac{x u_n}{M}+\frac{y v_n}{N}\right)\right)}\Big\}, \tag{8}$$
where M × N is the size of the captured image and the subscripts R and I denote real and imaginary parts.

The gradient with respect to the real and imaginary parts of the pupil function, P_g^(j)(k) = P_{g,R}^(j)(k) + iP_{g,I}^(j)(k), can be computed in a similar fashion:

$$\nabla P^{(j)} = \frac{\partial\varepsilon}{\partial P_{g,R}^{(j)}(k)} + i\,\frac{\partial\varepsilon}{\partial P_{g,I}^{(j)}(k)} = 4\sum_{n=1}^{L}\left[S_g^{(j)}(k-k_n)\right]^*\,\mathcal{F}\Big\{W_n\big[(I_{s,n}+\delta)^{\gamma} - (I_n+\delta)^{\gamma}\big]\,\gamma\,(I_{s,n}+\delta)^{\gamma-1}\sqrt{I_{s,n}}\,e^{i\varphi_s}\Big\}. \tag{9}$$

With the expressions for the gradients given by Eqs. (8) and (9), we update the sample function and pupil function iteratively with a conjugate-gradient routine [25]:

$$S_g^{(j+1)}(k) = S_g^{(j)}(k) + \frac{\alpha}{\left|P_g^{(j)}(k+k_n)\right|^2_{\max}}\,\nabla S^{(j)}, \tag{10}$$
$$P_g^{(j+1)}(k) = P_g^{(j)}(k) + \frac{\beta}{\left|S_g^{(j)}(k-k_n)\right|^2_{\max}}\,\nabla P^{(j)}, \tag{11}$$
where α and β are real-valued constants, which can be adjusted to alter the step-size of the update. In this paper, α = 1 and β = 1 are used for the results.
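
As a rough illustration of how Eqs. (8)–(11) translate into code, the following sketch performs a single sequential update for one LED (our own simplification: a plain descent step per illumination rather than the full conjugate-gradient routine over all LEDs; a uniform unity weighting is assumed, and the constant factor 4 and the exact sign convention are absorbed into the step sizes α and β):

```python
import numpy as np

def gradient_step(S, pupil, I_n, k_shift, gamma, delta=1e-3, alpha=1.0, beta=1.0):
    """One sequential descent update of the sample spectrum and pupil for a
    single LED, patterned after Eqs. (8)-(11)."""
    ky, kx = k_shift
    M, N = pupil.shape
    cy, cx = S.shape[0] // 2, S.shape[1] // 2
    ys = slice(cy + ky - M // 2, cy + ky - M // 2 + M)
    xs = slice(cx + kx - N // 2, cx + kx - N // 2 + N)
    sub = S[ys, xs]

    # simulated field and intensity, Eq. (2)
    field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
    I_sim = np.abs(field) ** 2

    # residual of the nonlinear metric of Eq. (7), mapped back onto the field
    resid = ((I_sim + delta) ** gamma - (I_n + delta) ** gamma) \
            * gamma * (I_sim + delta) ** (gamma - 1) * field
    resid_k = np.fft.fftshift(np.fft.fft2(resid))        # back to the Fourier domain

    # gradients with respect to the sample sub-spectrum and the pupil, cf. Eqs. (8)-(9)
    grad_S = np.conj(pupil) * resid_k
    grad_P = np.conj(sub) * resid_k

    # normalized descent steps, cf. Eqs. (10)-(11)
    S[ys, xs] = sub - alpha * grad_S / (np.abs(pupil).max() ** 2 + delta)
    pupil = pupil - beta * grad_P / (np.abs(sub).max() ** 2 + delta)
    return S, pupil
```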

Apparently, the pupil function is also corrected during the reconstruction procedure. Jointly optimizing over the sample function and the pupil function also helps suppress noise, because more constraints can be introduced into the procedure. For example, as an assumption of the FPM imaging system, the pupil function is a circularly shaped low-pass filter, so the area of the pupil function outside this circle should always be zero. During the reconstruction procedure, the zero-valued points in the pupil function may become non-zero when updated with noisy data; we call these non-zero errors. We can then use the constraint to correct the pupil function by eliminating the non-zero errors and use the corrected pupil function to update the sample function. Applied iteratively, this constraint eliminates or suppresses the adverse effects of noise during the reconstruction procedure.
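
This support constraint is straightforward to apply in practice; a minimal sketch follows (the function name and the pixel-radius parameter are our own choices):

```python
import numpy as np

def enforce_pupil_support(pupil, na_radius_px):
    """Zero the pupil outside its circular NA support, removing the
    'non-zero errors' introduced by noisy updates."""
    M, N = pupil.shape
    v, u = np.mgrid[-M // 2:M - M // 2, -N // 2:N - N // 2]
    support = (u ** 2 + v ** 2) <= na_radius_px ** 2
    return pupil * support
```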

3. Simulation results

To verify the effectiveness of our method, we first test the algorithms on simulated FPM datasets, where we can evaluate the quality of the reconstructions qualitatively and quantitatively. Without loss of generality, we employ different types of samples, including an image of a boat (512 × 512 pixels, from [26]) and an image of a pathological slide (500 × 500 pixels, from [27]). The correct pupil function is set to be circularly shaped, and its phase is set to zero for simplicity, as shown in Figs. 1(d) and 2(d). We simulate a sequence of 121 images with enough overlap in the Fourier domain to ensure convergence of the algorithm [28]; the simulation procedure is similar to that of [7, 15, 16].


Fig. 1 Reconstructions of the conventional FPM method and our method (γ = 0.15) on the simulated boat dataset. (a) Ground truth of the sample intensity. (b–c) Reconstructed intensity with the conventional FPM method and our method, respectively; each takes 20 iterations. (d) Ground truth of the pupil function; the phase of the pupil function is set to zero for simplicity. (e) Ground truth of the sample phase. (f–g) Reconstructed phase with the conventional FPM method and our method, respectively. (h) The pupil function reconstructed with our method (amplified by 4 times for better visualization).


Fig. 2 Reconstructions of the conventional FPM method and our method (γ = 0.2) on the simulated dataset of the pathological slide. (a) Ground truth of the sample intensity. (b–c) Reconstructed intensity with the conventional FPM method and our method, respectively; each takes 20 iterations. (d) Ground truth of the pupil function; the phase of the pupil function is set to zero for simplicity. (e) Ground truth of the sample phase. (f–g) Reconstructed phase with the conventional FPM method and our method, respectively. (h) The pupil function reconstructed with our method (amplified by 4 times for better visualization).


As mentioned in previous work [2], the reconstruction procedure starts with a pupil-function guess, which is set as a circularly shaped low-pass filter, and a sample-function guess, which is the spectrum of the upsampled low-resolution image. In this case, the upsampling ratio is 10. Comparisons between the results reconstructed by the conventional FPM method and by our method are shown in Figs. 1 and 2. It is clear that the imprecise pupil function severely degrades the quality of the FPM reconstruction, whereas our method produces a successful reconstruction. This is because the imprecise pupil function repeatedly influences the low- and high-frequency components of the sample spectrum, leading to a significant degree of crosstalk between the sample intensity and phase. With a proper nonlinear factor and joint optimization over the sample function and the pupil function, our method eliminates the adverse effect of the pupil aberration and achieves a much better reconstruction. Meanwhile, our method also corrects the pupil aberration of the system during the reconstruction procedure, which can be further studied to characterize the behavior of the lenses.
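
For reference, the initializations described above can be sketched as follows (our own illustration; the nearest-neighbour upsampling and the default pupil radius are placeholder choices, not values specified in the paper):

```python
import numpy as np

def initialize_fpm(raw_central, upsample=10, na_radius_px=None):
    """Build the initial guesses used in the simulations: the sample spectrum
    is the spectrum of the upsampled central low-resolution image, and the
    pupil is a binary circular low-pass filter with zero phase."""
    M, N = raw_central.shape
    # nearest-neighbour upsampling of the amplitude of the central image
    hr = np.kron(np.sqrt(raw_central), np.ones((upsample, upsample)))
    S0 = np.fft.fftshift(np.fft.fft2(hr))
    # binary circular pupil on the low-resolution grid
    if na_radius_px is None:
        na_radius_px = min(M, N) // 4     # placeholder radius; set from the NA in practice
    v, u = np.mgrid[-M // 2:M - M // 2, -N // 2:N - N // 2]
    P0 = ((u ** 2 + v ** 2) <= na_radius_px ** 2).astype(complex)
    return S0, P0
```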

In addition, we introduce a quantitative metric to evaluate the reconstruction quality. Although the error metric given in Eq. (7) is a good measure of convergence to a solution, it only measures how well our estimate matches the captured data. In numerical simulations, we actually have the correct solution (the ground truth for the reconstruction), so an ideal quantitative metric should not only measure the agreement with the captured data but also indicate convergence to the correct solution. Inspired by the work of [7, 25, 29], we introduce the normalized invariant field root mean square error (NIF-RMSE) as

$$E^2 = \frac{1}{L}\sum_{n=1}^{L}\left[\min_{\rho_n}\frac{\sum_{k}\left|\rho_n S_g(k-k_n)P_g(k) - S_t(k-k_n)P_t(k)\right|^2}{\sum_{k}\left|S_t(k-k_n)P_t(k)\right|^2}\right], \tag{12}$$
where S_t(k) is the true sample spectrum, P_t(k) is the true pupil function (the subscript t denoting true), and the parameter ρ_n is given by
$$\rho_n = \frac{\sum_{k} S_t(k-k_n)P_t(k)\,\left[S_g(k-k_n)P_g(k)\right]^*}{\sum_{k}\left|S_g(k-k_n)P_g(k)\right|^2}. \tag{13}$$

This parameter makes the error metric invariant to a constant multiplication and a constant phase offset. As shown in Fig. 3, we employ the NIF-RMSE metric to evaluate the quality of the reconstructions on both simulated datasets. Evidently, an imprecise estimate of the pupil function severely degrades the quality of the FPM reconstructions, which leads to a large NIF-RMSE. For the pathological-slide dataset, the NIF-RMSE value even increases with the number of iterations (the red curve in Fig. 3(b)). In contrast, the reconstructions obtained with our method perform much better in terms of NIF-RMSE on both simulated datasets. The quality of the reconstructions improves iteration by iteration, and the NIF-RMSE finally reaches a value much smaller than that of the conventional FPM method.
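
Equations (12)–(13) can be transcribed into code directly (our own sketch; it reuses the pixel-shift cropping convention of the earlier sketches and evaluates the sums over each pupil-sized sub-spectrum):

```python
import numpy as np

def nif_rmse_squared(S_g, P_g, S_t, P_t, k_shifts):
    """Evaluate E^2 of Eqs. (12)-(13) over the pupil-sized sub-spectra seen by
    each LED."""
    M, N = P_g.shape
    cy, cx = S_t.shape[0] // 2, S_t.shape[1] // 2
    err = 0.0
    for ky, kx in k_shifts:
        ys = slice(cy + ky - M // 2, cy + ky - M // 2 + M)
        xs = slice(cx + kx - N // 2, cx + kx - N // 2 + N)
        g = S_g[ys, xs] * P_g                  # estimated field S_g(k - k_n) P_g(k)
        t = S_t[ys, xs] * P_t                  # true field S_t(k - k_n) P_t(k)
        rho = np.sum(t * np.conj(g)) / np.sum(np.abs(g) ** 2)    # Eq. (13)
        err += np.sum(np.abs(rho * g - t) ** 2) / np.sum(np.abs(t) ** 2)
    return err / len(k_shifts)                 # take the square root to obtain E
```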


Fig. 3 Convergence of the algorithms on the simulated datasets. (a) Convergence of the conventional FPM method and our method (γ = 0.15) on the boat dataset. The NIF-RMSE of Eqs. (12)–(13) is used to evaluate the quality of the reconstructions. (b) Convergence of the conventional FPM method and our method (γ = 0.2) on the pathological-slide dataset.


As discussed in Section 2.2, an appropriate nonlinear factor is important for our method and leads to a satisfying reconstruction. Therefore, choosing a proper nonlinear factor is key to our method. Here, we define the normalized error as ε_N = ε/ε_o, where ε_o is the squared error of the initial sample-function guess, to evaluate the convergence speed of the algorithms and compare different values of γ. As shown in Fig. 4, when γ is very small (red curve), the algorithm needs many more iterations to converge. When γ is large (blue curve), the performance of the algorithm becomes unsteady. A proper γ converges fast and steadily for different types of samples.


Fig. 4 Comparisons of the convergence speed between different γ. We use the normalized error to evaluate the convergence of reconstructions.


A good reconstruction of the sample function and pupil function should match the captured data well, that is, have a small value of the squared-error metric (convergence error). A reconstruction with a large convergence error is therefore a bad solution. As shown in Fig. 5(a), when the nonlinear factor is large, the algorithm converges with a large convergence error, whereas when the nonlinear factor is small, the reconstructions have a much smaller value of the squared-error metric. Meanwhile, we calculate the NIF-RMSE value of all reconstructions, as shown in Fig. 5(b). It is clear that the reconstructions obtained with small γ have a much smaller NIF-RMSE value than those obtained with large γ, verifying that a large convergence error leads to low reconstruction quality.


Fig. 5 Convergence error and NIF-RMSE value of the reconstructions for different γ. Each reconstruction takes 100 iterations to ensure convergence. (a) The convergence error for different γ; the squared-error metric in Eq. (7) is used to evaluate the convergence error. (b) The quality of the reconstructions for different γ, measured by NIF-RMSE.


Based on the above analysis, we know that an appropriate nonlinear factor should be neither too small nor too large, since a small one requires more iterations for convergence and a large one leads to a large convergence error and low reconstruction quality. However, it is still not clear which γ leads to the best reconstruction. A main problem is that there is currently no effective way to decide on the best reconstruction other than visual comparison. A solution with a large convergence error is certainly a bad reconstruction, but the one with the smallest convergence error may not be the best reconstruction. This is because the sample function we reconstruct is actually the Fourier spectrum of the sample. The solution that performs best in convergence error yields a spectrum closest to the correct one, but when it is transformed back into the spatial domain, its performance may be good but not the best. As we can see in Figs. 1(c) and 1(g), the FPM reconstructions show a clear view of the texture but look much darker than the ground truth, even though they perform well in NIF-RMSE (almost near zero), which is also a metric based on the spectrum difference.

To give a perceptual description of our conclusion, we make a visual comparison between reconstructions with different γ on both simulated datasets. As shown in Fig. 6, each reconstruction takes 20 iterations. When γ is too small (γ = 0.01), 20 iterations are not enough for the algorithm to converge, and the reconstruction suffers from severe noise artefacts (Fig. 6(a)). When γ is too large (γ = 0.9), the convergence error reaches a large value, greater than 10^8. As a result, the reconstruction suffers from severe crosstalk between intensity and phase, and the reconstructed intensity is much darker than the others (Figs. 6(d) and 6(h)). When γ is neither too large nor too small (γ = 0.15 and γ = 0.5), the convergence error is small (less than 100). However, the reconstructed phase with γ = 0.5 suffers from significant noise (white spots in Fig. 6(g)), while the reconstruction with γ = 0.15 performs much better. Similarly, we find that varying γ influences the performance of the reconstructions on the pathological-slide dataset as well. As shown in Fig. 7, a γ that is too small or too large leads to bad reconstructions. For the results that reach a small convergence error (γ = 0.2 and γ = 0.4), the quality of the reconstructions differs. As shown in Fig. 7(g), the reconstructed phase suffers from significant noise, which degrades the quality of the reconstruction. In this case, γ = 0.2 is the proper choice.


Fig. 6 A comparison between reconstructions with different γ on the dataset of boat. All reconstructions are recovered by the proposed algorithm (20 iterations), including intensity reconstruction and phase reconstruction. (amplified by 4 times for better visualization).


Fig. 7 A comparison between reconstructions with different γ on the dataset of pathological slide. All reconstructions are recovered by the proposed algorithm with 20 iterations, including intensity reconstruction and phase reconstruction. (amplified by 4 times for better visualization).


In conclusion, a small γ leads to noise-free reconstructions if enough iterations are used. When γ is too small (e.g., γ = 0.01), about 100 iterations are necessary for a successful reconstruction (we tested this on the two simulated datasets). A large γ may lead to a bad reconstruction. Empirically, a reasonable choice of the nonlinear factor is a small one, but not too small.

4. Experimental results

In this section, we demonstrate the performance of our method with experimental datasets captured by a real FPM imaging system. The imaging system consists of a conventional microscope with an NA = 0.1 objective lens, a CCD camera with a 1.8545 µm pixel size, and a programmable color LED matrix. The distance between the LED matrix and the sample plane is 90.88 mm, and the lateral distance between two adjacent LEDs is 4 mm. Figure 8 shows the reconstructions using the FPM blood smear dataset, which includes a sequence of 225 images captured under 0.63 µm illumination. With the low-NA objective lens, the captured raw data are not clear enough to distinguish the blood cells from each other. The initial guess of the pupil function is set as a circularly shaped low-pass filter, whose radius is determined by the NA, with zero phase. The initial guess of the sample function is the Fourier spectrum of the upsampled raw data. Figures 8(b) and 8(e) show the intensity and phase reconstructions of the blood smear with the conventional FPM method. Due to the significant pupil aberration of the objective lens, it is difficult to recognize the contours of the blood cells and to distinguish them from each other. Figures 8(c) and 8(f) show a high-quality reconstruction obtained with our method; in this case, γ = 0.1. In the reconstructed intensity image, the morphology of the blood cells is clear and the shapes of the cell nuclei are recognizable. We can also see the donut shape of the blood cells and distinguish them from each other. Meanwhile, the intensity and phase of the pupil function are recovered during the reconstruction procedure, as shown in Figs. 8(d) and 8(g).
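
The illumination wavevectors k_n follow directly from this geometry. A small sketch under our own assumptions (a 15 × 15 sub-array centred above the sample, 15² = 225 LEDs, and an arbitrary sign convention; the paper does not spell out these conventions):

```python
import numpy as np

# Geometry of the experimental setup: LEDs on a 4 mm grid, 90.88 mm from the
# sample plane, 0.63 um illumination.
wavelength_m = 0.63e-6
led_pitch_m = 4e-3
led_distance_m = 90.88e-3
n_side = 15                                            # assumed 15 x 15 = 225 LEDs

idx = np.arange(n_side) - n_side // 2                  # LED indices centred on the axis
xx, yy = np.meshgrid(idx * led_pitch_m, idx * led_pitch_m)
r = np.sqrt(xx ** 2 + yy ** 2 + led_distance_m ** 2)

# illumination wavevectors k_n = (u_n, v_n) in rad/m for every LED
# (the overall sign is a convention choice)
u_n = -2 * np.pi / wavelength_m * xx / r
v_n = -2 * np.pi / wavelength_m * yy / r
```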


Fig. 8 Experimental reconstructions of the conventional FPM method and our method (γ = 0.1) using FPM blood smear dataset. (a) is the raw data captured under the illumination of the central LED. (b) and (e) are the reconstructed sample intensity and phase with the conventional FPM method respectively. (c) and (f) are the reconstructed sample intensity and phase with our method respectively. (d) and (g) are the intensity and phase of the reconstructed pupil function with our method respectively.


Furthermore, our method is able to suppress noise in the reconstructions as well. Without loss of generality, we apply our method to the dataset of a USAF resolution target, which contains a sequence of 225 images and is captured with a similar setup. Figure 9(a) shows the raw data of the USAF dataset (cropped from the whole resolution target). Due to the simple structure of the target and the slight pupil aberration, both reconstructions perform well, and the line pairs in group 9, element 3 (0.775 µm) can be distinguished. However, the imprecise estimate of the pupil function adds considerable noise to the image reconstructed by the conventional FPM method (shown in Fig. 9(b)) and degrades the quality of the reconstruction. As shown in Fig. 9(c), our method jointly optimizes over the sample function and the pupil function, eliminating the adverse effect of the pupil aberration and improving the quality of the reconstruction.


Fig. 9 Experimental reconstructions of the conventional FPM method and our method (γ = 0.1) using FPM USAF dataset. (a) Raw image of a USAF chart and the bottom image is a magnified view of the central part of the top image (closed by the red rectangle). (b) Reconstruction with the conventional FPM method. (c) Reconstruction with our method.


5. Conclusion and discussion

In this paper, we develop and test a nonlinear optimization algorithm to improve the robustness of the FPM imaging system and the performance of the FPM reconstruction under unknown pupil aberration. By introducing a proper nonlinear factor and jointly optimizing over the sample function and the pupil function, our method reconstructs a better estimate of the sample function and refines the estimate of the pupil function. In this way, without time-consuming and laborious pupil characterization, an aberration-free estimate of the object can be recovered and the robustness of the imaging system improved. Both simulation and experimental results demonstrate the validity of our method.

The limitation of our method is that its running time is slightly larger than that of the conventional FPM method. Since we jointly optimize both the sample function and the pupil function, whereas the conventional FPM method only updates the sample function, the running time of our method may be twice that of the conventional FPM method. In other words, the extra cost is spent on recovering the pupil function. Besides, the lack of a reliable quality metric for experimental reconstructions makes it difficult to determine the best γ for the reconstruction procedure; at present, γ is set empirically.

More broadly, the efficiency of FPM will strongly promote the development of digital pathology, hematology, and neuroanatomy, which require high-SBP observations. With our method, the imaging system becomes more robust and is no longer affected by the pupil aberrations. Meanwhile, the reconstructed pupil function can be further studied to characterize the behavior of the lenses. Therefore, improving the robustness of the system and applying the system to more research areas will be a research emphasis in the near future.

Acknowledgments

We are grateful to the editor and the anonymous reviewers for their insightful comments on the manuscript. The authors acknowledge funding support from the National Natural Science Foundation of China under Grants U1301257, 61571254, and U1201255.

References and links

1. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470–473 (1996). [CrossRef]  

2. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

3. G. Zheng, X. Ou, R. Horstmeyer, and C. Yang, “Characterization of spatially varying aberrations for wide field-of-view microscopy,” Opt. Express 21(13), 15131–15143 (2013). [CrossRef]   [PubMed]  

4. H. Nomura and T. Sato, “Techniques for measuring aberrations in lenses used in photolithography with printed patterns,” Appl. Opt. 38(13), 2800–2807 (1999).

5. J. Wesner, J. Heil, and Th. Sure, “Reconstructing the pupil function of microscope objectives from the intensity PSF,” Proc. SPIE 4767, 4845 (2002).

6. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

7. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]   [PubMed]  

8. J. Marrison, L. Räty, P. Marriott, and P. O’Toole, “Ptychography-a label free, high-contrast imaging technique for live cells using quantitative phase information,” Sci. Rep. 3, 2369 (2013). [CrossRef]  

9. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “A new method of high resolution, quantitative phase scanning microscopy,” Proc. SPIE 7729, 77291I (2010). [CrossRef]  

10. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]   [PubMed]  

11. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171–6180 (2015). [CrossRef]   [PubMed]  

12. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

13. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

14. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

15. Y. Zhang, W. Jiang, L. Tian, L. Waller, and Q. Dai, “Self-learning based Fourier ptychographic microscopy,” Opt. Express 23(14), 18471–18486 (2015). [CrossRef]   [PubMed]  

16. W. Jiang, Y. Zhang, and Q. Dai, “Multi-channel super-resolution with Fourier ptychographic microscopy,” Proc. SPIE 9273, 927336 (2014). [CrossRef]  

17. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]   [PubMed]  

18. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3(1), 27–29 (1978). [CrossRef]   [PubMed]  

19. J. R. Fienup, “Phase retrieval algorithms: a personal tour [invited],” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]   [PubMed]  

20. R. Horstmeyer and C. Yang, “A phase space model of Fourier ptychographic microscopy,” Opt. Express 22(1), 338–358 (2014). [CrossRef]   [PubMed]  

21. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 1385–1391 (2004). [CrossRef]  

22. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20(4), 609–620 (2003). [CrossRef]  

23. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-Resolution scanning X-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]   [PubMed]  

24. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]  

25. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]   [PubMed]  

26. University of Southern California, “SIPI image database,” http://sipi.usc.edu/database/

27. The Computational Imaging Lab at the University of California, Berkeley, “LED array Fourier ptychography dataset,” http://www.laurawaller.com/opensource/.

28. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]   [PubMed]  

29. J. R. Fienup, “Invariant error metrics for image reconstruction,” Appl. Opt. 36(32), 8352–8357 (1997). [CrossRef]  
