Self-learning based Fourier ptychographic microscopy

Open Access

Abstract

Fourier Ptychographic Microscopy (FPM) is a newly proposed computational imaging method aimed at reconstructing a high-resolution wide-field image from a sequence of low-resolution images. These low-resolution images are captured under varied illumination angles and the FPM recovery routine then stitches them together in the Fourier domain iteratively. Although FPM has achieved success with static sample reconstructions, the long acquisition time inhibits real-time application. To address this problem, we propose here a self-learning based FPM which accelerates the acquisition and reconstruction procedure. We first capture a single image under normally incident illumination, and then use it to simulate the corresponding low-resolution images under other illumination angles. The simulation is based on the relationship between the illumination angles and the shift of the sample’s spectrum. We analyze the importance of the simulated low-resolution images in order to devise a selection scheme which only collects the ones with higher importance. The measurements are then captured with the selection scheme and employed to perform the FPM reconstruction. Since only measurements of high importance are captured, the time requirements of data collection as well as image reconstruction can be greatly reduced. We validate the effectiveness of the proposed method with simulation and experimental results showing that the reduction ratio of data size requirements can reach over 70%, without sacrificing image reconstruction quality.

© 2015 Optical Society of America

1. Introduction

The development of traditional optical systems is constrained by the space-bandwidth product (SBP) [1], which forces the user to choose between high resolution and a large field-of-view (FOV). Although large-SBP images generally cannot be captured in a single shot, they can be reconstructed from a sequence of low-resolution images by computational methods. For example, Fourier Ptychographic Microscopy (FPM) [2] bypasses the SBP limit in post-processing. The required low-resolution images are acquired sequentially under different illumination angles, by lighting up a single [2] or multiple LEDs [3] in an arrayed illuminator. The resolution of the final reconstruction is set by the sum of the largest incident angle of the LED source and the largest angle that passes through the objective [4]. As a result, the method can achieve resolution beyond the limit set by the numerical aperture (NA) of the objective, effectively increasing the SBP of the underlying optical system [2].

Efforts to improve the FPM method can be classified into two types. The first improves the robustness of FPM to noise and experimental error. Bian et al. [5] put forward an adaptive FPM recovery framework that achieves a successful reconstruction under a convergence index (a sharpness metric) [6]; the same work [5] also corrected unknown system parameters of the FPM setup, such as the defocus distance. Ou et al. [7] implemented gradient descent and digital correction of pupil function aberrations, inspired by the studies in [8]. Tian et al. [3] derived a pupil function correction similar to [9] and proposed a background subtraction method for a better FPM reconstruction. The second type aims to reduce the acquisition time by capturing fewer images than are needed for a full scan. Bian et al. [10] proposed sparse illumination to reduce the redundancy of the spectrum, and Dong et al. [11] proposed sparse sampling to reduce both acquisition time and processing time. Tian et al. [3] demonstrated that several LEDs can be turned on at once during image collection, reducing the data from 293 to 40 images and the acquisition time by up to 90%, without significant loss of image fidelity. Multiplexing can also be used for the RGB spectral channels, as shown by Dong et al. [12]. These variations on FPM have found applications in quantitative phase imaging [13], gigapixel microscopy [14], high-resolution fluorescence imaging [15] and more [16–18].

Previously, we proposed a new post-processing scheme, adaptive Fourier ptychography (AFP) [10], based on the properties of natural images. Redundancy in the Fourier spectrum [19] means that a small number of informative sub-regions of the spatial spectrum are sufficient for a successful reconstruction. However, that study focused on post-processing of the captured images, so it does not reduce the data collection time. In this paper, considering the relationship between a high-resolution image spectrum and its corresponding low-resolution spectra [19–22], we add a decision-making procedure to the FPM routine, which selects the LEDs corresponding to the most informative measurements so that only the important information is captured. We assume that the similarity ranking between images captured under normally incident illumination and other illumination angles remains invariant across different resolutions of the same scene. Accordingly, we utilize a low-resolution image captured under normally incident illumination to estimate the "potential distribution" of informative sub-regions in the spectrum of the corresponding high-resolution image. In the following sections, we describe the principle of our method and provide simulation and experimental results.

The remainder of this paper is organized as follows. In Section 2, we review the FPM and AFP algorithms and introduce the principle of our method. In Section 3, we provide simulation and experimental results to validate our method quantitatively. In Section 4, we summarize the present work and briefly introduce future work.

2. Theory and method

2.1. Fourier ptychographic microscopy

Derived from ptychography [8, 23–27], the FPM algorithm shares its roots with phase retrieval [9, 28–30] and synthetic aperture imaging [31–34]. Phase retrieval reconstructs the lost phase information from measured intensities, while synthetic aperture imaging combines images from different parts of Fourier space to expand the Fourier passband and improve the achievable resolution. FPM integrates these two aspects and achieves a wide-field, high-resolution reconstruction with quantitative phase [13].

The prototype FPM microscope uses a programmable LED matrix as the illumination source, placed beneath the sample (as shown in Fig. 1(a)). The data collection procedure is straightforward: single LEDs are turned on sequentially and an image of the sample is captured for each, following a lighting scheme that determines which LEDs to collect images from. Since the prototype works only in transmission, the sample should be transparent and thin.

Fig. 1 Our FPM prototype setup. a. The optical system includes a LED matrix illuminator on a commercial microscope. b. A raw image matrix contains 15×15 images, with each corresponding to illumination from a single LED (the red axis label denotes the lateral coordinates of the LED array). The 70 sub-images surrounded by white rectangles have been found to be the most important measurements selected by the proposed method. c. Diagram of setup. A programmable LED matrix is placed beneath the sample plane. We first capture Ic (r) with illumination from the blue LED (placed directly below the sample), and then devise a selection scheme for choosing the most informative images Im (r). LEDs in black represent less informative illumination angles which are not captured. P(k) is the pupil function and O(k − km) represents the exit wave.

The principle of the FPM algorithm rests on the fact that tilted illumination shifts the sample's spectrum, so measurements taken under different illumination angles can be used to expand the recovered spectrum and thereby enhance the resolution. The recovery procedure goes as follows (a minimal code sketch is given after the list):

  • Make an initial guess of the high-resolution object.
  • Extract a sub-region from the spectrum to generate a low-resolution image.
  • Replace the amplitude of the extracted low-resolution image with the square root of the corresponding intensity measurement and use it to update the corresponding sub-region of the spectrum.
  • Repeat the extract-replace-update procedure for other measurements until all the measurements captured under the lighting scheme are employed.
  • Repeat steps 2–4 until the reconstruction converges.
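
The following is a minimal NumPy sketch of this extract-replace-update loop, intended only to make the data flow concrete. It assumes a binary circular pupil, square low-resolution images, pre-computed top-left corners of each spectral sub-region and no pupil-aberration correction; all names (e.g. `measurements`, `corners`, `pupil`) are illustrative rather than taken from the paper.

```python
import numpy as np

def fpm_recover(measurements, corners, pupil, hi_shape, n_iter=10):
    """Sketch of the basic FPM extract-replace-update loop.

    measurements : list of low-resolution intensity images (m x n each)
    corners      : list of (row, col) top-left corners of the sub-region in
                   the high-resolution spectrum addressed by each LED
    pupil        : binary (or complex) pupil function, shape (m, n)
    hi_shape     : shape of the high-resolution reconstruction
    """
    m, n = measurements[0].shape
    # Step 1: initial guess -- the central image, upsampled, with flat phase.
    guess = np.kron(np.sqrt(measurements[0]),
                    np.ones((hi_shape[0] // m, hi_shape[1] // n)))
    spectrum = np.fft.fftshift(np.fft.fft2(guess))
    for _ in range(n_iter):
        for meas, (r0, c0) in zip(measurements, corners):
            # Step 2: extract a sub-region and form a low-resolution estimate.
            sub = spectrum[r0:r0 + m, c0:c0 + n] * pupil
            low = np.fft.ifft2(np.fft.ifftshift(sub))
            # Step 3: keep the phase, replace the amplitude with sqrt(measurement).
            low = np.sqrt(meas) * np.exp(1j * np.angle(low))
            updated = np.fft.fftshift(np.fft.fft2(low))
            # Step 4: write the corrected values back inside the pupil support.
            support = np.abs(pupil) > 0
            spectrum[r0:r0 + m, c0:c0 + n][support] = updated[support]
    # Step 5 (explicit convergence check omitted): return the high-res estimate.
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```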

With a raw image matrix of 15×15 low-resolution measurements (as shown in Fig. 1(b)), the FPM routine recovers a high-resolution image despite the low-NA objective lens (as shown in Fig. 2; we capture images with NA 0.16). However, the FPM prototype is extremely slow, due to the large number of images needed and the long exposure times required for sufficient signal-to-noise ratio (SNR) in the darkfield images (∼ 3 min in [2]), as well as the computationally intensive post-processing (∼ 1.5 s for converting 150×150 raw pixels to 1500×1500 pixels in [2]). In particular, the long acquisition time makes it difficult to apply the FPM routine to real-time observation.

Fig. 2 The FPM reconstruction. a1. The FOV of the raw image. b1. The FOV of the FPM reconstruction. a2. The magnified view of the raw image. b2. The magnified view of the FPM reconstruction.

2.2. Adaptive Fourier ptychography

Long acquisition time hinders the application of FPM to dynamic samples, even though it has achieved success in observing static samples. The large amount of required data is one of the main causes of the long acquisition time, so a straightforward solution is to cut down the data size by collecting fewer images. Studies on the properties of natural images suggest the existence of redundancy in the Fourier spectrum [19]. That is, the spectrum can be divided into a more informative part and a less informative part. The more informative part contains the information necessary for a successful reconstruction, while the less informative part can be ignored without affecting the reconstruction quality. Thus, it is possible to reconstruct successfully while discarding the less informative part. Conventional FPM gives equal weight to all the measurements and all the sub-regions are updated one by one successively (as shown in Fig. 3(b2)). To accelerate the reconstruction, Bian et al. [10] proposed an AFP approach which selectively updates the sub-regions during the reconstruction (as shown in Fig. 3(a2)) and reduces the number of replaced sub-regions by around 30%–60%.

Fig. 3 Simulation results of the AFP reconstruction and the FPM reconstruction. a1. and b1. are reconstructed with 119 measurements by AFP and FPM, respectively. a2. The strategy of spectrum expanding in the AFP algorithm. The algorithm updates the spectrum (enclosed by the yellow circle) circle by circle (shown as the red circles) from the center outward (along the purple arrow). b2. The strategy of spectrum expanding in the FPM algorithm. The algorithm updates the spectrum from top left to bottom right (along the yellow and purple arrows). a3. and b3. are the spectrum (zoomed in) reconstructed by AFP and FPM algorithms, respectively, and (kx,ky) denotes lateral coordinates in the Fourier domain.

The AFP routine employs a different update strategy following two rules:

  • The maximum amplitude of the entries corresponding to the updated sub-region should be larger than a manually set threshold.
  • The closest distance between the center of an updated sub-region and any neighboring updated sub-region should be larger than a step size (a minimal sketch of these two rules follows this list).
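
As one possible reading of these two rules, the sketch below marks which spectral sub-regions AFP would update. The candidate ordering, the names (`afp_select`, `amp_threshold`, `min_step`) and the interpretation of "step size" as a minimum center-to-center distance are illustrative assumptions, not taken from [10].

```python
import numpy as np

def afp_select(spectrum, centers, sub_shape, amp_threshold, min_step):
    """Sketch of the two AFP update rules listed above (illustrative).

    spectrum      : current estimate of the high-resolution spectrum
    centers       : candidate sub-region centers (row, col), ordered
                    center-outward as in Fig. 3(a2)
    sub_shape     : (m, n) size of each sub-region
    amp_threshold : rule 1 -- minimum peak amplitude inside a sub-region
    min_step      : rule 2 -- minimum distance to any already-accepted center
    """
    m, n = sub_shape
    accepted = []
    for (r, c) in centers:
        sub = spectrum[r - m // 2:r - m // 2 + m, c - n // 2:c - n // 2 + n]
        if np.max(np.abs(sub)) <= amp_threshold:           # rule 1
            continue
        if any(np.hypot(r - ra, c - ca) <= min_step        # rule 2
               for (ra, ca) in accepted):
            continue
        accepted.append((r, c))
    return accepted                                         # sub-regions to update
```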

Compared to the traditional FPM method, AFP distinguishes the most informative measurements from the others and updates the corresponding sub-regions selectively (as shown in Fig. 3(a3)). In this instance, the initial input of the reconstruction is the zero-padded extension of the spatial spectrum of the low-resolution image under normally incident light. As a result, AFP yields a better reconstruction when the same number of measurements is used (see the regions surrounded by red rectangles in Fig. 3(a1) and Fig. 3(b1); both reconstructions use 119 low-resolution measurements). Even though this method greatly decreases the number of measurements used without sacrificing image quality, AFP does not shorten the data collection procedure, since the informative measurements are post-selected after all the measurements have been captured.

2.3. Principle of self-learning based FPM

Considering the success that AFP has achieved in eliminating redundancy, it is clear that a successful reconstruction can be achieved by only employing the most informative measurements. Therefore, if the importance of the measurements can be ranked before capture, the measurements of less importance need not be captured. A practical solution is to estimate the spectrum distribution of the reconstruction from what we can directly capture in a single image.

Many studies have focused on the relationship between low-resolution images and their corresponding high-resolution ones, since they appear similar to the human visual system. Redundancy in the Fourier spectrum is fundamental for AFP, as it allows all the measurements to be ranked by importance. Without loss of generality, we transform a simple structured image (the 1951 USAF test chart) and complex structured images (Lena and a city scene) into the Fourier domain and transform them back into the spatial domain after truncating the small values (i.e. the Fourier coefficients whose magnitudes are smaller than a manually set threshold). Here we introduce the reserved ratio, the fraction of the whole spectrum retained after truncation. Generally, the shape of the resulting curve reflects the distribution of the spectrum, which is closely related to the structure of the image [19, 22]. The relationship between the reserved ratio and the root-mean-square error (RMSE, the difference between the image after truncation and the ground truth) is shown in Fig. 4, with the RMSE curves decreasing as the reserved ratio grows. From Fig. 4, we find that images of different resolutions corresponding to the same scene are likely to share the same curve shape. These observations motivate us to propose a new acceleration method, termed self-learning based FPM, which utilizes a single captured measurement to estimate the spectrum distribution of the final reconstruction and to make a selection scheme for data capture.
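
A small script in the spirit of this truncation experiment is sketched below. The retention rule (keeping a fixed fraction of the largest-magnitude Fourier coefficients rather than applying an absolute threshold) and the function names are our own illustrative choices.

```python
import numpy as np

def truncate_spectrum(img, reserved_ratio):
    """Keep only the largest-magnitude Fourier coefficients of `img`.

    `reserved_ratio` is the fraction of coefficients retained; all others
    are set to zero before transforming back to the spatial domain.
    """
    spec = np.fft.fft2(img)
    mags = np.abs(spec).ravel()
    # Threshold chosen so that roughly `reserved_ratio` of coefficients survive.
    k = max(1, int(round(reserved_ratio * mags.size)))
    thresh = np.partition(mags, -k)[-k]
    truncated = np.where(np.abs(spec) >= thresh, spec, 0)
    return np.real(np.fft.ifft2(truncated))

def rmse(a, b):
    """Root-mean-square error between two images."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

# Example: sweep the reserved ratio and record the reconstruction error.
# img = ...  # a grayscale test image, e.g. the USAF chart
# errors = [rmse(img, truncate_spectrum(img, r)) for r in np.linspace(0.01, 1.0, 50)]
```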

Fig. 4 Redundancy in Fourier Spectrum of images at different resolutions. a. Images from USC-SIPI [36]. b1. The relationship between reserved ratio of the whole Fourier spectrum and recovered image quality. Here we use images with higher resolution. b2. The relationship between reserved ratio of the whole Fourier spectrum and recovered image quality. Here we use images with lower resolution.

It should be clarified that the spectrum distributions of a high-resolution image and its corresponding low-resolution one are likely to be similar. That is, predicting the spectrum distribution of an image from its corresponding low-resolution one can be taken as a coarse estimation, as shown in Fig. 5. To score the reconstruction quality quantitatively, we introduce the structural similarity metric (SSIM [35], details in Appendix A) for evaluating the importance of the measurements. Generally, the importance of the measurements decreases along the radial direction, which accords with the intuition that the central LED plays a more important role than the outer LEDs. Besides, the assumption that the distribution exhibits some directionality is valid in most cases; for example, for the USAF test chart, the more informative measurements are situated along the horizontal and vertical directions. Considering these two properties, the spectrum distributions of the high-resolution image and its corresponding low-resolution one are similar. As shown in Fig. 5, the two selection schemes (the red parts in b1 and b2), each representing the 77 most informative images, have similar shapes. Here we introduce two sets S1 and S2 to represent the two selection schemes, each consisting of 77 elements considered as the images that should be captured. We then define the similarity degree SD = |S1 ∩ S2| / |S1|, where |S1 ∩ S2| is the number of elements in both S1 and S2 and |S1| is the number of elements in S1. For the 1951 USAF test chart, the similarity degree is over 84%.

Fig. 5 The spectrum similarity of the 1951 USAF test chart. a1 and a2 are the importance distributions of the measurements from the high-resolution and the low-resolution image, respectively. b1 and b2 are the corresponding selection schemes from the high-resolution and the low-resolution image, respectively (both using 77 measurements). Red denotes selected measurements, blue denotes discarded ones while yellow and green denote the boundary between them (red and blue).

Before explaining the flowchart of our method in detail, it is necessary to give some mathematical background. In FPM, images are taken while single LEDs are turned on sequentially. Therefore, the intensity at the image plane resulting from illumination by a single LED (neglecting magnification and noise) is [3]

i_m(\mathbf{r}) = \left| \mathcal{F}\left\{ O(\mathbf{k} - \mathbf{k}_m)\, P(\mathbf{k}) \right\}(\mathbf{r}) \right|^2, \qquad (1)

where r = (x, y) denotes the lateral coordinates in the sample plane, \mathcal{F}\{\cdot\}(\mathbf{r}) is the 2D Fourier transform, k = (kx, ky) denotes the lateral coordinates in the Fourier domain, km = (kmx, kmy) is the spatial frequency of the local plane wave emitted by each LED, m is the index of the LED (m = 1, 2, …, NLED, with NLED being the total number of LEDs in the array), P(k) is the pupil function and O(k − km) represents the exit wave at the pupil plane, as shown in Fig. 1(c). Based on the intensity measurements captured with LED illumination from various angles, the FPM reconstruction can be transformed into a non-convex optimization problem, formulated as

\min_{O(\mathbf{k}),\, P(\mathbf{k})} \sum_{m=1}^{N_{\mathrm{LED}}} \sum_{\mathbf{r}} \left| I_m(\mathbf{r}) - \left| \mathcal{F}\left\{ O(\mathbf{k} - \mathbf{k}_m)\, P(\mathbf{k}) \right\}(\mathbf{r}) \right|^2 \right|^2, \qquad (2)

where Im(r) is the intensity of the image captured under the illumination of the mth LED. In practice, the reconstruction is usually performed by successively updating the amplitude of specific sub-regions, replacing it with the square root of the captured intensity measurement.

The flowchart of our method is shown in Fig. 6. In the first step, we capture a single image under normally incident illumination (central LED), since it is considered to be the most informative measurement. In the second step, the low-resolution images under other illumination angles are simulated based on the assumption that illuminating a thin sample by an oblique plane wave is equivalent to shifting the center of the sample’s spectrum in the Fourier domain, as in [2, 37]. The simulation can be expressed as

I(u - u_0,\, v - v_0) = \mathcal{F}\left\{ \Theta(x, y) \right\} \quad \text{with} \quad \Theta(x, y) = \left\{ I_c(x_i, y_i)\, e^{\, j 2\pi \left( \frac{u_0 x_i}{M} + \frac{v_0 y_i}{N} \right)},\ (x_i, y_i) \in (x, y) \right\}, \qquad (3)

where Ic(x, y) is the image captured with the central LED, I(u, v) is its Fourier transform, (u0, v0) is the shift of the center of the sample’s spectrum and M × N is the size of the captured image. In the simulation procedure, we extract sub-regions circle by circle from the spectrum I(u, v) of the captured image Ic(x, y) as indicated in Eq. (3), and perform an inverse fast Fourier transform (IFFT) to generate images corresponding to various illumination angles. All the simulated images have a much lower resolution than the captured image Ic(x, y).
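
The sketch below illustrates one way to implement this step-2 simulation: for each off-axis LED, a circular sub-region of the central image's spectrum, displaced by (u0, v0), is cut out and inverse-transformed. The circular-pupil assumption, keeping the full array size instead of cropping to a smaller grid, and the names (`simulate_oblique_images`, `shifts`, `radius`) are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def simulate_oblique_images(central_img, shifts, radius):
    """Sketch of the step-2 simulation (Eq. (3)) assuming a circular pupil.

    central_img : image captured under the central (normally incident) LED
    shifts      : list of (u0, v0) spectrum shifts in pixels, one per LED
    radius      : pupil radius in pixels of the central image's spectrum
    """
    M, N = central_img.shape
    spec = np.fft.fftshift(np.fft.fft2(central_img))
    v, u = np.mgrid[0:M, 0:N]                  # v: row index, u: column index
    simulated = []
    for (u0, v0) in shifts:
        # Circular sub-region of the spectrum centred at the shifted position.
        mask = (u - (N // 2 + u0)) ** 2 + (v - (M // 2 + v0)) ** 2 <= radius ** 2
        sub = np.where(mask, spec, 0)
        # Back to the spatial domain: a simulated low-resolution intensity image.
        simulated.append(np.abs(np.fft.ifft2(np.fft.ifftshift(sub))) ** 2)
    return simulated
```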

Fig. 6 Flowchart of the proposed method. Step 1: capture a single image under normally incident illumination. Step 2: generate simulated images with the measurement. Step 3: rank the simulated images by their relative importance and make the selection scheme. Step 4: capture measurements according to the selection scheme. Step 5: employ the measurements to perform the FPM reconstruction.

In the third step, SSIM is employed as a metric to distinguish the importance of the simulated images. Here, we set the simulated image corresponding to the central sub-region as the reference image, and the SSIM value between each of the other images and the reference scores all the simulated images. We then set a threshold to select the most informative images and form the selection scheme as

A_{x,y} = \begin{cases} 1, & \text{if } S(x, y) > H \\ 0, & \text{otherwise.} \end{cases} \qquad (4)

The selection scheme is described by a binary coding matrix A = [Ax,y], where (x,y) is the position of the LED corresponding to each simulated image. S(x,y) is the SSIM value of each simulated image and H is the manually set threshold.

In the fourth step, we sequentially capture real measurements for those Ax,y = 1. In the final step, the captured measurements are employed to perform the FPM reconstruction.
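
Steps 3 and 4 can be summarized by the sketch below, which scores every simulated image against the central reference with SSIM and builds the binary coding matrix A of Eq. (4). Using scikit-image's structural_similarity is one convenient choice; indexing the central LED as (0, 0) and the dictionary-based interface are illustrative assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def make_selection_scheme(simulated, led_positions, threshold):
    """Build the binary coding matrix A of Eq. (4) (sketch, illustrative names).

    simulated     : dict mapping LED position (x, y) -> simulated low-res image
    led_positions : iterable of (x, y) LED positions in the array
    threshold     : the manually set threshold H
    """
    reference = simulated[(0, 0)]                 # central sub-region as reference
    data_range = reference.max() - reference.min()
    A = {}
    for pos in led_positions:
        score = ssim(reference, simulated[pos], data_range=data_range)
        A[pos] = 1 if score > threshold else 0    # Eq. (4)
    return A

# Only the LEDs with A[pos] == 1 are switched on and captured in step 4.
```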

It is worth mentioning that the extra time consumed in steps 2 and 3 (i.e. after acquiring the first image and before acquiring the others) is very short compared with the whole capturing process. Assuming each simulated image contains n raw pixels and the number of simulated images is ms (one per candidate LED), the computational complexity of the simulation procedure can be estimated as follows: 1) In step 2, an IFFT is performed to generate each simulated image at a cost of n · log(n), so generating all ms images costs ms · n · log(n). 2) In step 3, computing the SSIM of each simulated image costs n, so ms images cost ms · n; comparing each SSIM value to the threshold adds a negligible ms operations. In total, the cost of the simulation procedure is on the order of ms · n · log(n). In practice, each simulated image is far smaller than the captured image (for a magnification factor of 10, the simulated image contains 100 times fewer pixels than the captured one), so the simulation time is small. In our method, for an image with 150 × 150 raw pixels captured under the illumination of the central LED, the extra processing time was less than 1 s in Matlab, using a personal computer with an Intel i7 CPU (no GPU).

Compared to the standard FPM routine, the supplementary procedure in our method reduces the acquisition time by reducing the number of images needed, often by as much as 70%. For the USAF test chart recovery, our method captures only 70 low-resolution images, while FPM uses 225. For the stained dog cardiac region sample, fewer than 70 low-resolution images are captured with our method, while 293 are captured for FPM.

3. Results

In this section, we validate our method in simulation and in practice, comparing it with traditional FPM and showing reduced time for data collection as well as image reconstruction.

3.1. Simulation results

We first evaluate the efficiency of our method with simulations and provide quantitative analysis. The simulation is set up to match a feasible experimental design (as described in [3]), with incident wavelength of 629.2 nm, pixel size of 1.625 µm and an objective NA of 0.1. We set the LED matrix to be 67.5 mm away from the sample plane and the lateral distance between two adjacent LEDs is 4 mm. Using this geometry, a successful FPM reconstruction can be achieved with 293 measurements captured from the LEDs, making up a circle of NA=0.59 on the array [3]. The simulations use two test objects: the 1951 USAF test chart and a complex structural image (the image of an airport), both shown in Fig. 7. With the estimated spectrum distribution of the final reconstruction, the proposed selection scheme has been applied to find the most informative measurements to capture. Using our scheme, 77 measurements are selected for reconstruction of the USAF test chart and 89 measurements are selected for the airport scene, which leads to an image capture reduction of ∼70%.

Fig. 7 A comparison between the traditional FPM algorithm and the proposed method, with simulated data. a. Simulation of low-resolution images captured with the central LED for a 1951 USAF test chart and an image of an airport (USC-SIPI dataset [36]). b. The ground truth high-resolution image. c. The reconstruction and recovered spectrum (zoom in) of the proposed method with fewer measurements. The reconstruction of the 1951 USAF test chart utilizes 77 measurements and the reconstruction of the image of airport utilizes 89 measurements. d. The reconstruction and recovered spectrum of the FPM method with all 293 measurements.

Next, we study the relationship between reconstruction quality and different selection schemes, as shown in Fig. 8. For each of our test images, we simulate the FPM result and compare the result utilizing two objective image quality assessments, the Peak Signal to Noise Ratio (PSNR) and SSIM. The PSNR is defined as

\mathrm{PSNR} = 10 \times \log_{10} \frac{255^2 \times M \times N}{\sum_{x,y} \left( \sqrt{I_{\mathrm{reco}}} - \sqrt{I_{\mathrm{refe}}} \right)^2}, \qquad (5)

where Ireco is the intensity of the FPM reconstruction, Irefe is the intensity of the ground truth, the square root converts from intensity to amplitude and M × N is the size of the ground truth image.
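
For completeness, Eq. (5) translates directly into the short function below; note the square roots that convert the intensities to amplitudes before the error is accumulated.

```python
import numpy as np

def psnr(i_reco, i_refe):
    """PSNR of Eq. (5); inputs are intensity images of identical shape."""
    M, N = i_refe.shape
    err = np.sum((np.sqrt(i_reco) - np.sqrt(i_refe)) ** 2)
    return 10 * np.log10(255.0 ** 2 * M * N / err)
```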

Fig. 8 FPM reconstructions with different selection schemes. a. The PSNR of the reconstructions of the 1951 USAF test chart and the image of airport with different selection schemes. b. The SSIM of the reconstructions of the 1951 USAF test chart and the image of airport with different selection schemes.

In the proposed method, we employ the spectrum distribution of the captured measurement to estimate the spectrum distribution of the corresponding reconstruction and to select the most informative LEDs. During this process, we rank all the measurements by their relative importance (using the SSIM metric, as shown in Fig. 5). Therefore, it is possible to select the ns most informative measurements (ns = 1, 2, 3, …, 293) and utilize them for the FPM reconstruction. Each selection scheme corresponds to one ns-value and leads to a different reconstruction. By comparing each reconstruction with the ground truth, we can quantitatively evaluate the reconstruction quality. Figure 8 shows the relationship between reconstruction quality and ns-value (number of images used). The SSIM metric focuses on the structural comparison between the two images, while the PSNR metric is a global comparison including structure, noise, etc. For the image with simple structure (USAF test chart), the curves (blue curves) are smooth with small fluctuations, which indicates low noise during the reconstruction. For the image with complex structure (airport), the PSNR curve fluctuates strongly while the SSIM curve varies smoothly, which indicates that the noise level increases with the complexity of the image structure. The SSIM metric is clearly more robust for both simple and complex structures, supporting our choice of SSIM for evaluating measurement importance. Further, the reconstruction quality generally increases with the ns-value, and images of simple structure need fewer measurements for a satisfying reconstruction than those with complex structure, as expected.

3.2. Experimental results

We next apply our method to experimental datasets captured with our prototype setup (shown in Fig. 1(a)), which is similar to [2] and includes an Olympus BX-43 microscope, a digital CCD camera (QIClick, pixel size 6.5 µm) and a 6 mm pitch LED matrix containing 32×32 surface-mounted, full-color LEDs. The NA of the objective lens used (4x) is 0.16 and the distance between the LED matrix and the sample plane is ∼95 mm. Because of the low light levels delivered by the outer LEDs at the sample plane, different exposure times, chosen according to LED position, are employed to balance noise effects. Each image is normalized according to its exposure time as I_norm = I_m / T_expo, where I_norm is the normalized measurement, I_m is the raw measurement and T_expo is the exposure time of the measurement corresponding to the LED indexed by m.

With our prototype, we capture a full dataset containing 15×15 measurements (as shown in Fig. 1(b)) for the FPM reconstruction (as shown in Fig. 9(b)). Under direct observation, we can only distinguish the line pairs in group 7 (group 7, element 3, line width 3.1 µm), whereas in the FPM reconstruction the line pairs in group 9 (group 9, element 1, line width 0.977 µm) can be distinguished. In our self-learning method, we run the algorithm to select the most informative measurements after capturing a single image with the central LED. The number of acquired images is reduced from 225 to 70, a reduction of about 70%. In fact, the time saved in the data-collection procedure exceeds 70%, since the discarded images, mostly those captured under illumination from marginal LEDs, require longer exposure times than the central brightfield ones.

Fig. 9 Experimental reconstruction of the 1951 USAF test chart. a. The captured image under the illumination of the central LED. b. The reconstruction and recovered spectrum of the FPM method with 225 measurements. c. The reconstruction and recovered spectrum of our self-learning method with 70 measurements.

Without loss of generality, we also apply our selection scheme to a biological dataset (UCB dataset [38]), in order to test the method on realistic microscope images of biological samples. The dataset was captured with a similar optical system [3] by scanning through 293 LEDs. Since biological samples contain more detail than the 1951 USAF test chart, the complicated sample features may impact the image quality of the reconstructions. While the dataset includes a full set of 293 images (one from each LED), we use only the central LED image to decide which images should be included in the FPM reconstruction. Compared to the 293 measurements employed by the FPM method (as shown in Fig. 10(b2) and Fig. 10(c2)), our method selects about 20% of them, corresponding to the most informative ones, and utilizes them for a successful reconstruction (shown in Fig. 10(b3) and Fig. 10(c3)). Comparing the recovered spectra in Fig. 10(b3) and Fig. 10(c3), it is clear that different microscopy images have different spectrum distributions, which lead to recovered spectra of different shapes.

Fig. 10 Reconstruction of a stained dog cardiac region sample experimental dataset ([38]). a. Full-FOV raw image of a stained dog cardiac region sample. b1. Zoom-in of the region in the red rectangle. b2. The recovered spectrum and reconstructed intensity of the FPM method with 293 measurements. b3. The recovered spectrum and reconstructed intensity of our method with 65 measurements. c1. Zoom-in of the region in the blue rectangle. c2. The recovered spectrum and reconstructed intensity of the FPM method with 293 measurements. c3. The recovered spectrum and reconstructed intensity of our method with 53 measurements.

4. Conclusion and discussion

We have demonstrated a simple yet effective method, termed self-learning based Fourier Ptychographic Microscopy (FPM), to accelerate the conventional FPM. By estimating the spectrum distribution of the reconstruction from its low-resolution image and collecting only the important images, we improve the speed of FPM in terms of both data acquisition and image reconstruction. Both simulation and experimental results demonstrate the validity of our method.

The limitation of our method is that it relies on a coarse estimation of the sample's spectrum, which may degrade the reconstruction quality when the structural similarity assumption does not hold. For situations where the differences are small, particularly in the high spatial frequency regions, a qualitative assessment may not be a convincing and universal quality index. Thus, a quantitative and effective no-reference quality assessment approach, which may lead to a better reconstruction, is desired.

More broadly speaking, the efficiency of FPM will strongly promote the development of digital pathology, hematology and neuroanatomy, which require high-SBP observations. Therefore, applying this method to traditional optical systems will remain a research emphasis until the goal of real-time observation is reached.

Appendix A: A brief introduction to structural similarity

Image quality assessment is widely employed in image processing to evaluate reconstruction quality; common metrics include the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) [35]. The former is an overall assessment focused on pixel-value differences, while the latter pays more attention to structural differences. The human visual system is usually more sensitive to structural information.

A simulation result is shown in Fig. 11 as a typical example of the PSNR and SSIM comparison. Fig. 11(a) is the ground truth, Fig. 11(b) is a slightly blurred image with zero-mean random noise and Fig. 11(c) is a severely blurred image. Under a perceptual (subjective) quality assessment, the slightly blurred image should be the better one. However, the PSNR index assigns the severely blurred image a higher score, while the SSIM index agrees with the subjective assessment.

Fig. 11 Comparison of an image of biological tissue with different distortions. a. The ground truth. b. A slightly blurred image with added zero-mean random noise. PSNR = 12.4975 and MSSIM = 0.49736. c. A severely blurred image. PSNR = 12.5754 and MSSIM = 0.2397.

The flowchart of the SSIM measurement system is shown in Fig. 12, which also explains why SSIM is closer to the human visual system. The similarity between two natural images is decomposed into three parts: the luminance (illumination) measurement, the contrast measurement and the structural measurement.

Fig. 12 The flowchart of the SSIM measurement system. The input of the SSIM assessment consists of two images, one is considered to have perfect quality and then the similarity measure can serve as a quantitative measurement of the quality of the other one. SSIM consists of three property comparisons: luminance comparison, contrast comparison and structure comparison.

The illumination measurement can be estimated as the mean intensity of an image:

\mu_l = \frac{1}{M \times N} \sum_{(x_i, y_i) \in (x, y)} I_l(x_i, y_i), \qquad l = 1, 2, \qquad (6)
where M × N is the size of the image and Il is the intensity of the input image.

The contrast measurement can be estimated as the standard deviation of an image:

\sigma_l = \sqrt{ \frac{1}{M \times N - 1} \sum_{(x_i, y_i) \in (x, y)} \left( I_l(x_i, y_i) - \mu_l \right)^2 }, \qquad l = 1, 2. \qquad (7)

The structural measurement can be estimated as the covariance coefficient between two images:

\sigma_{1,2} = \frac{1}{M \times N - 1} \sum_{(x_i, y_i) \in (x, y)} \left( I_1(x_i, y_i) - \mu_1 \right) \left( I_2(x_i, y_i) - \mu_2 \right). \qquad (8)

The SSIM index between two images is a combination of the three:

\mathrm{SSIM}(I_1, I_2) = \frac{ \left( 2\mu_1 \mu_2 + C_1 \right) \left( 2\sigma_{1,2} + C_2 \right) }{ \left( \mu_1^2 + \mu_2^2 + C_1 \right) \left( \sigma_1^2 + \sigma_2^2 + C_2 \right) }, \qquad (9)
where C1 and C2 are small constants that stabilize the division when the denominators approach zero.
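
Equations (6)–(9) map directly onto the global-SSIM sketch below. The constants C1 and C2 follow the common defaults of Wang et al. [35] (any small positive constants serve the same stabilising role), and the blockwise MSSIM described next would simply apply this function to local windows and average the results.

```python
import numpy as np

def ssim_index(i1, i2, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM index from Eqs. (6)-(9)."""
    i1 = i1.astype(float)
    i2 = i2.astype(float)
    mu1, mu2 = i1.mean(), i2.mean()                        # Eq. (6): luminance
    s1, s2 = i1.std(ddof=1), i2.std(ddof=1)                # Eq. (7): contrast
    s12 = np.sum((i1 - mu1) * (i2 - mu2)) / (i1.size - 1)  # Eq. (8): structure
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2) /
            ((mu1 ** 2 + mu2 ** 2 + c1) * (s1 ** 2 + s2 ** 2 + c2)))  # Eq. (9)
```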

In practice, when the overall image quality needs to be evaluated, we apply SSIM to local image segments instead of directly to the entire image and average the results, termed the mean SSIM index (MSSIM), which leads to a more effective assessment. Following the discussion above, the most informative measurement can be considered to be the one carrying the most structural information. Since the central measurement is considered the most informative, it can be set as the reference image, which helps to quantitatively estimate the image quality of the other measurements. With these quantitative estimations, it is convenient to select the most informative measurements and their corresponding LEDs.

Acknowledgments

We are grateful to the editor and the anonymous reviewers for their insightful comments on the manuscript. The authors acknowledge funding support from the National Natural Science Foundation of China under Grant U1301257 and U1201255.

References and links

1. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470–473 (1996). [CrossRef]  

2. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

3. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

4. J. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

5. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

6. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20(4), 609–620 (2003). [CrossRef]  

7. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]   [PubMed]  

8. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]   [PubMed]  

9. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 1385–1391 (2004). [CrossRef]  

10. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive sparse illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014). [CrossRef]   [PubMed]  

11. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]   [PubMed]  

12. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

13. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]   [PubMed]  

14. G. Zheng, X. Ou, and C. Yang, “0.5 gigapixel microscopy using a flatbed scanner,” Biomed. Opt. Express 5(1), 1–8 (2014). [CrossRef]   [PubMed]  

15. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22(17), 20856–20870 (2014). [CrossRef]   [PubMed]  

16. S. Dong, K. Guo, P. Nanda, R. Shiradkar, and G. Zheng, “FPscope: a field-portable high-resolution microscope using a cellphone lens,” Biomed. Opt. Express 5(10), 3305–3310 (2014). [CrossRef]   [PubMed]  

17. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imag. Graph. 42, 38–43 (2014). [CrossRef]  

18. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

19. M. W. Marcellin, JPEG2000 Image Compression Fundamentals, Standards and Practice (Springer, 2002).

20. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]   [PubMed]  

21. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009). [CrossRef]   [PubMed]  

22. J. Sun, Z. Xu, and H. Shum, “Image super-resolution using gradient profile prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

23. J. M. Rodenburg and R. H. T. Bates, “The theory of super-resolution electron microscopy via Wigner-distribution deconvolution,” Philos. Trans. Royal Soc. London Ser. A 339(1655), 521–553 (1992). [CrossRef]  

24. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]   [PubMed]  

25. J. M. Rodenburg, A. C. Hurst, A. G. Cullis, B. R. Dobson, F. Pfeiffer, O. Bunk, C. David, K. Jefimovs, and I. Johnson, “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98(3), 034801 (2007). [CrossRef]   [PubMed]  

26. J. M. Rodenburg, Ptychography and Related Diffractive Imaging Methods (Advances in Imaging and Electron Physics, 2008).

27. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning x-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]   [PubMed]  

28. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]   [PubMed]  

29. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3(1), 27–29 (1978). [CrossRef]   [PubMed]  

30. J. R. Fienup, “Phase retrieval algorithms: a personal tour,” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]   [PubMed]  

31. T. M. Turpin, L. H. Gesell, J. Lapides, and C. H. Price, “Theory of the synthetic aperture microscope,” Proc. SPIE 2566, 230–240 (1995). [CrossRef]  

32. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]   [PubMed]  

33. V. Mico, Z. Zalevsky, and J. Garca, “Synthetic aperture microscopy using off-axis illumination and polarization coding,” Opt. Commun. 276(2), 209–217 (2007). [CrossRef]  

34. M. Kim, Y. Choi, C. Fang-Yen, Y. Sung, R. R. Dasari, M. S. Feld, and W. Choi, “High-speed synthetic aperture microscopy for live cell imaging,” Opt. Lett. 36(2), 148–150 (2011). [CrossRef]   [PubMed]  

35. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  

36. University of South California, “SIPI image database,” http://sipi.usc.edu/database/.

37. W. Jiang, Y. Zhang, and Q. Dai, “Multi-channel super-resolution with Fourier ptychographic microscopy,” Proc. SPIE 9273, 927336 (2014). [CrossRef]  

38. The computational image lab at University of California Berkeley, “LED array Fourier ptychography dataset,” http://www.laurawaller.com/opensource/.

