Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy


Abstract

Fourier ptychographic microscopy is a recently developed computational microscopy technique that achieves wide-field, super-resolution complex imaging. The method illuminates the sample with a light source array and then computationally synthesizes, in the Fourier domain, the different images corresponding to each source. Knowledge of the exact relative position of the light sources and the sample is critical for the quality of the final recovered image. In this paper, we present an iterative approach for correcting the position in the Fourier domain based on Newton's method. We also present an analysis showing the relation between the position error and the deterioration of the final recovery quality. The effectiveness of the presented method in improving the quality of the final recovered image is demonstrated using simulation and experimental results. Moreover, the method is shown to be more stable and robust to noise than the state-of-the-art algorithm.

© 2017 Optical Society of America

1. Introduction

Fourier ptychography (FP) [1–4] is a super-resolution computational imaging method that bypasses the low numerical aperture (NA) limit of the imaging system by combining aperture synthesis [5–12] with phase retrieval [13–20]. Unlike conventional aperture synthesis, FP does not require the phase of the object to be measured; the phase is computed by the FP recovery procedure itself, in close analogy to phase retrieval: both alternate between the spatial frequency domain and the spatial domain, using the object support and the modulus as constraints. In contrast to conventional phase retrieval, however, the object support constraint in FP is imposed in the frequency domain, while the modulus constraint is imposed in the spatial domain. FP shares its roots with the ptychographical iterative engine (PIE) [21–27], which belongs to the family of lensless phase retrieval techniques.

An inexpensive, programmable LED array replaces the illumination source of a microscope to form a typical Fourier ptychographic microscope, termed FPM [1]. In general, the numerical aperture (NA) of the objective lens in an FPM system is small, resulting in a large field of view (FOV) and small aberrations [28]. The LED array is placed under the sample to provide angle-varied illumination. A low-resolution intensity image of the sample is recorded by turning on each quasi-monochromatic LED element in turn; each image corresponds to a unique region of the sample’s spectrum determined by the NA of the objective lens and the illumination angle. A high-resolution complex spectrum of the sample is recovered by iteratively synthesizing the low-resolution intensity images in the frequency domain to expand the passband, using the same strategy as in PIE. The final synthetic NA equals the sum of the objective NA and the illumination NA [28].
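For concreteness, the synthetic-NA relation can be checked numerically. The following is a minimal Python/NumPy sketch with our own variable names (not the authors' code); the grid size, LED pitch, and LED height are the values used in the simulations of Section 3.

```python
import numpy as np

def illumination_na(r, c, d_led, h):
    """Illumination NA of the LED at grid position (r, c), a height h
    below an on-axis sample patch."""
    x, y = r * d_led, c * d_led
    return np.hypot(x, y) / np.sqrt(x**2 + y**2 + h**2)

NA_obj = 0.1                                   # objective NA (Section 3)
na_ill = illumination_na(7, 7, 4e-3, 86e-3)    # corner LED of the 15 x 15 grid
print("synthetic NA ~", NA_obj + na_ill)       # ~0.1 + 0.42
```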

In practical applications, the recovered high-resolution image cannot reach the desired quality due to the optical system aberrations, defocusing, LED positional misalignment, noise, etc. These factors limit the applications of the technique. A lot of research has therefore focused on improving the technique by, for example, correcting the system aberrations [1,29,30], reducing the data acquisition or algorithm run time [31–35], suppressing noise and shortening the acquisition time [35,36], and correcting the LED positional misalignment [37,38].

In [37], the authors explained why the position error of the LED array degrades the FP recovery quality: because of the LED misalignment, the information corresponding to the correct LED position, which should be present in the image, is missing (for example, a position assumed to yield a bright-field image may in fact yield a dark-field image). In this view, the degraded image quality is caused by the absolute position error of the sub-spectrum corresponding to each LED element. However, we believe that the main cause is the unequal translation of the individual sub-spectra in the spectrum synthesis step, i.e., the relative position error between the sub-spectra. Once an LED position error is introduced, two adjacent sub-spectra not only shift within the spectrum (absolute position) but also change their overlap area (relative position), and the latter is the main reason for the deterioration of the recovery result. In Section 2, we present a detailed analysis.

In [38], the authors highlighted the important role of the cost function and introduced several recovery algorithms, verifying through simulation and experiment the effect of different cost functions on the final recovery result. They examined the tolerance of different cost functions to noise and position errors and suggested a simulated annealing (SA) based algorithm for position error correction. Both our work and [37] focus primarily on the correction of position errors. The position correction method in [37] adds an SA-based search for the correct position in each sub-iteration, so its effectiveness is limited by the SA algorithm. In this paper, we propose a novel position correction algorithm in the spatial frequency domain, termed the sc-FP algorithm (spatial frequency domain correction FP algorithm). Inspired by the traditional FP algorithm [1] and the improved FP algorithm [30], we construct an iterative procedure for the frequency domain position by differentiating the cost function with respect to that position, similar to the strategy of differentiating with respect to the object and pupil functions [20,23,39]. We use Newton’s method [44] to optimize the frequency domain position, which improves the convergence rate. Both simulation and experiment demonstrate that the sc-FP algorithm has sufficient accuracy, stability, and robustness to noise, and significantly improves the quality of the recovered high-resolution complex image. Since we correct the position errors directly in the frequency domain, the proposed algorithm can also be applied to other illumination systems, such as those described in [40–43].

2. Principle

2.1 Analysis of position errors

Different LED elements emit plane waves with different incidence angles, so different sub-regions of the sample’s spectrum pass through the band-limited imaging system; matching these spectral components to the correct relative positions in the sample spectrum is necessary to obtain a high-quality recovered image. Position errors therefore lead to spectral distortion in the FP algorithm. Note that the actual incidence angles for the same LED element differ across different sample areas.

In contrast to the conclusion of [37], we believe that the main reason for the deterioration of the recovery quality is the unequal translation of each sub-spectrum in the spectrum synthesis step, that is, the change in the size of the overlapping regions of adjacent sub-spectra.

The translation of the spectrum is caused by the oblique plane-wave illumination with an incidence angle defined by $(\theta_x^m, \theta_y^m)$, as shown in Fig. 1; the corresponding translations in the frequency domain are $k_0\sin\theta_x^m$ and $k_0\sin\theta_y^m$, respectively. Without loss of generality, we suppose that the center of the LED array and the center of the sample both lie on the optical axis. The expressions for $\sin\theta_x^m$ and $\sin\theta_y^m$ are

$$\sin\theta_x^m=\frac{x_{r,c}^m}{\sqrt{(x_{r,c}^m)^2+(y_{r,c}^m)^2+h^2}},\qquad \sin\theta_y^m=\frac{y_{r,c}^m}{\sqrt{(x_{r,c}^m)^2+(y_{r,c}^m)^2+h^2}},\tag{1}$$
where $(x_{r,c}^m, y_{r,c}^m)$ denotes the position of the m-th LED element in row r and column c, with $x_{r,c}^m = r\,d_{\mathrm{LED}}$ and $y_{r,c}^m = c\,d_{\mathrm{LED}}$ ($d_{\mathrm{LED}}$ denotes the distance between adjacent LED elements), and h represents the distance between the LED array and the sample. In this paper, we set $d_{\mathrm{LED}} = 4$ mm and use a 15 × 15 LED matrix for angle-varied illumination, which means r ∈ {–7, –6, …, 0, …, 6, 7} and c ∈ {–7, –6, …, 0, …, 6, 7}. The difference in the spectral translation between two adjacent LED elements is
$$(d_{kx}^m,\,d_{ky}^m)=\left(k_0(\sin\theta_x^{m+1}-\sin\theta_x^m),\; k_0(\sin\theta_y^{m+1}-\sin\theta_y^m)\right),\tag{2}$$
where $(d_{kx}^m, d_{ky}^m)$ represents the difference in the spectral translation between the m-th and the (m + 1)-th LED elements, which defines the size of the overlapping region. The size of the overlapping area between two adjacent sub-spectra in the frequency domain thus changes with the position of the LED array. That is, the relative positions of the sub-spectra corresponding to the acquired images change, while the iteration still uses the ideal (uncorrected) positions of the LED array. According to the shift property of the Fourier transform,
$$\mathcal{F}^{-1}\{F(u-u_0,\,v-v_0)\}=f(x,y)\,e^{i u_0 x+i v_0 y}.\tag{3}$$
The key factor is the change in the relative positions of the sub-spectra (corresponding to the different images captured by different angle-varied illuminations) rather than the change in the absolute positions of the sub-spectra.
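To make Eqs. (1)-(2) concrete, the sketch below (Python/NumPy, with our own variable names) computes the spectral shift of every sub-spectrum for the 15 × 15 array and the adjacent differences that set the overlap; the LED height of 86 mm is the value used in Section 3.

```python
import numpy as np

d_led, h = 4e-3, 86e-3            # LED pitch and LED-to-sample distance
wavelength = 632e-9
k0 = 2 * np.pi / wavelength       # vacuum wavenumber

r, c = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8), indexing="ij")
x, y = r * d_led, c * d_led
rho = np.sqrt(x**2 + y**2 + h**2)

kx_shift = k0 * x / rho           # Eq. (1): spectral translation per LED
ky_shift = k0 * y / rho

# Eq. (2): the gap between adjacent sub-spectra is not uniform -- it
# shrinks towards the edge of the array, so misplacing the array changes
# the overlap areas, not just a global offset.
dkx = np.diff(kx_shift, axis=0)
print(dkx[7, 7], dkx[0, 7])       # central gap vs edge gap along x
```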

Fig. 1 A schematic diagram of a classic FPM setup.

In order to verify this conclusion, we performed a simulation as shown in Fig. 2. Note that, practically, we are concerned with the locations of the sub-spectra in the spectral (pupil) plane corresponding to the captured images rather than with the position of the LED array itself. In the simulation, we add a known perturbation to the correct sub-spectral positions and run the iteration on the resulting wrong position array. Since the absolute position of each sub-spectrum changes, that is, the spectrum within each object support constraint (pupil function) is distorted, the analysis in [37] predicts a low-quality recovery. According to our analysis, however, the relative positions of these sub-spectra do not change, so the result should be the correct recovery multiplied by an exponential phase factor: the amplitude is unchanged, while the phase changes. Note that the exponential phase factor is not a constant, but a function of the translation amount and the spatial coordinates. We shift the correct position array by (4, –3) pixels along the kx and ky directions, respectively, to form the wrong position array, as shown in Fig. 2(e), where the blue solid-line circle and blue dots represent the correct position array without positional misalignment, and the red solid-line circle and red dots represent the wrong position array with positional misalignment. Figure 2(a) shows the input images. As can be seen from Figs. 2(b) and 2(c), there is no obvious difference in amplitude between the high-resolution images recovered without and with the position errors, consistent with our analysis. The blue line in Fig. 2(f) represents the mean-square error (MSE) of the intensity image recovered using the correct positions, while the red line represents the MSE for the wrong positions; the final MSEs are almost the same. However, Figs. 2(b) and 2(c) show a significant difference in phase between the recovered images, whereas after translating the spectrum of Fig. 2(c) by (4, –3) pixels and performing an inverse Fourier transform to obtain Fig. 2(d), the recovered phase image becomes very similar to that in Fig. 2(b). This result is also consistent with our analysis.
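The claim that a uniform shift of all sub-spectra leaves the amplitude intact and only tilts the phase follows directly from Eq. (3) and can be reproduced in a few lines. This is a self-contained numerical check in Python/NumPy, not the full FP simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
obj = rng.random((N, N)) * np.exp(1j * 2 * np.pi * rng.random((N, N)))

spec = np.fft.fft2(obj)
spec_shifted = np.roll(spec, shift=(4, -3), axis=(0, 1))  # shift all sub-spectra alike
obj_shifted = np.fft.ifft2(spec_shifted)

# Amplitude unchanged; phase differs by the ramp exp(i*2*pi*(4x - 3y)/N) of Eq. (3).
assert np.allclose(np.abs(obj_shifted), np.abs(obj))
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
ramp = np.exp(1j * 2 * np.pi * (4 * x - 3 * y) / N)
assert np.allclose(obj_shifted, obj * ramp)
```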

Fig. 2 Recovery results for the traditional FP algorithm with and without absolute position errors. (a) The input high-resolution intensity and phase images. (b) The recovered high-resolution intensity and phase images for the traditional FP algorithm without absolute position errors. (c) The recovered high-resolution intensity and phase images for the traditional FP algorithm with absolute position errors. (d) The recovered high-resolution intensity and phase images obtained from (c) after compensating the spectral shift. (e) The spectrogram of the input high-resolution image and the location map with and without errors. (f) The MSE of the recovery results of the traditional FP algorithm with and without absolute position errors.

2.2 Correction algorithm

For the traditional FP algorithm, the reader can refer to [1,33,38]. In this work, we use the following update formulas for the sample function and the pupil function:

$$O_{j+1}(k_x,k_y)=O_j(k_x,k_y)+\alpha\,\frac{|P_j(k_x+k_0\sin\theta_x^m,\,k_y+k_0\sin\theta_y^m)|\;P_j^*(k_x+k_0\sin\theta_x^m,\,k_y+k_0\sin\theta_y^m)}{|P_j(k_x,k_y)|_{\max}\left(|P_j(k_x+k_0\sin\theta_x^m,\,k_y+k_0\sin\theta_y^m)|^2+\delta_1\right)}\left[\Phi_j^m(k_x+k_0\sin\theta_x^m,\,k_y+k_0\sin\theta_y^m)-O_j(k_x,k_y)\,P_j(k_x+k_0\sin\theta_x^m,\,k_y+k_0\sin\theta_y^m)\right]\tag{4}$$
$$P_{j+1}(k_x,k_y)=P_j(k_x,k_y)+\beta\,\frac{|O_j(k_x-k_0\sin\theta_x^m,\,k_y-k_0\sin\theta_y^m)|\;O_j^*(k_x-k_0\sin\theta_x^m,\,k_y-k_0\sin\theta_y^m)}{|O_j(k_x,k_y)|_{\max}\left(|O_j(k_x-k_0\sin\theta_x^m,\,k_y-k_0\sin\theta_y^m)|^2+\delta_2\right)}\left[\Phi_j^m(k_x,k_y)-O_j(k_x-k_0\sin\theta_x^m,\,k_y-k_0\sin\theta_y^m)\,P_j(k_x,k_y)\right]\tag{5}$$
$$\Phi_j^m(k_x,k_y)=\mathcal{F}\left\{I_m^c(x,y)\,\frac{\phi_j^m(x,y)}{|\phi_j^m(x,y)|}\right\},\tag{6}$$
where $\delta_1$ and $\delta_2$ are regularization constants that ensure numerical stability (set to $\delta_1 = 1$ and $\delta_2 = 1000$ in this work), $\alpha$ and $\beta$ are the step-size parameters of the steepest descent search (both set to 1 in this work), $(\theta_x^m, \theta_y^m)$ define the incident angle of the m-th LED, $k_0$ is the wavenumber in vacuum, $(k_x, k_y)$ are the 2D spatial frequency coordinates in the pupil plane, $\phi_j^m(x,y)$ is the conjectural light field of the m-th LED in the imaging plane at the j-th iteration, $I_m^c(x,y)$ is the square root of the measured intensity corresponding to the m-th LED, $O_j(k_x,k_y)$ and $P_j(k_x,k_y)$ are the high-resolution sample function and pupil function at the j-th iteration, ‘*’ indicates complex conjugation, and $\mathcal{F}\{\cdot\}$ is the Fourier transform operator.
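Written in the local coordinates of the m-th sub-spectrum (so that the pupil need not be shifted explicitly), Eqs. (4)-(6) reduce to the following sub-iteration. This is a minimal Python/NumPy sketch under our own naming (epry_update is not the authors' code), and it omits the step of embedding O_sub back into the full object spectrum:

```python
import numpy as np

def epry_update(O_sub, P, I_m, alpha=1.0, beta=1.0, d1=1.0, d2=1000.0):
    """One sub-iteration for the m-th LED in sub-spectrum coordinates.
    O_sub: pupil-sized region of the object spectrum; P: pupil function;
    I_m: measured intensity (so sqrt(I_m) is I_m^c of Eq. (6))."""
    psi = O_sub * P
    phi = np.fft.ifft2(psi)                                # conjectural field
    Phi = np.fft.fft2(np.sqrt(I_m) * np.exp(1j * np.angle(phi)))  # Eq. (6)
    diff = Phi - psi
    O_new = O_sub + alpha * np.abs(P) * np.conj(P) / (
        np.abs(P).max() * (np.abs(P)**2 + d1)) * diff           # Eq. (4)
    P_new = P + beta * np.abs(O_sub) * np.conj(O_sub) / (
        np.abs(O_sub).max() * (np.abs(O_sub)**2 + d2)) * diff   # Eq. (5)
    return O_new, P_new
```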

Inspired by the position correction routines used in conventional PIE [23], we propose a correction procedure for FPM, termed sc-FP, which searches for the correct position in the spectrum domain by Newton’s method.

The iterative FP procedure constantly updates the sample complex function $o(x,y)$ and the pupil complex function $p(x,y)$ to minimize the difference between the measured intensity image $I_m(x,y)$ and the conjectural intensity image $|\phi_m(x,y)|^2$ in the imaging plane. This difference, called the error metric, can be defined as

$$E=\sum_m\sum_{x,y}\left||\phi_m(x,y)|^2-I_m(x,y)\right|^2,\tag{7}$$
where $\phi_m(x,y)$ represents the conjectural light field corresponding to the m-th LED and can be written as
$$\phi_m(x,y)=\mathcal{F}^{-1}\{O(k_x-k_0\sin\theta_x^m,\,k_y-k_0\sin\theta_y^m)\,P(k_x,k_y)\}=\mathcal{F}^{-1}\{O(k_x-k_x^m,\,k_y-k_y^m)\,P(k_x,k_y)\}.\tag{8}$$
For numerical calculations, Eq. (8) can be written as
$$\phi_m(x,y)=\mathcal{F}^{-1}\{O(k_x-k_x^m,\,k_y-k_y^m)\,P(k_x,k_y)\}=\mathrm{IDFT}\{O(k_x-k_x^m,\,k_y-k_y^m)\,P(k_x,k_y)\}=\sum_{k_x,k_y}O(k_x-k_x^m,\,k_y-k_y^m)\,P(k_x,k_y)\exp\!\left(i2\pi\left(\frac{k_x x}{N}+\frac{k_y y}{N}\right)\right),\tag{9}$$
where N is the dimension of the array, DFT and IDFT denote the discrete Fourier transform and the inverse discrete Fourier transform, respectively, and the irrelevant multiplicative scaling factors are omitted in both.
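A direct transcription of Eq. (9) with integer pixel offsets might look as follows (Python/NumPy sketch with our own function name; np.roll realizes the circular spectrum shift, and O and P are kept on the same grid for brevity):

```python
import numpy as np

def conjectural_field(O, P, kxm, kym):
    """phi_m of Eq. (9): shift the object spectrum by (kxm, kym) pixels,
    filter with the pupil, and inverse-transform to the imaging plane."""
    O_shifted = np.roll(O, shift=(kxm, kym), axis=(0, 1))  # O(kx - kxm, ky - kym)
    return np.fft.ifft2(O_shifted * P)
```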

We take the derivative of the error metric with respect to the frequency domain translation position $(k_x^m, k_y^m)$ and use Newton’s method for better performance:

$$(k_x^m,k_y^m)_{j+1}=(k_x^m,k_y^m)_j-\alpha\,\frac{\nabla_{(k_x^m,k_y^m)_j}E}{\nabla^2_{(k_x^m,k_y^m)_j}E},\tag{10}$$
where $\alpha$ is the step size along the descent direction in Newton’s method, j is the iteration index, $\nabla_{(k_x^m,k_y^m)}E$ is the gradient of the error metric with respect to the frequency domain translation position $(k_x^m, k_y^m)$, and $\nabla^2_{(k_x^m,k_y^m)}E$ is the Hessian matrix, defined as follows:

$$\nabla^2_{(k_x^m,k_y^m)}E=\begin{bmatrix}\dfrac{\partial^2 E}{\partial (k_x^m)^2} & \dfrac{\partial^2 E}{\partial k_x^m\,\partial k_y^m}\\[2ex] \dfrac{\partial^2 E}{\partial k_y^m\,\partial k_x^m} & \dfrac{\partial^2 E}{\partial (k_y^m)^2}\end{bmatrix}\tag{11}$$
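In implementation, Eq. (10) amounts to solving a 2 × 2 linear system per LED, with the gradient and Hessian entries derived below. A minimal sketch (our own helper name; the fallback to the gradient direction for a singular or indefinite Hessian is a safeguard we assume, which the paper does not spell out):

```python
import numpy as np

def newton_step(grad, hess, alpha=1.0):
    """Newton update for (kx_m, ky_m): returns the increment -alpha*H^{-1}g,
    falling back to -alpha*g when H^{-1}g is not a descent direction."""
    grad = np.asarray(grad, dtype=float)
    try:
        step = np.linalg.solve(hess, grad)
        if grad @ step <= 0:              # -step must be a descent direction
            step = grad
    except np.linalg.LinAlgError:         # singular Hessian
        step = grad
    return -alpha * step
```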

First, we take the derivative of the error metric with respect to the frequency domain translation position $(k_x^m, k_y^m)$:

$$\frac{\partial E}{\partial k_x^m}=2\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\frac{\partial |\phi_m(x,y)|^2}{\partial k_x^m},\qquad \frac{\partial E}{\partial k_y^m}=2\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\frac{\partial |\phi_m(x,y)|^2}{\partial k_y^m}\tag{12}$$
According to [20] and the properties of the Fourier transform, we obtain
$$\frac{\partial |\phi_m(x,y)|^2}{\partial k_x^m}=(\phi_m(x,y))^*\frac{\partial \phi_m(x,y)}{\partial k_x^m}+\phi_m(x,y)\frac{\partial (\phi_m(x,y))^*}{\partial k_x^m}\tag{13}$$
$$(\phi_m(x,y))^*\frac{\partial \phi_m(x,y)}{\partial k_x^m}=\frac{i2\pi}{N}(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}\!\left(x\,o(x,y)\exp\!\left(i2\pi\left(\frac{k_x^m x}{N}+\frac{k_y^m y}{N}\right)\right)\right)\right]\tag{14}$$
$$\phi_m(x,y)\frac{\partial (\phi_m(x,y))^*}{\partial k_x^m}=-\frac{i2\pi}{N}\,\phi_m(x,y)\,\mathrm{IDFT}\!\left[P^*(k_x,k_y)\,\mathrm{DFT}\!\left(x\,o^*(x,y)\exp\!\left(-i2\pi\left(\frac{k_x^m x}{N}+\frac{k_y^m y}{N}\right)\right)\right)\right],\tag{15}$$
where the symbol ‘*’ represents complex conjugation.

Thus, we obtain the gradient of the error metric with respect to $k_x^m$:

$$\frac{\partial E}{\partial k_x^m}=-\frac{8\pi}{N}\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\mathrm{Im}\!\left\{(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}\!\left(x\,o(x,y)\exp\!\left(i2\pi\left(\frac{k_x^m x}{N}+\frac{k_y^m y}{N}\right)\right)\right)\right]\right\}\tag{16}$$
Likewise, we obtain the gradient of the error metric with respect to $k_y^m$:

$$\frac{\partial E}{\partial k_y^m}=-\frac{8\pi}{N}\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\mathrm{Im}\!\left\{(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}\!\left(y\,o(x,y)\exp\!\left(i2\pi\left(\frac{k_x^m x}{N}+\frac{k_y^m y}{N}\right)\right)\right)\right]\right\}\tag{17}$$
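Eqs. (16)-(17) can be evaluated with two extra FFT pairs per LED. The sketch below (Python/NumPy, our own function name) is a direct transcription under the sign convention reconstructed above; NumPy's non-unitary FFT normalization is absorbed into the step size, so only the direction of the gradient should be relied on:

```python
import numpy as np

def grad_error_metric(o, P, I_m, kxm, kym):
    """Gradient of E with respect to (kx_m, ky_m), Eqs. (16)-(17).
    o: object estimate in the spatial domain (sub-region sized, square),
    P: pupil, I_m: measured intensity for the m-th LED."""
    N = o.shape[0]
    x = np.arange(N).reshape(-1, 1)
    y = np.arange(N).reshape(1, -1)
    oe = o * np.exp(1j * 2 * np.pi * (kxm * x + kym * y) / N)
    phi = np.fft.ifft2(P * np.fft.fft2(oe))      # conjectural field, Eq. (9)
    resid = np.abs(phi)**2 - I_m
    hx = np.fft.ifft2(P * np.fft.fft2(x * oe))   # IDFT[P DFT(x o_e)]
    hy = np.fft.ifft2(P * np.fft.fft2(y * oe))
    gx = -(8 * np.pi / N) * np.sum(resid * np.imag(np.conj(phi) * hx))
    gy = -(8 * np.pi / N) * np.sum(resid * np.imag(np.conj(phi) * hy))
    return np.array([gx, gy])
```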

Secondly, we calculate the Hessian matrix, which can be divided into four parts with the following expressions:

$$\begin{aligned}\frac{\partial^2 E}{\partial (k_x^m)^2}&=2\left(\sum_{x,y}g_x^m g_x^m+\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\frac{\partial g_x^m}{\partial k_x^m}\right)\\ \frac{\partial^2 E}{\partial (k_y^m)^2}&=2\left(\sum_{x,y}g_y^m g_y^m+\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\frac{\partial g_y^m}{\partial k_y^m}\right)\\ \frac{\partial^2 E}{\partial k_x^m\,\partial k_y^m}&=2\left(\sum_{x,y}g_x^m g_y^m+\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\frac{\partial g_x^m}{\partial k_y^m}\right)\\ \frac{\partial^2 E}{\partial k_y^m\,\partial k_x^m}&=2\left(\sum_{x,y}g_y^m g_x^m+\sum_{x,y}\left[|\phi_m(x,y)|^2-I_m(x,y)\right]\frac{\partial g_y^m}{\partial k_x^m}\right),\end{aligned}\tag{18}$$
where

$$\begin{aligned}\frac{\partial g_x^m}{\partial k_x^m}&=\frac{8\pi^2}{N^2}\left\{h_x^m (h_x^m)^*-\mathrm{Re}\!\left[(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(x^2 o_e)\right]\right]\right\}\\ \frac{\partial g_x^m}{\partial k_y^m}&=\frac{8\pi^2}{N^2}\left\{\mathrm{Re}\!\left[h_x^m (h_y^m)^*\right]-\mathrm{Re}\!\left[(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(xy\,o_e)\right]\right]\right\}\\ \frac{\partial g_y^m}{\partial k_y^m}&=\frac{8\pi^2}{N^2}\left\{h_y^m (h_y^m)^*-\mathrm{Re}\!\left[(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(y^2 o_e)\right]\right]\right\}\\ \frac{\partial g_y^m}{\partial k_x^m}&=\frac{8\pi^2}{N^2}\left\{\mathrm{Re}\!\left[h_y^m (h_x^m)^*\right]-\mathrm{Re}\!\left[(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(xy\,o_e)\right]\right]\right\}\\ g_x^m&=-\frac{4\pi}{N}\,\mathrm{Im}\!\left\{(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(x\,o_e)\right]\right\},\qquad h_x^m=\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(x\,o_e)\right]\\ g_y^m&=-\frac{4\pi}{N}\,\mathrm{Im}\!\left\{(\phi_m(x,y))^*\,\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(y\,o_e)\right]\right\},\qquad h_y^m=\mathrm{IDFT}\!\left[P(k_x,k_y)\,\mathrm{DFT}(y\,o_e)\right]\\ o_e&=o(x,y)\exp\!\left(i2\pi\left(\frac{k_x^m x}{N}+\frac{k_y^m y}{N}\right)\right)\end{aligned}\tag{19}$$

Thirdly, we perform a simple line search (with the maximum offset in the kx and ky directions limited to ±10 pixels) [45] along the descent direction of Newton’s method to determine the step size α.
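The paper only specifies the ±10 pixel bound for this search; one simple way to realize it is sketched below (Python/NumPy, our own names, where E_of_k is a callable returning the error metric of Eq. (7) for a candidate position):

```python
import numpy as np

def line_search_alpha(E_of_k, k, direction, max_px=10, n_trials=10):
    """Pick the step size along the Newton direction that minimizes the
    error metric, with the kx/ky offsets capped at +/- max_px pixels."""
    d = np.asarray(direction, dtype=float)
    cap = min(1.0, max_px / max(np.abs(d).max(), 1e-12))
    alphas = cap * np.arange(1, n_trials + 1) / n_trials
    errors = [E_of_k(k + a * d) for a in alphas]
    return alphas[int(np.argmin(errors))]
```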

Then, we construct an iterative procedure using Newton’s method to update the frequency domain translation position so as to minimize the error metric.

Finally, we add the constructed iterative process to the FP algorithm, where it is executed before the calculations in Eq. (4). Using Newton’s method to correct the position error makes the recovery result for a given data set very stable, with faster convergence. As a local search algorithm, Newton’s method has an excellent local searching capability. The frequency domain position errors are caused by the positional errors of the LED array and the sample and are therefore not large, on the order of ten pixels or fewer. Moreover, from Eq. (1) we can see that the deviation becomes smaller farther away from the center, so Newton’s method is well suited to this task. Finally, owing to the small search range and the strong local searching capability, the proposed method is robust to noise.

3. Simulations

We verify the validity of the sc-FP algorithm by simulation. The conditions used in the simulation are similar to those of a traditional FPM setup, with the following parameters: the NA of the objective lens is 0.1; the shape of the corresponding pupil function is assumed to be circular; the angle-varied illumination is provided by a programmable 15 × 15 LED matrix with a wavelength of 632 nm; the distance between adjacent LEDs is 4 mm; the size of each LED element is 100 μm; the distance between the LED plane and the sample plane is 86 mm; the size of the CCD pixel is 6.5 μm; the magnification is 4. We introduce position errors by moving the sample and simultaneously translating, rotating, and tilting the LED matrix. In order to ensure the effectiveness and adaptability of the sc-FP algorithm, the errors introduced in the simulation are uniformly distributed within a certain range, rather than fixed to one or several specific values. The errors are applied to the translation of the sample and the translation and rotation of the LED matrix. The absolute value of the introduced translation error is less than 1000 μm, and the absolute value of the rotation error is less than 5°; errors that fall outside these ranges are artificially corrected. In the following simulations, we take the square root of the captured bright-field image intensity as the initial high-resolution amplitude image, with a constant phase. The number of iterations in the sc-FP algorithm is set to 15. We first update the sample function four times using the FP algorithm, then update the frequency positions together with the sample function once, and then repeat these steps. In the last 5 iterations we update the sample function, the pupil function, and the frequency positions simultaneously.
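The misalignment injected in the simulation can be reproduced as follows (Python/NumPy sketch with our own names; the rotation is applied in the LED plane, the ranges are those stated above, and the tilt of the array is simplified to a deviation of the LED-to-sample distance):

```python
import numpy as np

rng = np.random.default_rng(1)
d_led, h = 4e-3, 86e-3
r, c = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8), indexing="ij")
x, y = r * d_led, c * d_led

dx, dy = rng.uniform(-800e-6, 800e-6, size=2)   # in-plane translation
theta = np.deg2rad(rng.uniform(-5.0, 5.0))      # in-plane rotation
dh = rng.uniform(-1000e-6, 1000e-6)             # LED-to-sample distance deviation

x_act = np.cos(theta) * x - np.sin(theta) * y + dx
y_act = np.sin(theta) * x + np.cos(theta) * y + dy
rho = np.sqrt(x_act**2 + y_act**2 + (h + dh)**2)
sin_tx, sin_ty = x_act / rho, y_act / rho       # actual angles via Eq. (1)
```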

Two images, ‘cameraman.png’ and ‘westconcordorthophoto.png’, are used in the simulations as the amplitude and phase samples, respectively, each containing 512 × 512 pixels, as in Fig. 2(a). We define the actual position by introducing an error to the ideal position. The translation of the sample and the translation of the LED matrix in the LED plane are randomly selected from –800 μm to 800 μm; the rotation angle of the LED matrix is randomly selected from –5° to 5°; the deviation of the distance from the LED matrix to the sample is randomly selected from –1000 μm to 1000 μm. All of these factors ultimately lead to deviations in the frequency domain position. We employ the traditional FP recovery procedure to reconstruct the high-resolution complex image with the actual position (without position errors) and with the ideal position (with position errors), which we call the correct and incorrect recovery results, respectively, as shown in Fig. 3(a). For comparison, the state-of-the-art position correction algorithm, termed pc-FP [37], is employed to reconstruct the high-resolution complex image with the ideal position, as shown in Fig. 3(a). We run the pc-FP and sc-FP algorithms separately 10 times each and calculate the MSE of the recovered high-resolution amplitude, as shown in Fig. 3(b). Figure 3(a) also shows the recovered results for the sc-FP algorithm. Figure 3(c) shows the frequency domain position distribution map containing the ideal, actual, and corrected positions for the pc-FP and sc-FP algorithms, indicated by red dots, blue crosses, green squares, and black circles, respectively. It can be seen from Fig. 3(a) that, if the position error is not corrected, the recovery result is of very low quality, whereas both the pc-FP and sc-FP algorithms provide satisfactory recovery results in that case. Figure 3(b) shows the MSE of the recovered high-resolution amplitude for the pc-FP algorithm (black) and the sc-FP algorithm (red); the MSE values for the two algorithms are very close. Although the pc-FP algorithm can sometimes yield a smaller MSE than the sc-FP algorithm, the sc-FP algorithm is more stable: the MSE fluctuates between 1000 and 1300 for the pc-FP algorithm, while it stays constant for the sc-FP algorithm. It can be seen from Fig. 3(c) that almost all the wrong positions have been corrected, and the results obtained by the two algorithms are similar.

Fig. 3 The recovered results for the traditional FP, pc-FP, and sc-FP algorithms. (a) The recovered high-resolution intensity and phase images. (b) The MSE of the recovered high-resolution intensity image for 10 runs of the pc-FP algorithm (black) and 10 runs of the sc-FP algorithm (red). (c) The frequency domain position distribution map containing the ideal, actual, and corrected positions for the pc-FP and sc-FP algorithms.

In order to verify the robustness of the sc-FP algorithm to noise, we add different types of noise to the 225 captured low-resolution images. We test the performance of the algorithms under three noise models: additive white Gaussian noise (standard deviation σ = 0.02 and 0.04), Poisson noise, and mixed Gaussian-Poisson noise; the motivation for these noise models can be found in [35,36,38]. As can be seen from Fig. 4, for small additive white Gaussian noise, the pc-FP and sc-FP algorithms significantly improve the reconstruction quality by correcting the position errors, with very similar quality of the results for the two algorithms, as the MSE values in the upper-left corner of each image show. However, as the noise increases, the position correction performance of the pc-FP algorithm gradually deteriorates. In contrast, the sc-FP algorithm can still correct the position errors, although not perfectly, especially at the edge of the spatial frequency range, where the information is dominated by the noise. When the standard deviation of the Gaussian white noise is 0.02, the quality of the recovered high-resolution result is satisfactory for both the pc-FP and sc-FP algorithms. The corresponding frequency domain position distribution map shows that the pc-FP algorithm corrects the positions at the center of the spectrum more reliably than those at the edge, whereas the position correction result of the sc-FP algorithm remains satisfactory throughout. This also demonstrates that the center of the frequency domain matters more to the reconstruction than the edge. As the standard deviation increases to 0.04, the recovery quality of both algorithms decreases; however, the recovery quality of the sc-FP algorithm remains higher than that of the pc-FP algorithm, as can be seen clearly from the corresponding frequency domain position distribution map and the MSE values. With Poisson noise, the performance of both algorithms is very satisfactory, as can be seen from the recovered high-resolution images and the corresponding frequency domain position distribution maps. With Gaussian-Poisson noise (Gaussian standard deviation of 0.02), the sc-FP algorithm performs much better than the pc-FP algorithm. Therefore, both algorithms show robustness to noise, but the sc-FP algorithm is more robust.
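For reference, the three noise models of the robustness test can be generated as follows (Python/NumPy sketch on intensity images normalized to [0, 1]; the Poisson photon level 'peak' is our assumption, since the paper does not state one):

```python
import numpy as np

rng = np.random.default_rng(2)

def add_gaussian(I, sigma=0.02):
    """Additive white Gaussian noise with standard deviation sigma."""
    return np.clip(I + rng.normal(0.0, sigma, I.shape), 0.0, None)

def add_poisson(I, peak=1000):
    """Poisson (shot) noise at an assumed photon level 'peak'."""
    return rng.poisson(I * peak) / peak

def add_gaussian_poisson(I, sigma=0.02, peak=1000):
    """Mixed Gaussian-Poisson noise used in the last test."""
    return add_gaussian(add_poisson(I, peak), sigma)
```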

Fig. 4 The recovered results for the traditional FP, pc-FP, and sc-FP algorithms with different types of noise added. The value in the upper-left corner of each image is the MSE.

4. Experiment

In order to verify the performance of the sc-FP algorithm in practical applications, we test it on real experimental data sets. First, we use an Olympus objective lens (magnification 4×, NA = 0.1) and a microscope module to build an optical imaging system. A programmable LED matrix (15 × 15) with an incident wavelength of λ = 532 nm is used as the angle-varied illumination source. The distance between adjacent LEDs is 4 mm, and the distance from the LED matrix to the sample is about 90 mm. A CMOS camera (PCO.edge) with a pixel size of 5.6 μm is used as the image recording device, and a tube lens (magnification 2×) is added between the microscope module and the camera. Secondly, we capture 225 low-resolution intensity images corresponding to the 15 × 15 LED elements. As in the FP algorithm, each reconstruction in the sc-FP algorithm rebuilds only a part of the sample. Since the purpose of the experiment is to verify the performance of the sc-FP algorithm, there is no need to reconstruct the entire field of view (FOV), and only one segment of the FOV is selected for reconstruction. A classic USAF resolution target is used as the experimental sample, and, without loss of generality, we select a small part of the whole FOV for reconstruction. As shown in Fig. 5, the elements of the eighth and ninth groups cannot be distinguished in the low-resolution intensity image detected directly by the camera. After reconstruction with the traditional FP algorithm, the resolution is improved noticeably and the third element of the eighth group can be distinguished; however, the third element of the ninth group is still indistinguishable. We then run the pc-FP algorithm twice to correct the position error and obtain two reconstruction results: the fifth element of the eighth group can be distinguished in the first result, and the third element of the ninth group can be distinguished in the other. The pc-FP algorithm thus yields a higher-resolution reconstruction than the traditional FP when there is a position error in the system, but its performance is not stable. We also run the proposed algorithm twice; the third element of the ninth group can be resolved in both reconstructions. We conclude that the proposed algorithm works more stably than the pc-FP algorithm, which is consistent with the previous simulation.

Fig. 5 Experimental results for one region of a classic USAF sample obtained with the traditional FP, pc-FP, and sc-FP algorithms.

5. Conclusion

In this paper, we proposed a high-quality and high-stability spatial frequency domain position error correction strategy, termed the sc-FP algorithm. The algorithm uses Newton’s method to iteratively update the position of the pupil function in the frequency domain while updating the sample and pupil complex functions. To improve performance, we utilize Newton’s method rather than the gradient descent method to correct the position error. High-quality intensity and phase reconstructions are obtained after 15 iterations. In the presented simulation, while the state-of-the-art method gave an MSE fluctuating between 1000 and 1300 over 10 runs, the sc-FP method provided a constant MSE of about 1050 in all runs. In the presented experiment, the sc-FP algorithm stably resolves the third element of the ninth group, while the traditional FP algorithm only resolves the third element of the eighth group.

As shown by the simulation and experiment, the sc-FP algorithm has three main advantages. First, the quality of the reconstruction is improved significantly, up to the quality level of the best reconstruction under normal circumstances. Second, the reconstruction is very stable as long as it is based on the same data set and initial guess. Third, the sc-FP algorithm is robust to Gaussian, Poisson, and Gaussian-Poisson noise.

Three points concerning the limitations of the proposed method and future work should be noted. First, Newton’s method gives the proposed algorithm a very stable correction result, which is both an advantage and a drawback: once the search falls into a local optimum during the iteration, it is difficult for the algorithm to re-converge to the globally optimal position. To address this, Newton’s method could be augmented, for example, with mutation mechanisms or other optimization procedures, which will be the focus of our future studies. Second, if the position error is too large, the proposed method may fail, because it relies on a local search. In general, we therefore have to make sure that an accurate adjustment is completed before the experiment, so that the position error is relatively small (a few tenths of the distance between two LED elements), as assumed in our simulation and experiment. If the position error is nevertheless very large, we can re-adjust the equipment or, as above, improve the global search capability. Third, speeding up convergence and reducing the number of iterations are also goals for future studies.

Funding

National Natural Science Foundation of China (51775148, 51275121); Science Foundation of Heilongjiang Province (JJ2018QN0026); China Postdoctoral Science Foundation (2017T100235).

Acknowledgments

We thank Prof. Zuo (Nanjing University of Science and Technology) for help with the position correction experiment.

References and links

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013).

3. G. Zheng, X. Ou, R. Horstmeyer, J. Chung, and C. Yang, “Fourier ptychographic microscopy: a gigapixel superscope for biomedicine,” Opt. Photonics News 25(4), 26–33 (2014).

4. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015).

5. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009).

6. T. M. Turpin, L. H. Gesell, J. Lapides, and C. H. Price, “Theory of the synthetic aperture microscope,” Proc. SPIE 2566, 230–240 (1995).

7. M. Kim, Y. Choi, C. Fang-Yen, Y. Sung, R. R. Dasari, M. S. Feld, and W. Choi, “High-speed synthetic aperture microscopy for live cell imaging,” Opt. Lett. 36(2), 148–150 (2011).

8. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010).

9. V. Mico, Z. Zalevsky, and J. García, “Synthetic aperture microscopy using off-axis illumination and polarization coding,” Opt. Commun. 276(2), 209–217 (2007).

10. W.-C. Chien, D. S. Dilworth, E. Liu, and E. N. Leith, “Synthetic-aperture chirp confocal imaging,” Appl. Opt. 45(3), 501–510 (2006).

11. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).

12. Y. Choi, M. Kim, C. Yoon, T. D. Yang, K. J. Lee, and W. Choi, “Synthetic aperture microscopy for high resolution imaging through a turbid medium,” Opt. Lett. 36(21), 4263–4265 (2011).

13. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007).

14. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004).

15. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004).

16. C.-H. Lu, C. Barsi, M. O. Williams, J. N. Kutz, and J. W. Fleischer, “Phase retrieval using nonlinear diversity,” Appl. Opt. 52(10), D92–D96 (2013).

17. X. Wang, W. Chen, and X. Chen, “Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding,” Opt. Express 22(19), 22981–22995 (2014).

18. S. Marchesini, “Phase retrieval and saddle-point optimization,” J. Opt. Soc. Am. A 24(10), 3289–3296 (2007).

19. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization,” J. Opt. Soc. Am. A 19(7), 1334–1345 (2002).

20. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982).

21. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009).

22. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009).

23. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008).

24. A. M. Maiden, J. M. Rodenburg, and M. J. Humphry, “Optical ptychography: a practical implementation with useful resolution,” Opt. Lett. 35(15), 2585–2587 (2010).

25. C. T. Putkunz, A. J. D’Alfonso, A. J. Morgan, M. Weyland, C. Dwyer, L. Bourgeois, J. Etheridge, A. Roberts, R. E. Scholten, K. A. Nugent, and L. J. Allen, “Atom-scale ptychographic electron diffractive imaging of boron nitride cones,” Phys. Rev. Lett. 108(7), 073901 (2012).

26. F. Hue, J. M. Rodenburg, A. M. Maiden, F. Sweeney, and P. A. Midgley, “Wave-front phase retrieval in transmission electron microscopy via ptychography,” Phys. Rev. B 82, 121415 (2010).

27. D. Claus, A. M. Maiden, F. Zhang, F. G. R. Sweeney, M. J. Humphry, H. Schluesener, and J. M. Rodenburg, “Quantitative phase contrast optimised cancerous cell differentiation via ptychography,” Opt. Express 20(9), 9911–9918 (2012).

28. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015).

29. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013).

30. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014).

31. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014).

32. L. Tian, Z. Liu, L. H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015).

33. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014).

34. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014).

35. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015).

36. Y. Zhang, P. Song, and Q. Dai, “Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed Poisson-Gaussian likelihood,” Opt. Express 25(1), 168–179 (2017).

37. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016).

38. L. H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015).

39. A. Tripathi, I. McNulty, and O. G. Shpyrko, “Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods,” Opt. Express 22(2), 1452–1466 (2014).

40. J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier ptychographic microscopy using laser illumination source,” Biomed. Opt. Express 7(11), 4787–4802 (2016).

41. C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. C. So, “Digital micromirror device-based laser-illumination Fourier ptychographic microscopy,” Opt. Express 23(21), 26999–27010 (2015).

42. X. Ou, J. Chung, R. Horstmeyer, and C. Yang, “Aperture scanning Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3140–3150 (2016).

43. S. Pacheco, G. Zheng, and R. Liang, “Reflective Fourier ptychography,” J. Biomed. Opt. 21(2), 26010 (2016).

44. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University, 2004), Chap. 9.

45. J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Society for Industrial and Applied Mathematics, 1996), Chap. 6.
