
Accelerated convergence extended ptychographical iterative engine using multiple axial intensity constraints


Abstract

The extended ptychographical iterative engine (ePIE) is widely applied in ptychographic imaging because of its great flexibility and computational efficiency. A variant of ePIE with multiple axial intensity constraints, called MAIC-PIE, is proposed to drastically improve the convergence speed and reduce the calculation time. This technique requires that the diffracted light from the sample be propagated to multiple individual axial planes, which can be achieved by using a beam splitter and multiple CCDs. An additional intensity constraint is involved in the iterative process, which helps build reasonable guesses of the probe and object in the first few iterations and accelerates the convergence. Simulations and experiments have verified that MAIC-PIE exhibits good performance with fast convergence. Its good performance and limited computational complexity make it a very attractive and promising technique for ptychographic imaging.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Phase retrieval is a non-interferometric technique that reconstructs an object's phase using intensity constraints in the space or Fourier domains [1]. A phase retrieval algorithm involves a sequence of wave propagations between the Fourier domain and space planes, and the amplitude of the calculated wave is replaced with the measured amplitude. Amplitude changes that are consistent with the true phase are the critical factor for a successful phase reconstruction. To enhance such amplitude changes, a number of feasible and practical techniques increase the number of measurements, including the use of multiple axially displaced planes [2,3], an SLM transfer function [4], wavelength scanning [5,6], shifted illumination [7], and variable wavefront curvature [8].

A good example of the shifted-illumination method is the ptychographical iterative engine (PIE), which inherited traits from earlier phase retrieval methods and has established a reputation for innovation in its own right. A light probe generated by an aperture stop illuminates the specimen, and the diffraction pattern is directly captured by a CCD or CMOS sensor. The probe or specimen is fixed on a translation stage that can be laterally shifted, and diffraction patterns are recorded at a series of overlapping positions. In the original PIE experiments, the probe was assumed to be well known and fully coherent, the scanning positions were assumed to be accurately known, the diffraction patterns to be noise-free, and the sample to be thin. At present, PIE algorithms have been proposed that routinely handle partial coherence [9,10], accurately reconstruct the probe [11–14], correct scanning position errors [15–17], reduce the effect of noise [18,19], and deal with thick samples [20,21]. Among these advances, the capacity to retrieve the probe function was the first achievement and remains the most significant. Among the algorithms that retrieve the probe function, the extended ptychographical iterative engine (ePIE) [13] uses the measured diffraction patterns one by one to iteratively reconstruct the probe and object, rather than performing a batch update over the entire data set at each iteration. The ePIE algorithm converges at a reasonable rate in most practical situations. However, ePIE can still take hundreds of iterations to converge in some experiments, for example ptychographic experiments that use a focused beam to form the probe, where the exact distance of the sample from the beam focus is difficult to measure accurately and the initial probe guess therefore has too much or too little curvature [22], or experiments that use a diffuser to produce a randomly structured probe, for which an initial probe guess is also difficult to form [23].

In this work, a technique to increase the convergence speed of ptychographic imaging based on multiple axial intensity constraints (MAIC-PIE) is proposed. This strategy requires that the diffracted light from the sample be axially separated by using a beam splitter and inserting multiple CCDs into the split beams. These CCDs work independently to obtain multiple axial intensity images, which serve as an additional constraint to strengthen the connection between the intensity and the unknown phase. On the one hand, the multiple axial intensity constraints are imposed on the recording planes (the CCD positions) to correct the wavefront's amplitude. On the other hand, they help build reasonable guesses of the probe and object in the first several iterations. Carrying these guesses into the subsequent iterations dramatically increases the convergence speed. Section 2 presents a detailed description of the imaging system and retrieval algorithm of MAIC-PIE. In Section 3, simulations are carried out to verify the feasibility, and the convergence of MAIC-PIE is compared with that of ePIE. The performance of the proposed method is demonstrated and verified with several selected examples in Section 4. The registration errors between multiple CCDs are discussed in Section 5. Section 6 concludes.

2. Principle

The experimental apparatus used for MAIC-PIE is shown in Fig. 1. The sample is placed on a linear x/y translation stage. A coherent incident beam, called the probe, is localized within a finite area of the sample surface by an aperture stop. The light diffracted from the sample is spatially split into two orthogonal beams by a beam splitter. Two CCDs, CCD1 and CCD2, are placed at two different axial positions after the beam splitter and record the diffraction patterns generated by the interaction of the sample with the localized probe. The linear translation stage is programmed to shift the sample through a grid of overlapping scanning positions. At each scanning position, two axial intensity images are recorded by CCD1 and CCD2.

Fig. 1. Setup of MAIC-PIE.

Before placing the sample into the MAIC-PIE setup, we capture the intensities recorded on CCD1 and CCD2 with no sample present. To avoid the influence of laser intensity fluctuations on the reconstruction quality, this step must be performed before imaging every sample. These intensities are determined only by the probe, and they are labeled ${I_{illum,CCD1}}(u)$ and ${I_{illum,CCD2}}(u)$. In MAIC-PIE, the set of J diffraction patterns recorded by a given CCD in the experiment is labeled ${I_{CCDi}}(u,{s_j})$ (i = 1, 2 and j = 1, 2, …, J), where u is the coordinate on the CCD plane and sj is the x/y scanning position of the jth diffraction pattern. The wavelength of the probe is λ. The process begins with initial estimates of the probe ${P_0}(r)$ and the object ${O_0}(r)$, where r is the coordinate on the object plane. The algorithm for MAIC-PIE in the mth iteration is described in the following:

  • (1) Propagate the diffracted light from the object ${\psi _{m,obj}}({r,{s_j}} )= {O_m}({r,{s_j}} )\cdot {P_m}(r )$ to the CCD1 plane, and obtain the corresponding wavefront ${\Psi _{m,CCD1}}({u,{s_j}} )= F\{{{\psi_{m,obj}}({r,{s_j}} )} \}$, where F is the Angular Spectrum propagator.
  • (2) Replace the amplitude of ${\Psi _{m,CCD1}}({u,{s_j}} )$ with $\sqrt {{I_{CCD1}}({u,{s_j}} )} $, propagate this corrected wavefront to the CCD2 plane, ${\Psi _{m,CCD2}}({u,{s_j}} )= F\{{{\Psi _{m,CCD1}}({u,{s_j}} )} \}$, replace the amplitude of ${\Psi _{m,CCD2}}({u,{s_j}} )$ with $\sqrt {{I_{CCD2}}({u,{s_j}} )} $, and propagate the corrected ${\Psi _{m,CCD2}}({u,{s_j}} )$ back to the object plane by the reverse Angular Spectrum to obtain ${\psi ^{\prime}_{m,obj}}({r,{s_j}} )$.
  • (3) Revise the object and probe by using updating functions
    $$O_{m+1}(r) = O_m(r) + \frac{|P_m(r)|}{|P_m(r)|_{\max}}\,\frac{P_m^{\ast}(r)}{|P_m(r)|^2 + \delta} \times \alpha \left[ \psi^{\prime}_{m,obj}(r,s_j) - \psi_{m,obj}(r,s_j) \right], $$
    $$P_{m+1}(r) = P_m(r) + \frac{|O_m(r)|}{|O_m(r)|_{\max}}\,\frac{O_m^{\ast}(r)}{|O_m(r)|^2 + \delta} \times \beta \left[ \psi^{\prime}_{m,obj}(r,s_j) - \psi_{m,obj}(r,s_j) \right], $$
    where α and β are numbers between 0 and 1, and δ is a regularization constant to ensure numerical stability.
  • (4) Propagate ${P_{m + 1}}(r )$ to the CCD1 plane by Angular Spectrum, obtain the wavefront ${\theta _{CCD1}}(u )$, and correct this wavefront by the following equation,
    $${\theta _{CCD1}}^\prime (u )= \left\{ {\begin{array}{cl} (1 - c){\theta_{CCD1}}(u )+ c\sqrt {{I_{illum,CCD\textrm{1}}}(u )} \frac{{{\theta_{CCD1}}(u )}}{{|{{\theta_{CCD1}}(u )} |}} &{\bmod (m,T) = 0}\\ \sqrt {{I_{illum,CCD\textrm{1}}}(u )} \frac{{{\theta_{CCD1}}(u )}}{{|{{\theta_{CCD1}}(u )} |}} & {\textrm{ otherwise}} \end{array}} \right., $$
    where we take T = 10 and c = 0.9 to prevent the algorithm from being trapped in local minima and to avoid stagnation.
  • (5) Propagate ${\theta _{CCD1}}^\prime (u )$ to the CCD2 plane by Angular Spectrum, obtain the wavefront ${\theta _{CCD2}}(u )$, and correct this wavefront by
    $${\theta _{CCD2}}^\prime (u )= \left\{ {\begin{array}{cl} (1 - c){\theta_{CCD2}}(u )+ c\sqrt {{I_{illum,CCD\textrm{2}}}(u )} \frac{{{\theta_{CCD2}}(u )}}{{|{{\theta_{CCD2}}(u )} |}} &{\bmod (m,T) = 0}\\ \sqrt {{I_{illum,CCD\textrm{2}}}(u )} \frac{{{\theta_{CCD2}}(u )}}{{|{{\theta_{CCD2}}(u )} |}} & {\textrm{ otherwise}} \end{array}} \right., $$
  • (6) Propagate ${\theta _{CCD2}}^\prime (u )$ back to the object plane by using the reverse Angular Spectrum, and obtain the updated probe ${P^{\prime}_{m + 1}}(r )$.
  • (7) Set ${P_{m + 1}}(r )= {P^{\prime}_{m + 1}}(r )$. Through steps (4) ∼ (7), the probe obtained in step (3) is further updated toward the true distribution, which better initializes the probe function in the early iterations.
Steps 1–7 are repeated until a termination condition is met.
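
For readers who prefer code, the following Python/NumPy sketch illustrates the angular spectrum propagator F and the per-position update of steps (1)–(3); the probe refinement of steps (4)–(7) follows the same propagate-and-constrain pattern using the pre-recorded illumination intensities. This is a minimal sketch under our own assumptions (a square grid with uniform pixel pitch, illustrative function and variable names), not code from the paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, distance):
    """Propagate a complex field by 'distance' with the angular spectrum method.

    field      : 2-D complex array on a square grid
    wavelength : illumination wavelength (same length unit as dx and distance)
    dx         : pixel pitch
    distance   : propagation distance (negative values propagate backwards)
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * distance * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def maic_pie_position_update(obj_patch, probe, I_ccd1, I_ccd2,
                             wavelength, dx, d1, d12,
                             alpha=1.0, beta=1.0, delta=1e-8):
    """Steps (1)-(3) at one scan position: dual intensity constraint, then updates.

    obj_patch      : object estimate cropped to the probe support at position s_j
    probe          : current probe estimate
    I_ccd1, I_ccd2 : measured intensities at this position
    d1             : object-to-CCD1 distance, d12 : CCD1-to-CCD2 distance
    """
    psi = obj_patch * probe                                        # exit wave
    Psi1 = angular_spectrum(psi, wavelength, dx, d1)               # step (1): to CCD1
    Psi1 = np.sqrt(I_ccd1) * np.exp(1j * np.angle(Psi1))           # step (2): CCD1 constraint
    Psi2 = angular_spectrum(Psi1, wavelength, dx, d12)             #           on to CCD2
    Psi2 = np.sqrt(I_ccd2) * np.exp(1j * np.angle(Psi2))           #           CCD2 constraint
    psi_new = angular_spectrum(Psi2, wavelength, dx, -(d1 + d12))  # back to the object plane

    diff = psi_new - psi                                           # step (3): ePIE-style updates
    obj_new = obj_patch + (np.abs(probe) / np.abs(probe).max()
                           * np.conj(probe) / (np.abs(probe) ** 2 + delta)) * alpha * diff
    probe_new = probe + (np.abs(obj_patch) / np.abs(obj_patch).max()
                         * np.conj(obj_patch) / (np.abs(obj_patch) ** 2 + delta)) * beta * diff
    return obj_new, probe_new
```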

3. Simulation

To verify the feasibility of MAIC-PIE, simulations have been conducted. The system parameters are chosen as follows: the working wavelength is λ=632.8 nm; the dimension of the two CCDs is 1024×1024 pixels; the pixel size is 4.4 µm×4.4 µm; the distances between the aperture stop, object, CCD1, and CCD2 are 10, 64, and 10 mm, respectively; and a grid of 5 × 5 positions with a step of 46 pixels is used. The true high-resolution complex function is created from the images “cameraman” and “westconcordorthophoto” for its amplitude and phase, with the modulus scaled to the range 0–1 and the phase to the range 0–π rad. Compared to a conventional ptychography setup, the light in MAIC-PIE is split onto two sensors, so the intensity at each sensor is decreased. In our simulation, the split ratio of the beam splitter is 50:50, and this ratio is used to scale the simulated intensities. Figure 2 shows the results retrieved by ePIE and MAIC-PIE after 200 iterations. Compared to the retrieved quality in Figs. 2(d) and 2(h), the image quality in Fig. 2(n) is clearly improved. For ePIE and MAIC-PIE, the root mean square (RMS) error of the retrieved amplitude and phase at each iteration is evaluated to assess the reconstruction accuracy and convergence speed. The RMS is calculated only over the central part of the reconstructed image, since the recovery near the scan boundary is poor due to the small overlap there, and the RMS value would not be meaningful if this area were included (see the sketch after this paragraph). Figure 3 illustrates the RMS of the retrieved amplitude and phase errors. Both ePIE and MAIC-PIE are implemented with the same step size α = β = 1, and each algorithm is run for 200 iterations. For a fixed step size, the MAIC-PIE method performs better than the ePIE method and reaches a good solution with minimal cross-talk between the phase and amplitude. The RMS curves of both algorithms show that the quality of the reconstruction improves with increasing iteration number, but at different rates. After 50 iterations, errors are still prominent in the results retrieved by ePIE, while much smaller errors appear in the results retrieved by MAIC-PIE. After 100 iterations, prominent errors remain in the results of ePIE, while they are largely removed in the results of MAIC-PIE. Additionally, in the first 50 iterations, the probe and object retrieved by MAIC-PIE converge much faster than those retrieved by ePIE for the test object used, indicating that MAIC-PIE better initializes the probe and object functions in the early iterations. These findings show that the MAIC-PIE method converges much faster than the conventional ePIE method for the test object we used.
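
The restriction of the RMS metric to the well-overlapped central region can be expressed in a few lines. This is a minimal sketch; the crop margin and the variable names (obj_rec, obj_true) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def central_rms(recon, truth, margin=128):
    """RMS error between reconstruction and ground truth, excluding the poorly
    overlapped border region (a fixed margin of pixels on each side)."""
    r = recon[margin:-margin, margin:-margin]
    t = truth[margin:-margin, margin:-margin]
    return np.sqrt(np.mean(np.abs(r - t) ** 2))

# Example: separate amplitude and phase errors of the retrieved object
# rms_amp = central_rms(np.abs(obj_rec), np.abs(obj_true))
# rms_phs = central_rms(np.angle(obj_rec), np.angle(obj_true))
```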

Fig. 2. Numerical demonstration. (a) ∼ (e) the probe and object retrieved by ePIE with the axial distance d = 64 mm; (f) ∼ (j) the probe and object retrieved by ePIE with the axial distance d = 74 mm; (k) ∼ (o) the probe and object retrieved by MAIC-PIE.

Fig. 3. Comparisons of the reconstruction accuracy. (a) amplitude and (b) phase reconstruction accuracy of the probe; (c) amplitude and (d) phase reconstruction accuracy of the object.

To better explain the fast convergence and small reconstruction error achieved by MAIC-PIE, one group of amplitude and phase error trajectories is selected and plotted in Fig. 4. Meanwhile, MAIC-PIE is combined with the adaptive step-size method proposed by Zuo et al. [19] to further increase the convergence rate and the robustness to noise (a simplified sketch of the step-size rule is given below). The error of MAIC-PIE with adaptive step size decreases more rapidly than that of ePIE within the first 20 iterations. For a fixed iteration number, MAIC-PIE with adaptive step size reaches a much lower error than the ePIE methods. In addition, for the same step size, MAIC-PIE converges much more rapidly than ePIE, which is consistent with the findings obtained from Fig. 3. The trajectory of MAIC-PIE with adaptive step size is very close to that of PIE with a known probe after 60 iterations, and after 80 iterations the two methods reach the same reconstruction accuracy, indicating that the probe function retrieved by MAIC-PIE is very close to the true probe after 60 iterations. As a result, we can reasonably conclude that the MAIC-PIE method outperforms ePIE, achieving faster convergence and lower misadjustment error simultaneously.
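
The adaptive step-size idea can be summarized as: keep the step sizes α and β while the error metric keeps decreasing, and shrink them once it stagnates. The following is a simplified sketch of that rule; the stagnation test, tolerance, and reduction factor are illustrative assumptions, and the exact strategy is given in Ref. [19].

```python
def adapt_step(alpha, beta, err_history, shrink=0.5, tol=1e-3):
    """Reduce the update step sizes once the error metric stops improving.

    err_history : list of error-metric values, one per completed iteration
    """
    if len(err_history) >= 2 and err_history[-2] - err_history[-1] < tol * err_history[-2]:
        alpha *= shrink
        beta *= shrink
    return alpha, beta
```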

Fig. 4. Comparisons of the amplitude (a) and phase (b) reconstruction accuracy.

Next, we compare the performance of MAIC-PIE and ePIE with adaptive step size under 40 dB and 25 dB Gaussian noise in Fig. 5. Figures 5(a) and 5(c) display the RMS of the phase error; similar behavior is observed for the amplitude error curves. MAIC-PIE also converges more rapidly than ePIE with adaptive step size under 40 dB Gaussian noise. Meanwhile, at the same iteration, MAIC-PIE with adaptive step size reaches a lower error than ePIE with adaptive step size. When dealing with experimental data, the true complex function of the object is generally not available, so the RMS of the amplitude and phase error cannot be calculated. Alternatively, a real-space error metric based on the captured intensity images is chosen as an indicator of the convergence rate and reconstruction accuracy. The error metric E is calculated as follows:

$$E = \frac{\sum\limits_j \left| \sqrt{I_{CCD1}(u,s_j)} - \left| \Psi_{m,CCD1}(u,s_j) \right| \right|^2}{\sum\limits_j I_{CCD1}(u,s_j)}. $$
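
In code, the metric of Eq. (5) can be accumulated over the scan positions as follows; the sums run over all pixels and positions, and the variable names are illustrative assumptions.

```python
import numpy as np

def error_metric(I_ccd1_stack, Psi_ccd1_stack):
    """Real-space error metric E of Eq. (5): normalized squared difference between
    measured and modeled CCD1 amplitudes, summed over all scan positions j."""
    num = 0.0
    den = 0.0
    for I_meas, Psi_model in zip(I_ccd1_stack, Psi_ccd1_stack):
        num += np.sum((np.sqrt(I_meas) - np.abs(Psi_model)) ** 2)
        den += np.sum(I_meas)
    return num / den
```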

Fig. 5. Comparisons of the reconstruction accuracy. (a) phase reconstruction accuracy and (b) error metric in the case of 45 dB Gaussian noise; (c) phase reconstruction accuracy and (d) error metric in the case of 25 dB Gaussian noise.

The corresponding error metric E as a function of the iteration number is plotted in Figs. 5(b) and 5(d). In the case of small noise, MAIC-PIE requires fewer iterations to converge and thus achieves the best reconstruction quality with less computational overhead.

4. Experiment

The experiments are conducted in our laboratory to verify the performance of the proposed method. The light source is a He-Ne laser with a wavelength of 632.8 nm. Two 8-bit CCD cameras (DMK 23G274) with 1600×1200 pixels and a pixel size of 4.4 µm×4.4 µm serve as CCD1 and CCD2. The object is laterally displaced to a series of overlapping adjacent positions via an x-y positioning stage. The scanning step is 0.2 mm, and the effective radius of the aperture is about 2 mm. The distances between the aperture stop, object, CCD1, and CCD2 are 10, 65.3, and 10 mm, respectively. Errors in these distances introduce artifacts into the reconstruction. To reduce these distance errors, CCD1 and CCD2 are placed on a motorized positioning system (Thorlabs KMTS25E) to adjust the axial distances. Using the iterative autofocusing method proposed in Ref. [24], the distance between the object and CCD1 and the distance between the object and CCD2 can be obtained individually, and the distance between CCD1 and CCD2 can then also be determined. Meanwhile, an arbitrary distance can be set with the help of the motorized positioning system. Figure 6 shows the experimental results for the line pairs of a USAF 1951 resolution target after 100 iterations. Obvious artifacts are superimposed on the final result retrieved by ePIE, which not only obfuscate the background but also distort the small-scale features. Compared to the result retrieved by ePIE, the MAIC-PIE method produces a better image with a uniform background and all groups of features clearly resolved.

Fig. 6. Experimental results of a resolution target after 100 iterations. (a) amplitude and (c) phase retrieved by ePIE with adaptive step size; (b) amplitude and (d) phase retrieved by MAIC-PIE with adaptive step size.

MAIC-PIE needs extra propagation steps between CCD1 and CCD2, so it consumes more time than ePIE when run for the same number of iterations. However, the reconstruction error of MAIC-PIE decays much faster than that of ePIE. Figure 7 shows the error metric E of the data in Fig. 6 as a function of the iteration number. The threshold Et = 0.01 is set as the termination condition (a minimal sketch of this stopping rule is given below). When the error metric reaches or falls below this threshold, the minimum numbers of iterations for ePIE and MAIC-PIE are 41 and 18, and the corresponding calculation times are 1319.79 s and 849.24 s, respectively, on a computer with an i7-6700HQ CPU and 24 GB of RAM. The calculation time of MAIC-PIE is thus 35.65% shorter than that of ePIE.
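
The error-threshold stopping rule used for the timing comparison can be sketched as follows; iteration_fn and error_fn stand for one full reconstruction pass and the Eq. (5) metric, both assumed to be defined elsewhere, and the defaults are illustrative.

```python
def run_until_converged(iteration_fn, error_fn, E_t=0.01, max_iter=200):
    """Repeat full reconstruction iterations until the error metric E reaches E_t."""
    errors = []
    for m in range(max_iter):
        iteration_fn(m)        # one full pass over all scan positions
        E = error_fn()         # evaluate Eq. (5) on the current estimates
        errors.append(E)
        if E <= E_t:
            break
    return errors
```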

Fig. 7. Comparisons of the error metric.

To show the validity of the proposed method in imaging biological specimens, the aforementioned experiment is repeated using a fly’s leg as the test object. Figure 8 presents the experimental results of the fly’s leg for the termination threshold Et = 0.0003. The minimum numbers of iterations for ePIE and MAIC-PIE are 98 and 50, and the corresponding calculation times are 3154.62 s and 2359 s. These findings verify that the proposed method effectively improves the convergence speed and reduces the calculation time.

Fig. 8. Experimental results of a fly’s leg. (a) amplitude and (c) phase retrieved by ePIE with adaptive step size; (b) amplitude and (d) phase retrieved by MAIC-PIE with adaptive step size.

5. Discussion

In this paper, the positions of the intensities captured on the multiple CCDs should be exactly the same. However, the setup used here cannot by itself avoid registration errors between CCD1 and CCD2. Before calibrating the registration error, its influence on the reconstruction is analyzed in Fig. 9, where Dx and Dy denote the horizontal and vertical registration errors (the sketch below illustrates how such a sub-pixel error can be emulated in simulation). When the registration error is less than or equal to 0.2 pixels, its effect on the reconstructed image is negligible. However, when the registration error is larger than 0.5 pixels, the amplitude and phase information of the object is contaminated, which decreases the image quality.
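
In simulation, a sub-pixel registration error (Dx, Dy) can be emulated by shifting the CCD2 intensity images with the Fourier shift theorem before feeding them to the reconstruction. The sketch below is our own illustration of that test, with illustrative names; it is not code from the paper.

```python
import numpy as np

def subpixel_shift(image, dx_pix, dy_pix):
    """Shift a 2-D image by (dx_pix, dy_pix) pixels using the Fourier shift theorem."""
    ny, nx = image.shape
    fx = np.fft.fftfreq(nx)
    fy = np.fft.fftfreq(ny)
    FX, FY = np.meshgrid(fx, fy)
    phase_ramp = np.exp(-2j * np.pi * (FX * dx_pix + FY * dy_pix))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * phase_ramp))

# Emulate a registration error of 0.5 pixels in both directions on every CCD2 frame
# I_ccd2_shifted = [subpixel_shift(I, 0.5, 0.5) for I in I_ccd2_stack]
```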

Fig. 9. Effect of the registration error on the reconstruction.

The effect of registration errors between CCD1 and CCD2 is similar to that of scanning position errors in ptychography [15–17]. In order to correct the registration error, the annealing algorithm proposed in Ref. [15] is applied to search for the positional relation between CCD1 and CCD2 (a sketch of such a search is given below). The initial registration error is chosen as Dx = Dy = 0.5 pixels in simulation, and Fig. 10 illustrates the calibration trend of the registration error. The registration errors are effectively corrected, and the effect of the remaining registration error on the reconstruction is negligible.
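
The following sketch mirrors the spirit of an annealing-style offset search: candidate offsets are drawn around the current estimate, the one that lowers the reconstruction error is kept, and the search radius shrinks over the iterations. It is a simplified illustration under our own assumptions (parameter defaults, the err_of_offset callable), not the exact algorithm of Ref. [15].

```python
import numpy as np

def anneal_registration(err_of_offset, dx0=0.0, dy0=0.0,
                        radius=0.5, shrink=0.9, trials=5, iters=50, rng=None):
    """Annealing-style search for the CCD1-CCD2 registration offset (Dx, Dy).

    err_of_offset : callable returning the reconstruction error metric when the
                    CCD2 frames are shifted by a candidate (dx, dy) in pixels
    """
    rng = np.random.default_rng() if rng is None else rng
    dx, dy = dx0, dy0
    best = err_of_offset(dx, dy)
    for _ in range(iters):
        for _ in range(trials):
            cand_dx = dx + rng.uniform(-radius, radius)
            cand_dy = dy + rng.uniform(-radius, radius)
            e = err_of_offset(cand_dx, cand_dy)
            if e < best:              # keep the candidate that lowers the error
                best, dx, dy = e, cand_dx, cand_dy
        radius *= shrink              # gradually narrow the search range
    return dx, dy
```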

Fig. 10. The calibrated trend of the registration error during iterations.

6. Conclusion

ePIE is a computational imaging approach that simultaneously retrieves the object and probe. To boost the convergence speed of ePIE, the light diffracted from the object is axially separated by using a beam splitter and inserting multiple CCDs into the split beams. Based on this strategy, the MAIC-PIE method is proposed. The multiple CCDs capture multiple axial intensity images, which are employed as an additional constraint to strengthen the connection between the intensity and the unknown phase during the iterative process. The multiple axial intensity constraints are imposed on the CCD1 and CCD2 planes to correct the wavefront’s amplitude on these planes, and they help build reasonable guesses of the probe and object in the first few iterations. These guesses are carried into the subsequent iterations to better retrieve the probe and object, which dramatically accelerates the convergence and reduces the computation time. Additionally, combining MAIC-PIE with an adaptive step size significantly improves the convergence and the robustness of the reconstruction to noise, offering fast convergence with short computation time. As verified by simulations and experiments, MAIC-PIE consistently shows good performance with fast convergence. Compared with the conventional ePIE method, MAIC-PIE approaches a reasonable solution quickly within the first few iterations and improves the quality of the reconstruction. As a result, the MAIC-PIE method proposed in this work provides a simple and effective way to accelerate convergence and improve reconstruction quality in ptychographic imaging.

However, the system setup in this paper cannot be directly implemented in the X-ray regime. To achieve multiple axial intensity images in the X-ray regime, the CCD can be mounted on a precise axial displacement device, or a grating can be placed behind the object to produce ±1st or other diffraction orders, with two CCDs placed in the diffracted beams at different axial positions. Hence, the method of accelerating the convergence using multiple axial intensity constraints can also be applied in the X-ray regime. Further investigations will focus on reducing the number of CCDs, improving the calculation efficiency, investigating adaptations of the diffraction pattern update step, and improving the quality of the reconstructed image.

Funding

Natural Science Foundation of Jiangsu Province (BK20190954); National Natural Science Foundation of China (19KJB140007); Natural Science Foundation of Shandong Province (ZR2019QF013); China Postdoctoral Science Foundation (2018M630773).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

2. P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. 45(34), 8596–8605 (2006). [CrossRef]  

3. J. F. Binamira and P. F. Almoro, “Accelerated single-beam multiple-intensity reconstruction using unordered propagations,” Opt. Lett. 44(12), 3130–3133 (2019). [CrossRef]  

4. M. Agour, P. F. Almoro, and C. Falldorf, “Investigation of smooth wave fronts using SLM-based phase retrieval and a phase diffuser,” J. Eur. Opt. Soc. Rapid Publ. 7, 12046 (2012). [CrossRef]  

5. Y. Bai, S. P. Vettil, X. Pan, C. Liu, and J. Zhu, “Ptychographic microscopy via wavelength scanning,” APL Photonics 2(5), 056101 (2017). [CrossRef]  

6. J. Dou, T. Zhang, C. Wei, Z. Yang, Z. Gao, J. Ma, J. Li, Y. Hu, and D. Zhu, “Single-shot ptychographic iterative engine based on chromatic aberrations,” Opt. Commun. 440, 139–145 (2019). [CrossRef]  

7. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

8. D. Claus, G. Pedrini, and W. Osten, “Iterative phase retrieval based on variable wavefront curvature,” Appl. Opt. 56(13), F134–F137 (2017). [CrossRef]  

9. C. Liu, Z. Jian-Qiang, and J. Rodenburg, “Influence of the illumination coherency and illumination aperture on the ptychographic iterative microscopy,” Chin. Phys. B 24(2), 024201 (2015). [CrossRef]  

10. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013). [CrossRef]  

11. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]  

12. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning x-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

13. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

14. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

15. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]  

16. M. Beckers, T. Senkbeil, T. Gorniak, K. Giewekemeyer, T. Salditt, and A. Rosenhahn, “Drift correction in ptychographic diffractive imaging,” Ultramicroscopy 126, 44–47 (2013). [CrossRef]  

17. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]  

18. P. Godard, M. Allain, V. Chamard, and J. Rodenburg, “Noise models for low counting rate coherent diffraction imaging,” Opt. Express 20(23), 25914–25934 (2012). [CrossRef]  

19. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

20. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]  

21. A. Suzuki, S. Furutaku, K. Shimomura, K. Yamauchi, Y. Kohmura, T. Ishikawa, and Y. Takahashi, “High-resolution multislice X-ray ptychography of extended thick objects,” Phys. Rev. Lett. 112(5), 053903 (2014). [CrossRef]  

22. S. Marchesini, H. Krishnan, B. J. Daurer, D. A. Shapiro, T. Perciano, J. A. Sethian, and F. R. Maia, “SHARP: a distributed GPU-based ptychographic solver,” J. Appl. Crystallogr. 49(4), 1245–1252 (2016). [CrossRef]  

23. A. M. Maiden, G. R. Morrison, B. Kaulich, A. Gianoncelli, and J. M. Rodenburg, “Soft X-ray spectromicroscopy using ptychography with randomly phased illumination,” Nat. Commun. 4(1), 1669 (2013). [CrossRef]  

24. J. Dou, Z. Gao, J. Ma, C. Yuan, Z. Yang, and L. Wang, “Iterative autofocusing strategy for axial distance error correction in ptychography,” Opt. Lasers Eng. 98, 56–61 (2017). [CrossRef]  
