Compensation for the setup instability in ptychographic imaging

Open Access

Abstract

The high-frequency vibration of the imaging system degrades the quality of ptychographic reconstructions by acting as a low-pass filter on the ideal diffraction patterns. In this study, we demonstrate that by subtracting deliberately blurred diffraction patterns from the recorded patterns and adding the properly amplified difference back to the original data, the high-frequency components lost to the vibration of the setup can be recovered, and thus the image quality can be distinctly improved. Because no prior knowledge regarding the vibration properties of the imaging system is needed, the proposed method is general and simple and has applications in several research fields.

© 2017 Optical Society of America

1. Introduction

Coherent diffraction imaging (CDI) is a promising technique for obtaining the complex transmission function of a specimen from recorded diffraction intensities. As a lens-free technique, CDI can bypass the resolution limits imposed by the poor focusing optics available at short wavelengths [1,2] and can, in theory, reach the diffraction-limited resolution. With X-rays and high-energy electrons, a resolution of nanometers or angstroms can be achieved; thus, CDI is becoming an important tool in the material and biological sciences [3–7]. Because the performance of traditional CDI algorithms is not fully satisfactory with regard to convergence, accuracy, and reliability, several improved CDI methods have been proposed [8–11]. The ptychographic iterative engine (PIE) [12] is a scanning version of the CDI technique in which the specimen is scanned through a localized illumination beam over a grid of positions and the resulting diffraction patterns are recorded. Using an iterative scheme with a proper overlap ratio between adjacent scanning positions, the modulus and phase of the transmission functions of the specimen and the illumination beam can be reconstructed accurately and rapidly [13,14]. In theory, PIE can generate images with a resolution limited only by the numerical aperture of the detector; in practice, however, the resolution is degraded by the flaws of the imaging system, especially for imaging with X-ray and electron beams. The coherence of synchrotron radiation sources is not as good as that of a laser beam and has proven to be the largest barrier preventing PIE from reaching the theoretical diffraction-limited resolution [15,16]. Several algorithms have been investigated to improve the image quality of PIE with partially coherent illumination [17–22]. Although these methods require complete or partial prior knowledge of the illumination properties or time-consuming computation, and may slightly compromise the spatial resolution, the quality of the reconstruction can be distinctly improved in many cases. Furthermore, the recently developed technique of Fourier ptychography has the potential to avoid the influence of incoherence and achieve high-quality reconstructions [23,24].

Another factor that limits the practical resolution of PIE is the instability of the imaging system, including the vibration of the mechanical scanning system and the tiny pointing-direction changes and transverse shifts of the radiation beam [25,26]. Because the wavelengths of X-rays and electron beams can be much smaller than 1 nm, even a tiny departure of the illumination beam from the correct position and direction can generate obvious errors in the final reconstruction. Numerous methods have been proposed to correct low-frequency vibration with a period far longer than the exposure time of the detector [27–30]. The high-frequency vibration of the experimental system, by contrast, makes the recorded diffraction intensity a summation of many diffraction intensities formed by the changing illumination during the exposure of the detector [31]. Although the influence of high-frequency vibration can be treated as incoherence of the illumination, the methods [17–19,22] mentioned above require complete or partial prior knowledge of the vibration properties, including the frequencies and amplitude, or massive data processing. In practice, however, the parameters of the vibration of the imaging system are difficult to measure in real time.
The dynamic vibration of the sample, or equivalently of the probe, can be handled by the mixed state method [20,21] with multiple probe modes [31] or identified by the “low-rank ptychography” method with a tunable solution rank [32]. In these two methods, the number of illumination modes or the solution rank is related to the vibration properties, and it increases greatly when the vibration is complex. To deal effectively with the image-quality degradation induced by the setup instability, we need to examine the principle of its influence on the recorded data and the final reconstruction and then develop a simple method to circumvent this problem.

In this study, we provide an easily understood mathematical explanation of how the recorded diffraction patterns and the final reconstruction are blurred by the instability of the imaging system and then propose a simple numerical method for recovering the lost high-frequency components. The proposed method requires no prior knowledge of the characteristics of the high-frequency vibration of the imaging system, and it can be extended to other CDI methods using X-rays and electron beams to solve problems related to imaging-system instability.

2. Principle of the method

In the PIE method, the specimen O(r) is fixed on a two-dimensional (2D) translation stage and illuminated by a localized probe P(r). Assuming that the specimen is sufficiently thin, the wave exiting the specimen is φ(r, R) = O(r − R)P(r), and the recorded diffraction intensity is I(k, R) ∝ |ℑ[φ(r, R)]|² in most experiments with short-wavelength sources, where k is the reciprocal coordinate with respect to the real-space coordinate r in the specimen plane, and R denotes the position of the specimen during the raster scan. From the recorded diffraction patterns, the complex amplitudes of the specimen and the probe can be reconstructed. A flowchart of the PIE reconstruction process is shown in Fig. 1, and a detailed description can be found in the literature [12,13].
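
To make this forward model concrete, the following minimal sketch (NumPy, with hypothetical names, integer-pixel scan shifts, and a simple binary circular probe assumed) computes the exit wave φ(r, R) = O(r − R)P(r) and the far-field intensity |ℑ[φ]|² for one scan position:

```python
import numpy as np

def exit_wave(obj, probe, shift):
    """Thin-specimen exit wave phi(r, R) = O(r - R) * P(r).
    `shift` is the scan position R in pixels (integer pixels assumed here)."""
    return np.roll(obj, shift=shift, axis=(0, 1)) * probe

def diffraction_intensity(obj, probe, shift):
    """Far-field diffraction intensity I(k, R) ~ |FT[phi(r, R)]|^2."""
    phi = exit_wave(obj, probe, shift)
    return np.abs(np.fft.fftshift(np.fft.fft2(phi)))**2

# Toy example: a random phase object and a binary circular probe on a 256 x 256 grid.
N = 256
obj = np.exp(1j * 2 * np.pi * np.random.rand(N, N))
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
probe = (xx**2 + yy**2 < (N // 8)**2).astype(complex)
pattern = diffraction_intensity(obj, probe, shift=(10, 20))
```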

Fig. 1 Flowchart of the reconstruction process using the PIE method (the proposed compensation method is shown in the red box).

2.1. Influence of the setup instability

In the PIE algorithm, the illumination probe is assumed to be absolutely static during the data acquisition; however, its direction and position change continually within a small range with respect to the specimen and detector owing to the instability of the imaging system. In most cases, the direction and position of the illumination beam change simultaneously during the exposure of the detector, but for simplicity, we analyze them separately to determine how the recorded data are influenced mathematically and then consider their combined effects.

According to the principles of Fourier optics, any illumination beam can be decomposed into a series of spherical waves of different strengths; thus, we can assume a spherical illuminating probe without loss of generality. The spherical illumination is expressed in Fourier form as P̃(k) = W(k)exp(−jπλzk²), where λ is the wavelength of the probe, z is the distance between the focal spot of the probe and the specimen, and W(k) is determined by the numerical aperture of the illuminating optics. For a short wavelength or in the far field, the diffraction distribution in the detector plane is the Fourier transform of the wave exiting the specimen. The complex amplitude of the diffraction pattern in the detector plane can be expressed as

$$\Phi(k) = \tilde{P}(k) \otimes \tilde{O}(k) = \sum_n \tilde{P}(k - k_n)\,\tilde{O}(k_n), \tag{1}$$
where Õ(k) is the Fourier transform of the transmission function of the specimen. The recorded intensity with static illumination is
$$I(k) = \Big|\sum_n \tilde{P}(k - k_n)\,\tilde{O}(k_n)\Big|^2 = I_0(k) + \sum_{m \neq n} A_m(k) A_n(k) \cos[\theta(k)]. \tag{2}$$
The first term, I_0(k), is the intensity summation of all diffracted beams, and the second term, Σ_{m≠n} A_m(k)A_n(k)cos[θ(k)], represents the interference between different spatial-frequency components:
$$
\begin{aligned}
I_0(k) &= \sum_n \big|\tilde{P}(k - k_n)\,\tilde{O}(k_n)\big|^2, \\
A_m(k) &= \big|\tilde{P}(k - k_m)\,\tilde{O}(k_m)\big|, \\
A_n(k) &= \big|\tilde{P}(k - k_n)\,\tilde{O}(k_n)\big|, \\
\theta(k) &= \pi\lambda z\big[2k(k_m - k_n) - (k_m^2 - k_n^2)\big] + \phi_{mn}.
\end{aligned} \tag{3}
$$
Here, k_m − k_n is the frequency of the interference fringe formed by the mth and nth diffraction orders, and ϕ_{mn} is the additional phase introduced by the specimen. Considering the coordinate transformation r_c = λLk in the CCD plane, where L is the distance between the specimen and the CCD, the diffraction-pattern intensity can be described in the frequency domain as
$$\tilde{I}(u) = \tilde{I}_0(u) + \sum_{m \neq n} \tilde{A}_m(u) \otimes \tilde{A}_n(u)\Big|_{u = z(k_m - k_n)/L}, \tag{4}$$
where u is the reciprocal coordinate with respect to the real space coordinate rc in the CCD plane.

When considering the pointing instability of the imaging system, the illumination beam tilted by an angle α can be expressed as

$$P'(r) = P(r)\exp\!\left(j\frac{2\pi r \sin\alpha}{\lambda}\right). \tag{5}$$
The Fourier transform of the tilted probe is then P̃′(k) = P̃(k − Δk), where Δk = sinα/λ. The recorded intensity with tilted illumination becomes
$$I'(k) = \Big|\sum_n \tilde{P}(k - \Delta k - k_n)\,\tilde{O}(k_n)\Big|^2 = I(k - \Delta k). \tag{6}$$
Assume that the vibration in the pointing direction of the illumination beam follows the normal distribution H_1(Δk) = exp(−Δk²/K²), where K is a constant related to the vibration properties of the imaging system. The recorded intensity with high-frequency vibration in the pointing direction of the illuminating beam can then be interpreted as a summation of the diffraction patterns of all possible illuminations with different pointing directions. Thus, it can be expressed as
$$I_{v1}(k) = \frac{\int_{-\infty}^{+\infty} I(k - \Delta k)\, H_1(\Delta k)\, d\Delta k}{\int_{-\infty}^{+\infty} H_1(\Delta k)\, d\Delta k} = \frac{1}{\sqrt{\pi}\,K}\, I(k) \otimes \exp\!\left(-\frac{k^2}{K^2}\right). \tag{7}$$
Considering the coordinate transform rc = λLk in the detector plane, the recorded intensity can be expressed in the frequency domain as
$$\tilde{I}_{v1}(u) = \tilde{I}(u)\exp\!\left[-(\pi\lambda L K)^2 u^2\right]. \tag{8}$$
Thus, the influence of the vibration in the pointing direction of the illuminating beam acts as a low-pass filter on the ideal diffraction patterns.

On the other hand, when the illumination probe suffers from transverse positioning vibration, the recorded diffraction pattern can be expressed as a summation of the diffraction intensities formed by all possible illumination beams with different transverse shifts. The probe with a transverse shift δ is P″(r) = P(r + δ), and its Fourier transform is P̃″(k) = P̃(k)exp(j2πkδ). The corresponding diffraction-pattern intensity is

$$I''(k) = \Big|\sum_n \tilde{P}''(k - k_n)\,\tilde{O}(k_n)\Big|^2 = I_0(k) + \sum_{m \neq n} A_m(k) A_n(k) \cos[\theta(k + \Delta)]. \tag{9}$$
Assume that the transverse shift of the illuminating beam has a normal distribution H_2(Δ) = exp(−Δ²/D²), where D is determined by the standard deviation of Δ = δ/(λz). The recorded intensity with high-frequency positioning vibration is a summation of different intensities for different illumination shifts:
$$I_{v2}(k) = \frac{\int_{-\infty}^{+\infty} I''(k)\, H_2(\Delta)\, d\Delta}{\int_{-\infty}^{+\infty} H_2(\Delta)\, d\Delta} = I_0(k) + \sum_{m \neq n} A_m(k) A_n(k) \cos[\theta(k)] \exp\!\left[-(\pi\lambda z D)^2 (k_m - k_n)^2\right]. \tag{10}$$
The Fourier transform of the recorded diffraction intensity becomes
$$\tilde{I}_{v2}(u) = \tilde{I}_0(u) + \exp\!\left[-(\pi\lambda L D)^2 u^2\right] \times \sum_{m \neq n} \tilde{A}_m(u) \otimes \tilde{A}_n(u)\Big|_{u = z(k_m - k_n)/L}. \tag{11}$$
Compared with Eq. (4), the position vibration degrades the contrast of the interference fringes in the recorded intensity, acting as a low-pass filter.

In practice, the pointing and transverse vibration of the illumination beam can occur simultaneously; thus, the Fourier transform of the recorded diffraction intensity is

$$\tilde{I}_v(u) = \left\{\tilde{I}_0(u) + \exp\!\left[-(\pi\lambda L D)^2 u^2\right] \times \sum_{m \neq n} \tilde{A}_m(u) \otimes \tilde{A}_n(u)\right\} \exp\!\left[-(\pi\lambda L K)^2 u^2\right]. \tag{12}$$
Because Ĩ0(u) has a narrow frequency band for spherical illumination, Eq. (12) can be approximated as
$$\tilde{I}_v(u) \approx \tilde{I}_0(u) + \exp(-C^2 u^2) \sum_{m \neq n} \tilde{A}_m(u) \otimes \tilde{A}_n(u), \tag{13}$$
where C = πλL√(K² + D²). Clearly, the instability of the illumination beam during the exposure of the detector leads to the loss of high-frequency components of the diffraction intensity, and the contrast of the interference fringes is suppressed to different degrees depending on their frequency. As reported in the literature [22], a change in the contrast of the diffraction patterns leads to mathematical ambiguity and generates errors in the final reconstruction.
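
As a numerical illustration of this low-pass effect, the sketch below (a toy model under simplifying assumptions, not the authors' code) averages many laterally shifted copies of an ideal pattern, as in Eq. (7); comparing the Fourier transforms of the averaged and ideal patterns then reveals the Gaussian roll-off of Eqs. (8) and (13). The jitter amplitude sigma_px is expressed in detector pixels.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def vibrated_pattern(I_ideal, sigma_px, n_samples=200, seed=0):
    """Average of many laterally shifted copies of the ideal pattern, i.e. a
    discretized version of Eq. (7); sigma_px is the pointing jitter expressed
    as a shift (in detector pixels) of the whole diffraction pattern."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(I_ideal.shape)
    for _ in range(n_samples):
        dy, dx = rng.normal(0.0, sigma_px, size=2)
        acc += nd_shift(I_ideal, (dy, dx), order=1, mode='nearest')
    return acc / n_samples

# The ratio |FT[I_v]| / |FT[I_ideal]| of the two patterns' Fourier transforms
# rolls off as a Gaussian in u, i.e. the jitter acts as a low-pass filter on
# the recorded intensity (cf. Eqs. (8) and (13)).
```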

2.2. Compensation method

At first glance, it appears that the Fourier components of the ideal diffraction pattern could be recovered by dividing Ĩ_v(u) by exp(−C²u²) for every possible value of C until a satisfactory reconstruction is achieved. However, because this operation seriously amplifies errors and noise, which can submerge the expected resolution improvement, it cannot be adopted in practice.

In the proposed method, the recorded intensities I_v(k) are modified before the conventional PIE process, as shown in Fig. 1. The recorded diffraction intensities are deliberately blurred via convolution with the Gaussian function exp(−k²/K_v²), where K_v is a constant chosen so that the recorded diffraction pattern is only slightly blurred. The deliberately blurred intensity pattern is expressed in the frequency domain as

$$\tilde{I}'_v(u) = \tilde{I}_0(u) + \exp(-C^2 u^2)\exp(-C_v^2 u^2) \sum_{m \neq n} \tilde{A}_m(u) \otimes \tilde{A}_n(u), \tag{14}$$
where C_v = πλLK_v. We subtract the blurred pattern I′_v(k) from the original recorded diffraction pattern I_v(k) and then add this difference, multiplied by a constant β, back to I_v(k):
$$I_c(k) = I_v(k) + \beta\left[I_v(k) - I'_v(k)\right]. \tag{15}$$
The Fourier transform of the modified intensity pattern is
$$\tilde{I}_c(u) = \tilde{I}_0(u) + \left[1 + \beta - \beta\exp(-C_v^2 u^2)\right]\exp(-C^2 u^2) \sum_{m \neq n} \tilde{A}_m(u) \otimes \tilde{A}_n(u). \tag{16}$$
Clearly, when the window [1 + β − βexp(−C_v²u²)]exp(−C²u²) in Eq. (16) is wider than exp(−C²u²), Ĩ_c(u) is close to the Fourier transform of the vibration-free diffraction pattern, and the influence of the imaging-system instability on the reconstructed image can be remarkably suppressed.
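
Computationally, Eqs. (14)–(16) amount to an unsharp-masking step applied to each recorded pattern before the PIE iterations. A minimal sketch is given below, assuming the blur width is specified directly in detector pixels (sigma_v_px, playing the role of K_v) and adding a non-negativity clip that the text does not mention but that keeps the corrected intensities physical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compensate(I_v, sigma_v_px, beta):
    """Compensation of Eq. (15): I_c = I_v + beta * (I_v - I'_v), where I'_v is
    the recorded pattern deliberately blurred a little further.  sigma_v_px is
    the blur width in detector pixels (a few pixels is usually enough); beta
    sets how strongly the attenuated high frequencies are re-amplified."""
    I_blur = gaussian_filter(I_v, sigma=sigma_v_px)   # deliberately blurred pattern I'_v
    I_c = I_v + beta * (I_v - I_blur)                 # Eq. (15)
    return np.clip(I_c, 0.0, None)                    # practical safeguard: keep intensities non-negative

# Usage: correct every recorded frame before it enters the PIE iterations.
# corrected = [compensate(frame, sigma_v_px=2.0, beta=2.5) for frame in recorded_frames]
```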

3. Results

3.1. Simulation result

A numerical simulation is performed using the proposed algorithm to check its feasibility. A divergent spherical wave with a wavelength of 632.8 nm is used for the illumination. Two pictures are used as the modulus and phase, respectively, of the specimen. The diameter of the illuminated area on the specimen is 0.74 mm, and 10 × 10 diffraction intensities are recorded with a step size of 0.185 mm. The charge-coupled device (CCD) camera has a resolution of 256 × 256 pixels and a pixel size of 7.4 μm.

For comparison, the diffraction intensities with stable illumination are calculated first; one of them is shown in Fig. 2(a). When the unstable imaging system causes the diffraction pattern to vibrate with a variance of 46 μm in the detector plane, the intensity distributions are calculated by adding the diffraction intensities produced by many slightly different illuminations with varying incident angles and transverse shifts. The diffraction pattern corresponding to the same illumination position as in Fig. 2(a) is shown in Fig. 2(b), where the intensity is obviously blurred; this coincides with Eq. (13). The reconstructed modulus and phase of the specimen with stable illumination are very clear, as shown in Figs. 3(a) and 3(e), respectively. Figs. 3(b) and 3(f) show the reconstructed complex transmission of the specimen with the unstable illumination, where the resolution is apparently decreased compared with Figs. 3(a) and 3(e). Corrected diffraction patterns are then obtained from the raw recorded data using the proposed method: a Gaussian function with K_v = 1.24 mm−1 is used to slightly blur the recorded intensities, the blurred diffraction patterns are subtracted from the raw recorded diffraction patterns, and the modified intensity patterns are obtained by adding the amplified differences to the raw data with β = 2.5. As shown in Fig. 2(c), the contrast of the corrected diffraction pattern is obviously improved compared with the raw data, and its distribution is similar to that of Fig. 2(a). The reconstructed modulus and phase of the specimen using the corrected intensities are shown in Figs. 3(c) and 3(g), respectively, where the resolution is remarkably improved compared with Figs. 3(b) and 3(f). The insets in Figs. 3(a)–(c) and Figs. 3(e)–(g) show the reconstructed modulus and phase, respectively, of the illumination field, indicating that the quality of the retrieved illumination is also improved using the proposed method. In addition, we also use the mixed state method [31] to reconstruct the image from the vibrated data; the reconstructed modulus and phase are shown in Figs. 3(d) and 3(h), respectively, and the fine structures that cannot be resolved in Figs. 3(b) and 3(f) are clearly distinguished.
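
For reference, a sketch of how such vibrated frames can be synthesized is shown below: each recorded pattern is the incoherent sum of diffraction intensities for many slightly different illuminations with random tilts and transverse shifts. The units, sampling, and nearest-pixel probe shift are simplifying assumptions, not the exact simulation parameters used here.

```python
import numpy as np

def vibrated_intensity(obj_patch, probe, tilt_sigma, shift_sigma_px,
                       wavelength, pixel, n_samples=100, seed=0):
    """One detector frame under setup instability: the incoherent sum of the
    diffraction intensities produced by many slightly different illuminations
    with random pointing tilts (rad) and transverse shifts (approximated by
    rolling the probe to the nearest pixel)."""
    rng = np.random.default_rng(seed)
    N = obj_patch.shape[0]
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] * pixel   # real-space coordinates r
    acc = np.zeros((N, N))
    for _ in range(n_samples):
        ay, ax = rng.normal(0.0, tilt_sigma, size=2)          # pointing errors
        dy, dx = rng.normal(0.0, shift_sigma_px, size=2)      # transverse shifts
        ramp = np.exp(1j * 2 * np.pi * (x * np.sin(ax) + y * np.sin(ay)) / wavelength)
        p = np.roll(probe, (int(round(dy)), int(round(dx))), axis=(0, 1)) * ramp
        acc += np.abs(np.fft.fftshift(np.fft.fft2(obj_patch * p)))**2
    return acc / n_samples
```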

Fig. 2 Diffraction patterns obtained (a) with stable illumination, (b) with unstable illumination, and (c) by modification via the proposed method.

Fig. 3 Reconstructed modulus obtained (a) with stable illumination, (b) with unstable illumination, (c) using corrected intensities, and (d) with mixed state method; reconstructed phase obtained (e) with stable illumination, (f) with unstable illumination, (g) using corrected intensities, and (h) with mixed state method. The upper insets show the reconstructed illumination fields.

To quantify the performance of the proposed method, the normalized root-mean-square error metric [13] is calculated, as shown in Fig. 4. The accuracy of the reconstruction is obviously improved when the proposed method is used to correct the recorded diffraction patterns before the iterative computation. The quality of the image obtained by the mixed state method is also obviously improved after 500 iterations with eight illumination modes. This demonstrates that the proposed compensation method and the mixed state method are comparable in dealing with high-frequency vibration for the simulated data.
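
A possible implementation of such a normalized error metric is sketched below; a global complex factor γ removes the trivial phase/scale ambiguity of the reconstruction, although the exact normalization conventions of Ref. [13] may differ slightly:

```python
import numpy as np

def normalized_error(obj_true, obj_rec):
    """Normalized squared error between the true and reconstructed complex
    objects; gamma removes the trivial global phase/scale ambiguity."""
    gamma = np.vdot(obj_rec, obj_true) / np.vdot(obj_rec, obj_rec)
    return np.sum(np.abs(obj_true - gamma * obj_rec)**2) / np.sum(np.abs(obj_true)**2)
```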

Fig. 4 Progress of the error of the conventional PIE method in the stable situation (black dashed line) and in the unstable situation (blue dashed line); the proposed method in the unstable situation (red line); and the mixed state method in the unstable situation (green dashed line).

3.2. Experimental result

The proof-of-principle experiment is conducted with visible light, as shown in Fig. 5, where the divergent beam from a He-Ne laser is slightly diffused by a rotating plastic diffuser before illuminating the sample to simulate the instability of the imaging system. A cross section of a monocotyledon placed on a 2D translation stage is used as the sample. The illuminated area on the specimen is ∼2.5 mm in diameter, and 10 × 10 diffraction patterns are recorded by the CCD camera while the sample is scanned relative to the unstable beam with a step size of 0.37 mm.

Fig. 5 Schematic of the experimental setup.

Recorded intensities obtained with the stable and unstable laser beams are shown in Figs. 6(a) and 6(b), respectively. It is clear that the intensities recorded with the unstable imaging system are strongly blurred. To overcome this limitation, the proposed method is applied to modify the recorded patterns: a Gaussian function with K_v = 0.9355 mm−1 is adopted to slightly blur the recorded diffraction intensities, and the corrected diffraction patterns are obtained with β = 5. As shown in Fig. 6(c), the contrast of the corrected diffraction pattern is remarkably improved and is similar to that shown in Fig. 6(a). Figs. 7(a) and 7(e) show the reconstructed modulus and phase, respectively, of the sample with stable illumination, where the fine structures of individual cells are clear. Figs. 7(b) and 7(f) show the reconstructed image of the specimen with the unstable imaging system, where the individual cells are hardly resolved. With the corrected diffraction patterns, the reconstructed images of the specimen shown in Figs. 7(c) and 7(g) are generated; here, the quality of the reconstructed images is distinctly improved, and the images are roughly identical to those obtained with the stable illumination. We also used the mixed state method to deal with the unstable data, and the sample was reconstructed using twelve illumination modes. As shown in Figs. 7(d) and 7(h), the cellular walls that cannot be resolved in Figs. 7(b) and 7(f) can now be observed, but the reconstructed image quality is not as good as that obtained with the method proposed in this paper. It seems that the vibration generated by the rapidly rotating diffuser is too complex to be modeled by periodic vibrations as in Ref. [31], so numerous illumination modes would be needed in the mixed state method in this situation; consequently, it is difficult to obtain a satisfactory reconstruction using only a limited number of illumination modes.

Fig. 6 Diffraction patterns obtained (a) in the stable case, (b) in the unstable case, and (c) by modification using the proposed method.

Fig. 7 Reconstructed modulus of the specimen (a) in the stable case, (b) in the unstable case, (c) obtained using the proposed method, and (d) obtained using mixed state method; reconstructed phase of the specimen (e) in the stable case, (f) in the unstable case, (g) obtained using the proposed method, and (h) obtained using mixed state method.

A slice of pumpkin stem was also measured using the experimental setup shown in Fig. 5. First, the conventional PIE method was used to retrieve the modulus and phase distributions of the sample under the unstable condition. As shown in Figs. 8(a) and 8(d), only the vascular tissues, which have relatively large structures, can be observed, and the fine structures are completely blurred. The mixed state method was then used to deal with the unstable data, and the modulus and phase of the sample reconstructed using twelve illumination modes are shown in Figs. 8(b) and 8(e), respectively; the cellular walls that are unresolvable in Figs. 8(a) and 8(d) can be clearly distinguished. Finally, we used the proposed compensation method to reconstruct the sample, with the values of the parameters K_v and β the same as those used in the experiment described above. The reconstructed modulus and phase are shown in Figs. 8(c) and 8(f): the tissues with fine structures can be distinctly resolved, and the image quality is obviously improved compared with Figs. 8(a) and 8(d). Compared with the results obtained by the mixed state method, the background is flatter and cleaner, and there are no artifacts at the edge of the image.

Fig. 8 Reconstructed modulus distribution of the pumpkin stem in the unstable case (a) using the conventional PIE method, (b) using the mixed state method, and (c) using the proposed method; reconstructed phase distribution in the unstable case (d) using the conventional PIE method, (e) using the mixed state method, and (f) using the proposed method.

To quantify the resolving power of the proposed method, a USAF 1951 target is measured. As shown in Fig. 9, group 5, which is resolvable with the stable illumination, is obviously blurred for the unstable system. After the proposed method is applied, the elements in group 5 become distinguishable, as shown in Figs. 9(c) and 9(f).

Fig. 9 Reconstructed modulus distribution of the USAF 1951 resolution target for (a) stable illumination, (b) unstable illumination, and (c) the proposed method; normalized value of the red line in (a) for (d) stable illumination, (e) unstable illumination, and (f) the proposed method.

4. Discussion

4.1. The influence of β

To show how the resolution-improvement effect depends on the value of β, a numerical simulation is performed with the same parameters as used for the simulation in Fig. 2. Fig. 10 shows the effect of the proposed method with various curves, where the blue dashed line indicates the low-pass filter related to the vibration of the imaging system, and the other curves show the increased width of the low-pass filter for different β. The width of the low-pass filter increases remarkably with increasing β; for β = 2.5, the width of [1 + β − βexp(−C_v²u²)]exp(−C²u²) is about twice that of exp(−C²u²), and accordingly the resolution of the final reconstruction with these corrected diffraction patterns can be remarkably improved. However, when β = 5, the filter window has the shape of the black dashed line in Fig. 10; that is, the strength of the high-frequency components is over-amplified. In fact, according to Eq. (16), the parameters K_v and β jointly influence the final reconstruction quality. The parameter K_v only determines how strongly the intensity patterns are further blurred and does not directly decide the quality of the final reconstruction; usually, a K_v corresponding to a few CCD pixels can be used. The parameter β determines how strongly the attenuated high-frequency components are recovered, so it directly decides the reconstruction quality. Thus, the constant β should be carefully selected to ensure a satisfactory final reconstruction.
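
The effective transfer function of Eq. (16) can be evaluated directly to reproduce curves like those in Fig. 10; the sketch below uses arbitrary units for C and C_v and is only meant to show how the window widens with β:

```python
import numpy as np

def filter_window(u, C, Cv, beta):
    """Effective transfer function applied to the interference terms after the
    compensation, Eq. (16): [1 + beta - beta*exp(-Cv^2 u^2)] * exp(-C^2 u^2).
    beta = 0 gives back the uncompensated low-pass window exp(-C^2 u^2)."""
    return (1.0 + beta - beta * np.exp(-(Cv * u)**2)) * np.exp(-(C * u)**2)

# Compare the window width for several beta values (cf. Fig. 10); C and Cv are
# in arbitrary units here.
u = np.linspace(0.0, 5.0, 500)
for beta in (0.0, 1.0, 2.5, 5.0):
    w = filter_window(u, C=1.0, Cv=0.8, beta=beta)
    half_width = u[np.argmin(np.abs(w - 0.5 * w[0]))]
    print(f"beta = {beta}: half-maximum width ~ {half_width:.2f}")
```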

Fig. 10 Filter window obtained with unstable illumination and by modification with different values of β.

Fig. 11 shows the reconstruction results for different β values according to the aforementioned analysis, where Figs. 11(a) and 11(b) are the reconstructed modulus and phase images, respectively, obtained using the raw data acquired with the unstable system. Figs. 11(c)–(h) show the reconstructed modulus and phase images with β = 2.5, 1, and 5. These results indicate that the modulus and phase reconstructed using the proposed method have better quality than those obtained directly from the unstable data. On the other hand, the effect of the proposed method depends on a properly chosen value of β, which is consistent with the curves shown in Fig. 10.

Fig. 11 Reconstructed (a) modulus and (b) phase using unstable system; reconstructed (c) modulus and (d) phase using modified intensity patterns with β = 2.5; reconstructed (e) modulus and (f) phase using modified intensity patterns with β = 1; reconstructed (g) modulus and (h) phase using modified intensity patterns with β = 5.

4.2. Robustness of the proposed method

In the aforementioned theoretical analysis, the vibration of the illumination is assumed to have a normal distribution, and another Gaussian function is used to convolve the recorded intensities and slightly blur the patterns. The characteristics of a real imaging system, however, are unknown and may be more complex, so it might appear that the proposed method is difficult to implement in real experiments. The fundamental principle of the proposed method, though, is to recover the high-frequency components lost because of the instability of the radiation source, and other functions that can realize this purpose can replace the Gaussian function in the analysis. For example, circular functions, triangular functions, and parabolic curves can be adopted to slightly blur the diffraction patterns and improve the resolution without considering the exact properties of the vibration of the imaging system. This feature makes the proposed method more applicable to real experiments, as demonstrated by the following simulations and experiments.
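
A sketch of how such alternative kernels can be plugged into Eq. (15) is given below; the kernel shapes and the pixel-domain radius are illustrative assumptions rather than the exact functions used in the following simulations:

```python
import numpy as np
from scipy.ndimage import convolve

def blur_kernel(radius_px, kind='gauss'):
    """Small normalized 2-D kernels that can stand in for the Gaussian used to
    blur the recorded patterns: 'gauss', 'circ' (uniform disk) or 'tri' (cone).
    radius_px corresponds to a few detector pixels."""
    n = int(np.ceil(3 * radius_px))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    r = np.hypot(x, y)
    if kind == 'gauss':
        k = np.exp(-(r / radius_px)**2)
    elif kind == 'circ':
        k = (r <= radius_px).astype(float)
    else:  # 'tri'
        k = np.clip(1.0 - r / radius_px, 0.0, None)
    return k / k.sum()

def compensate_with_kernel(I_v, kernel, beta):
    """Eq. (15) with an arbitrary blurring kernel in place of the Gaussian."""
    I_blur = convolve(I_v, kernel, mode='nearest')
    return I_v + beta * (I_v - I_blur)
```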

When the vibration of the imaging system follows a normal distribution, the parameters are the same as those used for the simulations in Fig. 2. As previously discussed, when the diffraction patterns are deliberately blurred via convolution with another Gaussian function exp(−k²/K_v²), the width of the low-pass filter, shown by the red line in Fig. 12(a), becomes much wider than that of the dashed line related to the instability. The same results can be obtained by using the circular function circ(k/K_v) (Fig. 12(b)) and the triangular function Λ(k/K_v) (Fig. 12(c)). For the case where the vibration of the imaging system follows a random distribution with a maximum extent of 52.5 μm in the detector plane, the low-pass filter window of the recorded intensity pattern is shown by the black dashed line in Fig. 12(d); the modified results obtained by deliberately blurring the diffraction patterns via convolution with a Gaussian, circular, and triangular function are shown as the red lines in Figs. 12(d)–(f), respectively. When the vibration of the imaging system has a triangular distribution with an extent of 36.2 μm in the detector plane, the low-pass filter window of the recorded intensity pattern has the profile of the black dashed line shown in Fig. 12(g); the corresponding modified results are shown as the red lines in Figs. 12(g)–(i), respectively.

Fig. 12 When the vibration follows a normal distribution, filter window (a) modified with a Gaussian function (Kv = 1.24 mm−1, β = 2.5), (b) modified with a circular function (Kv = 1.1694 mm−1, β = 3.5), and (c) modified with a triangular function (Kv = 0.8771 mm−1, β = 4); when the vibration follows a random distribution, filter window (d) modified with a Gaussian function (Kv = 0.8268 mm−1, β = 2.6), (e) modified with a circular function (Kv = 0.8771 mm−1, β = 3.5), and (f) modified with a triangular function (Kv = 0.7309 mm−1, β = 3.5); when the vibration follows a triangular distribution, filter window (g) modified with a Gaussian function (Kv = 0.8268 mm−1, β = 2), (h) modified with a circular function (Kv = 0.8771 mm−1, β = 3.2), and (i) modified with a triangular function (Kv = 0.7309 mm−1, β = 2.4). (The black line is the filter window with unstable illumination, and the red line is the filter window after modification.)

It is worth noting that the parameters K_v and β take different values in the simulations and experiments mentioned above because the characteristics of the vibration differ among these examples; this indicates that the values of K_v and β are related to the vibration properties. For a practical experimental system, the characteristics of the vibration cannot be known exactly in advance, so at first glance the values of K_v and β cannot be decided properly, and it seems difficult to apply the proposed method to real experiments. However, since the characteristics of the high-frequency vibration of a practical imaging system do not change remarkably with time, the values of K_v and β can be predetermined by imaging a sample with a known structure (known phase and amplitude) via several trial reconstructions; in all subsequent experiments, the predetermined K_v and β can be treated as known parameters.
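
This calibration step is essentially a small parameter search. The following sketch (with hypothetical function names; the reconstruction engine, the compensation of Eq. (15), and the error metric of Section 3.1 are passed in as callables) illustrates how K_v and β could be predetermined on a known sample:

```python
import numpy as np

def calibrate_parameters(patterns, reconstruct, obj_known, error_metric,
                         kv_candidates, beta_candidates, compensate):
    """Predetermine (Kv, beta) on a sample with known structure: run one trial
    reconstruction per candidate pair and keep the pair with the lowest error."""
    best_kv, best_beta, best_err = None, None, np.inf
    for kv in kv_candidates:
        for beta in beta_candidates:
            corrected = [compensate(frame, kv, beta) for frame in patterns]
            obj_rec = reconstruct(corrected)
            err = error_metric(obj_known, obj_rec)
            if err < best_err:
                best_kv, best_beta, best_err = kv, beta, err
    return best_kv, best_beta, best_err
```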

An experiment is also carried out to verify the practicality of this procedure, with the same parameters as in Fig. 5. The reconstructed images shown in Figs. 13(a) and 13(b) are obtained from the raw data recorded with unstable illumination and are seriously blurred. The reconstructed images shown in Figs. 13(c) and 13(d) are obtained by applying the proposed method with a Gaussian function (K_v = 0.9355 mm−1, β = 5). Figs. 13(e) and 13(f) are obtained using intensities modified with a circular function (K_v = 1.4033 mm−1, β = 8), and Figs. 13(g) and 13(h) using intensities modified with a triangular function (K_v = 1.1694 mm−1, β = 6). Their effects on the resolution improvement are almost the same. These results show that the effect of the proposed method is independent of the vibration model of the imaging system; thus, the method does not require prior knowledge of the vibration properties.

Fig. 13 Reconstructed (a) modulus and (b) phase of the sample with unstable illumination; reconstructed (c) modulus and (d) phase using the intensity patterns modified with a Gaussian function; reconstructed (e) modulus and (f) phase using the intensity patterns modified with a circular function; reconstructed (g) modulus and (h) phase using the intensity patterns modified with a triangular function.

5. Conclusion

In conclusion, a simple method is proposed to relax the stability requirement that ptychography places on the imaging system. The influence of the instability of the imaging system can be described as a low-pass filter acting on the ideal diffraction intensities; the high-frequency components are lost in the recorded data, generating a blurred image in the final reconstruction. In the proposed method, the recorded intensity patterns are corrected by subtracting deliberately blurred diffraction patterns from the original ones and then adding the amplified differences back to the recorded data. The resolution of the final reconstruction can be significantly improved by using the corrected diffraction patterns. The feasibility of the proposed method is demonstrated via both numerical simulations and experiments on an optical bench. Because the proposed method does not require any prior knowledge of the characteristics of the illumination beam or massive computation during the iterative process, it is an easy approach for acquiring a high-quality reconstruction with an unstable imaging system. The method can be extended to other CDI techniques for imaging with short-wavelength radiation, such as free electrons or soft X-ray lasers.

Funding

National Natural Science Foundation of China (No. 61675215).

References and links

1. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237 (1972).

2. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

3. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400, 342–344 (1999).

4. I. K. Robinson, I. A. Vartanyants, G. Williams, M. Pfeifer, and J. Pitney, “Reconstruction of the shapes of gold nanocrystals using coherent X-ray diffraction,” Phys. Rev. Lett. 87, 195505 (2001).

5. D. Shapiro, P. Thibault, T. Beetz, V. Elser, M. Howells, C. Jacobsen, J. Kirz, E. Lima, H. Miao, and A. M. Neiman, “Biological imaging by soft X-ray diffraction microscopy,” Proc. Natl. Acad. Sci. USA 102, 15343–15346 (2005).

6. J. Zuo, I. Vartanyants, M. Gao, R. Zhang, and L. Nagahara, “Atomic resolution imaging of a carbon nanotube from diffraction intensities,” Science 300, 1419–1421 (2003).

7. J. Miao, T. Ishikawa, I. K. Robinson, and M. M. Murnane, “Beyond crystallography: Diffractive imaging using coherent X-ray light sources,” Science 348, 530–535 (2015).

8. P. Bao, F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. 33, 309–311 (2008).

9. V. Y. Ivanov, M. Vorontsov, and V. Sivokon, “Phase retrieval from a set of intensity measurements: theory and experiment,” J. Opt. Soc. Am. A 9, 1515–1524 (1992).

10. F. Zhang and J. Rodenburg, “Phase retrieval based on wave-front relay and modulation,” Phys. Rev. B 82, 121104 (2010).

11. H. Tao, S. P. Veetil, X. Pan, C. Liu, and J. Zhu, “Lens-free coherent modulation imaging with collimated illumination,” Chin. Opt. Lett. 14, 071203 (2016).

12. H. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004).

13. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009).

14. H. Faulkner and J. Rodenburg, “Error tolerance of an iterative phase retrieval algorithm for moveable illumination microscopy,” Ultramicroscopy 103, 153–164 (2005).

15. K. Stachnik, I. Mohacsi, I. Vartiainen, N. Stuebe, J. Meyer, M. Warmer, C. David, and A. Meents, “Influence of finite spatial coherence on ptychographic reconstruction,” Appl. Phys. Lett. 107, 011105 (2015).

16. N. Burdet, X. Shi, D. Parks, J. N. Clark, X. Huang, S. D. Kevan, and I. K. Robinson, “Evaluation of partial coherence correction in X-ray ptychography,” Opt. Lett. 3, 5452–5467 (2015).

17. L. Whitehead, G. Williams, H. Quiney, D. Vine, R. Dilanian, S. Flewett, K. Nugent, A. G. Peele, E. Balaur, and I. McNulty, “Diffractive imaging using partially coherent x rays,” Phys. Rev. Lett. 103, 243902 (2009).

18. J. N. Clark and A. G. Peele, “Simultaneous sample and spatial coherence characterisation using diffractive imaging,” Appl. Phys. Lett. 99, 154103 (2011).

19. B. Chen, R. A. Dilanian, S. Teichmann, B. Abbey, A. G. Peele, G. J. Williams, P. Hannaford, L. Van Dao, H. M. Quiney, and K. A. Nugent, “Multiple wavelength diffractive imaging,” Phys. Rev. A 79, 023809 (2009).

20. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494, 68–71 (2013).

21. D. J. Batey, D. Claus, and J. M. Rodenburg, “Information multiplexing in ptychography,” Ultramicroscopy 138, 13–21 (2014).

22. W. Yu, S. Wang, S. Veetil, S. Gao, C. Liu, and J. Zhu, “High-quality image reconstruction method for ptychography with partially coherent illumination,” Phys. Rev. B 93, 241105 (2016).

23. S. Dong, P. Nanda, K. Guo, J. Liao, and G. Zheng, “Incoherent Fourier ptychographic photography using structured light,” Photon. Res. 3, 19–23 (2015).

24. K. Guo, S. Dong, and G. Zheng, “Fourier ptychography for brightfield, phase, darkfield, reflective, multi-slice, and fluorescence imaging,” IEEE J. Sel. Top. Quantum Electron. 22, 77–88 (2016).

25. F. Wei, J.-Y. Choi, and S. Rah, “Experiences of the long term stability at SLS,” Proc. AIP 879, 38–41 (2007).

26. V. Schlott, M. Boge, B. Keil, P. Pollet, and T. Schilcher, “Fast orbit feedback and beam stability at the Swiss Light Source,” Proc. AIP 732, 174–181 (2004).

27. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21, 13592–13606 (2013).

28. A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012).

29. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16, 7264–7278 (2008).

30. M. Odstrcil, P. Baksh, S. Boden, R. Card, J. Chad, J. Frey, and W. Brocklesby, “Ptychographic coherent diffractive imaging with orthogonal probe relaxation,” Opt. Express 24, 8360–8369 (2016).

31. J. N. Clark, X. Huang, R. J. Harder, and I. K. Robinson, “Dynamic imaging using ptychography,” Phys. Rev. Lett. 112, 113901 (2014).

32. R. Horstmeyer, R. Chen, X. Ou, B. Ames, J. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. 17, 053044 (2015).
