Optica Publishing Group

Transformed pupil-function misalignment calibration strategy for Fourier ptychographic microscopy


Abstract

Fourier ptychographic microscopy (FPM) is an enabling quantitative phase imaging technique with both high resolution (HR) and a wide field of view (FOV), which surpasses the diffraction limit of the objective lens by employing an LED array to provide angle-varied illumination. Precise illumination angles are critical for exact reconstruction, yet conventional algorithmic self-calibration approaches struggle to separate the actual positional parameters because multiple systematic error sources are mixed together. In this paper, we report a pupil-function-based strategy for independently calibrating the position of the LED array. We first deduce the relationship between positional deviation and pupil function in the Fourier domain through a common iterative route. Then, we propose a judgment criterion, based on the arrangement of the LED array in the spatial domain, to determine the misalignment situation. By combining the mapping between complex domains, we can accurately solve the spatial positional parameters of the LED array through a boundary-finding scheme. Simulations and experiments demonstrate that the proposed method precisely corrects the positional misalignment of the LED array. This pupil-function-based approach is expected to provide valuable insights for precise position correction in the field of microscopy.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychography (FP) is an emerging microscopy technique [1] that borrows ideas from synthetic aperture [2,3] and phase retrieval [4,5] concepts. In the FPM system, the traditional light source is replaced by a programmable LED array to provide angle-varied illumination. A series of low-resolution (LR) images with different incident illumination wave vectors are recorded by an imaging sensor and employed to sequentially update the corresponding sub-aperture information. In this way, oblique illumination realizes aperture scanning in the Fourier domain and expands the spectral range acquired through the objective lens, so FPM can bypass the trade-off between spatial resolution and FOV in conventional optical systems.

In typical FPM platforms, systematic errors, including inherent aberrations, inevitable noise, and model misalignment [6], are mixed into the LR measurements and degrade the quality of the reconstructed HR images. For inherent aberrations, the embedded pupil function recovery (EPRY) algorithm has been proposed to recover the object spectrum and pupil function simultaneously [7,8]. For noise, the effect can be alleviated by adjusting the acquisition exposure time and applying noise removal operations [9,10]. Nevertheless, model misalignment during the reconstruction process has not been properly addressed. The spatial positional deviation of an LED element translates into an incorrect sub-aperture location in the Fourier domain, causing error accumulation over iterations and consequently corrupting the reconstruction [11].

Model misalignment calibration methods can be divided into two categories: algorithmic self-calibration (ASC) and physics-based methods. Both can also be integrated with a deep-learning-based FP architecture [12–15] to minimize the misalignment error. Algorithmic self-calibration approaches [6,16–20] involve jointly solving two inverse problems: FPM reconstruction and systematic parameter calibration. They generally re-evaluate the difference between the captured and updated LR images at each iteration. In this respect, Sun et al. [17] first introduced the global positional misalignment (GPM) model to characterize the illumination wave vectors. Pan et al. [6] proposed a system calibration procedure termed SC-FPM to iteratively optimize the sub-aperture positions, integrating the simulated annealing (SA) algorithm, an LED intensity correction module, and an adaptive step-size strategy into the calibration procedure. Although ASC methods eliminate the demand for pre-knowledge and a test ground truth, the LR intensity measurements still contain a mixture of information from the sample and various errors. Moreover, these ASC methods are mostly based on the GPM model, and their effectiveness is limited by the optimization algorithms, which tend to reach locally optimal solutions rather than globally optimal results [20].

Physics-based methods [21–23] aim to separate the systematic parameter calibration from the reconstruction process and localize the precise sub-aperture position through a physical model. As such, Regina et al. [21] localized the bright-field (BF) illumination angle through the auto-correlation spectrum of the LR intensity image. Zhang et al. [22] achieved BF alignment using bright-dark-field (BDF) transition images acquired without placing the sample. Zheng et al. [23] inferred misalignment parameters by analyzing the shifted BF position of a defocused object. The first method [21] utilizes an additional $2\times$ digital camera adapter to reduce the size of the sampling aperture, ensuring all BF Fourier spectra fall within the LR image size. The second method [22] requires careful adjustment of the distance from the illumination source to the sample to ensure the presence of the BDF transition images [24]. The defocused strategy [23] even necessitates a high-precision focusing knob. In addition, the calibration process in these methods takes place primarily in the BF area and does not include the dark-field (DF) area. While extracting information from images in the BF area is easier due to their higher intensity, the positional parameters in BF do not accurately represent the real situation in DF.

In this paper, we report an independent model misalignment correction strategy based on the transformed pupil function, termed TPF-FPM, which achieves global position calibration through the localization of DF illumination angles. First, we deduce the connection between the frequency positional deviation and the pupil function for individual DF illuminations along the iterative path of EPRY. Second, based on the fact that the frequency positional deviations of sub-apertures form characteristic distributions under different misalignment situations, we propose a judgment criterion to determine the search directions of the shifted pupil centers. Then, the offset pupil centers corresponding to the corner LED elements can be precisely identified through a boundary-finding scheme, and the global positional parameters can be further calculated. In TPF-FPM, using the amplitude of the pupil function as the calibration reference prevents interference from aberrations in the phase. Besides, the pupil function can also serve as an evaluation indicator to visualize whether the correct calibration has been performed in the DF area. Simulations and experiments demonstrate that TPF-FPM can precisely calibrate the model misalignment of the LED array and that the reconstructed results are superior to those of the ASC method. The proposed pupil-function-based strategy allows accurate DF localization using a physical model and promises to be applicable to precise positioning correction in conventional ptychography [25,26].

2. Principle

2.1 Global positional misalignment model

The forward imaging model of FPM is shown in Fig. 1: an LED array providing quasi-monochromatic illumination of wavelength $\lambda$ is placed far enough beneath the sample. The positions of LED elements in the LED array can be characterized using a four-dimensional model $(\Delta x, \Delta y, \theta, h)$ [17]. Here, $\Delta x$ and $\Delta y$ represent the lateral offsets of the LED array along the $x$ and $y$ directions, respectively, $\theta$ denotes the rotation angle in the horizontal plane, and $h$ is the vertical distance of the LED array from the sample.


Fig. 1. The forward imaging model of FPM with positional misalignment. The spatial positional offset of the LED element leads to the change of wave vector and results in the shifting of sub-aperture position in the Fourier domain. The wave vector of the central LED element (green circle) is labeled with a red arrow.


Assuming the global offset is larger than the displacement errors between individual LEDs, the spacing between adjacent LEDs can be considered equal and is denoted by $d$. For the $l^{th}$ LED element, located in row $m$ and column $n$, its spatial position $(x_l,y_l)$ can be expressed as

$$x_{l}=d[n\cos{\theta}-m\sin{\theta}]+\Delta x, \quad y_{l}=d[n\sin{\theta}+m\cos{\theta}]+\Delta y.$$

Assuming the sample is a thin object located far from the LED array, the illumination beam emitted from an LED element to the sample can be regarded as a plane wave. For the central segment of the sample, the incident wave vector of the $l^{th}$ LED element can be formulated as

$$k_{xl}={-}\frac{1}{\lambda}\frac{x_{l}}{\sqrt{x_{l}^2+y_{l}^2+h^2}}, \quad k_{yl}={-}\frac{1}{\lambda}\frac{y_{l}}{\sqrt{x_{l}^2+y_{l}^2+h^2}}.$$

During image acquisition, the LED elements are turned on sequentially to complete the scanning of the sample's spectrum. However, any spatial deviation in an LED element's position changes the sub-spectral zones visited during the aperture scanning, which can lead to an undesired reconstruction.
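As a concrete illustration, Eqs. (1)-(2) can be evaluated directly. The sketch below is our own (the function name, units, and defaults are assumptions, with defaults borrowed from the system parameters quoted later in the paper): it maps an LED's row/column indices and the misalignment parameters to its incident wave vector.

```python
import numpy as np

def led_wave_vectors(m, n, d=8.128, dx=0.0, dy=0.0, theta=0.0, h=98.0,
                     wavelength=505e-6):
    """Incident wave vector of the LED in row m, column n (Eqs. (1)-(2)).

    Lengths are in mm (so wavelength 505 nm = 505e-6 mm); theta in radians.
    """
    # Spatial LED position under the global misalignment model, Eq. (1)
    x = d * (n * np.cos(theta) - m * np.sin(theta)) + dx
    y = d * (n * np.sin(theta) + m * np.cos(theta)) + dy
    # Plane-wave vector seen by the central sample segment, Eq. (2)
    r = np.sqrt(x**2 + y**2 + h**2)
    return -x / (wavelength * r), -y / (wavelength * r)
```

For the central LED with no misalignment this returns a zero wave vector.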

2.2 Transformation between frequency deviation and pupil function

The aperture scanning process can be understood as moving the coherent transfer function (CTF) of the microscope system over the sample spectrum plane. Illuminating the sample by the $l^{th}$ LED element with the wave vector $(k_{xl},k_{yl})$ is equivalent to shifting the center of the CTF by $(-k_{xl},-k_{yl})$ in the Fourier domain. The corresponding sub-spectrum information is low-pass filtered by the CTF. As a result, the captured LR intensity measurement $I_{l}^{c}(\boldsymbol {r})$ can be expressed as

$$\begin{aligned} I_{l}^{c}(\boldsymbol{r}) & =\left|\mathscr{F}^{{-}1}\{\mathscr{F}\{o(x,y)\cdot\exp{(i(k_{xl}x+k_{yl}y))}\}\cdot P(k_{x},k_{y})\}\right|^{2} \\ & =\left|\mathscr{F}^{{-}1}\{O(k_x,k_y)\cdot P(k_{x}+k_{xl},k_{y}+k_{yl})\}\right|^{2}, \end{aligned}$$
where $\mathscr{F}$ and $\mathscr{F}^{-1}$ indicate the Fourier and inverse Fourier transform operations, respectively, $i$ is the imaginary unit, $\boldsymbol {r}=(x,y)$ represents the 2D spatial coordinates, and $\boldsymbol {k}=(k_{x},k_{y})$ is the corresponding frequency coordinates in the Fourier domain. $o(x,y)$ refers to the complex transmission function of the sample, and $\exp{(i(k_{xl}x+k_{yl}y))}$ denotes the $l^{th}$ illumination plane wave with wave vector $\boldsymbol {k}_l=(k_{xl},k_{yl})$. $P(k_{x},k_{y})$ is a circular pupil function whose passband is determined by the numerical aperture (NA) of the objective lens and the illumination wavelength $\lambda$.
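For intuition, the forward model in Eq. (3) can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation: the wave vector is given in integer pixels, and the sensor downsampling of a real FPM system is omitted.

```python
import numpy as np

def capture_lr_image(obj, pupil, kxl_px, kyl_px):
    """Simulate Eq. (3): filter the object spectrum with the pupil shifted
    by the illumination wave vector (here expressed in integer pixels).

    obj   : complex transmission function o(x, y)
    pupil : pupil P(k), centered in the (fftshifted) frequency array
    """
    O = np.fft.fftshift(np.fft.fft2(obj))            # object spectrum O(k)
    # P(kx + kxl, ky + kyl) is the pupil re-centered at -k_l
    P_shifted = np.roll(pupil, shift=(-kyl_px, -kxl_px), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(O * P_shifted))
    return np.abs(field) ** 2                        # intensity measurement
```

With a uniform object, a centered pupil passes the DC term unchanged, while a shift larger than the pupil radius blocks it entirely.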

Eq. (3) demonstrates that, according to the Fourier shift theorem, the illumination wave vector $\boldsymbol {k}_l$ can be absorbed into the expression of the pupil function $P(\boldsymbol {k})$. In the conventional FPM reconstruction algorithm, the pupil function can be recovered using the EPRY iterative path. At the start of the recovery process, an initial HR sample spectrum solution, denoted by $O_0(\boldsymbol {k})$, can be generated by interpolating the amplitude of the central LR image and setting the phase to zero. The initial guess of the pupil function $P_0(\boldsymbol {k})$ is often set as a circular low-pass filter with all ones inside the objective passband. Corresponding to a particular sub-aperture position, the estimated LR complex amplitude field $\varphi _l^e(\boldsymbol {r})$ is simulated as

$$\varphi_l^e(\boldsymbol{r})=\mathscr{F}^{{-}1}\{O_0(k_x,k_y)\cdot P_0(k_x+k_{xl},k_y+k_{yl})\}.$$

During iterations, the amplitude component of $\varphi _l^e(\boldsymbol {r})$ is replaced by the square root of the actual LR measurement $I_l^c(\boldsymbol {r})$ to acquire the updated complex amplitude field $\varphi _l^u(\boldsymbol {r})$

$$\varphi_l^u(\boldsymbol{r})=\sqrt{I_l^c(\boldsymbol{r})}\frac{\varphi_l^e(\boldsymbol{r})}{|\varphi_l^e(\boldsymbol{r})|}.$$

In FPM, the complex-valued reconstruction of the HR sample spectrum is iteratively achieved by enforcing two constraints on the updated quantities. The captured LR intensity images are utilized as the amplitude constraint, whilst the support constraint is imposed by the pupil function in the Fourier domain. With these two constraints defined, a valid solution to Eq. (3) can be reached via the EPRY updating route

$${O_{s + 1}}(k_x-k_{xl},k_y-k_{yl}) = {O_s}(k_x-k_{xl},k_y-k_{yl}) + \alpha \frac{{P_s^ * (k_x,k_y)}}{{\left| {{P_s}(k_x,k_y)} \right|_{\max }^2}}\Delta \phi _{s,l},$$
$${P_{s + 1}}(k_x,k_y) = {P_s}(k_x,k_y) + \beta \frac{{O_s^ * (k_x-k_{xl},k_y-k_{yl})}}{{\left| {{O_s}(k_x-k_{xl},k_y-k_{yl})} \right|_{\max }^2}}\Delta \phi _{s,l},$$
where $*$ denotes the complex conjugate operation, $\Delta \phi _{s,l}=\mathscr{F}\{\varphi _l^u(\boldsymbol {r})\}-\mathscr{F}\{\varphi _l^e(\boldsymbol {r})\}=\phi _l^u(\boldsymbol {k}-\boldsymbol {k}_l)-\phi _l^e(\boldsymbol {k}-\boldsymbol {k}_l)$ is the auxiliary function for updating, the constants $\alpha$ and $\beta$ can be adjusted to alter the update step-size (with default values of 1), and the subscript $s$ represents the number of iterations. The reconstruction of the HR image involves updating all sub-apertures iteratively until convergence is achieved.
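A single EPRY update of Eqs. (6)-(7) on one sub-aperture can be sketched as follows (our own minimal version operating on pre-cropped arrays; variable names are assumptions):

```python
import numpy as np

def epry_update(O_sub, P, phi_u, phi_e, alpha=1.0, beta=1.0):
    """One EPRY update (Eqs. (6)-(7)) for a single sub-aperture.

    O_sub : current sub-spectrum O_s(k - k_l), cropped to the pupil size
    P     : current pupil estimate P_s(k)
    phi_u, phi_e : updated / estimated LR spectra (Fourier transforms of
                   the fields in Eqs. (4)-(5))
    Returns the updated sub-spectrum and pupil.
    """
    dphi = phi_u - phi_e                                   # auxiliary function
    O_new = O_sub + alpha * np.conj(P) / (np.abs(P).max() ** 2) * dphi
    P_new = P + beta * np.conj(O_sub) / (np.abs(O_sub).max() ** 2) * dphi
    return O_new, P_new
```

When the estimated and measured spectra already agree, the auxiliary function vanishes and both quantities are left unchanged, which is the fixed point of the iteration.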

Due to the model misalignment in practice, varying incident illumination wave vector deviations lead to diverse offsets of sub-aperture positions. For the $l^{th}$ illumination wave, its wave vector deviation can be expressed as

$$\Delta k_{xl} = k_{xl}-k_{xl}^0, \quad \Delta k_{yl} = k_{yl}-k_{yl}^0,$$
where $\boldsymbol {k}_l^0=(k_{xl}^0,k_{yl}^0)$ represents the ideal wave vector without positional offsets. The aperture scanning during FPM forward imaging is performed in the Fourier domain, implying that a specific frequency portion of the object spectrum is captured in a single measurement. When updating a sub-spectrum in the Fourier domain, the frequency information corresponding to the captured measurement used for the reconstruction reflects the actual constraint boundaries of the acquisition. As shown in Fig. 2(a), the captured intensity measurement $I_l^c(\boldsymbol {r})$ and updated complex amplitude field $\varphi _l^u(\boldsymbol {r})$ contain the frequency information within a circle with center $\boldsymbol {k}_l$ and radius $NA_{obj}/\lambda$, while the simulated complex amplitude field $\varphi _l^e(\boldsymbol {r})$ contains the frequency information within a circle with center $\boldsymbol {k}_l^0$ and the same radius. Thus, during iterative reconstruction, as shown in Fig. 2(b), the bias term $\Delta \boldsymbol {k}_l=(\Delta k_{xl},\Delta k_{yl})$ between $\phi _l^u(\boldsymbol {k}-\boldsymbol {k}_l)$ and $\phi _l^e(\boldsymbol {k}-\boldsymbol {k}_l^0)$ leads to an individual shifted pupil for the current sub-spectrum, centered at $\Delta \boldsymbol {k}_l$. To obtain the complete pupil function, the CTF constraint can be removed while updating the pupil function, allowing it to evolve freely without cropping. The underlying theoretical mechanism of pupil shifting is presented as
$${P_{s + 1}}(\boldsymbol{k}-\Delta\boldsymbol{k}_l) = {P_s}(\boldsymbol{k}-\Delta\boldsymbol{k}_l) + \beta \frac{{O_s^ * (\boldsymbol{k}-\boldsymbol{k}_l^0)}}{{\left| {{O_s}(\boldsymbol{k}-\boldsymbol{k}_l^0)} \right|_{\max }^2}}\Delta \phi _{s,l},$$
where the auxiliary function is modified as
$$\Delta \phi_{s,l}=\phi_l^u(\boldsymbol{k}-\boldsymbol{k}_l)-\phi_l^e(\boldsymbol{k}-\boldsymbol{k}_l^0).$$


Fig. 2. The schematic of the principle of offset pupil. (a) The captured frequency information during FPM forward imaging; (b) Recovered offset pupil due to frequency information correspondence.


Meanwhile, the intensity of the deviated pupil is not uniform; it depends on the corresponding sub-spectral energy. In the field of conventional ptychography, Maiden et al. [27] analyzed the update function of the extended ptychographical iterative engine (ePIE), which is homologous to EPRY. The object update of ePIE carries a weighting equal to the normalized intensity of the probe, i.e., updates are accepted in places where the probe is bright, whereas the previous object estimate is retained where the probe is dim. In FPM, the pupil update of EPRY exhibits a similar weighting, since the object spectrum, which lives in the Fourier domain and has a dramatically high dynamic range, is analogous to the probe in ptychography. To make the weight function explicit, Eq. (9) can be reformulated as

$$P_{s+1}(\boldsymbol{k}-\Delta\boldsymbol{k}_l)=P_{s}(\boldsymbol{k}-\Delta\boldsymbol{k}_l)+w\frac{O_{s}^*(\boldsymbol{k}-\boldsymbol{k}_l^0)}{\left|O_{s}(\boldsymbol{k}-\boldsymbol{k}_l^0)\right|^2}\Delta \phi _{s,l},$$
where $w=\beta \frac {\left |O_{s}(\boldsymbol {k}-\boldsymbol {k}_l^0)\right |^2}{\left |O_{s}(\boldsymbol {k}-\boldsymbol {k}_l^0)\right |_{\max }^2}$ is a spatially varying weight function that is close to unity for the central zero-frequency components of the object spectrum and close to zero for extremely high-frequency components. Hence, the intensity of the offset pupil corresponding to different sub-apertures varies and is modulated by the weight function.

During the implementation of pupil function recovery, in addition to ensuring sub-aperture overlap, the DF images used in reconstruction are essential. However, the positional deviations of BF sub-apertures will not result in offset pupils, because BF images do not contribute to the recovery of the pupil function in the framework of EPRY [6,13], making the transformation in Eq. (9) applicable only to the DF area. To illustrate this difference, all captured LR images are used for reconstruction and the model misalignment is imposed on different regions of the LED array. In this simulation, the spatial translation offset ($\Delta x=-1$ mm) is progressively added from the central to the external LED rings across different situations, and the corresponding recovered pupil functions are shown in Fig. 3. The system parameters here are the same as those set in the subsequent simulations and experiments. The light source is a $13\times 13$ LED array (spacing 8.128 mm) with a central wavelength of 505 nm, placed 98 mm beneath the sample plane. The NA and magnification of the objective lens are 0.13 and 4, respectively. The pixel size of the imaging sensor is 6.5 $\mu$m. It is apparent from Figs. 3(b)-(c) that the pupil function remains almost unchanged compared to the situation without model misalignment in Fig. 3(a), indicating that sub-aperture positional deviations in the BF area do not conform to the above transformation. In contrast, the circular outlines beyond the passband become brighter as the number of incorrectly located DF sub-apertures increases under the same number of iterations, as shown in Figs. 3(d)-(h).
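With the stated system parameters, the split between BF and DF elements can be checked directly: an LED is dark-field once its illumination NA exceeds the objective NA. The following sketch (our own, assuming the nominal geometry without misalignment) counts the DF elements of the $13\times 13$ array:

```python
import numpy as np

# System parameters quoted in the text
NA_obj = 0.13            # objective numerical aperture
d, h = 8.128e-3, 98e-3   # LED pitch and LED-to-sample distance (m)

def is_dark_field(m, n):
    """True when the LED at row m, column n provides dark-field illumination."""
    x, y = n * d, m * d
    NA_illum = np.sqrt(x**2 + y**2) / np.sqrt(x**2 + y**2 + h**2)
    return NA_illum > NA_obj

# Count DF LEDs over the 13x13 array (indices -6..6 around the center)
n_df = sum(is_dark_field(m, n) for m in range(-6, 7) for n in range(-6, 7))
```

Under these parameters only the central $3\times 3$ block is bright-field, leaving 160 DF elements, so the transformation in Eq. (9) applies to the vast majority of sub-apertures.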


Fig. 3. The amplitude of recovered pupil function by EPRY and 10 iterations. (a) and (b)-(h) are results without and with model misalignment ($\Delta x=-1$ mm), respectively; the number of LEDs with translation offset in (b)-(h) are 1, $3\times 3$, $5\times 5$, $7\times 7$, $9\times 9$, $11\times 11$ and $13\times 13$, respectively.


In general, areas with more overlapping pupils tend to have higher intensity. However, due to the retention of the CTF constraint during the recovery of the object spectrum, the behavior of the pupil function differs inside and outside the objective passband, as shown in Figs. 3(d)-(h). The pupil function inside the passband is updated simultaneously with the object spectrum, whereas outside the passband it is solely superimposed in intensity over the iterations. As the number of incorrectly located DF sub-apertures and the number of iterations increase, the contrast between the inside and outside of the passband gradually becomes more distinct, resulting in a darker appearance inside the passband. To balance the overall contrast and facilitate the extraction of shifted pupil profiles, only a small number of iterations is needed.

2.3 Misalignment situation determination

Having deduced the mapping relationship between the transformed pupil function and the DF frequency positional deviation, we now explore the distribution of deviated pupil centers based on the spatial arrangement of the LED array. Although reconstruction quality degrades under the combined effect of multiple misalignment parameters, each parameter affects the wave vectors in a different way. The following simulations focus on the translation and rotation of the LED array. The frequency positional deviation caused by the height factor $h$ is very small, as $h$ will not change significantly during the focusing process for different samples once the working distance of the microscope is determined.

Figures 4(a1)-(a2) show the ideal and actual frequency positions of sub-apertures when translation ($\Delta x=\Delta y=1$ mm) or rotation ($\theta =2^{\circ }$) appears independently. Figures 4(b1)-(b2) are the distribution maps of the absolute value of frequency positional deviation. The centers of shifted pupils are displayed as solid shapes in Figs. 4(c1)-(c2). Since the corner LED elements are furthest from the center of the LED array and more susceptible to severe deviation, they are marked with diamond shapes for subsequent analysis. As shown in Fig. 4(c1), in the case of translation, pupil centers are located in the same quadrant. The right-down corner LED element acquires the smallest offset, and its corresponding shifted pupil constructs a portion of the entire pupil function boundary in the translation direction. As for the rotation situation in Fig. 4(c2), the corresponding shifted pupils of four corner LED elements compose a part of the entire pupil function boundary in the approximate diagonal direction.


Fig. 4. The influence of translation($\Delta x=\Delta y=1$ mm) and rotation($\theta =2^{\circ }$) on incident wave-vectors. (a) The ideal and actual frequency positions; (b) The distribution maps of the absolute value of frequency deviations; (c) The distributions of actual frequency deviations, where solid shapes indicate the centers of DF pupils.


According to the above analysis, the boundary of the pupil function in certain directions can indicate the centers of deviated pupils corresponding to particular LED elements. This is a pure circular boundary-finding problem and can be addressed by converting the two-dimensional frequency coordinates $\Delta \boldsymbol {k}_l=(\Delta k_{xl},\Delta k_{yl})$ to polar coordinates $\Delta \boldsymbol {k}_l=(\rho _l,\theta _l)$ and implementing a relevant search algorithm [21]. Since the passband of the objective lens is known and unchanged, a sharp decline will appear at the passband radius $R_{obj}$ along several radial lines out from the correct center $\Delta \boldsymbol {k}_l$. This boundary-finding method is effective for the non-overlapping portion of the circle. Nevertheless, the recovered pupil function is the result of multiple pupils overlapping, so only those pupils whose centers lie at the outermost part of the distribution maps can be localized. Thus, an extra search-direction constraint, termed the Pupil search direction criterion, is necessary to define the search range and precisely localize the centers of particular shifted pupils. In this criterion, the distribution characteristics of shifted pupil centers are divided into three categories:

i) Rotation only. The impact of rotation on the wave vectors is reversed for centrosymmetric illumination positions. As shown in Fig. 5(a), the radii $R_i$ at the dramatic drop are approximately equal along any two radial lines out from the image center in complementary directions. As a result, the four diagonal directions can be used to judge the misalignment case first. If $R_1\approx R_2$ and $R_3\approx R_4$ are satisfied, the misalignment parameters only involve $\theta$, and we refer to this case as "Rotation only". In this situation, the four diagonal directions are employed as initial directions to search for the shifted pupil centers of the four corner LED elements. By reducing the number of LR images used for reconstruction, 16 DF illumination angle deviations can be obtained; their corresponding locations are labeled with solid diamonds in Fig. 5(b).


Fig. 5. The schematic of search directions determination when "Rotation only". (a) The pupil function recovered by 169 LR images with model misalignment $\theta =2^{\circ }$; (b) Pupil centers in which the deviation directions can be determined.


ii) Translation dominates. Observing Figs. 4(b1)-(b2) further, the wave vector deviations decrease from the inside to the outside of the LED array in the case of translation. On the contrary, the DF sub-apertures suffer relatively larger frequency positional deviations in the case of rotation. Therefore, the situation is more complicated when translation and rotation occur simultaneously, which is more in line with practice.

Spatial translation inevitably leads to an asymmetric offset of the entire pupil function. When the judgment condition of the previous case is not met, an interval $\alpha =[0,\pm 180^{\circ }]$ is employed to search for the directions $[\alpha _1,\alpha _2]$ corresponding to $[R_{max}, R_{min}]$ at the sharp fall of the radial lines. A series of judgments around $[\alpha _1,\alpha _2]$ is then implemented to determine whether the effect of rotation can be ignored. As shown in Fig. 6(e), when the effect of rotation is much smaller than that of translation, the shifted pupils corresponding to the fourth-ring LED elements form the profile of the pupil function in the $\alpha _1$ direction. The pupil functions reconstructed from different numbers of LR images are shown in Figs. 6(a)-(d). If $R_{max}$ along the $\alpha _1$ direction does not satisfy the stepwise decreasing rule, the case is "Translation dominates". At this point, we can consider only the effect of translation and utilize $\alpha _2$ as the initial search direction, because the distribution characteristics of shifted pupil centers in both the "Translation dominates" and "Translation only" cases follow a gradually decreasing rule in the $\alpha _2$ direction, as shown in Figs. 6(e) and (f). Finally, 4 DF illumination angle deviations can be obtained; their corresponding locations are labeled with solid diamonds in Fig. 6(f).


Fig. 6. The schematic of search directions determination when "Translation dominates". (a)-(d) The pupil functions recovered by 169, 121, 81, and 49 LR images with model misalignment ($\Delta x=\Delta y=1$ mm, $\theta =0.2^{\circ }$); (e) Distribution characteristics of shifted pupil centers; (f) Pupil centers in which the deviation directions can be determined.


iii) Rotation dominates. When the rotation degree increases and its effect exceeds that of translation, the extent of both deserves attention. The $R_{max}$ will gradually decrease when applying the second judging condition, as can be seen in the distribution characteristics of the pupil centers corresponding to the right-top corner LED element in Fig. 7(e). To this end, the radial line direction $\alpha _1$ corresponding to $R_{max}$ can be utilized. Whether $\alpha _2$ is a viable search direction depends on the clarity of the boundary in that direction. Taking Fig. 7(e) as an example, if the deviation of the left-down corner LED element is minimal, the boundary of the pupil function may coincide with the passband edge in the $\alpha _2$ direction, which is not conducive to segmenting effective information. For pupil functions reconstructed from different numbers of LR images, the respective $\alpha _1$ and $\alpha _2$ need to be acquired through the search process described above. Eventually, $4\sim 8$ DF illumination angle deviations can be obtained.


Fig. 7. The schematic of search direction determination when "Rotation dominates". (a)-(d) The pupil functions recovered by 169, 121, 81, and 49 LR images with model misalignment ($\Delta x=\Delta y=1$ mm, $\theta =1^{\circ }$); (e) Distribution characteristics of shifted pupil centers; (f) The diagram of the decomposition of search direction.


Although the locations of the other two corner LED elements (left-top and right-down) are equally prominent compared to their surroundings in Fig. 7(e), their radial line directions cannot be determined through a simple interval search. To provide a denser global position correction constraint, we propose an optional approximate direction determination method. Reordering the terms in the wave vector deviation formula and neglecting the common coefficient $(-1/\lambda)$, the wave vector deviation at each illumination position can be regarded as the vector addition of the independent effects of translation and rotation. When the ideal location $(d\sqrt {m^2+n^2})$ of an LED element is sufficiently large compared to the translation degree $(\sqrt {\Delta x^2+\Delta y^2})$ in the spatial domain, the translation effect in the denominator term can be neglected, giving

$$\Delta k_x^{m,n}=\Delta k_{xr}^{m,n}+\Delta k_{xt}^{m,n}=\frac{nd\cos\theta-md\sin\theta-nd}{\sqrt{(m^2+n^2)d^2+h^2}}+\frac{\Delta x}{\sqrt{(m^2+n^2)d^2+h^2}},$$
$$\Delta k_y^{m,n}=\Delta k_{yr}^{m,n}+\Delta k_{yt}^{m,n}=\frac{nd\sin\theta+md\cos\theta-md}{\sqrt{(m^2+n^2)d^2+h^2}}+\frac{\Delta y}{\sqrt{(m^2+n^2)d^2+h^2}}.$$

As shown in Fig. 7(f), for the four corner LED elements equidistant from the center, the rotation effect can be approximately described as unit vectors along the diagonal directions, marked with orange arrows. The translation vector $\beta =(u,v)$, indicated by a blue arrow, is shared by the four LED elements; it characterizes the translation effect relative to the rotation vectors and can be calculated from the connection between $\alpha _1$ and $\alpha _2$ (marked with red arrows)

$$\alpha_1=\arctan\left(\frac{-1+v}{-1+u}\right), \quad \alpha_2=\arctan\left(\frac{1+v}{1+u}\right),$$
where the rotation vector can be determined by the quadrant where $\alpha _1$ is located. The directional constraints described in Eq. (14) apply to situations where $\alpha _1$ and $\alpha _2$ are not complementary. If the spatial lateral offset of the LED array is equal in both directions $(\Delta x=\Delta y)$, the translation vector will follow diagonal direction $(u=v)$, rendering the above approximation inaccurate. At this point, an alternative amplitude constraint can be considered to address this issue
$$\frac{|R_{max}|}{|R_{min}|}=\frac{1+u}{-1+u}.$$

Two solutions for $u$ can be obtained via Eq. (15). If a solution is smaller than 1, the translation effect is not considerable compared to the rotation effect, as it will not cause $R_{min}$ to be less than $R_{obj}$. The other solution, greater than 1, describes a larger translation vector that causes the reversal of the deviation direction for the left-down LED element in Fig. 7(e). We can choose the appropriate solution based on the comparison between $R_{min}$ and $R_{obj}$. Once the translation vector $\beta =(u,v)$ is obtained from the above analysis, the remaining directions $\alpha _3$ and $\alpha _4$ can be further calculated through the vector addition rule

$$\alpha_3=\arctan\left(\frac{-1+v}{1+u}\right), \quad \alpha_4=\arctan\left(\frac{1+v}{-1+u}\right).$$
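The logic of Eqs. (14)-(16) can be condensed into a short numerical sketch. The function names are hypothetical; `math.atan2` is used so that each direction lands in the correct quadrant, and the two candidate solutions of Eq. (15) follow from taking absolute values on its left-hand side:

```python
import math

def translation_u(ratio, R_min, R_obj):
    """Solve |R_max|/|R_min| = (1+u)/(-1+u) (Eq. 15).  The two
    candidates are reciprocal; u > 1 applies when the translation
    reverses the deviation direction, i.e. when R_min < R_obj."""
    u_large = (ratio + 1.0) / (ratio - 1.0)   # u > 1
    u_small = (ratio - 1.0) / (ratio + 1.0)   # 0 < u < 1
    return u_large if R_min < R_obj else u_small

def search_directions(u, v):
    """Directions of the four corner deviation vectors (Eqs. 14, 16):
    diagonal rotation unit vector plus translation vector (u, v)."""
    a1 = math.atan2(-1 + v, -1 + u)
    a2 = math.atan2( 1 + v,  1 + u)
    a3 = math.atan2(-1 + v,  1 + u)
    a4 = math.atan2( 1 + v, -1 + u)
    return a1, a2, a3, a4
```

For instance, a measured amplitude ratio of 3 gives the two candidates $u=2$ and $u=0.5$, disambiguated by the $R_{min}$-versus-$R_{obj}$ comparison.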

After obtaining the positional deviations of particular LED elements, the exact sub-aperture positions can be further identified since the locations (row $m$, column $n$) of these elements in the LED array are known. However, determining their spatial locations in rotation cases is challenging because rotations of equal magnitude and opposite direction yield identical pupil functions. The main distinguishing factor is that the distributions of pupil centers in the two cases are essentially symmetric to each other. In this regard, we present a supplementary criterion, termed the Half-sampling criterion, to determine the direction of rotation, as shown in Fig. 8. Only half of the LR images are used for reconstruction; the green circles in Fig. 8(a) represent the lighted LED elements. Figures 8(c1)-(c2) show that the pupil centers' distributions of the two rotation cases ($\theta =2^{\circ }$ and $\theta =-2^{\circ }$) are completely opposite. When the rotation direction is positive, the pupil function retains only its upper half; otherwise, it retains the lower half. As shown in Figs. 8(d1)-(d2), based on the four initial search directions obtained from the previous judgments, the sharp drop in $R_i$ is identified again. A significant decrease in $R_i$ must occur along two directions, highlighted with orange circles. If both directions lie in the interval $[0,180^{\circ }]$, the rotation direction is positive; if they lie in $[-180^{\circ },0]$, it is negative.
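The Half-sampling decision itself reduces to checking which half-plane contains the two sharp-drop directions. A sketch with a hypothetical function name, assuming angles in degrees within $(-180^{\circ},180^{\circ}]$:

```python
def rotation_sign(drop_angles_deg):
    """Half-sampling criterion: both sharp-drop directions of R_i lie
    in (0, 180] for a positive rotation and in [-180, 0) for a
    negative one."""
    if all(0 < a <= 180 for a in drop_angles_deg):
        return +1
    if all(-180 <= a < 0 for a in drop_angles_deg):
        return -1
    raise ValueError("inconsistent drop directions")
```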


Fig. 8. The schematic of the Half-sampling criterion which determines the direction of in-plane rotation. (a) The illumination pattern of LED array; (b) The recovered spectrum; (c) Distribution characteristics of offset pupil centers in the cases of $\theta =2^{\circ }$ and $\theta =-2^{\circ }$, respectively; (d) Recovered pupil function, while the orange circles indicate the decreasing of $R_i$ along the radial line in that direction.


2.4 Positional misalignment correction strategy

The process of TPF-FPM is shown in Fig. 9 and can be divided into the following steps:


Fig. 9. The flowchart of TPF-FPM.


Step 1. Reconstruct the pupil function from the captured LR images. Note that although the recovered pupil function only characterizes the deviations of DF illumination angles, joint reconstruction with BF images can improve the contrast relative to the background. Besides, unlike the object function, the offset pupil is not sensitive to the number of iterations; more iterations only increase contrast, making segmentation easier. When the number of DF images exceeds that of BF images, fewer than 5 iterations are sufficient to maintain effective information about the boundary of the pupil function, which also saves time. When the number of DF images is less than that of BF images, more iterations are required. Based on the distinct behaviors inside and outside the passband, we suggest segmenting the two regions with separate thresholds.
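The dual-threshold segmentation suggested above might be sketched as follows (the function name and threshold values are illustrative, with each threshold taken relative to its region's maximum amplitude):

```python
import numpy as np

def segment_pupil(amp, passband_mask, t_in=0.6, t_out=0.3):
    """Binarize the recovered pupil amplitude with separate thresholds
    inside and outside the passband.  amp: pupil amplitude array;
    passband_mask: boolean array, True inside the objective passband."""
    seg = np.zeros_like(amp, dtype=bool)
    inside, outside = passband_mask, ~passband_mask
    if amp[inside].size:
        seg[inside] = amp[inside] > t_in * amp[inside].max()
    if amp[outside].size:
        seg[outside] = amp[outside] > t_out * amp[outside].max()
    return seg
```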

Step 2. Calibrate the DF sub-aperture positions. First, determine the initial search directions based on the Pupil search direction criterion. Second, perform a boundary-finding scheme to identify the shifted pupil centers $\Delta \boldsymbol {k}_{l}$ that occur near the sharp decline at the objective radius $R_{obj}$

$$R_{obj}=\frac{NA_{obj}}{\lambda} \frac{p_{img}\cdot N}{Mag},$$
where $p_{img}$ denotes the pixel size of the imaging sensor, $Mag$ is the magnification of the objective lens, and $N \times N$ is the size of the LR image. Then the precise sub-aperture position $\boldsymbol {k}_l=(k_{xl},k_{yl})$ of a particular LED element can be calibrated through Eq. (8). The Half-sampling criterion contributes to the qualitative analysis of the rotation direction.
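Eq. (17) converts the objective cutoff into spectrum pixels. A one-line sketch with hypothetical names; with the experimental parameters used later (NA 0.13, $\lambda$ = 629 nm, 6.5 µm pixels, 4$\times$ magnification, $N=128$), it yields roughly 43 pixels:

```python
def objective_radius_pixels(NA_obj, wavelength, p_img, N, Mag):
    """Cutoff radius of the objective in spectrum pixels (Eq. 17).
    wavelength and p_img must share the same length unit."""
    return NA_obj / wavelength * p_img * N / Mag
```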

Step 3. Once the localization of DF sub-apertures is accomplished, a compact solution for the global misalignment parameters $[\Delta x^c,\Delta y^c,\theta ^c,h^c]$ can be obtained from Eq. (2). Any two sub-aperture positions yield a set of approximate solutions. For the case of "Rotation only", the lateral offsets can be taken as 0 and a single sub-aperture position suffices to obtain an independent solution for $[\theta ^c,h^c]$. The average of all obtained solutions is taken as the final result and used to update the HR images in the exact sub-apertures.
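Step 3 can alternatively be posed as a small nonlinear least-squares problem over all calibrated sub-apertures, rather than averaging pairwise solutions as in the text. The sketch below (hypothetical names, plain Gauss-Newton with a finite-difference Jacobian) fits $[\Delta x, \Delta y, \theta, h]$ to the measured k-vectors through Eqs. (1)-(2):

```python
import numpy as np

def fit_misalignment(mn, k_meas, d, lam, p0=(0.0, 0.0, 0.0, 100.0), iters=50):
    """Gauss-Newton fit of [dx, dy, theta, h] to calibrated sub-aperture
    positions via Eqs. (1)-(2).  mn: LED indices (m, n); k_meas: the
    corresponding measured (kx, ky) pairs; d: LED spacing; lam: wavelength."""
    m, n = np.asarray(mn, float).T
    target = np.asarray(k_meas, float).T.ravel()   # [kx..., ky...]

    def model(p):
        dx, dy, theta, h = p
        x = d * (n * np.cos(theta) - m * np.sin(theta)) + dx
        y = d * (n * np.sin(theta) + m * np.cos(theta)) + dy
        r = np.sqrt(x**2 + y**2 + h**2)
        return np.concatenate([x / r, y / r]) / lam

    p = np.asarray(p0, float)
    for _ in range(iters):
        f = model(p)
        J = np.empty((f.size, p.size))
        for j in range(p.size):                    # finite-difference Jacobian
            dp = np.zeros(p.size)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (model(p + dp) - f) / dp[j]
        step = np.linalg.lstsq(J, target - f, rcond=None)[0]
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p
```

With noiseless synthetic data this recovers the generating parameters; with noisy calibrated positions it plays the same error-averaging role as the pairwise closed-form solutions.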

3. Simulations

To verify the effectiveness of TPF-FPM and cover the three mentioned misalignment cases, we perform several groups of simulations with different positional parameters. The platform is a laptop with an Intel i7-6700 CPU; no parallel computing framework is utilized. The system parameters are the same as in Sec. 2.1. Two images, Baboon and Aerial, are employed as the HR intensity and phase samples, respectively, each containing $512\times 512$ pixels. To simulate the actual imaging circumstance, Gaussian noise with a mean of 0 and a standard deviation of $5\times 10^{-5}$ is artificially introduced to each LR image. For comparison, the algorithmic self-calibration approach SC-FPM is chosen as a cross-reference. The recovered positional parameters of the two methods are shown in Table 1.
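The simulated acquisition can be reproduced in a few lines. A sketch under the forward model of Eq. (3) with hypothetical names; downsampling to the sensor grid is omitted for brevity, and the stated Gaussian noise (mean 0, $\sigma = 5\times 10^{-5}$) is added per image:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lr_image(obj, pupil, kx_pix, ky_pix, sigma=5e-5):
    """Eq. (3): shift the object spectrum by the sub-aperture position
    (in spectrum pixels), low-pass with the pupil, back-transform,
    take the intensity, and add Gaussian noise."""
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    sub = np.roll(spectrum, (-ky_pix, -kx_pix), axis=(0, 1)) * pupil
    img = np.abs(np.fft.ifft2(np.fft.ifftshift(sub)))**2
    return img + rng.normal(0.0, sigma, img.shape)
```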


Table 1. Recovered positional misalignment parameters of SC-FPM and TPF-FPM

The calibrated parameters of both methods are highly consistent with the actual positional misalignment parameters. The height factor $h^c$ from SC-FPM is slightly higher than that from TPF-FPM. As an ASC method using a simulated annealing algorithm, SC-FPM typically requires about 30 iterations to converge to stable solutions, whereas TPF-FPM has a shorter processing time. As described in Sec. 2.4, the calibration process of TPF-FPM consists mainly of two parts: pupil function recovery and boundary finding. The former generally takes about 15 s, while the time of the latter depends on the complexity of the judgment steps required to determine the initial search directions. For the same misalignment case, the time required for correction is roughly equal.

To further evaluate the improvement in reconstruction quality, we employ the calibrated parameters to correct the frequency positions of the sub-apertures during reconstruction. The recovered HR images of the two algorithms are shown in Fig. 10, where the image quality assessment indexes PSNR and SSIM are selected to quantitatively evaluate the reconstructed results. Figures 10(a) and (b) are the HR samples. The labels $①\sim ④$ correspond to the different misalignment cases in Table 1. The reconstructed results of EPRY, SC-FPM, and TPF-FPM are shown in red, green, and blue boxes, respectively. The quality of the recovered HR intensity images using both calibration algorithms is satisfactory, with TPF-FPM achieving slightly higher evaluation values. As for the phase images, it can be seen intuitively from the results of $② \sim ④$ that TPF-FPM performs better than SC-FPM. Some dark artifacts appearing in EPRY are effectively eliminated after the calibration of TPF-FPM. However, SC-FPM exhibits a certain degree of mutual interference between intensity and phase, as shown in Figs. 10(f2)-(f3), demonstrating that its performance is relatively poor when "Translation dominates". This is related to the difference in correction logic between the two algorithms. SC-FPM inevitably falls into a local optimum because its evaluation metric, computed over the entire image, is sensitive to noise. In contrast, TPF-FPM utilizes the local contour features of the recovered pupil function, and a few iterations further avoid excessive accumulation of noise. Although both methods involve iteration, TPF-FPM achieves more accurate positional parameters and better reconstruction than SC-FPM.
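The PSNR index quoted in Fig. 10 can be computed in a few lines of numpy (SSIM is more involved; scikit-image's `structural_similarity` is a common choice). A sketch with hypothetical names, assuming images normalized to a data range of 1:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference HR image
    and a reconstruction."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float))**2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range**2 / mse)
```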


Fig. 10. Recovered HR images by EPRY, SC-FPM, and TPF-FPM with 10 iterations. (a)-(b) The HR intensity and phase samples; (c1)-(h4) The recovered HR intensity and phase images by three algorithms, labeled with red, green, and blue boxes, respectively. The evaluating values of PSNR and SSIM are marked in the lower right corner.


4. Experiment

Based on the analysis of the intensity of shifted pupils in Sec. 2.2, we chose a testicular tissue slice with abundant high-frequency details as the calibration sample to suppress the noise in the experimental system and make the boundary of the transformed pupil function easier to segment. The model misalignment parameters of both calibration methods were obtained using this sample and then employed to reconstruct other samples (a USAF-1951 board and a blood smear) to demonstrate the generalizability of the corrected positions. A $13\times 13$ programmable LED array (8.128 mm spacing), placed arbitrarily in the horizontal plane to produce the model misalignment error, provides the angle-varied illumination. In our experimental system, collecting 169 LR images takes 51.11 s. Each LED element is embedded with tricolor illumination modules whose central wavelengths are 460 nm, 505 nm, and 629 nm, respectively. A 4$\times$/0.13 NA objective lens and a CMOS camera with a 6.5 $\mu$m pixel pitch constitute the acquisition part of the FPM system.

The central LR images of the calibration sample captured under 629 nm and 505 nm illumination are shown in Figs. 11(a1)-(a2). Based on the correction theory of our method, we attempt to qualitatively analyze the calibrated results through the pupil functions. The pupil functions recovered by the three algorithms are shown in Figs. 11(b1)-(d2). It is apparent from Figs. 11(b1)-(b2) that some bright profiles appear in the lower right section outside the passband, from which we can presumably infer a negative translation of the LED array. In the first group of calibrations, SC-FPM converged on $[\Delta x=-0.896$ mm, $\Delta y=-0.845$ mm, $\theta =-0.582^{\circ }, h=116.10$ mm] after 31 iterations, taking 31.425 s; the results of TPF-FPM are $[\Delta x=-1.177$ mm, $\Delta y=-0.804$ mm, $\theta =-0.701^{\circ }, h=98.42$ mm], taking 23.527 s. In the second group of calibrations, SC-FPM converged on $[\Delta x=-1.082$ mm, $\Delta y=-0.909$ mm, $\theta =-1.234^{\circ }, h=108.63$ mm] after 60 iterations, taking 64.761 s; the results of TPF-FPM are $[\Delta x=-1.638$ mm, $\Delta y=-0.941$ mm, $\theta =-0.785^{\circ }, h=99.47$ mm], taking 23.643 s. The height factor from SC-FPM deviates far from the practical measurement of $h=98$ mm, whereas the height factor from TPF-FPM is closer to the actual situation and provides better guidance for aligning the LED array. Reflected in the pupil functions of Figs. 11(c1)-(c2), numerous bright spots remain outside the passband even after the correction of SC-FPM. In contrast, the absence of noticeable bright spots outside the passband in Figs. 11(d1)-(d2) indicates that TPF-FPM is more effective in eliminating the positional deviations of the DF area. In addition, the running time of TPF-FPM is shorter than that of SC-FPM; in the second group of calibrations, it is even less than half that of SC-FPM.


Fig. 11. The recovered pupil functions by three algorithms. (a1)-(a2) The central LR images of the selected calibration sample under the illumination of 629 nm and 505 nm, respectively. (b1)-(d2) The recovered pupil functions by EPRY, SC-FPM, and TPF-FPM, respectively.


One segment (128$\times$128 pixels) of the USAF-1951 board under 629 nm illumination is chosen as the test sample. Figure 12(a) presents the captured LR image. The comparative effects of the three algorithms are shown in Figs. 12(c1)-(c3) and (f1)-(f3), marked with the same colors as in Fig. 11. An area containing two sets of stripes (group 7, elements 5 and 6) is enlarged to illustrate the improvement in the intensity image. It is evident from the intensity diagrams that TPF-FPM alleviates the dark-spot effect present within the fringes in EPRY and enhances the integrity of the contour. However, the details of the stripes are still unclear, and even worse than EPRY, after the correction of SC-FPM, demonstrating that its calibrated positions are imprecise. Meanwhile, the cross-section mean intensity distributions of the purple boxes in Figs. 12(b) and (d1)-(d3) are presented to further support the effectiveness of TPF-FPM. As shown in Figs. 12(e1)-(e4), TPF-FPM improves the intensity contrast, while the result of SC-FPM is inferior to that of EPRY. The comparison is also borne out in the recovered phase maps, where TPF-FPM performs better at removing remarkable artifacts in the background.


Fig. 12. Experimental results of one segment (128$\times$128 pixels) of USAF-1951 resolution target recovered by three algorithms under 629 nm illumination. (a) The captured LR intensity image corresponding to central LED illumination; (b) The zoom-in of the orange rectangle in (a); (c1)-(c3) The recovered intensity images by three algorithms, respectively; (d1)-(d3) The zoom-ins of orange rectangles in (c1)-(c3); (e1)-(e4) The cross-section mean intensity distributions of the purple boxes in (b) and (d1)-(d3); (f1)-(f3) The recovered phase images by three algorithms, respectively.


Besides, we further show the results of recovering a blood smear with the corrected positions under 505 nm illumination. The HR images recovered by the three algorithms are illustrated in Figs. 13(c1)-(c3) and (e1)-(e3). The zoom-ins of the orange boxes in the recovered intensity and phase images are shown in Figs. 13(d1)-(d3) and (f1)-(f3) to compare the reconstructed details. The results of EPRY show some wrinkle artifacts before misalignment calibration, which both correction algorithms eliminate. Although SC-FPM successfully removes most artifacts, it does not perfectly reconstruct the profiles of blood cells, especially overlapping ones. In contrast, TPF-FPM yields more satisfying results with improved blood cell contour details. Additionally, in the phase maps, SC-FPM may produce dark shadows in the background, whereas TPF-FPM delivers a cleaner and more uniform overall phase.


Fig. 13. Experimental results of one segment (40$\times$40 pixels) of blood smear recovered by three algorithms under 505 nm illumination. (a) The segment of central LR intensity image; (b) The zoom-in of the orange rectangle in (a); (c1)-(c3) The recovered HR intensity images; (d1)-(d3) The zoom-ins of orange rectangles in (c1)-(c3); (e1)-(e3) The recovered HR phase images; (f1)-(f3) The zoom-ins of orange rectangles in (e1)-(e3).


5. Conclusion

In this paper, we propose a novel model misalignment calibration strategy called TPF-FPM, which is based on the conversion between the spatial frequency deviation and the shifted pupil function. The feasibility and effectiveness of TPF-FPM are demonstrated by its performance in simulations and experiments. Compared with the joint optimization concept of algorithmic self-calibration approaches, we construct an explicit physical mapping model that separates the misalignment calibration from the reconstruction. In addition, the utilization of the pupil function greatly weakens the interference of other error sources. Unlike calibration methods based on the central region of the BF area, we achieve overall calibration by minimizing the positional deviations of the edge corners. The analysis of the positional deviation distribution not only enables determination of the comparative relationship between different positional parameters of an LED array, but also offers the potential to reveal the positional error threshold beyond which reconstruction degrades seriously.

Several points of TPF-FPM are worth improving in future work. Firstly, the pupil-function-based strategy can be extended to dark-field localization under full-pose-parameter misalignment [28,29] or to other illuminators; we only need to predetermine the distribution maps of the shifted pupil centers and identify which LED elements correspond to the offset pupils that form the contour of the pupil function in particular directions. Secondly, in addition to the boundary-finding scheme, the estimation of the illumination position can also be posed as an object detection problem for neural networks to improve the estimation accuracy [30]. Thirdly, the recovery of the pupil function is limited by the ePIE-based algorithm; owing to the direction and speed of error accumulation in the iterative route, the reconstruction may collapse in some special misalignment cases. When the positional deviation does not exceed the fault threshold, this issue can be addressed by appropriately reducing the update step size or adopting more robust iterative algorithms [31]. Fourthly, to improve the robustness to noise, TPF-FPM imposes certain requirements on the selected calibration samples. The intensity of the shifted pupil depends not only on the energy of the current sub-spectrum but also on the update weight function $w$ in Eq. (11). We expect the profiles of offset pupils corresponding to high-frequency sub-spectra to be effectively pronounced during the iterative process; this issue relates to the construction of weight functions [27].

Funding

National Natural Science Foundation of China (62101032).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005). [CrossRef]  

3. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010). [CrossRef]  

4. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–250 (1972).

5. J. R. Fienup, “Phase retrieval algorithms: a personal tour [invited],” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]  

6. A. Pan, Y. Zhang, T. Zhao, et al., “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(09), 1–11 (2017). [CrossRef]  

7. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

8. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

9. Y. Fan, J. Sun, Q. Chen, et al., “Adaptive denoising method for Fourier ptychographic microscopy,” Opt. Commun. 404, 23–31 (2017). [CrossRef]  

10. L. Hou, H. Wang, M. Sticker, et al., “Adaptive background interference removal for Fourier ptychographic microscopy,” Appl. Opt. 57(7), 1575–1580 (2018). [CrossRef]  

11. Y. Chen, T. Xu, J. Zhang, et al., “Precise and independent position correction strategy for Fourier ptychographic microscopy,” Optik 265, 169481 (2022). [CrossRef]  

12. S. Jiang, K. Guo, J. Liao, et al., “Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

13. J. Zhang, X. Tao, L. Yang, et al., “Forward imaging neural network with correction of positional misalignment for Fourier ptychographic microscopy,” Opt. Express 28(16), 23164–23175 (2020). [CrossRef]  

14. D. Yang, S. Zhang, C. Zheng, et al., “Fourier ptychography multi-parameter neural network with composite physical priori optimization,” Biomed. Opt. Express 13(5), 2739–2753 (2022). [CrossRef]  

15. V. Bianco, M. D. Priscoli, D. Pirone, et al., “Deep learning-based, misalignment resilient, real-time Fourier ptychographic microscopy reconstruction of biological tissue slides,” IEEE J. Sel. Top. Quantum Electron. 28(4), 1–10 (2022). [CrossRef]  

16. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

17. J. Sun, Q. Chen, Y. Zhang, et al., “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]  

18. A. Zhou, W. Wang, N. Chen, et al., “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018). [CrossRef]  

19. Y. Zhu, M. Sun, P. Wu, et al., “Space-based correction method for LED array misalignment in Fourier ptychographic microscopy,” Opt. Commun. 514, 128163 (2022). [CrossRef]  

20. J. Liu, Y. Li, W. Wang, et al., “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25(23), 28053–28067 (2017). [CrossRef]  

21. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]  

22. J. Zhang, T. Xu, J. Liu, et al., “Precise brightfield localization alignment for Fourier ptychographic microscopy,” IEEE Photonics J. 10(1), 1–9 (2018). [CrossRef]  

23. C. Zheng, S. Zhang, G. Zhou, et al., “Robust Fourier ptychographic microscopy via a physics-based defocusing strategy for calibrating angle-varied LED illumination,” Biomed. Opt. Express 13(3), 1581–1594 (2022). [CrossRef]  

24. Y. Gao, A. Pan, H. Gao, et al., “Design of Fourier ptychographic illuminator for single full-FOV reconstruction,” Opt. Express 31(18), 29826–29842 (2023). [CrossRef]  

25. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

26. H. M. L. Faulkner and J. M. Rodenburg, “Movable Aperture Lensless Transmission Microscopy: A Novel Phase Retrieval Algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

27. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

28. C. Zheng, S. Zhang, D. Yang, et al., “Robust full-pose-parameter estimation for the LED array in Fourier ptychographic microscopy,” Biomed. Opt. Express 13(8), 4468–4482 (2022). [CrossRef]  

29. G. Zhou, T. Li, S. Zhang, et al., “Hybrid full-pose parameter calibration of a freeform illuminator for Fourier ptychographic microscopy,” Biomed. Opt. Express 14(8), 4156–4169 (2023). [CrossRef]  

30. F. Ströhl, S. Jadhav, B. S. Ahluwalia, et al., “Object detection neural network improves Fourier ptychography reconstruction,” Opt. Express 28(25), 37199–37208 (2020). [CrossRef]  

31. Y. Chen, T. Xu, H. Sun, et al., “Integration of Fourier ptychography with machine learning: an alternative scheme,” Biomed. Opt. Express 13(8), 4278–4297 (2022). [CrossRef]  

