
Robust full-pose-parameter estimation for the LED array in Fourier ptychographic microscopy


Abstract

Fourier ptychographic microscopy (FPM) can achieve quantitative phase imaging with a large space-bandwidth product by synthesizing a set of low-resolution intensity images captured under angularly varying illumination. Determining accurate illumination angles is critical because the consistency between the actual system parameters and those used in the recovery algorithm is essential for high-quality imaging. This paper presents a full-pose-parameter, physics-based method for calibrating illumination angles. Using a physics-based model constructed with general knowledge of the employed microscope and the brightfield-to-darkfield boundaries inside captured images, we can solve for the full-pose parameters of a misplaced LED array, which consist of the distance between the sample and the LED array, two orthogonal lateral shifts, one in-plane rotation angle, and two tilt angles, to correct the illumination angles precisely. The feasibility and effectiveness of the proposed method for recovering random or remarkable pose parameters have been demonstrated by both qualitative and quantitative experiments. Due to the completeness of the pose parameters, the clarity of the physical model, and the high robustness to arbitrary misalignments, our method can significantly facilitate the design, implementation, and application of concise and robust FPM platforms.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM) [1–3] is a recently developed computational imaging technique. It uses angle-varied illumination to surpass the resolution limit of optical imaging systems. By integrating the concepts of synthetic aperture [4] and phase retrieval [5], the space-bandwidth product (SBP) of imaging the complex transmittance function, which characterizes the absorption and phase modulation properties of a sample, can be significantly improved. Because of its superior performance, FPM has become a powerful technique in many fields, such as digital pathology [6], aberration metrology [7], high-throughput cytometry [8], and three-dimensional (3D) imaging [9–11], among others.

In contrast to real-space ptychography [12,13], FPM avoids mechanical scanning devices and instead utilizes a programmable LED array to illuminate a sample of interest from multiple angles, thereby enabling aperture scanning in the Fourier domain. At each illumination angle, FPM captures a low-resolution (LR) intensity image corresponding to a circular sub-spectrum whose position and radius are determined by the illumination wavelength, the illumination angle, and the numerical aperture (NA) of the objective. The FPM algorithm then recovers high-resolution (HR) complex sample images by sequentially imposing the amplitude constraint from each captured image in the spatial domain and the support-region constraint from the associated circular sub-spectrum in the Fourier domain.

As a typical computational imaging method, FPM has high requirements for the consistency between the illumination parameters of the experimental LED array and those used in the recovery algorithm. Nevertheless, when one builds or modifies an FPM platform, the LED array is inevitably misaligned, resulting in inaccurate stitching of the LR images’ spectra in the Fourier domain and, in turn, degrading the quality of the recovered images. Two types of methods have been proposed to overcome this problem: pre-calibration and post-calibration. In the pre-calibrated scheme, one can use precision mechanical devices to adjust the LED array to an ideal position before capturing the raw data set, based on the symmetry of optical systems [14,15], or utilize the shift of a captured out-of-focus image to seek the correct illumination angles [16]. One can also obtain the location and orientation of the LED array by exploiting the features of brightfield-to-darkfield (B-D) boundaries [17]. However, these methods not only require significant user expertise or specific devices, but are also not robust to changes in the system (e.g., bumping the setup). To alleviate this cost- and labor-intensive task, many post-calibrated methods have been developed. For example, pcFPM [18], SC-FPM [19], and the SBC method [20] can obtain four non-tilted pose parameters of the LED array from the intensity distribution of captured images, and another two-part calibration [21] achieves the same purpose using the spectra of the images. Unfortunately, SC-FPM and pcFPM are based on the simulated annealing (SA) algorithm, whose performance relies heavily on suitable initialization of the step size and search direction. The effectiveness of the SBC method, based on the particle swarm optimization (PSO) algorithm, is likewise influenced by the update step size and the learning factor. When the LED array is remarkably misplaced, one has to manually refine the initial parameters to prevent the optimization algorithm from falling into a local minimum or failing to converge. Furthermore, all post-calibrated methods have two limitations. First, they are not robust, because the captured data is coupled not only with the pose misalignment but also with the aberration of the objective, intensity fluctuations, and system noise. Second, they are not efficient, because many iterations are needed for the estimated pose parameters to converge.

To combine the advantages of pre-calibrated and post-calibrated approaches, and to obtain the pose parameters of all 6 degrees of freedom of the LED array, we report a physics-based full-pose-parameter estimation method for FPM platforms. With general knowledge of the employed microscope and the captured LR images, our method can solve for the six pose parameters that model the LED array precisely, automatically calibrating the illumination angles. We show the principle of how an arc-shaped B-D boundary forms inside a captured image, and construct an explicit physical model that connects the position of each LED element with the circle center and radius of the B-D boundary. In addition, to extract the B-D boundary from a captured image, we design an algorithm that removes noise and accurately calculates the circle center and radius of the boundary. The feasibility and effectiveness of calibrating pose misalignment have been demonstrated experimentally. Such a physics-based method is capable of obtaining the full-pose parameters of an LED array placed arbitrarily, significantly promoting the development of adjustment-free FPM platforms.

The remainder of this paper is organized as follows: we review the theory of FPM in Section 2.1. We then introduce a precise LED array model and analyze the misalignment-induced artifacts in Section 2.2. In Section 2.3, we construct a physics model that connects the positions of the LEDs with the captured B-D boundary features. In Section 2.4, we propose a full-pose-parameter estimation method to correct pose misalignment based on the constructed model. Experimental results with a resolution target and a biological sample are presented in Section 3. Finally, we summarize the conclusions in Section 4.

2. Principle

2.1 FPM theory

A typical FPM system is composed of an LED array, a low-NA objective, a tube lens, and an image sensor (CMOS or CCD camera). Experimentally, M × N LEDs are turned on sequentially to illuminate a thin sample placed far from the LED array, and the image sensor records the corresponding LR images. For a small segment of the sample, whose size is sufficiently small compared with the distance between the LED array and the sample, the light wave from LEDm,n (row m, column n) can be approximately treated as a plane wave with wave vector (um,n, vm,n). The LR intensity image can be described as

$$I_{m,n}^c(x,y) = |[o(x,y){e^{j(x{u_{m,n}} + y{v_{m,n}})}}] \ast h(x,y){|^2} = |{\mathcal{F}^{ - 1}}\{ O(u - {u_{m,n}},v - {v_{m,n}}) \cdot P(u,v)\} {|^2},$$
where $o(x,y)$ is the complex transmission function of the thin sample, (x, y) denotes the two-dimensional (2D) coordinates in the sample plane, j is the imaginary unit, $h(x,y)$ stands for the point spread function, * is the convolution operator, and ${\mathcal{F}^{ - 1}}$ represents the inverse Fourier transform operator. $O(u,v) = \mathcal{F}\{ o(x,y)\}$ and $P(u,v) = \mathcal{F}\{ h(x,y)\}$ are the Fourier transforms of $o(x,y)$ and $h(x,y)$, respectively. For a circularly symmetric, diffraction-limited coherent imaging system, the transfer function P is a circular pupil that limits the highest spatial frequency transmitted by the objective to NA/λ, where λ is the central illumination wavelength.
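To make Eq. (1) concrete, below is a minimal numerical sketch of the forward model. The function name and the convention that the pupil is sampled in unshifted FFT frequency order are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def simulate_lr_image(o, pupil, u_mn, v_mn, dx):
    """Sketch of Eq. (1): LR intensity under a tilted plane wave (u_mn, v_mn).

    o     : complex transmission function on an N x N grid
    pupil : pupil function P(u, v), sampled in unshifted FFT frequency order
    dx    : sample-plane pixel size (m); u_mn, v_mn in rad/m
    """
    N = o.shape[0]
    x = np.arange(N) * dx
    X, Y = np.meshgrid(x, x)
    # Tilted plane-wave illumination shifts the sample spectrum by (u_mn, v_mn)...
    O_shifted = np.fft.fft2(o * np.exp(1j * (X * u_mn + Y * v_mn)))
    # ...which the pupil then low-pass filters before intensity detection.
    return np.abs(np.fft.ifft2(O_shifted * pupil)) ** 2
```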

FPM computationally generates HR complex images from the captured M × N LR images with a recovery procedure composed of five steps. The first step is to initialize the guesses of the HR sample spectrum ${O_0}(u,v)$ and pupil function ${P_0}(u,v)$, where the subscript ‘0’ denotes the initial iteration. In general, the initial spectrum is set as the Fourier transform of any up-sampled LR image. The initial pupil function is a circular low-pass filter with unit amplitude inside the passband, zero amplitude outside, and zero phase everywhere. Second, the pupil function and spectrum are used to generate an LR target image under the illumination of LEDm,n as

$$o_{0,m,n}^e(x,y) = {\mathcal{F}^{ - 1}}\{ {O_0}(u - {u_{m,n}},v - {v_{m,n}}){P_0}(u,v)\} .$$

Third, replace the target image’s amplitude with the square root of the intensity image captured under the corresponding illumination angle, keeping the phase unchanged, to form an updated LR target image:

$$o_{0,m,n}^u(x,y) = \sqrt {I_{m,n}^c(x,y)} \,\frac{{o_{0,m,n}^e(x,y)}}{{|o_{0,m,n}^e(x,y)|}}.$$

Subsequently, the updated target image is utilized to update the corresponding sub-spectrum of the HR sample spectrum, which is given by:

$$\begin{aligned} {O_{i + 1}}(u - {u_{m,n}},v - {v_{m,n}}) &= {O_i}(u - {u_{m,n}},v - {v_{m,n}}) + \alpha \frac{{P_i^\ast (u,v)}}{{|{P_i}(u,v)|_{\max }^2}}\Delta {O_{i,m,n}},\\ {P_{i + 1}}(u,v) &= {P_i}(u,v) + \beta \frac{{O_i^\ast (u - {u_{m,n}},v - {v_{m,n}})}}{{|{O_i}(u - {u_{m,n}},v - {v_{m,n}})|_{\max }^2}}\Delta {O_{i,m,n}}, \end{aligned}$$
where α and β are the iterative step sizes, which can be set to one or adaptively updated to reduce noise in the recovered image [22]. The subscript i denotes the iteration index, and $\Delta {O_{i,m,n}} = \mathcal{F}\{ o_{i,m,n}^u(x,y)\} - \mathcal{F}\{ o_{i,m,n}^e(x,y)\}$ is an auxiliary function.

In the fourth step, steps 2–3 are repeated for the other captured LR images. Finally, steps 2–4 are repeated until the solution converges. At the end of the iterative recovery process, the converged solution in the Fourier domain covers a significantly extended spectral support, and it is transformed to the spatial domain to obtain the HR intensity and phase images.
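The five steps can be condensed into a short update loop. The sketch below illustrates Eqs. (2)-(4) under assumed conventions: centered (fftshifted) spectra, a flat initial spectrum in place of an up-sampled LR image, and hypothetical inputs `images` and `centers` giving each sub-spectrum's center pixel. It is not the authors' exact implementation.

```python
import numpy as np

def fpm_recover(images, centers, m_lr, M_hr, pupil, n_iter=20, alpha=1.0, beta=1.0):
    """images: list of m_lr x m_lr LR intensities; centers: (cx, cy) int pixels
    of each sub-spectrum on the M_hr x M_hr centered HR Fourier grid."""
    # Crude flat-sample initialization (a delta spectrum); the paper suggests
    # the Fourier transform of an up-sampled LR image instead.
    O = np.fft.fftshift(np.fft.fft2(np.ones((M_hr, M_hr), dtype=complex)))
    P = pupil.astype(complex)  # circular low-pass filter, centered
    for _ in range(n_iter):
        for img, (cx, cy) in zip(images, centers):
            sl = np.s_[cy - m_lr // 2: cy + m_lr // 2,
                       cx - m_lr // 2: cx + m_lr // 2]
            O_sub = O[sl].copy()
            o_e = np.fft.ifft2(np.fft.ifftshift(O_sub * P))        # Eq. (2)
            o_u = np.sqrt(img) * o_e / (np.abs(o_e) + 1e-12)       # Eq. (3)
            dO = np.fft.fftshift(np.fft.fft2(o_u)) - O_sub * P     # auxiliary term
            # Eq. (4): update the sub-spectrum, then the pupil (EPRY-style)
            O[sl] += alpha * np.conj(P) * dO / (np.abs(P) ** 2).max()
            P += beta * np.conj(O_sub) * dO / (np.abs(O_sub) ** 2).max()
    return np.fft.ifft2(np.fft.ifftshift(O))  # HR complex image
```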

2.2 LED array model and misalignment-induced artifacts

It is worth noting that image recovery in FPM relies on the accurate stitching of each sub-spectrum in the Fourier domain, as detailed in the fourth step of the recovery procedure. Indeed, the position of each LED element with respect to the sample is critical for a successful recovery, as it directly determines the center coordinate of the stitched sub-spectrum. In practice, the LEDs are arranged in a matrix on a planar board. Both the planarity and the matrix distribution are key prior constraints for determining the full-pose parameters of the LED array in our method.

A global LED array model [19] has been proposed to constrain the positions of the LEDs and ensure the convergence of the SA algorithm. It uses four parameters (Δx, Δy, θz, h) to model an LED array, where Δx and Δy are the lateral shifts, θz is the in-plane rotation angle, and h is the distance between the sample and the LED array. However, the pose of a rigid LED board in 3D space should be described with 6 degrees of freedom. Therefore, we extend the model to the more complete parameter set (Δx, Δy, θx, θy, θz, h), where θx and θy are the tilt angles about the x- and y-axes, respectively. Figure 1 illustrates the LED array modeled with the six parameters. A programmable LED array (5 × 5 LEDs are sketched for clarity) is placed under the sample. The coordinate system is built as follows: the z-axis is the optical axis of the objective, directed from the LED array to the sample, and it intersects the LED array at the point O. The y-axis passes through the point O in the same direction as the line through the centers of two adjacent pixels on the image sensor. The x-axis also passes through the point O and is perpendicular to the y-axis.


Fig. 1. Schematic of a misaligned LED array model. (a) shows the misaligned LED array with lateral shifts of (Δx, Δy) and the in-plane rotation angle θz. (b)-(c) are side views of the misaligned LED array, where θx and θy are the tilt angles about the x- and y-axes, respectively.


Assuming the spacing between adjacent LEDs is uniform, the position of each individual LED element can be expressed as

$$\begin{bmatrix} {x_{m,n}}\\ {y_{m,n}}\\ {z_{m,n}} \end{bmatrix} = \mathbf{R}\begin{bmatrix} m{d_{LED}}\\ n{d_{LED}}\\ 0 \end{bmatrix} + \begin{bmatrix} \Delta x\\ \Delta y\\ 0 \end{bmatrix},$$
where (xm,n, ym,n, zm,n) represents the coordinate of LEDm,n in the 3D space, dLED denotes the distance between adjacent LEDs, and R is the rotation matrix that can be written as
$$\mathbf{R} = \begin{bmatrix} \cos ({\theta _y})\cos ({\theta _z}) & \cos ({\theta _y})\sin ({\theta _z}) & -\sin ({\theta _y})\\ \sin ({\theta _x})\sin ({\theta _y})\cos ({\theta _z}) - \cos ({\theta _x})\sin ({\theta _z}) & \cos ({\theta _x})\cos ({\theta _z}) + \sin ({\theta _x})\sin ({\theta _y})\sin ({\theta _z}) & \sin ({\theta _x})\cos ({\theta _y})\\ \cos ({\theta _x})\sin ({\theta _y})\cos ({\theta _z}) + \sin ({\theta _x})\sin ({\theta _z}) & \cos ({\theta _x})\sin ({\theta _y})\sin ({\theta _z}) - \sin ({\theta _x})\cos ({\theta _z}) & \cos ({\theta _x})\cos ({\theta _y}) \end{bmatrix}.$$

Once we obtain the coordinate of each LED, we can calculate the illumination wave vectors as

$$\begin{aligned} {u_{m,n}} &= \frac{{2\pi }}{\lambda } \cdot \frac{{{x_c} - {x_{m,n}}}}{{\sqrt {{{({x_c} - {x_{m,n}})}^2} + {{({y_c} - {y_{m,n}})}^2} + {{(h - {z_{m,n}})}^2}} }},\\ {v_{m,n}} &= \frac{{2\pi }}{\lambda } \cdot \frac{{{y_c} - {y_{m,n}}}}{{\sqrt {{{({x_c} - {x_{m,n}})}^2} + {{({y_c} - {y_{m,n}})}^2} + {{(h - {z_{m,n}})}^2}} }}, \end{aligned}$$
where (xc, yc) is the central coordinate of one small segment of the sample.
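As an illustration of Eqs. (5)-(7), the sketch below builds the rotation matrix entry-by-entry from Eq. (6) and maps an LED index to its illumination wave vector. The helper names and default values (2.5 mm pitch, 470 nm wavelength) mirror the simulation parameters used later and are otherwise our own assumptions.

```python
import numpy as np

def rotation_matrix(tx, ty, tz):
    """R of Eq. (6), built entry-by-entry; angles in radians."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    return np.array([
        [cy * cz,                 cy * sz,                 -sy],
        [sx * sy * cz - cx * sz,  cx * cz + sx * sy * sz,  sx * cy],
        [cx * sy * cz + sx * sz,  cx * sy * sz - sx * cz,  cx * cy],
    ])

def led_position(m, n, pose, d_led=2.5e-3):
    """Eq. (5): 3D position of LED(m, n) under pose = (dx, dy, tx, ty, tz, h)."""
    dx, dy, tx, ty, tz, _h = pose
    p = rotation_matrix(tx, ty, tz) @ np.array([m * d_led, n * d_led, 0.0])
    return p + np.array([dx, dy, 0.0])

def wave_vector(m, n, pose, xc=0.0, yc=0.0, wl=470e-9):
    """Eq. (7): illumination wave vector (u, v) for a segment centered at (xc, yc)."""
    x, y, z = led_position(m, n, pose)
    h = pose[5]
    r = np.sqrt((xc - x) ** 2 + (yc - y) ** 2 + (h - z) ** 2)
    return 2 * np.pi / wl * (xc - x) / r, 2 * np.pi / wl * (yc - y) / r
```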


Fig. 2. Effects of different pose misalignments. (a1)-(a2) are the ground truth of the sample; (b1)-(b2) are the recovered intensity and phase images, respectively, with a lateral shift of 700 µm along the x-axis (Δx = 700 µm). (c1)-(c2) are the recovered intensity and phase images, respectively, with an in-plane rotation angle of 10° about the z-axis (θz = 10°). (d1)-(d2) are the recovered intensity and phase images, respectively, with a tilt angle of 10° about the y-axis (θy = 10°). (b3) and (e1) are the curves of the pixel offset between all ideal and true sub-spectra as the pose misalignment increases, serving as the evaluation in the Fourier domain; (b4)-(b5) and (e2)-(e3) show the RMSE between the recovered images (I: intensity, Ф: phase) and the ground truth as the pose misalignment increases, serving as the evaluation in the space domain.


To illustrate the misalignment-induced artifacts and demonstrate the necessity of calibrating all kinds of pose misalignments, we perform simulations and compare the effects of different pose parameters quantitatively. The simulation parameters are chosen based on an experimental system. A 21 × 21 LED array (2.5 mm LED spacing, 470 nm wavelength) is placed 92 mm beneath the sample. The NA and magnification of the objective are 0.1 and 4×, respectively. Figures 2(a1) and 2(a2) are the ground truth of the sample. First, we study lateral-shift-induced artifacts by increasing Δx from 0 µm to 2000 µm in steps of 100 µm. To evaluate the effect in the Fourier domain, we calculate the sum of the pixel offsets between the ideal and true positions of all sub-spectra, denoted Δp, as shown in Fig. 2(b3). Since the illumination wave vectors are approximately linearly correlated with the positions of the LEDs according to Eq. (7), Δp, which represents the wave-vector difference, grows linearly with Δx. In addition, we calculate the root-mean-square error (RMSE) between the recovered images and the ground truth to evaluate the effect in the space domain, as shown in Figs. 2(b4) and 2(b5), where I and Ф represent intensity and phase, respectively. Notably, the RMSE curves grow in a stair-step manner as the lateral shift increases. For example, when Δx varies from 600 µm to 700 µm, the associated RMSE grows dramatically, and the artifacts arising in the images degrade the imaging quality severely, as shown in Figs. 2(b1) and 2(b2). Therefore, calibrating the lateral shift is necessary for high-quality imaging.

Next, we analyze the effects of in-plane rotation and tilt with similar simulations. The curves of Δp and RMSE with increasing angle are shown in Figs. 2(e1)-(e3). We find that the RMSE induced by either in-plane rotation or tilt is smaller than that induced by lateral shift. Even when the rotation angle increases to 10°, which is easily noticeable to the naked eye, the distortion in the recovered images remains slight, as shown in Figs. 2(c) and 2(d). The different effects can be interpreted with two simple expressions. For a lateral shift, all LEDs are shifted by Δx, shifting all sub-spectra regardless of whether they carry low- or high-frequency content. In contrast, the in-plane rotation shifts the position of LEDm,n by ${x_{m,n}}[1 - \cos ({\theta _z})]$ and ${x_{m,n}}\sin ({\theta _z})$ along the x- and y-axes, respectively, and the tilt-induced shift along the x-axis is likewise ${x_{m,n}}[1 - \cos ({\theta _y})]$. The shifts caused by rotation or tilt are therefore proportional to the coordinate of the LED element; that is, the sub-spectra carrying low-frequency information shift less than those carrying high-frequency information. Meanwhile, low-frequency information usually contributes more to an image in the space domain than high-frequency information. Thus, the degradation of image quality in Figs. 2(c) and 2(d) is much smaller than in Figs. 2(b1) and 2(b2). Nevertheless, both in-plane rotation and tilt misalignments degrade the image quality, implying that calibrating them is also important for accurate quantitative analysis in FPM.

2.3 Physics-based model for pose calibration

An important feature of computational imaging approaches is the joint optimization of imaging systems and algorithms. One of the core ways to achieve this is to mine and utilize more effective and explicit physical models. To calibrate the pose misalignment, we construct a physics-based model that connects the B-D boundary inside a captured image with the full-pose parameters.

As shown in Fig. 3(a1), when the central LED element provides the illumination, the illuminated area is larger than the FOV of the CMOS camera, so the captured intensity image is a pure brightfield (BF) image. Figure 3(b1) shows such a BF image, where the circle drawn with the green dashed line is the projection of the aperture stop, point C0,0 is its center, and the red rectangle is the FOV of the CMOS camera, which serves as the field stop of the system. In contrast, the intensity image captured under an LED element whose illumination NA is close to the objective NA is a B-D image, as sketched in Fig. 3(b2). The B-D boundary is typically arc-shaped because it is formed by the superposition of the field stop and the projection of the circular aperture stop in the image plane, as shown in Fig. 3(a2). For convenience of analysis, we transform the field stop from image space into object space to obtain the entrance window (EW), and likewise transform the aperture stop to obtain the entrance pupil (EP). The schematic diagram of our physics-based model is presented in Fig. 3(c).


Fig. 3. Physics-based model used for pose calibration in an FPM system. (a1) shows the principle of forming a BF image, and the corresponding image is shown in (b1). (a2) shows the principle of forming a B-D image, and the corresponding image is shown in (b2). (c) is the physics-based model used to connect the features of a B-D image with the position of each LED element.


To build the mathematical model, we first calculate the distance between the EP and the EW, denoted h1. Note that h1 is a constant characteristic of an FPM system because the positions of the EP and EW are fixed. As shown in Fig. 3(c), GH is the EP radius, calculated as h1·NA. CC0,0 is the radius of the projection of the EP onto the EW plane. This projection is conjugate to the B-D boundary according to the above transformation, so CC0,0 can be calculated as RBD/Mag, where RBD is the radius of the B-D boundary and Mag is the magnification of the system. With a simple geometric deduction, we get

$$\frac{{C{C_{0,0}}}}{{GH}} = \frac{h}{{h - {h_1}}}.$$

Substituting the formulas of CC0,0 and GH into Eq. (8), we can get

$${h_1} = \frac{{h{R_{BD}}}}{{\textrm{NA} \cdot h \cdot \textrm{Mag} + {R_{BD}}}}.$$

Now, we can calculate the circle center of the B-D boundary as

$$\begin{array}{l} {x_{BD,m,n}} = \frac{{{h_1} \cdot {x_{m,n}}}}{{h - {h_1} - {z_{m,n}}}},\\ {y_{BD,m,n}} = \frac{{{h_1} \cdot {y_{m,n}}}}{{h - {h_1} - {z_{m,n}}}}. \end{array}$$

Equations (9) and (10) make up the forward part of our model, which shows that the circle center and radius of a B-D boundary can be calculated from the position of the corresponding LED element and the NA of the objective. In turn, the position of each LED can be estimated from the features of the B-D boundary. More details are given in the following section.
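A minimal sketch of this forward part (Eqs. (9)-(10)) follows. The helper names are ours, and scaling the predicted center by Mag to compare with sensor-plane coordinates is our assumption about how the object-space prediction of Eq. (10) is matched to the extracted boundary.

```python
def entrance_window_distance(h, r_bd, na, mag):
    """h1 of Eq. (9); r_bd is the extracted B-D radius in the image plane."""
    return h * r_bd / (na * h * mag + r_bd)

def bd_center(led_xyz, h, h1, mag):
    """Eq. (10): predicted B-D circle center, scaled to the image plane (assumed)."""
    x, y, z = led_xyz
    scale = h1 / (h - h1 - z)
    return mag * scale * x, mag * scale * y
```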

2.4 Physics-based and self-calibrated strategy

Unlike existing pre- or post-calibrated methods, we present a physics-based full-pose-parameter calibration method. It takes full advantage of general knowledge of the employed microscope and achieves robust, precise pose calibration from the features of the B-D boundaries inside captured images. The flow chart of our method is shown in Fig. 4.


Fig. 4. Flow chart of our method. (a1)-(a4) are four intensity images unsuitable for extracting the B-D boundary, owing to pure BF imaging, pure DF imaging, high-proportion BF imaging, and low-proportion BF imaging, respectively. (a5)-(a6) are two suitable intensity images, and the extracted circle centers and radii are depicted in (b1) and (b2).


First, capture LR intensity images as the raw data set. This step is shared with the raw data acquisition of FPM, because the proposed method is itself a post-calibrated method. The difference is that the LED array used in our method can be placed arbitrarily.

Second, select suitable images for B-D boundary extraction. As mentioned in Section 2.3, the position of each LED element can be inferred from the B-D boundary. However, not all captured images carry suitable B-D information. If an LED element is close to the optical axis, the captured image is a pure BF image, as shown in Fig. 4(a1). If the illumination NA is greater than the objective NA, the full FOV of the captured image is DF, as shown in Fig. 4(a2). Extracting B-D information from such images is useless. Thus, we define the extracting range of intensity images as $S = \{ (m,n)\,|\,m ={-} {m_{BD}},\ldots ,{m_{BD}},\ n ={-} {n_{BD}},\ldots ,{n_{BD}}\} $, where mBD equals nBD for uniformity. In actual experiments, mBD can be determined from the index of the captured image at which the full FOV starts to become DF. Alternatively, it can be calculated mathematically as

$${m_{BD}} = \left\lceil {\frac{{h \cdot \tan (\arcsin (\textrm{NA})) + \frac{{0.5{L_x}}}{{Mag}}}}{{{d_{LED}}}}} \right\rceil ,$$
where $\lceil \cdot \rceil $ rounds its argument up to the nearest integer, and Lx is the length of the CMOS sensor. In addition, the accuracy of extraction is influenced by the proportion of the BF region within the whole FOV. For example, the intensity images shown in Figs. 4(a3) and 4(a4) are not suitable for extraction because the proportion of the BF region is too high or too low, respectively. Therefore, we further define a proportion threshold ${\eta _{th}} = 0.85$ to judge whether the intensity images within the extracting range are suitable. For an intensity image ${I_{m,n}}(x,y)$, the proportion of the BF region can be approximately calculated as
$${\eta _{m,n}} = \frac{{\sum\limits_{x,y}^{X,Y} {{B_{m,n}}(x,y)} }}{{XY}},(m,n) \in S,$$
where ${B_{m,n}}(x,y)$ is the binarized image of ${I_{m,n}}(x,y)$, and X and Y are the numbers of pixels along the x- and y-axes, respectively. If ${\eta _{m,n}} \in [1 - {\eta _{th}},{\eta _{th}}]$, we regard ${I_{m,n}}(x,y)$ as suitable for the following steps. At the end of this step, we obtain the pre-selected image set ${I_q}(x,y)$, $q = 1,\ldots ,Q$, where the subscript q indexes the selected (m, n) pairs and Q is the total number of selected images. Experimentally, $Q \ge 4$ is necessary to correct all pose parameters successfully, which is equivalent to requiring an aperture overlapping rate [23] larger than 26.23%. This is not an additional requirement, because a stricter aperture overlapping rate (i.e., 31.81%) is already required by the FPM recovery algorithm, implying that our method is applicable to misalignment correction in any FPM system.
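The selection rule can be sketched in a few lines. The global-mean threshold used for binarization below is our assumption; the paper specifies only that each image is binarized before computing the BF proportion of Eq. (12).

```python
import numpy as np

def m_bd(h, na, L_x, mag, d_led):
    """Eq. (11): largest LED index whose image can still contain a B-D boundary."""
    return int(np.ceil((h * np.tan(np.arcsin(na)) + 0.5 * L_x / mag) / d_led))

def is_suitable(img, eta_th=0.85):
    """Eq. (12): keep the image if its BF-pixel proportion lies in [1 - eta_th, eta_th]."""
    bf = img > img.mean()  # assumed global-mean binarization
    return (1 - eta_th) <= bf.mean() <= eta_th
```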

Third, extract the circle centers and radii of the B-D boundaries. This step is an image processing problem, but some existing methods, e.g., the Hough transform, which relies on an accurate edge detector, are not appropriate for our purpose, because the sample information coupled into the captured intensity images makes edge detection problematic. Therefore, we propose an algorithm to solve the problem; its pipeline is shown in Table 1. For the qth image, we sequentially perform binarization, sample-information removal, and edge detection. However, a single image preprocessing scheme cannot remove the information of all kinds of samples. The residual sample information acts as noise that degrades the accuracy of extracting the B-D boundaries. To maintain the generalizability of the algorithm, we first denoise using coarse estimates of the B-D boundary obtained with the random sample consensus (RANSAC) algorithm, and then obtain refined estimates with the least squares (LS) method. In the RANSAC algorithm, the total number of iterations is denoted K. In the kth iteration, three random edge points are chosen to compute a circle with center $({\hat{x}_{BD,q,k}},{\hat{y}_{BD,q,k}})$ and radius ${\hat{R}_{BD,q,k}}$. A metric function ${F_k} = \sum\limits_{{x_e},{y_e}} {p({x_e},{y_e})} $ is calculated to represent the fitness of the current circle, where $({x_e},{y_e})$ denotes the coordinates of the edge points and $p({x_e},{y_e})$ judges whether an edge point lies on the circle:

$$p({x_e},{y_e}) = \begin{cases} 1, & \textrm{if } \left|\sqrt {{{({x_e} - {{\hat{x}}_{BD,q,k}})}^2} + {{({y_e} - {{\hat{y}}_{BD,q,k}})}^2}} - {{\hat{R}}_{BD,q,k}}\right| < t{h_1}\\ 0, & \textrm{otherwise} \end{cases}$$
where th1 is a distance threshold. After K iterations, we locate the index of the largest Fk and select the corresponding estimates $({\hat{x}_{BD,q}},{\hat{y}_{BD,q}})$ and ${\hat{R}_{BD,q}}$ as the output. To denoise, we define a new distance threshold, th2. If the difference between ${\hat{R}_{BD,q}}$ and the distance from an edge point to the center $({\hat{x}_{BD,q}},{\hat{y}_{BD,q}})$ is greater than th2, we treat the edge point as noise and remove it from the point set. After removing all noise points, we use the LS method to obtain the refined estimates of the circle center and radius, denoted $({x_{BD,q}},{y_{BD,q}})$ and ${R_{BD,q}}$, respectively. Two circles fitted with our algorithm are shown in Figs. 4(b1) and 4(b2).


Table 1. Pipeline of extracting circle centers and radii of B-D boundaries
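This extraction step can be sketched as RANSAC over circles through three random edge points, followed by outlier removal and an algebraic least-squares refit. The three-point circle solver and the default thresholds below are generic choices, and the preprocessing listed in Table 1 (binarization, sample removal, edge detection) is omitted; collinear triplets are not handled in this sketch.

```python
import numpy as np

def circle_from_3pts(p):
    """Circle through three points p (shape (3, 2)): solves
    x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2."""
    A = np.column_stack([2 * p, np.ones(3)])
    b = (p ** 2).sum(axis=1)
    cx, cy, c = np.linalg.solve(A, b)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def fit_bd_circle(edges, K=500, th1=2.0, th2=3.0, rng=np.random.default_rng(0)):
    """edges: (N, 2) array of edge-point coordinates (pixels)."""
    best, best_score = None, -1
    for _ in range(K):  # RANSAC: coarse estimate
        sample = edges[rng.choice(len(edges), 3, replace=False)]
        cx, cy, r = circle_from_3pts(sample)
        d = np.abs(np.hypot(edges[:, 0] - cx, edges[:, 1] - cy) - r)
        score = (d < th1).sum()  # metric F_k of Eq. (13)
        if score > best_score:
            best, best_score = (cx, cy, r), score
    cx, cy, r = best
    # Remove noise points farther than th2 from the coarse circle, then LS refit
    inliers = edges[np.abs(np.hypot(edges[:, 0] - cx, edges[:, 1] - cy) - r) < th2]
    A = np.column_stack([2 * inliers, np.ones(len(inliers))])
    b = (inliers ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```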

Next, estimate the full-pose parameters of the LED array. In this step, we use the extracted circle centers and radii, together with the constructed physics model, to estimate the six pose parameters of the LED array. Mathematically, this can be described as

$$\begin{array}{c} E(\Delta x,\Delta y,{\theta _x},{\theta _y},{\theta _z},h) = \sum\limits_{q = 1}^Q {\{ {{[{x_{BD,q}} - {x_{BD,q,c}}]}^2} + {{[{y_{BD,q}} - {y_{BD,q,c}}]}^2}} \} ,\\ {(\Delta x,\Delta y,{\theta _x},{\theta _y},{\theta _z},h)^u} = argmin[E(\Delta x,\Delta y,{\theta _x},{\theta _y},{\theta _z},h)], \end{array}$$
where $({x_{BD,q,c}},{y_{BD,q,c}})$ denotes the coordinates calculated from the six estimated parameters with our forward model. If the estimated parameters are close to the actual parameters, the defined function $E(\Delta x,\Delta y,{\theta _x},{\theta _y},{\theta _z},h)$ is close to zero. Hence, we obtain the optimized pose parameters ${(\Delta x,\Delta y,{\theta _x},{\theta _y},{\theta _z},h)^u}$ by minimizing $E(\Delta x,\Delta y,{\theta _x},{\theta _y},{\theta _z},h)$.
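Given the extracted centers and radii, Eq. (14) is a small nonlinear least-squares problem. A sketch using SciPy's general-purpose solver follows; the paper does not specify which minimizer is used, and `led_position`, `entrance_window_distance`, and `bd_center` are the hypothetical helpers sketched above.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_pose(indices, centers, radii, na, mag, pose0):
    """indices: list of (m, n); centers/radii: extracted B-D measurements;
    pose0: initial guess (dx, dy, tx, ty, tz, h)."""
    def residuals(pose):
        res = []
        for (m, n), (xq, yq), rq in zip(indices, centers, radii):
            h = pose[5]
            h1 = entrance_window_distance(h, rq, na, mag)
            xc, yc = bd_center(led_position(m, n, pose), h, h1, mag)
            res += [xq - xc, yq - yc]  # terms of E in Eq. (14)
        return res
    return least_squares(residuals, pose0).x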

Finally, calibrate the positions of the sub-spectra in the FPM recovery process. With the estimated pose parameters, we can calculate the accurate position of each stitched sub-spectrum in the Fourier domain. HR intensity and phase images can then be obtained without degradation of imaging quality, regardless of how the LED array is placed.
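Once the pose is estimated, the corrected wave vectors of Eq. (7) are mapped to sub-spectrum center pixels on the HR Fourier grid. A short sketch, under an assumed centered-FFT grid convention (HR pixel size dx_hr, grid size M_hr):

```python
import numpy as np

def subspectrum_center(u_mn, v_mn, dx_hr, M_hr):
    """Map a wave vector (rad/m) to the sub-spectrum center pixel on a
    centered M_hr x M_hr Fourier grid."""
    du = 2 * np.pi / (dx_hr * M_hr)  # Fourier-grid spacing (rad/m per pixel)
    return (int(round(M_hr / 2 + u_mn / du)),
            int(round(M_hr / 2 + v_mn / du)))
```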

3. Experiments

To validate our method experimentally, we compare the results recovered for one small segment (128 × 128 pixels) of a USAF-1951 resolution target with conventional FPM, SC-FPM, and our method. A 21 × 21 LED array (2.5 mm spacing, 470 nm central wavelength with 20 nm bandwidth), an objective lens (NA = 0.1, Mag = 4×), and a CMOS camera (FLIR BFS-U3-200S6M-C, 2.4 µm pixel size) are used to build our FPM platform.

We first align the LED array with a 3D mechanical adjustment device that provides the 2D lateral shifts and the in-plane rotation [14], and capture 441 LR intensity images to recover one HR intensity image as the reference, shown in Fig. 5(a). Meanwhile, we also estimate the pose of the LED array with our method; the resulting pose parameters, (Δx = 0.012 mm, Δy = -0.043 mm, θx = -0.316°, θy = 0.403°, θz = -0.167°, h = 92.13 mm), are close to the ideal parameters, verifying the performance of our method for pose estimation.


Fig. 5. Comparison of conventional FPM, SC-FPM, and our method. (b1)-(b3) are the recovered intensity images of conventional FPM with nominal pose parameters of (Δx = 2.000 mm, Δy = 2.000 mm, θx = 3.224°, θy = -2.149°, θz = 0.000°, h = 92.00 mm), a random position containing an extreme in-plane rotation misalignment (θz), and (Δx = 4.000 mm, Δy = 4.000 mm, θx = 3.224°, θy = -2.149°, θz = 0.000°, h = 92.00 mm), respectively. (c1)-(c3) are the corresponding recovered intensity images of SC-FPM. (d1)-(d3) are the corresponding recovered intensity images of our method.


To introduce pose misalignments, we shift, tilt, and rotate the aligned LED array. Quantitatively, we treat the readings of the adjustment device as the nominal pose parameters for the lateral shifts and in-plane rotation, and measure the height differences of the four corners of the LED array to calculate the nominal tilt parameters. In the case of (Δx = 2.000 mm, Δy = 2.000 mm, θx = 3.224°, θy = −2.149°, θz = 0.000°, h = 92.00 mm), the intensity image recovered with the conventional FPM method is shown in Fig. 5(b1). The background is disturbed by wrinkle-shaped artifacts, and the features of all groups are distorted and irresolvable. With the calibration of SC-FPM, the improved intensity image shown in Fig. 5(c1) is obtained, where the features of Group 8 can be resolved, while the features of Group 9 remain unclear due to low contrast and a deteriorated background. Moreover, the recovered pose parameters, (Δx = 1.526 mm, Δy = 1.917 mm, θz = −0.120°, h = 91.83 mm), do not agree with the nominal parameters (the difference in the lateral shift along the x-axis is ∼0.5 mm). After calibrating with our method, we get a high-quality HR intensity image, as shown in Fig. 5(d1), where all features are resolved clearly and the contrast remains high. The pose parameters estimated using 32 B-D images selected from the captured data, (Δx = 1.998 mm, Δy = 1.988 mm, θx = 2.540°, θy = −1.671°, θz = −0.218°, h = 91.42 mm), are also consistent with the introduced parameters. Subsequently, we adjust the LED array to a random position that contains an extreme in-plane rotation angle (θz). The intensity image recovered with conventional FPM is shown in Fig. 5(b2). In addition to the decreased resolution, the stripes of the resolution target are rotated by an evident angle. Compared with Fig. 5(c1), the image recovered by SC-FPM, shown in Fig. 5(c2), becomes worse: the features of Group 8, Element 6 are now irresolvable. Fortunately, with calibration using the pose parameters (Δx = 2.168 mm, Δy = 0.348 mm, θx = 3.421°, θy = −2.258°, θz = 5.562°, h = 91.44 mm) obtained from 36 B-D images, the recovered image shown in Fig. 5(d2) remains comparable to the reference image.

Furthermore, to validate the robustness and effectiveness of our method for calibrating remarkable misalignment, we deliberately adjust the LED array to an extremely misaligned pose with nominal parameters of (Δx = 4.000 mm, Δy = 4.000 mm, θx = 3.224°, θy = −2.149°, θz = 0.000°, h = 92.00 mm). The intensity image recovered by conventional FPM is submerged by even more wrinkle-shaped artifacts, as shown in Fig. 5(b3). With the calibration of SC-FPM, we get the intensity image shown in Fig. 5(c3), whose quality appears lower than that of the uncalibrated image. The estimated pose parameters, (Δx = −0.200 mm, Δy = 1.913 mm, θz = 4.164°, h = 93.10 mm), also deviate heavily from the truth. The unstable performance of SC-FPM is caused by an unsuitable initial step size and search direction, and it is difficult for unskilled users to tune the initial parameters of the SA algorithm. Thanks to the explicit physics model and effective algorithm, the image recovered with our method maintains high imaging quality, as shown in Fig. 5(d3). The recovered pose parameters using 25 B-D images, (Δx = 4.075 mm, Δy = 4.001 mm, θx = 3.325°, θy = -1.627°, θz = -0.242°, h = 91.37 mm), are still close to the actual parameters, verifying that our method achieves resolution- and contrast-invariant imaging under remarkable misalignment.

In addition, we demonstrate the effectiveness of our method by recovering a biological slide (Vicia faba seedling root). Similarly, we adjust the LED array to introduce unknown misalignments. Figure 6(a) shows the captured full-FOV image of the sample under illumination from the nominal central LED element. Figure 6(b1) is a small segment of Fig. 6(a); its resolution of ∼4.7 µm is close to the size of a biological cell, so the image appears blurred. However, merely stitching the captured LR images with the conventional FPM method cannot yield an ideal HR intensity image because of the unmatched illumination angles. Instead, a distorted image coupled with noticeable artifacts is produced, as shown in Fig. 6(b2). Using the B-D boundaries contained in the captured images, we obtain the estimated pose parameters of (Δx = -2.587 mm, Δy = -0.271 mm, θx = 2.010°, θy = 1.731°, θz = -5.791°, h = 91.97 mm), and an HR intensity image is recovered with this calibration, as shown in Fig. 6(b3). Compared with Fig. 6(b2), the imaging quality is significantly improved and the evident artifacts are removed. Image details, such as the cell walls of the primary xylem cells, can be clearly resolved as well. Figure 6(c1) shows another small segment, mainly consisting of pericycle cells. Although the image recovered without calibration is full of artifacts, as shown in Fig. 6(c2), our method still works well: the calibrated intensity image shown in Fig. 6(c3) has more distinct details, and the artifacts are eliminated.


Fig. 6. Recovered results of two segments in a Vicia faba seedling root slide with conventional FPM and our method. (a) presents the captured full-FOV image of the sample under illumination from the nominal central LED element. (b1)-(b3) show an enlargement of one small segment of (a) and the HR intensity images recovered with conventional FPM and our method, respectively. (c1)-(c3) show an enlargement of another small segment of (a) and the HR intensity images recovered with conventional FPM and our method, respectively.


4. Conclusion

We have demonstrated a physics-based, full-pose-parameter estimation method for correcting illumination angles in FPM systems, and its effectiveness and robustness have been verified experimentally. Fundamentally, we analyze the structure of a typical FPM system to interpret the principle of forming arc-shaped B-D boundaries, and transform the problem of calibrating illumination angles from solving for the positions of the LEDs into calculating the centers and radii of the B-D boundaries. With the constructed model, we can solve for the six full-pose parameters to characterize a misplaced LED array accurately. Different from pre-calibrated methods, our method is completely blind and can automatically remove random pose misalignments, unlocking the use of FPM for unskilled users. Further, compared with existing post-calibrated methods, our method is capable of modeling an LED array that is remarkably misplaced. Therefore, our method can facilitate the design, implementation, and application of concise and robust FPM platforms.

Funding

National Key Research and Development Program of China (2021YFC2202400); 111 Project (B18005); Funding of Foundation Enhancement Program (2021-JCJQ-JJ-0823).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013).

3. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021).

4. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture Fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006).

5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982).

6. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015).

7. P. Song, S. Jiang, H. Zhang, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019).

8. J. Chung, X. Ou, R. P. Kulkarni, and C. Yang, “Counting white blood cells from a blood smear using Fourier ptychographic microscopy,” PLoS ONE 10, e0133489 (2015).

9. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016).

10. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104 (2015).

11. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020).

12. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009).

13. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004).

14. S. Zhang, G. Zhou, Y. Wang, Y. Hu, and Q. Hao, “A simply equipped Fourier ptychography platform based on an industrial camera and telecentric objective,” Sensors 19(22), 4913 (2019).

15. A. Zhou, W. Wang, N. Chen, E. Y. Lam, B. Lee, and G. Situ, “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018).

16. C. Zheng, S. Zhang, G. Zhou, Y. Hu, and Q. Hao, “Robust Fourier ptychographic microscopy via a physics-based defocusing strategy for calibrating angle-varied LED illumination,” Biomed. Opt. Express 13(3), 1581–1594 (2022).

17. D. Yang, S. Zhang, C. Zheng, G. Zhou, L. Cao, Y. Hu, and Q. Hao, “Fourier ptychography multi-parameter neural network with composite physical priori optimization,” Biomed. Opt. Express 13(5), 2739–2753 (2022).

18. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336 (2016).

19. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(9), 1–11 (2017).

20. Y. Zhu, M. Sun, P. Wu, Q. Mu, L. Xuan, D. Li, and B. Wang, “Space-based correction method for LED array misalignment in Fourier ptychographic microscopy,” Opt. Commun. 514, 128163 (2022).

21. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018).

22. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016).

23. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016).
