Optica Publishing Group

Lensfree on-chip microscopy based on dual-plane phase retrieval

Open Access

Abstract

In lensfree on-chip microscopy, iterative phase retrieval with defocused images enables high-resolution, whole-field reconstruction. However, when reconstructing dense samples from only two intensity measurements, conventional methods suffer from stagnation and noise corruption, which causes a marked loss of image contrast and resolution. Here we propose a novel dual-plane phase retrieval algorithm that performs stable and versatile lensless reconstruction. A weighted feedback constraint is utilized to speed up convergence, and a gradient descent minimization based on a total variation metric is proposed to suppress noise. With these two object constraints, a smoothed but resolution-preserving result can be achieved. Numerical simulations with Gaussian and Poisson noise demonstrate the noise robustness of our method. Experiments on a USAF resolution target, an H&E-stained pathological slide, and label-free microglia cells demonstrate the superior performance of our approach compared with other state-of-the-art methods.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In lensfree holographic microscopy, a quasi-monochromatic light wave passes through the sample and is then captured by a digital camera. However, current sensor chips, such as CCD or CMOS, measure only the averaged intensity of the light field; the corresponding phase information is lost. If the phase of the diffraction pattern is retrieved, the wavefront at any plane can be recovered by numerical diffraction propagation. Based on this principle, the lensfree on-chip microscope with an iterative phase retrieval algorithm achieves non-invasive, wide field-of-view (FOV), high-resolution image reconstruction, which has been exploited in living cell observation [1,2], optical sectioning [3], point-of-care devices [4], and whole pathological slide imaging [5,6].

To recover the sample image, different iterative phase retrieval algorithms have been investigated in lensfree imaging, including single-shot [2,7–10] and multi-image phase retrieval [11–28]. The former requires prior knowledge of the object scene, such as a support region or a background image (captured after object removal), to escape from local minima in the iterative procedure. These single-shot methods enable dynamic imaging but must satisfy a sparse-sample assumption. The latter performs high-fidelity and stable image reconstruction at the cost of measuring many observation planes. Among multi-image phase retrieval methods, the axial scanning strategy [17–28] captures a series of defocused patterns and utilizes multi-distance (or height) phase retrieval (MDPR) for image recovery, which possesses the advantages of simple operation and high stability. However, the sequential measurement of MDPR decreases the acquisition rate of the lensfree system. Moreover, the retrieval accuracy of MDPR depends strongly on the number of observation planes. For example, as shown in [18], a visually acceptable reconstruction of a dense sample requires at least three intensity measurements. If the reconstruction could be achieved with two-shot measurements, the weakness of MDPR in acquisition rate could be overcome by a dual-camera lensfree system. To date, several approaches, comprising modified Gerchberg-Saxton (GS) algorithms [24–26] and non-iterative methods [27,28], have been proposed to decrease the number of measurements to two, but they only accomplish the reconstruction of binary or isolated objects; dense samples, such as pathological slides and label-free living cells, are not considered.

Compared to previously reported methods, we propose a dual-plane phase retrieval algorithm that achieves stable and versatile lensless reconstruction. Two object constraints are designed to solve the stagnation problem and noise corruption. First, a weighted feedback term is used to speed up the convergence. Second, a gradient descent minimization based on a total variation (TV) metric is proposed to remove noisy information so that a smoothed but resolution-preserving result can be obtained. Numerical simulations with added Gaussian and Poisson noise prove the noise robustness of our method. Experimental results for a USAF target, a pathological slide, and label-free microglia cells validate the superior performance compared with other state-of-the-art methods.

2. Theory

2.1 The overview of MDPR

The geometrical configuration of the lensfree on-chip system is shown in Fig. 1(a). Light from a fiber laser directly illuminates the sample and is recorded by a closely placed CMOS sensor chip, which yields unit magnification for the multi-distance observation planes. Thus, the magnification effect of the spherical wave is ignored, and the angular spectrum diffraction model with a pixel-to-pixel projection can be used for numerical reconstruction. If the complex amplitude of the object is O and the sample-to-sensor distances are denoted Zn, the n-th diffracted image on the sensor is expressed as

$${I_n} = {|{{A_{Z_{n}}}(O )} |^2} + \varepsilon , n \in [1,N],$$
where ${A_{Z_{n}}}({\cdot} )$ is the forward propagation operator over the distance Zn and $\varepsilon$ is the additive noise. In the following, the operator ${A_{Z_{n}}}^{- 1}$ denotes the inverse diffraction propagation, obtained by substituting $- {Z_n}$ into the angular spectrum kernel. The goal of MDPR is to retrieve O from the diffracted intensity images ${I_n}$.
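The angular spectrum operators $A_{Z_n}$ and $A_{Z_n}^{-1}$ can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' code: the function name and the clipping of evanescent components are our own choices.

```python
import numpy as np

def angular_spectrum(field, z, wavelength, pixel_size):
    """Propagate a complex field over distance z (in metres) with the
    angular spectrum method; a negative z realizes the inverse
    operator A_z^{-1} mentioned in the text."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Clip the square-root argument at zero so evanescent components
    # neither blow up nor decay (a simplifying assumption).
    arg = np.maximum(0.0, 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2)
    kernel = np.exp(1j * 2.0 * np.pi * z / wavelength * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)
```

With this sign convention, propagating by z and then by −z returns the original field for the propagating components.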

Fig. 1. The proposed imaging method: (a) the experimental system and (b) a flowchart.

MDPR comes in three types: successive mode, parallel mode, and regularization mode. Assuming that the k-th object estimate is ${O^k}$, the successive mode [17–19], also called the single-beam multiple-intensity phase reconstruction (SBMIR) algorithm, proceeds as follows.

  • (1) Forward propagating the k-th object estimation to the first observation plane, and replacing the generated amplitude with the square root of intensity image as
    $$C_1^k = \sqrt {{I_1}} \frac{{{A_{{Z_1}}}({O^k})}}{{|{{A_{{Z_1}}}({O^k})} |}},$$
  • (2) The combined complex-valued image is successively propagated to the next plane until all observation planes are reached, and this process can be denoted as
    $$C_{n + 1}^k = \sqrt {{I_{n + 1}}} \frac{{{A_{{Z_{n + 1}} - {Z_n}}}(C_n^k)}}{{|{{A_{{Z_{n + 1}} - {Z_n}}}(C_n^k)} |}}, n \in [1,N - 1],$$
    where the subscript ${Z_{n + 1}} - {Z_n}$ denotes the distance interval between the (n + 1)-th and n-th observation plane.
  • (3) The (k + 1)-th object estimation is obtained by the inverse propagation
    $${O^{k + 1}} = A_{{Z_N}}^{ - 1}(C_N^k).$$
It is noted that the SBMIR algorithm with N = 2 is equivalent to the conventional Gerchberg-Saxton algorithm [29], which has been used in [24,25]. In the parallel mode, the amplitude-phase retrieval (APR) algorithm [20,21] separately back-propagates each observed image and averages all resulting guesses for the next object estimate:
$${O^{k + 1}} = \frac{1}{N}\sum\limits_{n = 1}^N {A_{{Z_n}}^{ - 1}\left[ {\sqrt {{I_n}} \frac{{{A_{{Z_n}}}({O^k})}}{{|{{A_{{Z_n}}}({O^k})} |}}} \right]} .$$
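One APR iteration of Eq. (5) is easy to express with numpy. In this sketch, the `propagate` callable stands in for the angular spectrum operator $A_{Z_n}$ (with a negative distance for $A_{Z_n}^{-1}$), and the small epsilon guarding the division is our own addition.

```python
import numpy as np

def apr_update(O, intensities, distances, propagate):
    """One parallel-mode (APR) update, Eq. (5): propagate the current
    estimate to each plane, replace the amplitude with sqrt(I_n),
    back-propagate, and average all the guesses."""
    guesses = []
    for I, z in zip(intensities, distances):
        U = propagate(O, z)
        U = np.sqrt(I) * U / (np.abs(U) + 1e-12)  # amplitude replacement
        guesses.append(propagate(U, -z))
    return np.mean(guesses, axis=0)
```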
Different from the above two methods, the regularization mode formulates phase retrieval as the optimization of a cost function between the desired complex-valued image and the observed intensity patterns:
$$\hat{O} = \arg \min \sum\limits_{n = 1}^N {\frac{1}{{2\sigma _n^2}}{\bigg \Vert}{{I_n} - {{|{{A_{{Z_n}}}(O)} |}^2}} {\bigg \Vert}_2^2} + \mu \cdot \textrm{pen}(O),$$
where $\textrm{pen}({\cdot} )$ is the penalty term and ${\bigg \Vert}{{I_n} - {{|{{A_{{Z_n}}}(O)} |}^2}} {\bigg \Vert}_2^2$ is the fidelity term. The regularization parameter $\mu$ balances the observed data fitting (fidelity term) and the noise offset (penalty term). If $\mu = 0$, Eq. (6) reduces to the minimization of the fidelity term, as in the successive and parallel modes. However, these methods ignore the fact that the observed data are noisy, which easily yields an over-smoothed or divergent result, especially for fewer measurements. If $\mu > 0$, a penalty term can be introduced to suppress the noise; it is usually constructed from probabilistic modeling or heuristic operations. For example, Migukin et al. proposed the augmented Lagrangian (AL) algorithm [23] to solve Eq. (6) by the alternating direction method of multipliers (ADMM), where the penalty takes Tikhonov's form, i.e., $\textrm{pen}(O) = ||O ||_2^2$. Among these methods, SBMIR has the fastest convergence but is sensitive to noise. APR holds better noise robustness, especially when more observation planes are added. AL can precisely control the convergence accuracy, but its finely tuned parameters usually vary with the noise level and the number of observation planes. The common drawback of all three methods is the stagnant and unstable reconstruction of dense samples with only two measurements.

2.2 Our proposal

For conventional MDPR methods, the poor performance at N = 2 is attributed to two issues: stagnation and noise corruption. To solve them, we design two object constraints on the APR algorithm. First, a weighted feedback constraint is used to speed up convergence. In our previous work [22], the weighted feedback constraint was shown to yield a threefold convergence acceleration, but it easily magnifies the background noise for fewer measurements. Thus, a smoothing operation based on TV minimization is designed to constrain the background noise.

The L1-norm TV metric of a complex-valued image S is denoted as

$$\textrm{TV}(S) = \int\!\!\!\int {|{\nabla S({x_1},{y_1})} |} \,d{x_1}d{y_1} = \int\!\!\!\int {\sqrt {{{|{{\nabla_x}S({x_1},{y_1})} |}^2} + {{|{{\nabla_y}S({x_1},{y_1})} |}^2}} } \,d{x_1}d{y_1},$$
where $({x_1},{y_1})$ denotes the pixel position of the image S, and ${\nabla _x}$ and ${\nabla _y}$ represent the gradient operators along the x and y directions. The TV metric and its modified versions are often used as sharpness indicators in auto-focusing algorithms [30,31]. We provide a numerical simulation in Fig. 2 to show this effect. Figure 2(a) is the ground truth image, and the corresponding intensity pattern is shown in Fig. 2(b). With a simple inverse propagation, the diffraction effect is alleviated, and the back-propagated image is given in Fig. 2(c) (only amplitude displayed). The TV values of Figs. 2(a)–2(c), calculated using Eq. (7), are 7.7×10³, 1.6×10⁴, and 1.0×10⁴. The ground truth image holds the minimum TV value, the back-propagated image takes second place, and the diffraction pattern gives the maximum. Following this principle, if TV minimization is added to the object constraint, the noise contamination is suppressed. The TV minimization can be carried out by a gradient descent step
$$\hat{S} = S - t\nabla (\textrm{TV}),$$
where t is the gradient step. Using the Euler-Lagrange equation [32,33], Eq. (8) can be derived as
$$\begin{array}{l} \hat{S} = S - t\left\{ {\frac{{\partial \sqrt {{{|{{\nabla_x}S} |}^2} + {{|{{\nabla_y}S} |}^2}} }}{{\partial {S^\ast }}} - \nabla \cdot \left[ {\frac{{\partial \sqrt {{{|{{\nabla_x}S} |}^2} + {{|{{\nabla_y}S} |}^2}} }}{{\partial (\nabla {S^\ast })}}} \right]} \right\}\\ \hphantom{\hat{S}} = S - t\left\{ {\frac{{\partial \sqrt {\nabla (S)\nabla ({S^\ast })} }}{{\partial {S^\ast }}} - \nabla \cdot \left[ {\frac{{\partial \sqrt {\nabla (S)\nabla ({S^\ast })} }}{{\partial (\nabla {S^\ast })}}} \right]} \right\}\\ \hphantom{\hat{S}} = S + \frac{t}{2}\nabla \cdot \left[ {\frac{{\nabla S}}{{|{\nabla S} |}}} \right], \end{array}$$
where ${S^\ast }$ is the complex conjugate of S. Thus, we impose the TV minimization term of Eq. (9) on the iterative procedure of APR to mitigate the noise influence.
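A discrete version of this descent step (in the split x/y form that reappears in Eq. (11)) can be sketched as follows. The use of `np.gradient` for both the derivative and the divergence is our own discretization choice, not taken from the paper:

```python
import numpy as np

def tv_descent_step(g, t=0.05, eps=1e-8):
    """One gradient-descent step on the TV metric, Eq. (9), applied
    per axis: g <- g + (t/2) d/dx [ (dg/dx) / |dg/dx| ], then the
    same along y. Works for real or complex images."""
    for axis in (1, 0):  # x first, then y
        grad = np.gradient(g, axis=axis)
        unit = grad / (np.abs(grad) + eps)  # normalized gradient field
        g = g + 0.5 * t * np.gradient(unit, axis=axis)
    return g
```

Flat regions are left untouched (their gradient is zero), which is the edge-preserving behavior the text relies on.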

Fig. 2. Numerical simulation of intensity pattern generation and back propagation.

With these two object constraints, the flowchart of our method is shown in Fig. 1(b), and its steps are as follows: (1) the k-th object estimate ${O^k}$, after forward propagation and amplitude replacement, is transformed into a set of object guesses

$$g_n^k = A_{{Z_n}}^{ - 1}\left[ {\sqrt {{I_n}} \frac{{{A_{{Z_n}}}({O^k})}}{{|{{A_{{Z_n}}}({O^k})} |}}} \right],$$
(2) the TV constraint is imposed on these guesses, and the smoothed data are given as
$$\left\{ \begin{array}{l} \bar{g}_n^k = g_n^k + \frac{t}{2}{\nabla_x} \cdot \left\{ {\frac{{{\nabla_x}({g_n^k} )}}{{|{{\nabla_x}({g_n^k} )} |}}} \right\},\\ \hat{g}_n^k = \bar{g}_n^k + \frac{t}{2}{\nabla_y} \cdot \left\{ {\frac{{{\nabla_y}({\bar{g}_n^k} )}}{{|{{\nabla_y}({\bar{g}_n^k} )} |}}} \right\}, \end{array} \right.$$
(3) the multiple guesses are averaged to ${\hat{O}^k} = \frac{1}{N}\sum\limits_{n = 1}^N {\hat{g}_n^k}$, and the (k + 1)-th object estimate is obtained using a weighted feedback constraint:
$$\left\{ \begin{array}{ll} {O^{k + 1}} = {{\hat{O}}^k}, & k \le 2,\\ {O^{k + 1}} = (1 + a + b){{\hat{O}}^k} - a{O^k} - b{O^{k - 1}}, & \textrm{otherwise,} \end{array} \right.$$
where a and b are the feedback parameters. The value of a ranges from 0.1 to 1, and b can be chosen by the relation [22] $b = -0.13a + 0.6$. After a number of iterations, the lensless image reconstruction with N = 2 can be stably completed. Regarding the computational framework of MDPR, the successive and parallel modes are based on the alternating projection (AP) framework between the object and observation planes, where the constraints are enforced on the corresponding planes. AP-based methods are simple but ignore the inherent priors of the noise distribution. In contrast, the AL algorithm introduces the penalty term and optimizes the cost function in Eq. (6) for image reconstruction. The AL algorithm converges slowly, and its accuracy relies heavily on the tuning parameters and the construction of the penalty. Inspired by these methods, we use TV minimization as an object constraint rather than a penalty, and plug the constraint into the AP framework. With this correction, the noise robustness of the AP framework is improved, and complex noise modeling is no longer required.
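Putting Eqs. (10)–(12) together, the whole loop can be sketched as follows. Here `propagate` and `smooth` stand in for the angular spectrum operator and the TV step; the seeding of the first estimate by a plain back-projection is our own assumption, and a = 0.7 matches the value used later in the paper's simulations.

```python
import numpy as np

def dual_plane_retrieve(I1, I2, z1, z2, propagate, smooth, iters=40, a=0.7):
    """Sketch of the proposed dual-plane loop: parallel amplitude
    replacement (Eq. 10), TV smoothing of each guess (Eq. 11),
    averaging, and the weighted feedback of Eq. (12) with
    b = -0.13*a + 0.6 as suggested in [22]."""
    b = -0.13 * a + 0.6
    O = propagate(np.sqrt(I1).astype(complex), -z1)  # back-projection seed
    O_prev = O
    for k in range(iters):
        guesses = []
        for I, z in ((I1, z1), (I2, z2)):
            U = propagate(O, z)
            U = np.sqrt(I) * U / (np.abs(U) + 1e-12)   # Eq. (10)
            guesses.append(smooth(propagate(U, -z)))   # Eq. (11)
        O_hat = 0.5 * (guesses[0] + guesses[1])
        if k < 2:                                      # first iterations
            O_new = O_hat
        else:                                          # weighted feedback, Eq. (12)
            O_new = (1 + a + b) * O_hat - a * O - b * O_prev
        O_prev, O = O, O_new
    return O
```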

3. Numerical simulation

Numerical simulations with N = 2 compare our method with state-of-the-art algorithms: SBMIR [17–19,24,25], APR [20,21], APRDF [22], and AL [23]. The simulations with Gaussian and Poisson noise are provided in Figs. 3–5. The image of Fig. 2(a) is used as the ground truth. The simulation parameters are: (1) the ground truth image has 600×600 pixels with a pixel size of 1.34 µm; (2) the diffraction distances are Z1 = 0.75 mm and Z2 = 1.5 mm; (3) the wavelength is 532 nm; (4) the parameters a, b, and t in our method are set to 0.7, 0.5, and 0.05, and these values are used for all simulations and experiments. The normalized correlation coefficient (NCC) is selected to indicate the retrieval accuracy.
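The paper does not spell out its NCC definition; a common zero-mean form, computed over the retrieved and ground-truth amplitudes, would be:

```python
import numpy as np

def ncc(retrieved, truth):
    """Normalized correlation coefficient between two (possibly
    complex) images, computed on their amplitudes; NCC = 1 means a
    perfect match up to affine intensity scaling."""
    a = np.abs(retrieved).ravel() - np.mean(np.abs(retrieved))
    b = np.abs(truth).ravel() - np.mean(np.abs(truth))
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```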

Fig. 3. Numerical simulation with the addition of zero-mean Gaussian noise. (a-b) convergence curves with the noise variance of 0.05 and 0.08; (c-g) retrieved images by SBMIR, AL, APR, APRDF, and our method.

Fig. 4. Numerical simulation with the addition of Poisson noise. (a) convergence curves with the photon level of 10⁷ photons/pixel; (b-f) retrieved images by SBMIR, AL, APR, APRDF, and our method.

Fig. 5. The NCC values comparison of retrieved results at different intervals between two observation planes under the contamination of (a) Gaussian noise and (b) Poisson noise.

After 100 iterations, the NCC curves of the different methods are plotted in Figs. 3(a) and 3(b) for Gaussian noise variances of 0.05 and 0.08. As shown there, the curves of the conventional methods diverge, which implies that conventional MDPR methods with N = 2 are susceptible to noise perturbation. As a comparison, our method converges in only ∼20 iterations; the noise susceptibility of MDPR is thus markedly improved. The corresponding retrieved amplitudes are shown in Figs. 3(c)–3(g), with running times of 26.7 s, 126.6 s, 28.5 s, 29.1 s, and 79.6 s. Our method shows the best imaging contrast in Fig. 3(g), whereas the retrieved images of the other methods in Figs. 3(c)–3(f) are contaminated by noise. A similar simulation with added Poisson noise is illustrated in Fig. 4, where the photon count level is set to $10^7$ photons/pixel. Figure 4(a) shows the convergence curves, and Figs. 4(b)–4(f) are the images retrieved by SBMIR, AL, APR, APRDF, and our method after 100 iterations. Our method still holds the best resistance to Poisson noise, while the other methods are incapable of stable recovery. The AL method in particular shows poor imaging quality in both its NCC curve and its retrieved image, because its noise modeling assumes a Gaussian rather than a Poisson distribution.

In MDPR, the distance selection strategy affects the convergence speed. To examine this issue, results with different intervals between the two observation planes are reconstructed, and the corresponding NCC values are plotted in Fig. 5, where Figs. 5(a) and 5(b) show the results with added Gaussian and Poisson noise, respectively. As shown in Fig. 5, the NCC values of the conventional methods vary with the interval. Free of this limitation, our method yields the highest, and nearly identical, NCC values at every interval. The numerical simulations thus prove the noise robustness and convergence stability of our method.

4. Experimental results

4.1 USAF resolution target

In the experiment, a fiber laser with a wavelength of 532 nm is chosen for illumination. A bare CMOS sensor chip (Sony IMX206, 1.34 µm × 1.34 µm pixels) is held by a linear translation stage (M-403, resolution 0.012 µm, Physik Instrumente Inc.) for the multi-plane measurements. Three types of samples, a USAF resolution target, an H&E-stained pathological slide, and label-free microglia cells, are used to validate the effectiveness. In practice, tilted illumination of the incident plane wave usually causes an alignment problem for the multi-distance data, which is difficult to offset by mechanical adjustment. Thus, an image registration algorithm [34] is applied to align all axial patterns. The reconstruction of a positive USAF resolution target (Ready Optics) is presented in Fig. 6, where the two sample-to-sensor distances are quantified as 0.745 mm and 0.975 mm by the auto-focusing algorithm [30,31]. After 40 iterations, the retrieved amplitudes of the resolution target are displayed in Figs. 6(a1)–6(e1). The high-resolution regions labeled with green and blue boxes are displayed in Figs. 6(a2)–6(e2) and Figs. 6(a3)–6(e3). For a fair comparison, the regularization parameter of AL is finely tuned, and $\mu = 0.4$ is employed for the reconstruction. Even so, AL cannot accomplish the recovery with N = 2, and its retrieved images are blurred in Figs. 6(b1)–6(b3). In Figs. 6(a), 6(c), and 6(d), although the low-resolution parts (Groups 4, 5, and 6) can be recovered by SBMIR, APR, and APRDF, their reconstructed Groups 7 and 8 are filled with noisy background, which hinders visual inspection. In contrast, our method outputs the contrast-enhanced images in Figs. 6(e1)–6(e3), where the artifacts are remarkably removed and a half-pitch lateral resolution of 1.096 µm is obtained.

Fig. 6. The reconstruction of USAF resolution target with two intensity images. (a1-a3), (b1-b3), (c1-c3), (d1-d3) and (e1-e3) are retrieved by SBMIR, AL, APR, APRDF, and our method. The white bar corresponds to 200 µm.

To show the data efficiency, we compare the reconstructed results of our method with N = 2 and APR with N = 11 in Fig. 7. When APR is run with 11 intensity images and 40 iterations, the background noise of Figs. 6(c2) and 6(c3) is wiped out, and the imaging contrast is strengthened in Figs. 7(a1) and 7(a2). However, comparing Fig. 6(c3) and Fig. 7(a2) indicates that the imaging resolution of APR is slightly degraded from N = 2 to N = 11, which originates from the accumulated subpixel drift of the aligned multi-distance data. It can be deduced that adding observation planes does suppress background noise, but the resulting alignment problems can cause resolution loss. After the same number of iterations, the results of our method with N = 2 are displayed in Figs. 7(b1) and 7(b2). Our method offers imaging quality comparable to APR with 11 measurements, which demonstrates its data efficiency. To quantify the improvement, the line profiles between the blue-dash and red lines of Figs. 7(a2) and 7(b2) are plotted in Fig. 7(c). The image contrast of our method is superior to that of APR for all elements of Group 8. Moreover, element 6 of Group 8 can be resolved by our method. This resolution-holding property benefits from the edge-preserving effect of TV minimization. The resolution target experiment proves that our method accomplishes a data-efficient reconstruction without loss of resolution.

Fig. 7. The reconstructed results of APR (N = 11) and our method (N = 2). (a1-a2) retrieved images by APR with 11 intensity images; (b1-b2) retrieved images by our method with two intensity images; (c) plotlines between blue and red arrows.

4.2 Pathological slide

The experiments on H&E-stained rat lung and intestine are given in Fig. 8 and Fig. 9. Using two intensity images, full-FOV recovery can be achieved by our method; due to layout limitations, the reconstructed full-FOV image is not exhibited. Selected regions with a size of 134 µm × 134 µm are shown in Figs. 8(a1)–8(e1) and Figs. 8(a2)–8(e2) for the lung tissue. The intestine images are retrieved in Figs. 8(a3)–8(e3) and Figs. 8(a4)–8(e4). Similar to Fig. 6, AL still fails to reconstruct from two intensity patterns. The retrieved images of SBMIR, APR, and APRDF are corrupted by noise artifacts. Especially for the intestine tissue, the detailed information, such as texture and cell nuclei, cannot be clearly distinguished from the noisy background in the conventional methods. In contrast, the lung and intestine pathological slides are both clearly recovered by our method in Figs. 8(e1)–8(e4).

Fig. 8. The reconstructed results of H&E stained pathological slides of rat lung and intestine with two intensity images.

Fig. 9. The reconstructed rat intestine images of single-shot algorithm and our method. (a1-a3), (b1-b3) and (c1-c3) are retrieved by the support-based method [7], compressive sensing method [10] and our method.

As a comparison, typical single-shot phase retrieval methods, based on a support constraint [7] and on compressive sensing [10], are used for the reconstruction of the intestine. It should be pointed out that these two single-shot methods also require two intensity images: the diffraction pattern ${I_1}$ at Z1 and a background image ${R_1}$ recorded after removing the sample. As done in [7], a binary support mask is obtained by image segmentation and displayed in the red box of Fig. 9(a1). Then, a modified GS algorithm is employed for phase retrieval, expressed as

$${O^{k + 1}} = \sup \cdot A_{{Z_1}}^{ - 1}\left[ {\sqrt {{I_1}} \frac{{{A_{{Z_1}}}({O^k})}}{{|{{A_{{Z_1}}}({O^k})} |}}} \right] + M \cdot (1 - \sup ) \cdot A_{{Z_1}}^{ - 1}\left( {\sqrt {{R_1}} } \right),$$
$$M = \frac{{\textrm{mean}\left\{ {A_{{Z_1}}^{ - 1}\left[ {\sqrt {{I_1}} \frac{{{A_{{Z_1}}}({O^k})}}{{|{{A_{{Z_1}}}({O^k})} |}}} \right]} \right\}}}{{\textrm{mean}\left[ {A_{{Z_1}}^{ - 1}\left( {\sqrt {{R_1}} } \right)} \right]}},$$
where the symbol ‘sup’ represents the support mask shown in the red box of Fig. 9(a1), and the operator $\textrm{mean}({\cdot} )$ denotes the mean value of its input.
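Eqs. (13) and (14) translate almost directly into code. As before, `propagate` is a stand-in for the angular spectrum operator, and the epsilon in the division is our own addition:

```python
import numpy as np

def support_gs_update(O, I1, R1, sup, z1, propagate):
    """One support-constrained GS update, Eqs. (13)-(14): keep the
    amplitude-replaced back-propagation inside the support mask, and
    impose the scaled back-propagated background outside it."""
    U = propagate(O, z1)
    obj = propagate(np.sqrt(I1) * U / (np.abs(U) + 1e-12), -z1)
    bg = propagate(np.sqrt(R1).astype(complex), -z1)
    M = np.mean(obj) / np.mean(bg)  # mean ratio of Eq. (14)
    return sup * obj + M * (1.0 - sup) * bg
```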

By iterating Eqs. (13) and (14), the support-based method reconstructs the full-FOV intestine image in Fig. 9(a1), and the selected regions from the blue and green boxes are zoomed in Figs. 9(a2)–9(a3). The intestine tissue contains rich detail, so the support mask cannot be precisely delineated, which leads to poor convergence. The compressive sensing approach [10] formulates the lensless deconvolution as an optimization problem and combines an iterative shrinkage/thresholding solver with a total variation penalty to output an artifact-free sample image. Its retrieved images are displayed in Figs. 9(b1)–9(b3), where the twin-image and border artifacts are eliminated but the imaging resolution is degraded. Compared with the single-shot methods, the retrieved images of our method in Figs. 9(c1)–9(c3) show a remarkable enhancement of image contrast and resolution.

4.3 Phase imaging

The experiments on the resolution target and pathological slide focus only on amplitude reconstruction. To further prove the capability of phase recovery, we provide an experiment on the label-free microglia cell line (BV-2) in Fig. 10 and Fig. 11. The cells were purchased from the Cell Culture Center (Institute of Basic Medicine, Chinese Academy of Medical Sciences), uniformly seeded in 24-well plates, and placed in a 37°C cell culture incubator. Cell growth was terminated when the cell density reached 40% of the bottom wall area. After this preparation, we fixed the living cells on a clean glass slide for convenient usage and observation. Capturing only two defocused images, our method retrieves the full-FOV phase image of the microglia cells in Fig. 10, where the FOV is quantified as 28.6 mm². The yellow, black, and red regions in Fig. 10 are digitally cropped from the retrieved images and shown in Fig. 11 for the different methods. For comparison, the phase images in Fig. 11 were processed by a phase unwrapping algorithm. In Figs. 11(b1)–11(b3), AL is incapable of phase reconstruction with N = 2. The retrieved images of SBMIR, APR, and APRDF suffer from noise corruption in Figs. 11(a), 11(c), and 11(d), in which the cell phase images are covered by a nonuniform, noisy background. In contrast, our method effectively eliminates the noise artifacts and accomplishes a contrast-enhanced phase reconstruction in Figs. 11(e1)–11(e3).

Fig. 10. The full-FOV phase reconstruction of label-free microglia cell by our method. The white bar corresponds to 1mm.

Fig. 11. The retrieved phase images of the selected regions for different methods.

5. Conclusions

We proposed a dual-plane phase retrieval algorithm to achieve stable and versatile image reconstruction for lensfree on-chip microscopy. In our method, two object constraints, weighted feedback and TV minimization, were proposed to eliminate convergence stagnation and noisy background under two-shot measurements. In numerical simulations, we analyzed the robustness of our method to noise type, noise level, and distance selection. In experiments, the results for the positive USAF resolution target proved that our method is data-efficient and resolution-preserving in comparison with SBMIR, AL, APR, and APRDF. Our method also outperformed the conventional methods in the reconstruction of a pathological slide and the label-free microglia cell line (BV-2). Without extra prior knowledge of the object scene, our method achieves high-fidelity reconstruction compared with the support-based and compressive sensing methods.

This work effectively decreases the redundancy of axial scanning measurement, and its advantages can be summarized as follows. First, our method could reduce the data requirement of lensfree pixel super-resolution (PSR) imaging. As shown in [5,6], PSR needs a 3D stack (6×6×8) of data to accomplish a twin-image-free, subpixel reconstruction; equipped with our method, that requirement would be markedly decreased. Second, the acquisition rate of the axial scanning system is increased, providing a promising dynamic living-cell imaging tool for lensfree dual-sensor systems. Third, the proposed method could be applied to other lensfree systems, such as wavelength scanning [12,13] and multi-angle illumination [14], to perform a data-efficient reconstruction.

Funding

National Natural Science Foundation of China (61575055, 61575053, 11874132, 61975044).

Disclosures

The authors declare no conflicts of interest.

References

1. L. Herve, O. Cioni, P. Blandin, F. Navarro, M. Menneteau, T. Bordy, S. Morales, and C. Allier, “Multispectral total-variation reconstruction applied to lens-free microscopy,” Biomed. Opt. Express 9(11), 5828–5836 (2018). [CrossRef]  

2. D. Ryu, Z. Wang, K. He, G. Zheng, R. Horstmeyer, and O. Cossairt, “Subsampled phase retrieval for temporal resolution enhancement in lensless on-chip holographic video,” Biomed. Opt. Express 8(3), 1981–1995 (2017). [CrossRef]  

3. Y. Zhang, Y. Shin, K. Sung, S. Yang, H. Chen, H. Wang, D. Teng, Y. Rivenson, R. P. Kulkarni, and A. Ozcan, “3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy,” Sci. Adv. 3(8), e1700553 (2017). [CrossRef]  

4. D. Tseng, O. Mudanyali, C. Oztoprak, S. O. Isikman, I. Sencan, O. Yagliderea, and A. Ozcan, “Lensfree microscopy on a cellphone,” Lab Chip 10(14), 1787–1792 (2010). [CrossRef]  

5. A. Greenbaum, W. Luo, T. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9(9), 889–895 (2012). [CrossRef]  

6. A. Greenbaum, Y. Zhang, A. Feizi, P. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lensfree on-chip microscopy,” Sci. Transl. Med. 6(267), 267ra175 (2014). [CrossRef]  

7. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). [CrossRef]  

8. S. M. F. Raupach, “Cascaded adaptive-mask algorithm for twin-image removal and its application to digital holograms of ice crystals,” Appl. Opt. 48(2), 287–301 (2009). [CrossRef]  

9. C. Gaur, B. Mohan, and K. Khare, “Sparsity-assisted solution to the twin image problem in phase retrieval,” J. Opt. Soc. Am. A 32(11), 1922–1927 (2015). [CrossRef]  

10. W. Zhang, L. Cao, D. J. Brady, H. Zhang, J. Cang, H. Zhang, and G. Jin, “Twin-image-free holography: a compressive sensing approach,” Phys. Rev. Lett. 121(9), 093902 (2018). [CrossRef]  

11. D. W. E. Noom, K. S. E. Eikema, and S. Witte, “Lensless phase contrast microscopy based on multiwavelength Fresnel diffraction,” Opt. Lett. 39(2), 193–196 (2014). [CrossRef]  

12. S. Witte, V. T. Tenner, D. W. E. Noom, and K. S. E. Eikema, “Lensless diffractive imaging with ultra-broadband table-top sources: from infrared to extreme-ultraviolet wavelengths,” Light: Sci. Appl. 3(3), e163 (2014). [CrossRef]  

13. W. Luo, Y. Zhang, A. Feizi, Z. Gorocs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light: Sci. Appl. 5(4), e16060 (2016). [CrossRef]  

14. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4(3), e261 (2015). [CrossRef]  

15. P. Li and A. Maiden, “Lensless LED matrix ptychographic microscope: problems and solutions,” Appl. Opt. 57(8), 1800–1806 (2018). [CrossRef]  

16. D. Claus and J. M. Rodenburg, “Diffraction-limited superresolution ptychography in the Rayleigh-Sommerfeld regime,” J. Opt. Soc. Am. A 36(2), A12–A19 (2019). [CrossRef]  

17. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30(8), 833–835 (2005). [CrossRef]  

18. A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20(3), 3129–3143 (2012). [CrossRef]  

19. L. J. Allen and M. P. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1-4), 65–75 (2001). [CrossRef]  

20. L. J. Allen, W. McBride, N. L. O'Leary, and M. P. Oxley, “Exit wave reconstruction at atomic resolution,” Ultramicroscopy 100(1-2), 91–104 (2004). [CrossRef]  

21. Z. Liu, C. Guo, J. Tan, Q. Wu, L. Pan, and S. Liu, “Iterative phase-amplitude retrieval with multiple intensity images at output plane of gyrator transforms,” J. Opt. 17(2), 025701 (2015). [CrossRef]  

22. C. Guo, C. Shen, Q. Li, J. Tan, S. Liu, X. Kan, and Z. Liu, “A fast-converging iterative method based on weighted feedback for multi-distance phase retrieval,” Sci. Rep. 8(1), 6436 (2018). [CrossRef]  

23. A. Migukin, V. Katkovnik, and J. Astola, “Wave field reconstruction from multiple plane intensity-only data: augmented lagrangian algorithm,” J. Opt. Soc. Am. A 28(6), 993–1002 (2011). [CrossRef]  

24. Y. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani, “Whole optical wave field reconstruction from double or multi in-line holograms by phase retrieval algorithm,” Opt. Express 11(24), 3234–3241 (2003). [CrossRef]  

25. Z. Li, L. Li, Y. Qin, G. Li, D. Wang, and X. Zhou, “Resolution and quality enhancement in terahertz in-line holography by sub-pixel sampling with double-distance reconstruction,” Opt. Express 24(18), 21134–21146 (2016). [CrossRef]  

26. C. Guo, Y. Zhao, J. Tan, S. Liu, and Z. Liu, “Multi-distance phase retrieval with a weighted shrink-wrap constraint,” Opt. Lasers Eng. 113, 1–5 (2019). [CrossRef]  

27. Y. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani, “Reconstruction of in-line digital holograms from two intensity measurements,” Opt. Lett. 29(15), 1787–1789 (2004). [CrossRef]  

28. B. Das and C. S. Yelleswarapu, “Dual plane in-line digital holographic microscopy,” Opt. Lett. 35(20), 3426–3428 (2010). [CrossRef]  

29. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

30. Y. S. Choi and S. J. Lee, “Three-dimensional volumetric measurement of red blood cell motion using digital holographic microscopy,” Appl. Opt. 48(16), 2983–2990 (2009). [CrossRef]  

31. E. Krotkov, “Focusing,” Int. J. Comp. Vis. 1(3), 223–237 (1988). [CrossRef]  

32. I. M. Gelfand and S. V. Fomin, translated and edited by R. A. Silverman, Calculus of Variations (Dover, 2000).

33. D. H. Brandwood, “A complex gradient operator and its application in adaptive array theory,” IEE Proc., Part H: Microwaves, Opt. Antennas 130(1), 11–16 (1983). [CrossRef]  

34. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image alignment algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]  


Figures (11)

Fig. 1. The proposed imaging method: (a) the experimental system and (b) a flowchart.
Fig. 2. Numerical simulation of intensity pattern generation and back propagation.
Fig. 3. Numerical simulation with the addition of zero-mean Gaussian noise. (a-b) convergence curves with noise variances of 0.05 and 0.08; (c-g) images retrieved by SBMIR, AL, APR, APRDF, and our method.
Fig. 4. Numerical simulation with the addition of Poisson noise. (a) convergence curves at a photon level of 10^7 photons/pixel; (b-f) images retrieved by SBMIR, AL, APR, APRDF, and our method.
Fig. 5. Comparison of the NCC values of the retrieved results at different intervals between the two observation planes under the contamination of (a) Gaussian noise and (b) Poisson noise.
Fig. 6. Reconstruction of the USAF resolution target with two intensity images. (a1-a3), (b1-b3), (c1-c3), (d1-d3), and (e1-e3) are retrieved by SBMIR, AL, APR, APRDF, and our method. The white bar corresponds to 200 µm.
Fig. 7. Reconstructed results of APR (N = 11) and our method (N = 2). (a1-a2) images retrieved by APR with 11 intensity images; (b1-b2) images retrieved by our method with two intensity images; (c) line profiles between the blue and red arrows.
Fig. 8. Reconstructed results of H&E stained pathological slides of rat lung and intestine with two intensity images.
Fig. 9. Reconstructed rat intestine images of the single-shot algorithms and our method. (a1-a3), (b1-b3), and (c1-c3) are retrieved by the support-based method [7], the compressive sensing method [10], and our method.
Fig. 10. Full-FOV phase reconstruction of a label-free microglia cell by our method. The white bar corresponds to 1 mm.
Fig. 11. Retrieved phase images of the selected regions for different methods.

Equations (14)


$$I_n = \left| A_{Z_n}(O) \right|^2 + \varepsilon, \quad n \in [1, N], \tag{1}$$

$$C_1^k = \sqrt{I_1}\, \frac{A_{Z_1}(O^k)}{\left| A_{Z_1}(O^k) \right|}, \tag{2}$$

$$C_{n+1}^k = \sqrt{I_{n+1}}\, \frac{A_{Z_{n+1}-Z_n}(C_n^k)}{\left| A_{Z_{n+1}-Z_n}(C_n^k) \right|}, \quad n \in [1, N-1], \tag{3}$$

$$O^{k+1} = A_{Z_N}^{-1}(C_N^k). \tag{4}$$

$$O^{k+1} = \frac{1}{N} \sum_{n=1}^{N} A_{Z_n}^{-1}\!\left[ \sqrt{I_n}\, \frac{A_{Z_n}(O^k)}{\left| A_{Z_n}(O^k) \right|} \right]. \tag{5}$$

$$\hat{O} = \arg\min_{O} \sum_{n=1}^{N} \frac{1}{2\sigma_n^2} \left\| I_n - \left| A_{Z_n}(O) \right|^2 \right\|_2^2 + \mu\, \mathrm{pen}(O), \tag{6}$$

$$\mathrm{TV}(S) = \iint \left| \nabla S(x_1, y_1) \right| \, dx_1\, dy_1 = \iint \sqrt{\left| \partial_x S(x_1, y_1) \right|^2 + \left| \partial_y S(x_1, y_1) \right|^2} \, dx_1\, dy_1, \tag{7}$$

$$\hat{S} = S - t\, \nabla(\mathrm{TV}), \tag{8}$$

$$\hat{S} = S - t\, \frac{\partial\, \mathrm{TV}(S)}{\partial S^*} = S + \frac{t}{2}\, \nabla \cdot \left[ \frac{\nabla S}{\left| \nabla S \right|} \right]. \tag{9}$$

$$g_n^k = A_{Z_n}^{-1}\!\left[ \sqrt{I_n}\, \frac{A_{Z_n}(O^k)}{\left| A_{Z_n}(O^k) \right|} \right], \tag{10}$$

$$\begin{cases} \bar{g}_n^k = g_n^k + \dfrac{t}{2}\, \partial_x \left[ \dfrac{\partial_x (g_n^k)}{\left| \partial_x (g_n^k) \right|} \right], \\[2ex] \hat{g}_n^k = \bar{g}_n^k + \dfrac{t}{2}\, \partial_y \left[ \dfrac{\partial_y (\bar{g}_n^k)}{\left| \partial_y (\bar{g}_n^k) \right|} \right], \end{cases} \tag{11}$$

$$O^{k+1} = \begin{cases} \hat{O}^k, & k \le 2, \\ (1 + a + b)\, \hat{O}^k - a\, O^k - b\, O^{k-1}, & \text{otherwise,} \end{cases} \tag{12}$$

$$O^{k+1} = \mathrm{sup} \cdot A_{Z_1}^{-1}\!\left[ \sqrt{I_1}\, \frac{A_{Z_1}(O^k)}{\left| A_{Z_1}(O^k) \right|} \right] + M (1 - \mathrm{sup})\, A_{Z_1}^{-1}(R_1), \tag{13}$$

$$M = \mathrm{mean}\!\left\{ A_{Z_1}^{-1}\!\left[ \sqrt{I_1}\, \frac{A_{Z_1}(O^k)}{\left| A_{Z_1}(O^k) \right|} \right] \right\} \Big/ \, \mathrm{mean}\!\left[ A_{Z_1}^{-1}(R_1) \right], \tag{14}$$
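As a minimal numerical sketch of how the iteration above fits together, the following Python code propagates a guess between the two measurement planes with the angular-spectrum method, substitutes the measured amplitudes, averages the back-propagated updates, applies one anisotropic-TV gradient step per axis, and mixes in the weighted feedback update. The function names, parameter values (step size t, feedback weights a and b, wavelength, pixel size), and the simple evanescent-wave cutoff are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex field by distance z with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Crude cutoff: evanescent components are passed unattenuated in this sketch.
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tv_step(g, t=0.1, eps=1e-8):
    """One gradient-descent step on the anisotropic TV metric, axis by axis."""
    gx = np.gradient(g, axis=1)
    g = g + 0.5 * t * np.gradient(gx / (np.abs(gx) + eps), axis=1)
    gy = np.gradient(g, axis=0)
    return g + 0.5 * t * np.gradient(gy / (np.abs(gy) + eps), axis=0)

def dual_plane_retrieval(I1, I2, z1, z2, wavelength, dx,
                         iters=50, t=0.1, a=0.3, b=0.3):
    """Dual-plane phase retrieval with TV smoothing and weighted feedback."""
    O = np.sqrt(I1).astype(complex)   # crude initial object guess
    O_prev = O.copy()
    for k in range(iters):
        g = np.zeros_like(O)
        for I, z in ((I1, z1), (I2, z2)):
            U = angular_spectrum(O, z, wavelength, dx)
            U = np.sqrt(I) * U / (np.abs(U) + 1e-12)       # amplitude substitution
            g = g + angular_spectrum(U, -z, wavelength, dx) / 2  # averaged update
        O_hat = tv_step(g, t)                               # TV smoothing
        if k < 2:                                           # weighted feedback
            O_new = O_hat
        else:
            O_new = (1 + a + b) * O_hat - a * O - b * O_prev
        O_prev, O = O, O_new
    return O
```

In practice the TV step size t trades smoothing against resolution: a larger t suppresses more background noise at the cost of fine detail, which is why a small per-iteration step is applied inside the loop rather than a single heavy regularization at the end.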