Optica Publishing Group

Ptychography-based high-throughput lensless on-chip microscopy via incremental proximal algorithms

Open Access

Abstract

Ptychography-based lensless on-chip microscopy enables high-throughput imaging by retrieving the missing phase information from intensity measurements. Numerous reconstruction algorithms for ptychography have been proposed, yet only a few incremental algorithms can be extended to lensless on-chip microscopy, because the datasets are large-scale while computational efficiency is limited. In this paper, we propose the use of accelerated proximal gradient methods for blind ptychographic phase retrieval in lensless on-chip microscopy. Incremental gradient approaches are adopted in the reconstruction routine. Our algorithms divide the phase retrieval problem into sub-problems involving the evaluation of a proximal operator, stochastic gradient descent, and Wirtinger derivatives. We benchmark the performance of the accelerated proximal gradient method, the extended ptychographic iterative engine, and the alternating direction method of multipliers, and discuss their convergence and accuracy in both noisy and noiseless cases. We also validate our algorithms on experimental datasets, where full field-of-view measurements are captured to recover high-resolution complex samples. Among these algorithms, the accelerated proximal gradient method presents the overall best performance in accuracy and convergence rate. The proposed methods may find applications in ptychographic reconstruction, especially for cases where a wide field of view and high resolution are desired at the same time.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Lensless microscopy enables high-throughput imaging without lenses or other bulky optical components that are common in conventional lens-based microscopy. With high-throughput imaging sensors and large-scale parallel computational capacity, lensless on-chip microscopy has become an important part of computational imaging and has found many applications in recent years [1–7]. According to the distance between the sample and the image sensor, lensless on-chip microscopy can be classified into diffraction imaging [8] and shadow imaging [9]. In diffraction imaging, the distance between the sample and the sensor is on the millimeter scale, and the reconstruction is computed from the measured diffraction patterns; this has high computational complexity, and the reconstruction accuracy is limited. On the contrary, in shadow imaging, the sample is placed as close as possible to the active sensing area of the imager, and the diffraction effect is negligible. Therefore, a sharp intensity image can be captured directly, and the object intensity information can be recovered by simple filtered back projection [10–12]. However, the resolution of shadow imaging depends on the physical pixel size of the image sensor, which is limited by the manufacturing technique, and the phase information of the wavefront is lost in the measurements.

As an advanced reconstruction principle in lensless microscopy, ptychography [13] is a computational imaging approach to retrieve the missing phase of the wavefront from a sequence of measurements, and can also improve the resolution of the recovered image. With lensless on-chip microscopy via near-field blind ptychographic modulation [14,15], we can recover the wide-field, high-resolution complex image without using any lens. In the lensless system, a thin diffuser is placed between the object and the image sensor. By scanning the diffuser to different lateral positions, the image sensor captures a sequence of intensity images to retrieve the complex object information. With the diffuser modulation, the high-resolution object information can be effectively encoded into the captured images [16,17]. An up-sampling phase retrieval scheme is employed to bypass the resolution limit set by the pixel size of the image sensor. The system is different from traditional ptychography. It employs a unit-magnification configuration with a Fresnel number of ∼50 000 and has the entire sensor area as the imaging field of view (FOV). The large Fresnel number results in a weak diffraction effect, and therefore, the unknown positional shift of the diffuser can be estimated accurately via cross-correlation.

There are many robust phase retrieval algorithms for traditional ptychography, based on, for example, alternating projection or gradient descent [18–26]. Among those algorithms, only a few, such as the extended ptychographic iterative engine (ePIE) [18], are incremental approaches, whereas the others, such as the alternating direction method of multipliers (ADMM) [27], are global approaches, in which the entire dataset is used to perform a batch update in each iteration. In lensless on-chip microscopy, massive datasets must be processed and stored, and it is difficult to apply those global approaches.

Although ePIE converges robustly and at a reasonable rate, the convergence rate and accuracy can be further improved with proximal algorithms. Proximal algorithms [28] are well suited for convex optimization problems whose objective functions include non-differentiable parts. They split the objective function into non-differentiable and differentiable sub-problems and use the proximal operator to find a closed-form solution. Proximal algorithms were originally devised for convex real-valued problems, while ptychography is a non-convex complex-valued problem. With Wirtinger derivatives [29], we can differentiate real-valued functions on complex domains and adapt the proximal algorithm to the ptychographic problem. ‘Global’ proximal algorithms for ptychography were proposed by Yan [30], which share similarities with alternating projection and gradient descent.

Here, we propose efficient proximal algorithms for lensless on-chip microscopy, which are incremental extensions of the accelerated proximal gradient method. We therefore term these algorithms incremental accelerated proximal gradient (iAPG). We mainly focus on ptychographic reconstruction without noise and with Poisson noise, which commonly occurs during the image acquisition process. We demonstrate the effectiveness of iAPG in both simulation and experiment, comparing it with the incremental gradient algorithms ePIE and ADMM. iAPG has the fastest convergence rate and the highest accuracy among them; in particular, it achieves a satisfactory recovery with fewer measurements. In addition, adaptive step-size strategies [31] and other techniques from ‘incremental’ ptychographic algorithms can be readily extended to our algorithms.

This paper is organized as follows. In Section 2, we build the mathematical model of our lensless system and discuss the reconstruction process via the alternating projection method, i.e. ePIE. In Section 3, we describe the procedures of our proposed algorithms in detail. In Section 4, we benchmark the performance of iAPG against ePIE and ADMM and discuss their convergence and accuracy. In Section 5, we present the experimental results of iAPG and ePIE. In Section 6, we summarize the results and discuss the advantages of our algorithms.

2. Ptychography based lensless on-chip microscopy

2.1 Forward imaging model

In our near-field blind ptychographic system, we place an unknown diffuser between the object and the image sensor for lightwave modulation, as shown in Fig. 1. There is a distance of ∼1 mm between the object and the image sensor. By blindly scanning the diffuser to different lateral positions, we acquire modulated intensity images for ptychographic phase retrieval. The forward imaging model is expressed as

$${I_j}({x,y} )= \left| {\left[ {({O({x,y} )\ast PS{F_{free}}({{d_1}} )} )\cdot D({x - {x_j},y - {y_j}} )} \right]\ast PS{F_{free}}({{d_2}} )} \right|_{ \downarrow M}^2, $$
where ${I_j}({x,y} )$ is the $j$th intensity measurement ($j = 1,2, \cdots ,J$), $O({x,y} )$ is the complex exit wavefront of the object, $D({x,y} )$ is the complex diffuser profile, $({{x_j},{y_j}} )$ is the $j$th positional shift of the diffuser, ‘$\cdot $’ is Hadamard product, and ‘${\ast }$’ is the convolution operation. ${d_1}$ is the distance between the object and the diffuser, and ${d_2}$ is the distance between the diffuser and the image sensor. $PS{F_{free}}(d )$ is the point spread function to represent the free-space propagation over distance d in angular spectrum form. Because the pixel size is relatively large, the captured image is actually down-sampled, and the subscript ‘$\downarrow M$’ stands for the down-sampling process. With these captured images ${I_j}$, we can recover the complex-valued exit wavefront of the object $O({x,y} )$ and the diffuser profile $D({x,y} )$ via upsampled ptychographic phase retrieval algorithms.
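As an illustration of Eq. (1), the forward model can be sketched with an angular-spectrum propagator and patch-summed down-sampling. This is a minimal sketch, not the authors' implementation: the wavelength and pixel pitch are illustrative values, the diffuser array is taken to be the same size as the object, and the lateral shift is applied with wrap-around via `np.roll` rather than the padded registration used in the paper.

```python
import numpy as np

def psf_free_propagate(field, d, wavelength=532e-9, dx=0.835e-6):
    """Angular-spectrum free-space propagation over distance d.
    The wavelength and (upsampled) pixel pitch dx are illustrative."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * d * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def forward_intensity(O, D, shift, d1, d2, M=2):
    """I_j = |[(O * PSF(d1)) . D(x-xj, y-yj)] * PSF(d2)|^2, down-sampled by M."""
    U = psf_free_propagate(O, d1)          # object wave at the diffuser plane
    Dj = np.roll(D, shift, axis=(0, 1))    # lateral diffuser shift (wrap-around)
    Psi = psf_free_propagate(U * Dj, d2)   # exit wave at the sensor plane
    I = np.abs(Psi) ** 2
    n = I.shape[0] // M
    # sum each MxM patch: models the large physical pixels of the sensor
    return I.reshape(n, M, n, M).sum(axis=(1, 3))
```

The patch summation in the last step is the discrete counterpart of the ‘$\downarrow M$’ operator: each sensor pixel integrates the intensity of an $M \times M$ block of the upsampled grid.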


Fig. 1. Ptychography based lensless on-chip microscopy. (a) A diffuser is placed between the object and the image sensor for lightwave modulation. The distance between the object and the diffuser is ${d_1}$, and the distance between the diffuser and the image sensor is ${d_2}$. (b) The experimental platform, and the distance between the object and the image sensor is ∼1 mm.


2.2 Reconstruction via alternating projection

There are many popular alternating projection algorithms for ptychographic phase retrieval problems, e.g. the Douglas-Rachford (DR) algorithm by Thibault et al. [32], ePIE by Maiden and Rodenburg [18] and the relaxed averaged alternating reflections [33] based projection algorithm by Marchesini et al. [26]. Among these algorithms, only ePIE employs measurements one-by-one to iteratively update the pattern and object, which is described as a stochastic or incremental gradient approach. We focus on this kind of incremental algorithm, and ePIE reconstruction is described in detail as follows.

We can express ePIE as an alternating projection method to find the exit wave of the diffuser plane $\psi _j^\ast $ satisfying

$$\begin{aligned} \psi _j^{\ast } \in &\left\{ {{\psi_j}:{{|{{\psi_j}{\ast }PS{F_{free}}({{d_2}} )} |}_{ \downarrow M}} = \sqrt {{I_j}} } \right\} \cap \\ &{\{{{\psi_j}:\exists U,{D_j},\,s.t.\,U\cdot {D_j} = {\psi_j}} \}, } \end{aligned}$$
where $U({x,y} )= O({x,y} )\ast PS{F_{free}}({{d_1}} )$ is the object wavefront on the diffuser plane. The alternating projection determines the intersection set by calculating the projections onto these two sets. The recovery process is shown in Table 1. We first recover the positional shift of the diffuser using cross-correlation. We then initialize the object and the diffuser profile, which will be discussed in detail later. In the $k$th iteration, we first obtain the exit wave at the detector plane $\varPsi _j^k$ with the forward model as in line 9. Second, we update the exit wave $\varPsi _j^{k + 1}$ as in line 10, where the size of $\varPsi _j^k$ differs from that of ${I_j}$: if the up-sampling rate $M = 2$, $\varPsi _j^k$ is twice the size of ${I_j}$. In the denominator of line 10, ${|{\varPsi _j^k} |^2}$ is convolved with an average filter (an $M \times M$ all-ones matrix) followed by $M$-times down-sampling. This update sums the intensity of each $M \times M$ pixel patch and makes the summation equal to the corresponding large pixel in the measurement ${I_j}$. After the amplitude replacement process, we propagate $\varPsi _j^{k + 1}$ back to the diffuser plane as in line 11. With the updated exit wave at the diffuser plane $\psi _j^{k + 1}$, we update the object wavefront on the diffuser plane ${U^{k + 1}}$ and the shifted diffuser $D_j^{k + 1}$ as in lines 12 and 13, where the subscript $max$ stands for taking the maximum, i.e. $|{D_j^k} |_{max}^2 = \mathop {\max }\limits_{x,y} {|{D_j^k({x,y} )} |^2}$.
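The ePIE iteration described above (lines 9–13 of Table 1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `prop` and `iprop` are placeholders for propagation to and from the detector plane, and `alpha`/`beta` are the usual ePIE update weights.

```python
import numpy as np

def epie_step(U, Dj, Ij, prop, iprop, M=2, alpha=1.0, beta=1.0):
    """One ePIE update for a single measurement Ij (sketch of Table 1,
    lines 9-13). prop/iprop stand in for propagation over d2."""
    psi = U * Dj                        # exit wave at the diffuser plane
    Psi = prop(psi)                     # exit wave at the detector plane
    # sum |Psi|^2 over each MxM patch to match the measured pixel size
    n = Ij.shape[0]
    S = (np.abs(Psi) ** 2).reshape(n, M, n, M).sum(axis=(1, 3))
    # amplitude replacement: scale so each patch sum equals the measurement
    scale = np.sqrt(Ij / (S + 1e-12))
    Psi_new = Psi * np.kron(scale, np.ones((M, M)))
    psi_new = iprop(Psi_new)            # back-propagate to the diffuser plane
    # ePIE-style object and diffuser updates (lines 12-13)
    U_new = U + alpha * np.conj(Dj) * (psi_new - psi) / (np.abs(Dj) ** 2).max()
    Dj_new = Dj + beta * np.conj(U) * (psi_new - psi) / (np.abs(U) ** 2).max()
    return U_new, Dj_new
```

The `np.kron` with an all-ones block broadcasts the per-measurement-pixel scale factor back onto the upsampled grid, which is the up-sampling counterpart of the classic ePIE amplitude replacement.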

We initialize the object $O({x,y} )$ by directly averaging captured image ${I_j}({x,y} )$ as follows:

$${{O^0}({x,y} )= {{\left( {\frac{1}{J}\mathop \sum \limits_{j = 1}^J \sqrt {{I_j}({x,y} )} } \right)}_{ \uparrow M}},}$$
where the subscript ‘$\uparrow M$’ stands for the nearest-neighbor up-sampling. The diffuser profile $D({x,y} )$ is initialized as follows:
$${{D^0}({x,y} )= {{\left( {\frac{1}{J}\mathop \sum \limits_{j = 1}^J \sqrt {I_j^{({pad} )}({x + {x_j},y + {y_j}} )} } \right)}_{ \uparrow M}},\; }$$
where the superscript $({pad} )$ stands for the padding operator, which accounts for the boundary pixels of the image in the registration process. In Eq. (4), we shift the padded measurement $I_j^{({pad} )}({x,y} )$ to different positions and average them. Therefore, the initialized diffuser ${D^0}({x,y} )$ is slightly larger than the object ${O^0}({x,y} )$. In line 7, $D_j^k$ is the central region of the shifted diffuser profile ${D^k}({x - {x_j},y - {y_j}} )$, which has the same size as $U^k$. In line 14, we first replace the central region of ${D^k}({x - {x_j},y - {y_j}} )$ with $D_j^{k+1}$, then shift it back to get the new estimate ${D^{k+1}}({x,y})$. In the following proposed algorithms, we adopt the same registration and initialization process.
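A minimal sketch of the initializations in Eqs. (3) and (4): the measurement amplitudes are averaged and then nearest-neighbor upsampled, here via a Kronecker product. The padding width and the integer-pixel shifts via `np.roll` are illustrative simplifications.

```python
import numpy as np

def init_object(measurements, M=2):
    """O^0: average sqrt of all measurements, then upsample by M (Eq. (3))."""
    avg = np.mean([np.sqrt(I) for I in measurements], axis=0)
    return np.kron(avg, np.ones((M, M)))  # nearest-neighbor upsampling

def init_diffuser(measurements, shifts, M=2, pad=4):
    """D^0: shift padded measurement amplitudes back to a common frame
    and average them (Eq. (4)); pad width is illustrative."""
    acc = None
    for I, (xj, yj) in zip(measurements, shifts):
        Ip = np.pad(np.sqrt(I), pad)            # account for boundary pixels
        Ip = np.roll(Ip, (xj, yj), axis=(0, 1)) # register to the diffuser frame
        acc = Ip if acc is None else acc + Ip
    return np.kron(acc / len(measurements), np.ones((M, M)))
```

Because of the padding, the initialized diffuser array comes out larger than the initialized object, matching the description above.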

3. Proposed incremental reconstruction algorithms

For ptychographic phase retrieval problems, incremental approaches update the object and the diffuser profile using intensity measurements one by one, whereas global approaches use the entire dataset to perform one update at each iteration. With a dataset of $J$ measurements and $N$ loops in the recovery process, the time consumption is ${O(N \cdot J)}$ for both approaches, but the memory consumption is $O(1)$ for incremental approaches and $O(J)$ for global approaches. In lensless microscopy, processing a wide-FOV image usually takes several GB of memory, so holding the entire dataset, as global approaches require, is prohibitively expensive. By contrast, incremental approaches offer reduced computational and memory consumption and a faster convergence rate than global approaches. Inspired by this, we focus on incremental approaches and demonstrate our proposed incremental algorithms in this section.

3.1 Alternating direction method of multipliers

ADMM is derived from the augmented Lagrangian method [34], and its application to ptychography was reported in [27,35]. Standard ADMM operates in a global manner, which imposes a huge demand on memory storage capacity in the reconstruction for lensless on-chip microscopy, so we extend it into an incremental approach. For ptychographic phase retrieval, the reconstruction can be expressed as an optimization problem with a non-convex constraint:

$${{\mathop {\min }\limits_{\varPsi _j} \mathop \sum \limits_{j = 1}^J f\left( {\varPsi _j} \right),\; \; s.t.\; \varPsi _j = \left( {D_j\cdot U} \right)*PSF_{free}\left( {d_2} \right),} }$$
where $f$ is the loss function defined as:
$$f({{\varPsi _j}} )= \frac{1}{2}\vert\vert\sqrt {{I_j}} - \sqrt {{{({{{|{\varPsi _j} |}^2}{\ast }ones({M,M} )} )}_{ \downarrow M}}} \vert\vert_2^2. $$

For an incremental algorithm, the optimization problem should be reformulated as follows:

$$ \mathop {\min }\limits_{{\Psi _j}} f({{\varPsi _j}} ),{\kern 3pt} s.t.\; {\varPsi _j} = ({{D_j}\cdot U} )\ast PS{F_{free}}({{d_2}} ). $$

The augmented Lagrangian loss function can be expressed as

$$\begin{aligned} L\left( {U,D_j,\varPsi _j;\mathrm{\varLambda }} \right) &= f\left( {\varPsi _j} \right) + Re\langle\mathrm {\varLambda },\varPsi _j-\left( {D_j\cdot U} \right){\ast}PSF_{free}\left( {d_2} \right)\rangle \\ & \quad + {\displaystyle{\beta \over 2}\vert\vert\varPsi _j-\left( {D_j\cdot U} \right){\ast}PSF_{free}\left( {d_2} \right)\vert\vert_2^2 ,} \end{aligned}$$
where $\varLambda $ is the complex-valued Lagrangian multiplier, $\beta $ is a positive real-valued parameter, and $Re\langle x,y\rangle$ denotes the real part of the inner product of $x$ and $y$.

The ADMM algorithm finds a saddle point of the problem $\mathop {\max }\limits_\mathrm{\varLambda } \mathop {\min }\limits_{{D_j},U,{{\Psi }_j}} L({U,{D_j},{{\Psi }_j};\varLambda })$; a direct solution is to split it into three subproblems in $U,{D_j},{\Psi _j}$ as in Eqs. (9)–(11) and then update $\varLambda $ as in line 15 of Table 2.

$$\varPsi _j^{k + 1} = \mathop {argmin}\limits_{{\Psi _j}} L({{U^k},D_j^k,{\varPsi _j};{\varLambda ^k}} ),$$
$${ {U^{k + 1} = \mathop {argmin}\limits_U L\left( {U,D_j^k ,\varPsi _j^{k + 1} ;\varLambda ^k} \right) + \displaystyle{{\gamma _1} \over 2}\left\| {U-U^k} \right\|_2^2 ,} } $$
$${ {D_j^{k + 1} = \mathop {argmin}\limits_D L\left( {U^k,D_j,\varPsi _j^{k + 1} ;\varLambda ^k} \right) + \displaystyle{{\gamma _2} \over 2}\left\| {D_j-D_j^k } \right\|_2^2 .} } $$

Here, we directly give the closed-form solutions of these sub-problems in Table 2, lines 10, 12, and 13; the detailed derivation is in Supplement 1. The complete ADMM algorithm in incremental form is shown in Table 2.
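The parts of the iteration fixed by Eq. (8) itself can be sketched independently of the sub-problem solvers: evaluating the augmented Lagrangian and the standard dual-ascent multiplier step. This is a hedged sketch; `prop` stands in for convolution with $PSF_{free}(d_2)$, and the closed-form primal solvers (Table 2, Supplement 1) are omitted.

```python
import numpy as np

def augmented_lagrangian(f_val, Psi, U, Dj, Lam, beta, prop):
    """Evaluate L(U, D_j, Psi; Lambda) of Eq. (8).
    f_val is the data-fidelity loss f(Psi), computed elsewhere."""
    r = Psi - prop(Dj * U)                 # constraint residual
    return (f_val
            + np.real(np.vdot(Lam, r))     # Re<Lambda, r>
            + 0.5 * beta * np.linalg.norm(r) ** 2)

def dual_update(Lam, Psi, U, Dj, beta, prop):
    """Standard dual-ascent multiplier step (line 15 of Table 2 in spirit):
    Lambda <- Lambda + beta * (Psi - (D_j . U) * PSF(d2))."""
    return Lam + beta * (Psi - prop(Dj * U))
```

When the propagation constraint holds exactly, the residual vanishes and the Lagrangian reduces to the data-fidelity term, which is a quick sanity check for an implementation.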

3.2 Incremental accelerated proximal gradient

We can reformulate the problem of lensless imaging reconstruction into unconstrained optimization as follows:

$$\mathop {\min }\limits_{{\Psi _j}} f({{\varPsi _j}} )+ g({{\varPsi _j}} ), $$
where $f(x )$ is the negative likelihood loss function based on maximum likelihood estimation. Here, two models of different noise are discussed: amplitude Gaussian and intensity Poisson model respectively [36]. For the Gaussian model,
$${f_G}({{\varPsi _j}} )= \frac{1}{2}\vert\vert\sqrt {{I_j}} - \sqrt {{{({{{|{{\varPsi _j}} |}^2}{\ast }ones({M,M} )} )}_{ \downarrow M}}} \vert\vert_2^2, $$
and for the Poisson model,
$${f_P}({{\varPsi _j}} )= \frac{1}{2}\langle{{|{{\varPsi _j}} |}^2}{\ast }ones{{({M,M} )}_{ \downarrow M}} - ({{I_j} + \varepsilon } )\cdot log ({{{|{{\varPsi _j}} |}^2}{\ast }ones{{({M,M} )}_{ \downarrow M}} + \varepsilon } ),\mathbf{1}\rangle, $$
where $\varepsilon $ is a small real-valued constant as suggested in [27]. $g(x )$ is an indicator function for the non-convex propagation constraint and is non-differentiable:
$$ g({{\varPsi _j}} )= \begin{cases} 0, & {\varPsi _j} \in \{{{\varPsi _j}:{\varPsi _j} = ({{D_j}\cdot U} )\ast PS{F_{free}}({{d_2}} )} \} \\ \infty, & otherwise \end{cases} $$
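The two data-fidelity terms of Eqs. (13) and (14) can be sketched directly; `downsum` implements the convolution with an $M \times M$ all-ones kernel followed by $M$-times down-sampling. This is a minimal sketch with illustrative defaults for $M$ and $\varepsilon$.

```python
import numpy as np

def downsum(A, M):
    """|Psi|^2 * ones(M,M), down-sampled by M: sum each MxM patch."""
    n = A.shape[0] // M
    return A.reshape(n, M, n, M).sum(axis=(1, 3))

def f_gaussian(Psi, Ij, M=2):
    """Amplitude Gaussian loss of Eq. (13)."""
    S = downsum(np.abs(Psi) ** 2, M)
    return 0.5 * np.sum((np.sqrt(Ij) - np.sqrt(S)) ** 2)

def f_poisson(Psi, Ij, M=2, eps=1e-8):
    """Intensity Poisson (negative log-likelihood) loss of Eq. (14)."""
    S = downsum(np.abs(Psi) ** 2, M)
    return 0.5 * np.sum(S - (Ij + eps) * np.log(S + eps))
```

The Gaussian loss vanishes exactly when the patch-summed intensity of the current estimate matches the measurement, while the Poisson loss is minimized (but generally nonzero) there.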

The proximal gradient algorithm can be derived as

$$\varPsi _j^{k + 1} = pro{x_g}({\varPsi _j^k - {\mathrm {\lambda}^k}\nabla f({\varPsi _j^k} )} ), $$
where $\nabla f({{\mathrm{\varPsi }_j}} )$ is the gradient of $f({{\mathrm{\varPsi }_j}} )$, ${\lambda ^k}$ is a positive real value representing the step size of gradient descent, and $pro{x_g}(v )$ is the proximal operator defined as:
$$pro{x_g}(v )= \mathop {argmin}\limits_x g(x )+ \vert\vert x - v\vert\vert_2^2. $$

For an indicator function, the proximal operator is the Euclidean projection, i.e. $pro{x_g}(v )= \mathop {\textrm{argmin}}\limits_{x \in dom\,g} \vert\vert x - v\vert\vert_2^2$, which has a closed-form solution in our algorithm. The projection can be obtained in many different ways, and we adopt an ePIE-like solution as in lines 17 and 18 of Table 3. Although it is an inexact projection, it shows better performance in the following simulations and experiments, which demonstrates the validity of such iterative updates. We can also adopt the update steps of the regularized ptychographic iterative engine (rPIE), by simply replacing lines 17 and 18 in Table 3 as follows:

$${U^{k + 1}} = {U^k} + \frac{{conj({D_j^k} )\cdot ({\psi_j^{k + 1} - \psi_j^k} )}}{{({1 - {\gamma_{obj}}} ){{|{D_j^k} |}^2} + {\gamma _{obj}}|{D_j^k} |_{max}^2}}, $$
$$D_j^{k + 1} = D_j^k + \frac{{conj({{U^k}} )\cdot ({\psi_j^{k + 1} - \psi_j^k} )}}{{({1 - {\gamma_{pt}}} ){{|{{U^k}} |}^2} + {\gamma _{pt}}|{{U^k}} |_{max}^2}}, $$
where ${\gamma _{obj}}$ and ${\gamma _{pt}}$ are real-valued parameters for the object and the diffuser profile, respectively. According to [22], these update steps (ePIE and rPIE) can be interpreted as estimates with different weights of the previous and revised projections. Without loss of generality, we discuss iAPG with the update steps of ePIE and compare its performance with ePIE in the following sections.
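A sketch of the rPIE-style updates of Eqs. (17) and (18); setting ${\gamma _{obj}} = {\gamma _{pt}} = 1$ reduces the denominators to the ePIE-style $|\cdot|_{max}^2$ form. The default parameter values are illustrative.

```python
import numpy as np

def rpie_updates(U, Dj, psi_new, psi, gamma_obj=0.1, gamma_pt=0.1):
    """rPIE-style object/diffuser updates (Eqs. (17)-(18))."""
    dD = np.abs(Dj) ** 2
    U_new = U + np.conj(Dj) * (psi_new - psi) / (
        (1 - gamma_obj) * dD + gamma_obj * dD.max())
    dU = np.abs(U) ** 2
    Dj_new = Dj + np.conj(U) * (psi_new - psi) / (
        (1 - gamma_pt) * dU + gamma_pt * dU.max())
    return U_new, Dj_new
```

When the revised exit wave equals the current one, both updates leave the estimates unchanged, i.e. the iteration is at a fixed point of the projection.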

To accelerate the proximal algorithm, the following extrapolation steps can be used

$$\hat{\varPsi }_j^k = \varPsi _j^k + {\omega ^k}({\varPsi _j^k - \varPsi _j^{k - 1}} ), $$
$$\varPsi _j^{k + 1} = \; pro{x_g}({\hat{\varPsi }_j^k - {\mathrm {\lambda}^k}\nabla f({\hat{\varPsi }_j^k} )} ), $$
$${\omega ^k} = \frac{{n - 1}}{{n + 2}},\; n = 1,\; 2,\; \cdots , $$
where $n$ stands for the $n$th loop of the recovery process. The acceleration is inspired by Nesterov’s work [37]. With this acceleration scheme, the proximal gradient method becomes the accelerated proximal gradient method. In particular, when $\mathrm {\lambda} = 1$ for the Gaussian model, our algorithm is equivalent to ePIE with accelerated extrapolation.
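The extrapolation of Eqs. (19)–(21) can be sketched generically; here it is demonstrated on a toy quadratic objective with an identity proximal operator rather than the ptychographic loss, purely to show the mechanics of the accelerated step.

```python
import numpy as np

def nesterov_weight(n):
    """Extrapolation weight omega = (n - 1) / (n + 2) of Eq. (21)."""
    return (n - 1) / (n + 2)

def accelerated_step(Psi, Psi_prev, n, grad_f, prox_g, lam=1.0):
    """One iAPG step: extrapolate (Eq. (19)), then take a proximal
    gradient step on the extrapolated point (Eq. (20))."""
    Psi_hat = Psi + nesterov_weight(n) * (Psi - Psi_prev)
    return prox_g(Psi_hat - lam * grad_f(Psi_hat))
```

On a simple quadratic $f(x) = \tfrac{1}{2}\|x\|_2^2$ with $prox_g$ the identity, the iterates contract toward zero, which is a convenient unit test for the step logic.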

Our algorithm is an incremental gradient approach and is different from the accelerated proximal gradient algorithms by Yan [30], which is a global approach. The computational complexity of our algorithm is lower than the global one. We further adopt an inexact projection as the solution of the proximal operator, which is more robust in the incremental case.

4. Numerical simulation

In this section, we compare different algorithms over simulation data. The diffuser profile is generated randomly with uniform distribution. In this simulation, the pixel size of the image sensor is set as 1.67 μm, and the distance ${d_1} = {d_2} = $150 μm. The scanning is a regular square grid of positions in a random sequence with lateral scanning step size approximately equal to the pixel size. The translational shift position is known accurately and the up-sampling factor $M = 2$. We employ the forward imaging model, Eq. (1), to generate 225 raw images for the following recovery process. To evaluate the performance of different algorithms, we use the root mean square error (RMSE) defined as:

$$RMSE = \mathop {\min }\limits_\alpha \vert\vert O({x,y} )- \alpha \hat{O}({x,y} )\vert\vert_2, $$
where $O({x,y} )$ is the ground truth, $\hat{O}({x,y} )$ is the recovered object, and $\alpha $ is a complex value representing the global scaling factor that removes the ambiguity in ptychographic reconstruction. The closed-form solution is $\alpha = \mathop \sum \nolimits_{x,y} conj({\hat{O}({x,y} )} )\cdot O({x,y} )\textrm{ / }\vert\vert \hat{O}({x,y} )\vert\vert_2^2$. We choose RMSE as the metric because the recovered object is complex-valued and RMSE evaluates the amplitude and phase collectively.
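A sketch of the RMSE metric of Eq. (22), with the global scale/phase ambiguity removed by the least-squares scaling factor (computed in closed form rather than by explicit minimization):

```python
import numpy as np

def rmse(O, O_hat):
    """min over alpha of ||O - alpha * O_hat||_2, with the least-squares
    scaling alpha = <O_hat, O> / ||O_hat||^2 (np.vdot conjugates its
    first argument and flattens both inputs)."""
    alpha = np.vdot(O_hat, O) / np.linalg.norm(O_hat) ** 2
    return np.linalg.norm(O - alpha * O_hat)
```

Because $\alpha$ is complex, a reconstruction that differs from the ground truth only by a global amplitude scale and phase offset yields an RMSE of zero, which is exactly the ambiguity inherent to ptychographic reconstruction.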

We first study the noiseless case; the results of the different algorithms mentioned in the preceding sections are shown in Fig. 2. Here, we only plot the RMSE curves to give a quantitative analysis of the convergence, and the result of ePIE is regarded as a benchmark. In the result of ADMM (Fig. 2(a)), we find that when $\beta = {\gamma _1} = {\gamma _2}$ the algorithm converges to a better solution, and its convergence rate depends on $\beta $. For a smaller value of $\beta $, the convergence rate is faster and the RMSE is lower for ADMM. When $\beta < {10^{ - 3}}$, the RMSE curve is similar to the curve of $\beta = {10^{ - 3}}$. We also notice that the ADMM algorithm converges faster in the first several iterations, yet ends with a higher RMSE than ePIE. When $\beta \to 0$, ADMM tends toward ePIE, but its performance does not surpass ePIE. Under certain parameters, $\beta \to 0,\; \mathrm{\varLambda } = 0$, ${\gamma _1}/\beta = |{{D_j}} |_{max}^2 - {|{{D_j}} |^2}\; $, ${\gamma _2}/\beta = |U |_{max}^2 - {|U |^2}$, the ADMM algorithm is identical to ePIE [22]. However, if we directly replace the ${U^{k + 1}},\; D_j^{k + 1}$ update steps in ADMM with the corresponding ones of ePIE, the convergence becomes slower and even tends to be unstable.


Fig. 2. RMSE as the function of iterations with (a) ADMM (b) iAPG under different parameter conditions and ePIE is regarded as the benchmark. (c) The comparison among different algorithms. (d) The comparison in the case of fewer measurements.


For iAPG (Fig. 2(b)), we evaluate the intensity Poisson model and the amplitude Gaussian model respectively. From the results, we find that the convergence of iAPG depends on the step size $\mathrm {\lambda} $ of the gradient descent. For a larger step size $\mathrm {\lambda} $, the convergence rate is faster and the RMSE is lower, but too large a value leads to an unstable solution. When $\mathrm {\lambda} = 1$ for the Poisson model and $\mathrm {\lambda} = 2$ for the Gaussian model, the results are critically stable, with a fast convergence rate but slight fluctuations. When $\mathrm {\lambda}$ is above these values, the results do not converge to a stable solution despite fast initial convergence. To ensure the convergence of iAPG, we can reduce the value of $\mathrm {\lambda}$. As expected, the results of the Poisson and Gaussian models are similar in the noiseless situation, and the corresponding curves almost coincide, as shown in the yellow ($\mathrm {\lambda} = 1.8$, Gaussian) and purple ($\mathrm {\lambda} = 0.9$, Poisson), and blue ($\mathrm {\lambda} = 2$, Gaussian) and red ($\mathrm {\lambda}= 1$, Poisson) curves of Fig. 2(b).

Figure 2(c) shows the comparison of ePIE, ADMM, and iAPG; iAPG has the best performance among them. For this case, we conclude that when $0.5 < \mathrm {\lambda} < 1$ for the Poisson model and $1 < \mathrm {\lambda}< 2$ for the Gaussian model, iAPG outperforms ePIE in both convergence rate and converged solution. In Fig. 2(d), we compare iAPG and ePIE when the number of measurements is reduced to 36. In this case, it is difficult for the algorithms to recover high-quality results because of insufficient information and poor initial guesses. Therefore, the convergence of these algorithms is not stable in the first several iterations, but they all converge after sufficient iterations. Although iAPG has a higher error at the beginning of the recovery, it converges faster in the subsequent iterations and reaches a lower RMSE than ePIE. The only extra computation in iAPG compared with ePIE is the acceleration extrapolation step, where iAPG needs to store the previous estimates of the object and the diffuser profile; in all other steps, the cost of iAPG equals that of ePIE. Despite this small computational overhead, iAPG has higher accuracy and a faster convergence rate than ePIE.

We also study the case of measurements contaminated by Poisson noise, which commonly occurs in image acquisition. To quantify the Poisson noise, we calculate the signal-to-noise ratio (SNR), defined as

$$SNR = 10{{log }_{10}}\frac{{\mathop \sum \nolimits_{j = 1}^J I_j^2}}{{\mathop \sum \nolimits_{j = 1}^J {{({{I_j} - I_j{^{\prime}}} )}^2}}}, $$
where ${I_j}$ is the ground truth and $I_j^{\prime}$ is the measurement with Poisson noise. We study the case with SNR = 12.13 dB (maximum detector intensity of 10 counts). We adopt a simple heuristic scheme [31] to adapt the step size $\mathrm {\lambda}$, which further improves the performance of iAPG. In this scheme, we start with a large $\mathrm {\lambda}$ for a fast convergence rate; when the following condition is reached,
$$\frac{{[{f({\varPsi _j^{k + 1}} )- f({\varPsi _j^k} )} ]}}{{f({\varPsi _j^{k + 1}} )}} < \delta , $$
we shrink the step size $\mathrm {\lambda} $ by a factor of $\beta $, i.e. $\mathrm {\lambda} \to \beta \mathrm {\lambda},\; \; \beta \in ({0,1} )$. Once the step size $\mathrm {\lambda}$ is sufficiently small, it maintains its value for the following iterations. $\delta $ is a real-valued threshold set manually. We set $\delta = {10^{ - 6}},\,\beta = 0.5$ and an initial step size $\mathrm {\lambda ^0} = 1$ in our algorithm.
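A sketch of this step-size schedule. Here the condition of Eq. (24) is interpreted as a stall test on the magnitude of the relative loss change, and the floor `lam_min` is an illustrative safeguard standing in for "sufficiently small":

```python
def adapt_step(lam, f_new, f_old, delta=1e-6, beta=0.5, lam_min=1e-3):
    """Shrink the step size lam by a factor beta when the relative loss
    change stalls (a stall-test reading of Eq. (24)); lam_min keeps lam
    from shrinking further once it is sufficiently small."""
    if lam > lam_min and abs(f_new - f_old) / abs(f_new) < delta:
        lam = max(beta * lam, lam_min)
    return lam
```

Starting from a large step size and shrinking only on stalls preserves the fast initial convergence while damping the late-iteration fluctuations seen with a fixed large $\mathrm{\lambda}$.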

Figure 3 shows the results of ePIE, iAPG Poisson, and iAPG Poisson with the adaptive step-size scheme. The upper row is the recovered object amplitude, and the bottom row is the recovered object phase. From the results of ePIE and iAPG, we find that the recovered amplitude is corrupted by noise, and regions of low amplitude exhibit stronger cross-talk in the phase. In addition, the result of iAPG contains more noise, and iAPG does not converge to a stable solution after the initial fast convergence, as shown in Fig. 3(e). If we reduce the step size $\mathrm {\lambda}$, iAPG becomes more stable but converges more slowly, which is a trade-off. iAPG with the adaptive step size balances the convergence rate and stability well and produces the best results among these algorithms, with the smoothest image, the lightest cross-talk, and the lowest RMSE. The Poisson model has a slightly lower RMSE in this case but shows no noticeable difference from the Gaussian model. Figure S1 shows the performance of iAPG when we scan the diffuser with different step sizes. Figure S2 shows the results of iAPG when the estimated positions of the diffuser have random errors.


Fig. 3. (a) The ground truth. The reconstructed amplitude and phase image with noisy measurements (SNR=12.13 dB) using (b) ePIE, (c) iAPG, (d) iAPG with adaptive step size. The initial object and pattern are all the same. (e) The RMSE comparisons of different algorithms.


5. Experimental result

In the preceding simulation, we only take into account the noiseless case and the Poisson noise case. However, in an actual experiment, measurements may contain other errors, such as fluctuations of the light source and effects caused by the pixel response of the image sensor. To demonstrate the robustness of our algorithm, we perform the recovery on experimentally captured datasets. In our experimental setup, a 5 mW, 532 nm fiber-coupled laser diode is used for sample illumination. A thin diffuser, made by coating a coverslip with 1-2 μm polystyrene beads, is placed between the object and the image sensor. The distance between the object and the diffuser is ∼300 μm, and the distance between the diffuser and the image sensor is ∼900 μm. A regular square scanning grid is applied, and the motion step size is set between 1-2 μm via motor micro-stepping. We estimate the lateral translational shift from the intensity measurements via cross-correlation, so no prior positional knowledge is required in the image acquisition process. We use a monochrome image sensor for image acquisition. Its pixel size is 1.85 μm and the imaging FOV is 7.4 mm by 5.6 mm. We adjust the exposure time to ensure that the measurements are not over-exposed. The details of the experimental scheme can also be found in [14], and Fig. 1(b) shows our experimental setup. In the recovery process, all algorithms are initialized with the same guess as described in Section 2.

We first validate our algorithms with a quantitative phase target (Benchmark QPT), as shown in Fig. 4. Figure 4(a) is the captured raw image, and Figs. 4(b) and 4(c) are the recovered diffuser profile and the quantitative phase reconstructed by iAPG with 400 measurements. Figure 4(d) is the line profile across the red dashed arc. The two red lines represent the ground truth of the phase target. The recovered phase is in good agreement with the ground truth, which indicates the good performance of the reported method for quantitative phase imaging.


Fig. 4. Validating the quantitative phase imaging nature of our proposed algorithm. (a) The captured raw image. (b) The recovered phase of the diffuser. (c) The recovered phase target via iAPG based on 400 raw images. (d) The line trace of the red arc in (c).


Retrieving a wide-FOV image while maintaining high resolution imposes a great challenge for microscopic imaging. The lensless configuration enables the entire sensor area to serve as the imaging FOV. We recovered a Schistosomiasis of Intestines pathology slide, which has rich spatial features. We employed an up-sampling rate $M = 3$ in the recovery process and reconstructed a square full FOV of 5.6 mm by 5.6 mm. At each iteration, the recovery process requires ∼10 GB of memory. We implement our algorithm on a GPU for fast processing. Figure 5(a) shows the recovered full-FOV amplitude of the sample, reconstructed by iAPG Gaussian with 484 measurements. Figure 5(b) is the magnified view of the region in the red square, which we regard as the ground truth for the following comparison. Since the information contained in the 484 measurements is more than sufficient, the results of ePIE and iAPG show no noticeable difference. For a noticeable comparison, we reduce the number of measurements by 75%, to 121 measurements, which is inadequate to recover the sample. In Fig. 5(c-e), we show the corresponding regions of the amplitude recovered by ePIE and iAPG, and all these algorithms achieve converged results. PSNR and SSIM [38] of the amplitude are provided for comparison, and iAPG Poisson has the best performance. For ePIE and iAPG Gaussian, the reconstruction is blurry, and it is hard to distinguish the details. For iAPG Poisson, however, the reconstruction has distinct features and clear texture; the quality is satisfying, with no visible artifacts or loss of significant detail, though some residual noise makes the image grainy compared to the ground truth. For this experimental dataset, we conclude that iAPG outperforms ePIE in the case of limited measurements.


Fig. 5. The recovered amplitude of Schistosomiasis of Intestines pathology slide. (a) The recovered amplitude by iAPG Gaussian with 484 measurements. (b) The magnified view of the reconstruction as the ground truth. (c-e) The reconstruction by ePIE, iAPG Gaussian, iAPG Poisson with 121 measurements respectively, PSNR and SSIM are labeled at the top-right corner.


For 400 raw images of 1024 by 1024 pixels and an up-sampling rate of $M = 3$, the processing time is ∼10 min for 20 iterations on one Nvidia GeForce RTX 2080Ti. Note that our algorithm is implemented in MATLAB; the processing time would be greatly reduced with a C implementation. Furthermore, iAPG offers a faster convergence rate with little computational overhead compared with ePIE: the only additional computation is the acceleration step, which requires storing the estimates from the previous iteration.

6. Conclusion

In summary, we present several ptychographic phase retrieval methods derived in the framework of the proximal gradient. The phase retrieval problem is divided into sub-problems involving proximal operators, which makes it tractable. We compare iAPG with ePIE and ADMM for both noisy and noiseless measurements and give quantitative and qualitative discussions in simulations and experiments. iAPG presents the best performance among them, especially with limited measurements. We also adopt an adaptive step-size scheme in our methods for further performance improvement.

The advantages of our algorithm can be summarized as follows. First, it is an incremental gradient approach, which allows us to process whole-FOV images on a GPU. Second, the algorithm has a faster convergence rate and higher accuracy than the conventional algorithm, i.e. ePIE, with competitive time complexity. Third, on datasets contaminated by Poisson noise, our algorithm performs better with an adaptive step size in simulation. Finally, with limited measurements, our algorithm achieves better performance in both simulation and experiment. Our algorithm can also be extended to other computational imaging methods such as Fourier ptychographic imaging [39,40] with slight adaptation. For ptychographic problems with a known probe, the reconstruction can be made faster and more robust to noise with proper initial estimates, e.g. spectral initializations [41,42]. In future work, we will consider devising spectral methods for ptychographic initialization and incorporating regularization terms for denoising, such as constraints on the diffuser.
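To illustrate the accelerated proximal gradient recursion at the heart of iAPG (extrapolation with momentum weight $\omega^k = (n-1)/(n+2)$ followed by a proximal step), here is a minimal NumPy sketch applied to a toy nonnegative least-squares problem. This is not the paper's ptychographic objective: the function name, the toy problem, and the projection-as-prox choice are our own assumptions for demonstration.

```python
import numpy as np

def apg_nnls(A, b, steps=200):
    """Accelerated proximal gradient for min 0.5*||Ax - b||^2 s.t. x >= 0.
    The prox of the nonnegativity indicator is projection onto x >= 0,
    mirroring the structure Psi <- prox_g(Psi_hat - lambda*grad f(Psi_hat))."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1/L (L = Lipschitz const.)
    x = x_prev = np.zeros(A.shape[1])
    for n in range(1, steps + 1):
        omega = (n - 1) / (n + 2)             # Nesterov-style momentum weight
        y = x + omega * (x - x_prev)          # extrapolated point (hat-Psi)
        grad = A.T @ (A @ y - b)              # gradient of the smooth term
        x_prev, x = x, np.maximum(y - lam * grad, 0.0)  # prox = projection
    return x

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = A @ np.array([0.5, 2.0])
print(apg_nnls(A, b))  # converges close to the nonnegative solution [0.5, 2.0]
```

In the blind ptychographic setting the same skeleton applies, but the smooth term becomes the Gaussian or Poisson data-fidelity loss, the variable is the complex exit wave, and the gradient is taken in the Wirtinger sense.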

Funding

National Natural Science Foundation of China (61922048, 62031023); Shenzhen Science and Technology Innovation Program (JCYJ20200109142808034); Guangdong Special Support Plan (2019TX05X187).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat Methods 9(9), 889–895 (2012). [CrossRef]  

2. C. Zuo, J. Sun, J. Zhang, Y. Hu, and Q. Chen, “Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using a LED matrix,” Opt. Express 23(11), 14314–14328 (2015). [CrossRef]  

3. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017). [CrossRef]  

4. Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2018). [CrossRef]  

5. H. Wang, H. Ceylan Koydemir, Y. Qiu, B. Bai, Y. Zhang, Y. Jin, S. Tok, E. C. Yilmaz, E. Gumustekin, Y. Rivenson, and A. Ozcan, “Early detection and classification of live bacteria using time-lapse coherent imaging and deep learning,” Light Sci Appl 9(1), 118 (2020). [CrossRef]  

6. S. Feng, M. Wang, and J. Wu, “Lensless in-line holographic microscope with Talbot grating illumination,” Opt. Lett. 41(14), 3157–3160 (2016). [CrossRef]  

7. Y. Zhou, J. Wu, J. Suo, X. Han, G. Zheng, and Q. Dai, “Single-shot lensless imaging via simultaneous multi-angle LED illumination,” Opt. Express 26(17), 21418 (2018). [CrossRef]  

8. S. Isikman, S. Seo, I. Sencan, A. Erlinger, and A. Ozcan, “Lensfree cell holography on a chip: From holographic cell signatures to microscopic reconstruction,” in 2009 IEEE LEOS Annual Meeting Conference Proceedings (2009), pp. 404–405.

9. D. Lange, C. W. Storment, C. A. Conley, and G. T. A. Kovacs, “A microfluidic shadow imaging system for the study of the nematode Caenorhabditis elegans in space,” Sensors and Actuators B: Chemical 107(2), 904–914 (2005). [CrossRef]  

10. G. Zheng, S. A. Lee, Y. Antebi, M. B. Elowitz, and C. Yang, “The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM),” Proceedings of the National Academy of Sciences 108(41), 16889–16894 (2011). [CrossRef]  

11. G. Zheng, S. A. Lee, S. Yang, and C. Yang, “Sub-pixel resolving optofluidic microscope for on-chip cell imaging,” Lab Chip 10(22), 3125–3129 (2010). [CrossRef]  

12. S. A. Lee, J. Erath, G. Zheng, X. Ou, P. Willems, D. Eichinger, A. Rodriguez, and C. Yang, “Imaging and Identification of Waterborne Parasites Using a Chip-Scale Microscope,” PLoS One 9(2), e89712 (2014). [CrossRef]  

13. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

14. S. Jiang, J. Zhu, P. Song, C. Guo, Z. Bian, R. Wang, Y. Huang, S. Wang, H. Zhang, and G. Zheng, “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020). [CrossRef]  

15. P. Song, R. Wang, J. Zhu, T. Wang, Z. Bian, Z. Zhang, K. Hoshino, M. Murphy, S. Jiang, C. Guo, and G. Zheng, “Super-resolved multispectral lensless microscopy via angle-tilted, wavelength-multiplexed ptychographic modulation,” Opt. Lett. 45(13), 3486 (2020). [CrossRef]  

16. P. Song, S. Jiang, H. Zhang, Z. Bian, C. Guo, K. Hoshino, and G. Zheng, “Super-resolution microscopy via ptychographic structured modulation of a diffuser,” Opt. Lett. 44(15), 3645 (2019). [CrossRef]  

17. Z. Bian, S. Jiang, P. Song, H. Zhang, P. Hoveida, K. Hoshino, and G. Zheng, “Ptychographic modulation engine: a low-cost DIY microscope add-on for coherent super-resolution imaging,” J. Phys. D: Appl. Phys. 53(1), 014005 (2020). [CrossRef]  

18. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

19. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]  

20. V. Katkovnik and J. Astola, “Sparse ptychographical coherent diffractive imaging from noisy measurements,” J. Opt. Soc. Am. A 30(3), 367–379 (2013). [CrossRef]  

21. M. Odstrčil, A. Menzel, and M. Guizar-Sicairos, “Iterative least-squares solver for generalized maximum-likelihood ptychography,” Opt. Express 26(3), 3108 (2018). [CrossRef]  

22. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736 (2017). [CrossRef]  

23. R. Hesse, D. R. Luke, S. Sabach, and M. K. Tam, “Proximal Heterogeneous Block Implicit-Explicit Method and Application to Blind Ptychographic Diffraction Imaging,” SIAM J. Imaging Sci. 8(1), 426–457 (2015). [CrossRef]  

24. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737 (1993). [CrossRef]  

25. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Hybrid projection–reflection method for phase retrieval,” J. Opt. Soc. Am. A 20(6), 1025–1034 (2003). [CrossRef]  

26. S. Marchesini, H. Krishnan, B. J. Daurer, D. A. Shapiro, T. Perciano, J. A. Sethian, and F. R. N. C. Maia, “SHARP: a distributed GPU-based ptychographic solver,” J Appl Crystallogr 49(4), 1245–1252 (2016). [CrossRef]  

27. H. Chang, P. Enfedaque, and S. Marchesini, “Blind Ptychographic Phase Retrieval via Convergent Alternating Direction Method of Multipliers,” SIAM J. Imaging Sci. 12(1), 153–185 (2019). [CrossRef]  

28. N. Parikh and S. Boyd, “Proximal Algorithms,” FNT in Optimization 1(3), 127–239 (2014). [CrossRef]  

29. W. Wirtinger, “Zur formalen Theorie der Funktionen von mehr komplexen Veränderlichen,” Math. Ann. 97(1), 357–375 (1927). [CrossRef]  

30. H. Yan, “Ptychographic phase retrieval by proximal algorithms,” New J. Phys. 22(2), 023035 (2020). [CrossRef]  

31. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724 (2016). [CrossRef]  

32. L. Rong, C. Tang, D. Wang, B. Li, F. Tan, Y. Wang, and X. Shi, “Probe position correction based on overlapped object wavefront cross-correlation for continuous-wave terahertz ptychography,” Opt. Express 27(2), 938 (2019). [CrossRef]  

33. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Problems 21(1), 37–50 (2005). [CrossRef]  

34. S. Boyd, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” FNT in Machine Learning 3(1), 1–122 (2010). [CrossRef]  

35. Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Problems 28(11), 115010 (2012). [CrossRef]  

36. P. Thibault and M. Guizar-Sicairos, “Maximum-likelihood refinement for coherent diffractive imaging,” New J. Phys. 14(6), 063004 (2012). [CrossRef]  

37. Yu. Nesterov, “A method of solving a convex programming problem with convergence rate O(1/k²),” in Soviet Mathematics Doklady (1983).

38. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]

39. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nature Photon 7(9), 739–745 (2013). [CrossRef]  

40. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat Rev Phys 3(3), 207–223 (2021). [CrossRef]  

41. E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase Retrieval via Wirtinger Flow: Theory and Algorithms,” IEEE Trans. Inform. Theory 61(4), 1985–2007 (2015). [CrossRef]  

42. L. Valzania, J. Dong, and S. Gigan, “Accelerating ptychographic reconstructions using spectral initializations,” Opt. Lett. 46(6), 1357 (2021). [CrossRef]  

Supplementary Material (1)

Supplement 1: The derivation of our algorithms and additional simulations





Equations (25)

$$I_j(x,y)=\left|\left[O(x,y)*\mathrm{PSF}_{\mathrm{free}}(d_1)\cdot D(x-x_j,y-y_j)\right]*\mathrm{PSF}_{\mathrm{free}}(d_2)\right|^2_{\downarrow M},$$
$$\psi_j\in\left\{\psi_j:\left|\psi_j*\mathrm{PSF}_{\mathrm{free}}(d_2)\right|^2_{\downarrow M}=I_j\right\}\cap\left\{\psi_j:\exists\, U, D_j\ \mathrm{s.t.}\ U\cdot D_j=\psi_j\right\},$$
$$O^{0}(x,y)=\left(\frac{1}{J}\sum_{j=1}^{J}I_j(x,y)\right)_{\uparrow M},$$
$$D^{0}(x,y)=\left(\frac{1}{J}\sum_{j=1}^{J}I_j^{(\mathrm{pad})}(x+x_j,y+y_j)\right)_{\uparrow M},$$
$$\min_{\Psi_j}\ \sum_{j=1}^{J}f(\Psi_j),\quad \mathrm{s.t.}\ \Psi_j=(D_j\cdot U)*\mathrm{PSF}_{\mathrm{free}}(d_2),$$
$$f(\Psi_j)=\frac{1}{2}\left\|I_j-\left(|\Psi_j|^2*\mathrm{ones}(M,M)\right)_{\downarrow M}\right\|_2^2.$$
$$\min_{\Psi_j}\ f(\Psi_j),\quad \mathrm{s.t.}\ \Psi_j=(D_j\cdot U)*\mathrm{PSF}_{\mathrm{free}}(d_2).$$
$$L(U,D_j,\Psi_j;\Lambda)=f(\Psi_j)+\mathrm{Re}\left\langle\Lambda,\ \Psi_j-(D_j\cdot U)*\mathrm{PSF}_{\mathrm{free}}(d_2)\right\rangle+\frac{\beta}{2}\left\|\Psi_j-(D_j\cdot U)*\mathrm{PSF}_{\mathrm{free}}(d_2)\right\|_2^2,$$
$$\Psi_j^{k+1}=\mathop{\mathrm{arg\,min}}_{\Psi_j}\ L(U^k,D_j^k,\Psi_j;\Lambda^k),$$
$$U^{k+1}=\mathop{\mathrm{arg\,min}}_{U}\ L(U,D_j^k,\Psi_j^{k+1};\Lambda^k)+\frac{\gamma_1}{2}\left\|U-U^k\right\|_2^2,$$
$$D_j^{k+1}=\mathop{\mathrm{arg\,min}}_{D_j}\ L(U^k,D_j,\Psi_j^{k+1};\Lambda^k)+\frac{\gamma_2}{2}\left\|D_j-D_j^k\right\|_2^2.$$
$$\min_{\Psi_j}\ f(\Psi_j)+g(\Psi_j),$$
$$f_G(\Psi_j)=\frac{1}{2}\left\|I_j-\left(|\Psi_j|^2*\mathrm{ones}(M,M)\right)_{\downarrow M}\right\|_2^2,$$
$$f_P(\Psi_j)=\frac{1}{2}\left\langle\left(|\Psi_j|^2*\mathrm{ones}(M,M)\right)_{\downarrow M}-(I_j+\varepsilon)\log\left(\left(|\Psi_j|^2*\mathrm{ones}(M,M)\right)_{\downarrow M}+\varepsilon\right),\ \mathbf{1}\right\rangle,$$
$$g(\Psi_j)=\begin{cases}0, & \Psi_j\in\left\{\Psi_j:\Psi_j=(D_j\cdot U)*\mathrm{PSF}_{\mathrm{free}}(d_2)\right\},\\ \infty, & \mathrm{otherwise},\end{cases}$$
$$\Psi_j^{k+1}=\mathrm{prox}_g\left(\Psi_j^k-\lambda^k\nabla f(\Psi_j^k)\right),$$
$$\mathrm{prox}_g(v)=\mathop{\mathrm{arg\,min}}_{x}\ g(x)+\frac{1}{2}\left\|x-v\right\|_2^2.$$
$$U^{k+1}=U^k+\frac{\mathrm{conj}(D_j^k)\left(\psi_j^{k+1}-\psi_j^k\right)}{(1-\gamma_{\mathrm{obj}})\left|D_j^k\right|^2+\gamma_{\mathrm{obj}}\left|D_j^k\right|_{\max}^2},$$
$$D_j^{k+1}=D_j^k+\frac{\mathrm{conj}(U^k)\left(\psi_j^{k+1}-\psi_j^k\right)}{(1-\gamma_{\mathrm{pt}})\left|U^k\right|^2+\gamma_{\mathrm{pt}}\left|U^k\right|_{\max}^2},$$
$$\hat{\Psi}_j^k=\Psi_j^k+\omega^k\left(\Psi_j^k-\Psi_j^{k-1}\right),$$
$$\Psi_j^{k+1}=\mathrm{prox}_g\left(\hat{\Psi}_j^k-\lambda^k\nabla f(\hat{\Psi}_j^k)\right),$$
$$\omega^k=\frac{n-1}{n+2},\quad n=1,2,\ldots,$$
$$\mathrm{RMSE}=\min_{\alpha}\left\|O(x,y)-\alpha\hat{O}(x,y)\right\|_2,$$
$$\mathrm{SNR}=10\log_{10}\frac{\sum_{j=1}^{J}I_j^2}{\sum_{j=1}^{J}\left(I_j-\hat{I}_j\right)^2},$$
$$\frac{f(\Psi_j^{k+1})-f(\Psi_j^k)}{f(\Psi_j^{k+1})}<\delta,$$