
Deep learning–enhanced fluorescence microscopy via degeneration decoupling


Abstract

Deep learning–based reconstruction has emerged as an effective tool in fluorescence microscopy, with the potential to resolve diffraction-limited structures. However, most deep-learning reconstruction methods employ an end-to-end strategy, which ignores the physical laws of the imaging process and also makes the preparation of training data highly challenging. In this study, we propose a novel deconvolution algorithm based on an imaging model, deep-learning priors, and the alternating direction method of multipliers. This scheme decouples the reconstruction into two separate sub-problems, one for deblurring and one for restraining noise and artifacts. As a result of the decoupling, we are able to introduce deep-learning image priors and a variance stabilizing transform against the targeted image degeneration caused by the low photon budget. The deep-learning priors are learned from a general image dataset, in which biological images need not be involved, and are more powerful than hand-designed ones. Moreover, the use of the imaging model ensures high fidelity and generalization. Experiments on various kinds of measurement data show that the proposed algorithm outperforms existing state-of-the-art deconvolution algorithms in resolution enhancement and generalization.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fluorescence microscopy is a powerful and ubiquitous technology for observing dynamic processes in live cells. Due to the diffraction barrier, a common microscope cannot distinguish biological structures smaller than 250 nm [1–3]. Long exposure and high laser power in fluorescence microscopy lead to fatal phototoxicity and photobleaching, along with more visible motion artifacts, while milder imaging conditions result in low-intensity images corrupted by massive noise [4,5]. Besides, optical aberrations produce undesirable artifacts [6]. In biological research, one always expects the exposure time during live-cell imaging to be as short as possible, but there is a trade-off among exposure time, sample activity and signal-to-noise ratio (SNR). Deconvolution algorithms are of great assistance to fluorescence imaging and make it possible to resolve fine structures of biological samples from degraded observations [7], but the coupling of multiple degenerations makes the reconstruction a highly ill-posed inverse problem.

The Richardson-Lucy (RL) algorithm [8,9] is a classical deconvolution method in fluorescence microscopy, but its performance is insufficient for the reconstruction of low-SNR images. Regularization terms describe an explicit or latent distribution of the expected signals and play a key role in maximum a posteriori (MAP) reconstruction schemes. Benefiting from total variation (TV) regularization [10], the RL-TV algorithm [11] has good noise suppression performance, although staircase artifacts arise [12]. TV regularization depresses noise by restraining the summation of the derivatives of an image, following the empirical observation that signals are usually continuous while noise arises randomly. Using a second-order derivative penalty [13] and prior knowledge about the fluorophores, Hessian-based regularizations and the entropy regularization (ER) [14] attain better denoising ability [15–17], at the cost of slightly blurred edges. Regularized methods are widely used in biomedical image processing [15]. Deep learning–based methods [18] have reported a battery of inspiring improvements on inverse problems such as general image restoration [19–24], super-resolution microscopy reconstruction [25–27], and common fluorescence microscopy reconstruction [28–30]. For example, Schuler et al. [24] trained a multi-layer perceptron (MLP), a type of neural network, for image denoising and deconvolution. Rivenson et al. [28] imaged the same samples through lenses of different diameters, collected matching pairs of low-resolution and high-resolution images, and trained end-to-end convolutional neural networks (CNNs) on the elaborately prepared data. Wang and Rivenson et al. [30] trained generative adversarial networks (GANs) that matched low-numerical-aperture wide-field images with results acquired from a high-numerical-aperture microscope. Weigert et al. [29] trained CNNs for projection, isotropic reconstruction, and deconvolution. In these approaches, the imaging model disappears and is replaced by end-to-end strategies. However, ignoring physical laws is prone to imperfections, such as excessively delicate edges (e.g. upon the reconstruction of microfilaments) and fake structures that come from the specific training dataset. Besides, these methods rely heavily on elaborate data: matching pairs of clear and degraded images of the specific biological objects. It is difficult to acquire completely clean fluorescence images due to ringing and artifacts, and doing so is sometimes unfeasible for sensitive and fast-moving structures. Furthermore, the limited types of cells and subcellular structures in the training dataset can make the generalization of the trained networks problematic.

We observed that combining an imaging model with deep learning has the potential to enhance fluorescence microscopy in both performance and generalization. Although designing high-quality regularization terms is challenging, they can be learned directly from data using deep neural networks [22,31,32]. Matching pairs are necessary for conventional end-to-end learning schemes, but learning image priors requires clean images only. Image degeneration in fluorescence microscopy arises from two main factors: diffraction and noise. If the networks are restricted to image denoising, which acts on image structures rather than sample types, universal images can feed the networks without the need to carefully acquire cell images. Specifically, we started by training denoising CNNs on rich natural images as latent image priors (i.e. regularization terms) and used the alternating direction method of multipliers (ADMM) [33] to decouple the original optimization problem into two foundational sub-problems, which correspond to overcoming diffraction-induced blur and restraining random noise, respectively. The deep-learning image priors contribute to the denoising sub-problem. The Anscombe variance stabilizing transform (VST) [34] was also used to deal with Poisson noise. The experimental results demonstrate that our method outperforms existing state-of-the-art methods on both simulated images and fluorescence images in performance and generalization. The source code and pre-trained models are available at https://github.com/HUST-Tan/Decoupling-Microscopy.

2. Methods

2.1. Imaging and reconstruction model

According to the diffraction of light, each sharp point diffuses into the surrounding area; the resulting blur pattern is referred to as the point spread function (PSF) h. Noise is customarily modeled as signal-dependent Poisson noise (intrinsic shot noise) that stems from the nature of light, plus additive Gaussian noise (extrinsic readout noise) from the detector devices [35]. The imaging process is expressed analytically as

$$g({\mathbf x} )= { {\mathcal P}}({h({\mathbf x} ) \otimes f({\mathbf x} )} )+ n({\mathbf x} ),$$
where g is the image acquired from the microscope, f is the underlying fluorophore distribution to be reconstructed, x denotes the spatial coordinate of the measurement, ${\otimes} $ is the convolution operation, ${{\mathcal P}}$ denotes the Poisson process and n is additive white Gaussian noise (AWGN). The potential fluorophore distribution $\hat{f}$ can be obtained by minimizing an energy functional (in matrix-vector form):
$$\hat{\mathbf{f}} = \arg \mathop {\min }\limits_{\mathbf f} \; \left[ {\frac{1}{{2{\sigma^2}}}||{{\mathbf {Hf}} - {\mathbf g}} ||_2^2 + R({\mathbf f} )} \right],$$
where R(f) refers to the regularization term, H is the Toeplitz (convolution) matrix of the PSF h(x), and $\sigma$ is the regularization weighting parameter.
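For concreteness, the following sketch (Python/NumPy; not part of the original paper) evaluates the energy functional of Eq. (2) for a candidate image, assuming the PSF is given as an array of the same size as the image and centered, so that H can be applied as a circular convolution via the FFT. The total-variation term is only a placeholder for R.

import numpy as np
from numpy.fft import fft2, ifft2

def blur(f, psf):
    """Apply H, i.e. circular convolution with the PSF, via the FFT."""
    return np.real(ifft2(fft2(f) * fft2(np.fft.ifftshift(psf))))

def map_energy(f, g, psf, sigma, regularizer):
    """Energy functional of Eq. (2): data fidelity plus a user-supplied regularizer R(f)."""
    fidelity = np.sum((blur(f, psf) - g) ** 2) / (2.0 * sigma ** 2)
    return fidelity + regularizer(f)

# placeholder regularizer: an anisotropic total-variation penalty
tv = lambda f: np.sum(np.abs(np.diff(f, axis=0))) + np.sum(np.abs(np.diff(f, axis=1)))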

2.2. ADMM solver

Equation (2) can be separated into simpler, unconstrained optimization problems that are solved in a distributed manner using ADMM [33]. We first introduce an auxiliary variable z to split the variable f and derive the following constrained optimization problem

$$({\hat{{\mathbf f}},\hat{{\mathbf z}}} )= \arg \mathop {\min }\limits_{{\mathbf f},{\mathbf z}} \; \left[ {\frac{1}{{2{\sigma^2}}}||{{\mathbf {Hf}} - {\mathbf g}} ||_2^2 + R({\mathbf z} )} \right]\textrm{ }s.t.\textrm{ }{\mathbf f} = {\mathbf z}.$$
The augmented Lagrangian function is written as
$$L({{\mathbf f},{\mathbf z},{\mathbf v}} )= \frac{1}{2}||{{\mathbf {Hf}} - {\mathbf g}} ||_2^2 + \lambda R({\mathbf z} )+ {{\mathbf v}^\textrm{T}}({{\mathbf f} - {\mathbf z}} )+ \frac{\rho }{2}||{{\mathbf f} - {\mathbf z}} ||_2^2,$$
where v is the Lagrangian dual vector, λ is the regularization parameter that equals $\sigma^2$, and ρ is the penalty parameter.

The ADMM solver minimizes the above function using the dual ascent method and yields the alternating iterative framework

$$\begin{array}{l} {{\mathbf f}^{({t + 1} )}} = \arg \mathop {\min }\limits_{\mathbf f} L({{\mathbf f},{{\mathbf z}^{(t )}},{{\mathbf v}^{(t )}}} ), \\ {{\mathbf z}^{({t + 1} )}} = \arg \mathop {\min }\limits_{\mathbf z} L({{{\mathbf f}^{({t + 1} )}},{\mathbf z},{{\mathbf v}^{(t )}}} ), \\ {{\mathbf v}^{({t + 1} )}} = {{\mathbf v}^{(t )}} + \rho ({{{\mathbf f}^{({t + 1} )}} - {{\mathbf z}^{({t + 1} )}}} ). \end{array}$$
It is worth noting that the ADMM solver not only splits Eq. (2) mathematically but, more importantly, decouples the degeneration. Specifically, it splits the original inverse problem into two sub-problems with explicit physical meaning, which correspond to restoring resolution and restraining undesired random noise and artifacts, respectively.

The solution of the first sub-problem in Eq. (5) is

$${f^{(t + 1)}}({\mathbf x} )= {{\mathcal F}^{ - 1}}\left\{ {\frac{{{\mathcal F}{{[{h({\mathbf x} )} ]}^ \ast } \cdot {\mathcal F}[{g({\mathbf x} )} ]+ \rho {\mathcal F}[{{z^{(t )}}({\mathbf x} )- {u^{(t )}}({\mathbf x} )} ]}}{{{\mathcal F}{{[{h({\mathbf x} )} ]}^ \ast } \cdot {\mathcal F}[{h({\mathbf x} )} ]+ \rho }}} \right\},$$
where * denotes the complex conjugate, ${\cdot} $ denotes the Hadamard (element-wise) product, ${\mathcal F}$ denotes the Fourier transform and ${{\mathcal F}^{ - 1}}$ the inverse Fourier transform, and u(x) is the scaled dual variable. See the Appendix for the derivation.
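As a minimal sketch (Python/NumPy, again assuming a centered PSF array of the image size), the f-update of Eq. (6) can be evaluated entirely in the Fourier domain:

import numpy as np
from numpy.fft import fft2, ifft2

def f_update(g, psf, z, u, rho):
    """Deblurring sub-problem, Eq. (6): a Wiener-like update computed with FFTs."""
    H = fft2(np.fft.ifftshift(psf))                       # F[h], the optical transfer function
    numerator = np.conj(H) * fft2(g) + rho * fft2(z - u)
    denominator = np.conj(H) * H + rho                    # |F[h]|^2 + rho, element-wise
    return np.real(ifft2(numerator / denominator))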

The second sub-problem has the same mathematical form as image denoising. With a denoising network, the solution of the second sub-problem in Eq. (5) can be written as

$${z^{({t + 1} )}}({\mathbf x} )= {\mathop{\rm Network}\nolimits} [{({{f^{({t + 1} )}}({\mathbf x} )+ {u^{(t )}}({\mathbf x} )} ),\sigma_{net}^2} ],$$
where ${\sigma _{net}} = \sqrt {\lambda /\rho }$ controls the strength of the denoiser. The noise at all pixels of an image is often assumed to be independent and identically distributed (i.i.d.), but in the imaging process, signal-dependent Poisson noise usually dominates the degeneration. The Anscombe VST [34] is applied to alleviate this problem [36]: the transform is inserted between the solutions of Eq. (6) and Eq. (7), and the corresponding inverse transform is inserted after Eq. (7). To deal with mixed noise, a generalized version of the VST, termed the GAT, was proposed by Murtagh et al. [37], but we refrained from using it here to avoid multifarious parameter tuning. The Anscombe VST is imposed if the noise mainly obeys a Poisson distribution; otherwise (e.g. when the Gaussian noise is strong enough), the mixed noise is described by a synthesized Gaussian distribution and only the Gaussian denoisers are used. Obtaining enough pairs of various types of degraded fluorescence images and their clean versions (ground truth) is a major headache for learning-based methods. However, with the benefit of the ADMM solver, the networks are restricted to denoising, and general images suffice for training them.
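A minimal sketch of the Anscombe transform pair assumed around the denoising step: the forward transform approximately Gaussianizes Poisson-distributed intensities before Eq. (7), and an inverse is applied to the denoiser output. The simple algebraic inverse is shown here; more accurate unbiased inverses could be substituted.

import numpy as np

def anscombe(x):
    """Forward Anscombe VST: Poisson-distributed counts -> approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(np.maximum(x, 0.0) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe VST (unbiased inverses can be used instead)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0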

Also, we rewrite the update formula of the scaled dual variable as

$${u^{({\textrm{t} + 1} )}}({\mathbf x} )= {u^{(t )}}({\mathbf x} )+ ({{f^{({t + 1} )}}({\mathbf x} )- {z^{({t + 1} )}}({\mathbf x} )} ).$$

It is generally known that the ADMM penalty factor ρ does not affect the solution but controls the convergence rate in convex optimization problems. For general denoisers, Chan et al. [38] reported that such ADMM methods can reach fixed-point convergence if the denoiser satisfies a generalized property called the 'bounded denoiser', and that suitable parameter tuning does affect robustness and the final reconstruction results. Thus the denoisers are trained to be compatible with both strong and weak noise, and a series of denoisers is applied within one reconstruction task at different iteration steps. For parameter tuning, we adopt the robust method reported by Chan et al. [38]

$$\rho _{}^{({t + 1} )} = k\rho _{}^{(t )},$$
where k is a constant and $\sigma_{net}$ is obtained from $\sigma_{net}^2 = \lambda/\rho$; λ is a tunable hyper-parameter that depends on the magnitude of the noise in the degraded image. The overall algorithm and detailed settings are given in the Appendix.

2.3. Learning image priors via convolutional neural networks

Figure 1(a) illustrates the flow of our algorithm. The deep convolutional neural networks are key components that provide both denoising efficiency and performance. For image reconstruction, the networks should not lose useful information from the input signals, so we use neither up-sampling nor down-sampling (e.g. pooling layers), avoiding additional information loss. Motivated by the excellent performance of deep residual learning [21,39], we design a network that consists of stacks of convolutional layers, each followed by batch normalization [40] and a ReLU activation function [41], as illustrated in Fig. 1(b). Some of the convolutional layers are designed as dilated convolutions [42] to enlarge the receptive field. In the input convolutional layer, 64 filters of size 3×3×1 are used to build a high-dimensional description of the primary input images. These features are then filtered by convolutional layers, each consisting of 64 filters of size 3×3×64, and transformed into output images by the output convolutional layer. The shallow network ensures a fast forward pass in application. The 40×40 training patches are cropped from the BSD dataset [43] and the MS-COCO dataset (2017 version) [44]. The batch size is 128 and the number of epochs is 50. A total of 298k image patches are used for training. We pre-process 5960 noise-free image patches in the training dataset with a Laplace filter, as reported by Li et al. [25], to help the networks restore sharp edges. We take noisy images as the network input and noise-free images as the expectation (i.e. label). The loss function of the network is the typical Euclidean distance

$$L({{{\mathbf x}_i},{{\mathbf y}_i}} )= \frac{1}{N}\sum\limits_{i = 1}^N {||{f({{{\mathbf x}_i};{\bf \theta }} )- {{\mathbf y}_i}} ||_F^2} ,$$
where x and y are the input and expectation of the network, respectively, and θ denotes the network parameters. The parameters are adjusted by the Adam optimization algorithm [45] during training.
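The original networks were implemented in Matlab with MatConvNet; purely as an illustrative sketch, the PyTorch snippet below approximates the architecture described above (a 3×3 input convolution to 64 channels, seven dilated 3×3 convolutions with batch normalization and ReLU using dilations [2 2 3 3 3 2 2], and a 3×3 output convolution) together with the Euclidean loss of Eq. (10). The activation after the first layer, the absence of a residual output, and the learning rate are our assumptions where the text is silent.

import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Sketch of the dilated-convolution denoiser of Fig. 1(b)."""
    def __init__(self, dilations=(2, 2, 3, 3, 3, 2, 2)):
        super().__init__()
        layers = [nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True)]   # input layer
        for d in dilations:                                                 # center dilated layers
            layers += [nn.Conv2d(64, 64, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(64),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(64, 1, 3, padding=1)]                          # output layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = DenoisingCNN()
criterion = nn.MSELoss()                                 # Euclidean loss of Eq. (10), up to normalization
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

noisy = torch.randn(128, 1, 40, 40)                      # stand-ins for the 40x40 training patches
clean = torch.randn(128, 1, 40, 40)
optimizer.zero_grad()
loss = criterion(model(noisy), clean)
loss.backward()
optimizer.step()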


Fig. 1. Flow chart of our algorithm and network architecture. (a) illustrates the implementation of our algorithm and (b) illustrates the architecture of our deep convolutional neural network. Conv. refers to 'convolutional', BN to batch normalization and ReLU to Rectified Linear Unit. s-dilated convolutions are used (s = [2 2 3 3 3 2 2]) in the center convolutional layers (n = 7).


The experimental platform is based on an Nvidia 1080Ti GPU and an Intel i7-7700K (4.2 GHz) CPU. We use Matlab R2017a and the MatConvNet toolbox [46] to implement the networks. A series of denoisers for different noise levels is needed in the ADMM framework, and we train 25 denoisers on noise levels ranging from 1 to 49 with an interval of 2, following an empirical setting [22,38]. Training took 2 days. Around 10 networks are used in a typical reconstruction. In our scheme, the enhancement of high-frequency information occurs in the first sub-problem and the deep-learning priors contribute to the second sub-problem, so these networks do not need to be retrained in extended reconstruction tasks.
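A small sketch of how the bank of 25 denoisers (noise levels 1, 3, ..., 49) might be indexed inside the ADMM loop: at each iteration the denoiser whose training noise level is closest to the current sigma_net is chosen. The dictionary layout is our assumption, not the paper's implementation.

noise_levels = list(range(1, 50, 2))       # 25 denoisers trained for sigma = 1, 3, ..., 49

def select_denoiser(denoiser_bank, sigma_net):
    """Pick the pre-trained denoiser whose training noise level is closest to sigma_net."""
    level = min(noise_levels, key=lambda s: abs(s - sigma_net))
    return denoiser_bank[level]            # denoiser_bank: {noise level -> callable image denoiser}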

3. Materials

The experiments on simulation datasets and total internal reflection fluorescence (TIRF) images are conducted to validate the performance and generalization of our algorithm.

3.1. Simulation setup

The medical synthetic compressed sensing (CS) phantom [47], fluorescence images from the Cell Image Library (CIL) dataset [48] (CIL-187, CIL-13899, CIL-48101, CIL-48104), the Gene Expression Nervous System Atlas (GENSAT) Project [49], and natural images (fingerprint, house, Lena, peppers) are used as ground truth in the simulation. These images are scaled and cropped to ensure sufficient clarity. The 'low-dose' degraded images are generated by convolving the original ground-truth images with the PSF (assuming that one pixel corresponds to 65 nm, the NA is 1.7 and the wavelength of the emitted light is 488 nm) and subsequently superimposing synthesized Poisson-Gaussian noise (the standard deviation of the AWGN is 5/255).
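The degradation used in the simulations can be sketched as below (Python/NumPy). The paper does not state the PSF model, so a 2D Gaussian approximation with sigma ≈ 0.21 λ/NA is assumed here, using λ = 488 nm, NA = 1.7 and 65-nm pixels; the scaling constant sets the Poisson noise level and the Gaussian standard deviation corresponds to 5 on an 8-bit scale.

import numpy as np
from numpy.fft import fft2, ifft2

def gaussian_psf(shape, wavelength_nm=488.0, na=1.7, pixel_nm=65.0):
    """Gaussian approximation of the PSF (sigma ~ 0.21*lambda/NA is an assumed rule of thumb)."""
    sigma_px = 0.21 * wavelength_nm / na / pixel_nm
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma_px ** 2))
    return psf / psf.sum()

def degrade(ground_truth, scale=1.0, gauss_std=5.0, seed=0):
    """Blur with the PSF, scale the signal (Poisson level), then add Poisson + Gaussian noise."""
    rng = np.random.default_rng(seed)
    psf = gaussian_psf(ground_truth.shape)
    blurred = np.real(ifft2(fft2(ground_truth * scale) * fft2(np.fft.ifftshift(psf))))
    return rng.poisson(np.clip(blurred, 0, None)) + rng.normal(0.0, gauss_std, ground_truth.shape)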

3.2. TIRF and TIRF-SIM setup

TIRF microscopy imaging data of various samples are also involved. The system is based on a commercial inverted fluorescence microscope (IX83, Olympus) equipped with a TIRF objective (Apo N 100X/1.7 HI Oil, Olympus) and a multiband dichroic mirror (DM, ZT405/488/561/640-phase R; Chroma). Laser light with wavelengths of 488 nm (Sapphire 488LP-200) and 561 nm (Sapphire 561LP-200, Coherent) and acousto-optic tunable filters (AOTF, AA Opto-Electronic, France) were used to combine, switch, and adjust the illumination power of the lasers. A collimating lens (focal length: 10 mm, Lightpath) was used to couple the lasers into a polarization-maintaining single-mode fiber (QPMJ-3AF3S, Oz Optics). The output lasers were then collimated by an objective lens (CFI Plan Apochromat Lambda 2X N.A. 0.10, Nikon) and diffracted by a pure phase grating that consisted of a polarizing beam splitter (PBS), a half-wave plate and an SLM (3DM-SXGA, ForthDD). The diffracted beams were then focused by another achromatic lens (AC508-250, Thorlabs) onto the intermediate pupil plane, where a carefully designed stop mask was placed to block the zero-order beam and other stray light and to permit passage of the ±1-order beam pairs only. These orders were focused through holes of 0.4-mm diameter, and the separation of these orders from the zero order and the ±2 orders was around 3 mm. To maximally modulate the illumination pattern while eliminating the switching time between different excitation polarizations, a home-built polarization rotator was placed after the stop mask. Next, the light passed through another lens (AC254-125, Thorlabs) and a tube lens (ITL200, Thorlabs) to focus onto the back focal plane of the objective lens, and the beams interfered at the image plane after passing through the objective. Emitted fluorescence collected by the same objective passed through a dichroic mirror (DM), an emission filter and another tube lens. Finally, the emitted fluorescence was split by an image splitter (W-VIEW GEMINI, Hamamatsu, Japan) before being captured by an sCMOS camera (Flash 4.0 V2, Hamamatsu, Japan). We averaged nine TIRF-SIM raw frames to obtain one TIRF image. The objective numerical aperture (NA) is 1.7 and the physical size of one pixel corresponds to 65 nm in the object plane.

3.3. Cell maintenance and preparation

HUVECs were isolated and cultured in M199 medium (Thermo Fisher Scientific, 31100035) supplemented with fibroblast growth factor, heparin, and 20% fetal bovine serum (FBS) or in ECM medium containing endothelial cell growth supplement (ECGS) and 10% FBS. The cells were infected with a retrovirus system to express Lifeact-EGFP. The transfected cells were cultured for 24 h, detached using trypsin-EDTA, seeded onto poly-L-lysine-coated coverslips (H-LAF10L glass, refractive index: 1.788, thickness: 0.15 mm, customized), and cultured in an incubator at 37°C with 5% CO2 for an additional 20-28 h before the experiments. To label tubulin in HUVECs, we followed a previously described protocol [50], in which the cells were incubated with SiR-tubulin (Cytoskeleton, Inc. CY-SC006) at a concentration of 1 µM in growth medium at 37°C for 2 h and imaged without washing. HEK293 cells were cultured in high-glucose Dulbecco's Modified Eagle's Medium (DMEM) (HyClone, SH30022.01) supplemented with 10% FBS, 50 U/ml penicillin, and 50 µg/ml streptomycin (Thermo Fisher Scientific, 15140122) and were transfected with STIM1-EGFP/mCherry-E-syt1 using Lipofectamine 2000. LSECs were isolated and plated onto 100 µg/ml collagen-coated coverslips and cultured in high-glucose DMEM supplemented with 10% FBS, 1% L-glutamine, 50 U/ml penicillin, and 50 µg/ml streptomycin in an incubator at 37°C with 5% CO2 for 6 h before imaging. Live cells were incubated with DiI (100 µg/ml, Biotium, 60010) for 15 min at 37°C, whereas fixed cells were fixed with 4% formaldehyde at room temperature for 15 min before labeling with DiI.

4. Experimental results

4.1. Comparison to existing deconvolution algorithms

We benchmarked the performance of our method against three regularization-based deconvolution methods [11,14,15] and a learning-based method, MLP deconvolution [24]. The RL-TV and ER algorithms were implemented according to the literature [11,14], and the Bregman-Hessian deconvolution and MLP deconvolution were applied using the open-source codes [24]. The experiment was conducted on 10 simulation images and on TIRF microscopy data. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) indexes [51] are used to quantify the difference between a reconstruction and the ground truth; the higher the PSNR and SSIM, the better the reconstruction quality. In the simulation experiment, we multiplied the primary images by a scaling constant [52] before superimposing Poisson-Gaussian noise to obtain a series of SNRs. The Poisson noise level is controlled by the signal intensity (i.e. the scaling constant), and the standard deviation of the Gaussian noise is set to 5. The simulation studies were run at different signal-to-noise ratios, and the quantitative results (Table 1) show more than a 0.6 dB PSNR improvement by our method. Besides, our method provides reconstructions with the fewest artifacts, as shown in the top four patches in Fig. 2(a), and can recover the very faint arcs in row 4, whereas the results of the other deconvolution methods remain unclear. Figure 2(b) shows the line plots along the yellow lines in Fig. 2(a), which indicate that our method provides the results with the highest contrast. Figure 2(c) shows that the VST assists in the reconstruction of dark areas of an image. Our algorithm runs fast with GPU support, taking 0.29 seconds to process a 256×256 image and 0.73 seconds to process a 512×512 image (Table 2).
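For reference, the quantitative metrics can be computed as in the sketch below (scikit-image, not part of the original code); the data_range argument should match the maximum gray value used for each image (e.g. 100 or 255 as stated in Fig. 2).

from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reconstruction, ground_truth, data_range=255):
    """Return (PSNR, SSIM) between a reconstruction and its ground truth; higher is better."""
    psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=data_range)
    ssim = structural_similarity(ground_truth, reconstruction, data_range=data_range)
    return psnr, ssim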


Fig. 2. Simulation experiment results. (a) Representative patches of CIL187, CIL13899, CIL48101, CS-stripe, and target images. The maximum gray values of the top two images are set to 100, and those of the bottom three images (which contain more textures) are set to 255. 'Deconv.' denotes deconvolution. (b) Line plots along the yellow lines in the bottom images of (a). The reconstructed results (marked as the black lines) show that our method provides the highest contrast improvement. (c) Verification of the use of the VST.


Table 1. Quantitative results for different algorithms under different Poisson noise levels

Table 2. Number of iterations and execution time (averaged over 20 numerical experiments; CPU: Intel i7-4702MQ, GPU: Nvidia 2080 Ti) for different algorithms

Defects such as blur, noise, artifacts, aberration, and background inherently occur in microscope images. We imaged low-intensity Lifeact-EGFP-labeled HUVECs whose ground truth was unknown. The original TIRF images and reconstructed results are shown in Fig. 3(a). In Fig. 3(b), five magenta rectangles mark the regions of interest (ROIs), labeled ROI (I∼V). ROI (I) and (II) concentrate on resolution and contrast enhancement: in our reconstruction, neighboring beads or lines become separable and potential holes begin to appear. In ROI (III), the noise in the TIRF image is severe. Regularization terms that encode the prior knowledge that images should be smooth and consistent (e.g. TV and Hessian-norm regularization) inherently damage clear edges and detailed areas, producing artifacts and over-smoothed regions, and the MLP deconvolution attempts to smooth the whole area. In contrast, the learned image priors are able to protect the elaborate edges and provide more detailed structural information. Although the dark and thin filamentous structures are almost covered by ambient background light and noise, as illustrated in ROI (IV), our method can still resolve them. In ROI (V), as the magenta arrows indicate, some points along straight filamentous structures are very weak. The fluorescence labeling should be homogeneous and continuous along microfilament structures, but such inconsistency is caused by random factors such as insufficient dyeing, inopportune noise and scatter. All the methods try to restore a consistent line, and ours provides the clearest one. We plot the values across the microfilaments in Fig. 3(a) (marked in yellow) using Gaussian fitting and observe that our reconstructed microfilaments have the minimum width and full width at half maximum (FWHM). The Fourier transforms of the images reconstructed by the considered deconvolution methods in Fig. 3(d) also show that our method recovers the most high-frequency detail.


Fig. 3. TIRF image during 4.5-ms exposure and reconstructed images of actin structures. (a) TIRF image of an HUVEC labeled with Lifeact-EGFP and the reconstruction results. Five magenta rectangles mark five regions of interest as ROI (I∼V). These ROIs of the TIRF image and the reconstructions are shown in (b). (c) Line plot analysis along the yellow line in (a) across the microfilaments. The FWHMs of the microfilament imaged by TIRF and reconstructed by MLP and our method are 212.29 nm, 203.52 nm, and 175.16 nm, respectively. (d) The Fourier transforms of the images reconstructed by the considered deconvolution methods. Scale bar, 2 µm.


4.2. Towards a universal method

End-to-end deep-learning reconstruction (restoration) methods [27–30] have shown the powerful potential of directly generating high-frequency signals from pre-trained networks. However, the use of the imaging model in our method brings higher fidelity and does not restrict it to a particular sample or imaging setting. To validate the generalization and fidelity, in this section we apply our method, without retraining the networks, to the simulation dataset and to diverse biological structures imaged with different imaging settings. The available end-to-end deep-learning reconstruction methods [29,30] are considered for comparison, using Fiji 1.5.2e [53], CSBDeep v0.3.4 [29] and SISR-net v1.0.0 [30]. The end-to-end CSBDeep deconvolution network [29], which has learned the distribution of microtubules, achieves outstanding performance on the high-contrast HUVEC image, as shown in Fig. 4(i). However, when applied to other structures, it produces excessively continuous tubule-like objects (Figs. 4(j)–(l), Figs. 5(c) and (h)) for which no corresponding structures can be found in the TIRF images or the ground truth (Figs. 4(b)–(d), Figs. 5(a) and (f)). The SISR transform TxRed network [30] generalizes well to multiple samples but is sensitive to noise and shows a weaker deblurring capacity (Figs. 4(n)–(p), Figs. 5(d) and (i)). These issues indicate that without specific training data the performance of the end-to-end methods declines. In contrast, our algorithm maintains high fidelity and satisfactory generalization across various structures and imaging settings (Figs. 4(e)–(h)).


Fig. 4. TIRF and reconstructed images of various biological structures. The 1st column: HUVEC labeled with Lifeact-EGFP, acquired with 488-nm excitation. The 2nd column: HEK293 cell labeled with STIM1-EGFP, acquired with 488-nm excitation. The 3rd column: HEK293 cell labeled with mCherry-E-syt1, acquired with 561-nm excitation. The 4th column: LSEC labeled with DiI.



Fig. 5. Simulation study to compare the generalization abilities of the considered methods. The 1st row: Cell Image Library image CIL-13899. The 2nd row: Crym image from the Gene Expression Nervous System Atlas (GENSAT) project. We blur the ground-truth images with the 488-nm PSF and add Poisson-Gaussian noise to get the corrupted images. The maximum gray values of CIL-13899 and Crym are set to 100 and 1000, respectively.


4.3. Resolution analysis

Resolution is one of the most noteworthy topics in fluorescence imaging. We imaged beads using TIRF and structured illumination microscopy (2D-SIM, about 90-nm resolution). The TIRF images are shown in Figs. 6(a) and (b), and our reconstructions (of the TIRF images) are shown in Figs. 6(c) and (d). The reconstructed beads and the Gaussian fitting curves along the yellow lines that pass through two adjacent beads are shown in Fig. 6(e), which demonstrates the ability of our algorithm to enhance resolution. For some adjacent point pairs that cannot be distinguished in the TIRF images or the Wiener reconstruction, our method still separates them. To quantitatively describe the resolution enhancement, we first found adjacent beads in the TIRF-SIM images and calculated their distances by bi-Gaussian curve fitting. We then determined whether they could be resolved in the TIRF views, the Wiener-deconvolved views and our reconstructed views. We determined the minimum distance below which two adjacent beads are no longer distinguished in accordance with the Rayleigh criterion (we followed the definition of resolution in the literature [2]: two adjacent points are considered resolved when the intensity at the center of the line across them is smaller than 73.6% of the maximum intensity of the objects). The spatial resolutions of the TIRF images, the Wiener deconvolution and our reconstruction are around 225 nm, 210 nm, and 195 nm, respectively.
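A sketch of the bead-pair analysis described above: fit a sum of two Gaussians to an intensity profile drawn across a bead pair (SciPy's curve_fit) and then test whether the intensity at the midpoint drops below 73.6% of the peak, following the resolution definition quoted from [2]. The equal-width assumption and the initial guesses are ours.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, a2, c2, s):
    """Sum of two equal-width Gaussians along a line profile."""
    return a1 * np.exp(-(x - c1) ** 2 / (2 * s ** 2)) + a2 * np.exp(-(x - c2) ** 2 / (2 * s ** 2))

def analyze_bead_pair(profile, pixel_nm=65.0):
    """Fit the bi-Gaussian model and apply the 73.6%-dip criterion for two adjacent beads."""
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.max(), 0.35 * profile.size, profile.max(), 0.65 * profile.size, 1.5]  # assumed guess
    (a1, c1, a2, c2, s), _ = curve_fit(two_gaussians, x, profile, p0=p0)
    distance_nm = abs(c2 - c1) * pixel_nm
    midpoint = two_gaussians((c1 + c2) / 2.0, a1, c1, a2, c2, s)
    resolved = midpoint < 0.736 * max(a1, a2)
    return distance_nm, resolved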


Fig. 6. Resolution analysis using fluorescent beads. (a, b) TIRF images acquired under high/low SNR during 63/4.5-ms exposure and (c, d) our reconstruction results. Six ROIs (I∼VI) marked in yellow rectangles are shown in (e). Gaussian fitting curves of intensity along the yellow line within two fluorescent beads of ROI (I, VI) are illustrated in (e). (f) spatial resolution (TIRF 225 nm, Wiener deconvolution 210 nm, ours 195 nm).


5. Discussion

Noise and blur are the two main factors damaging biological imaging quality. According to the MAP estimation, the regularization terms, which encode prior information about the expected images, contribute significantly to deconvolution. We train deep CNNs as latent, data-driven regularization and introduce them into the MAP estimation through ADMM. Our scheme splits the primary problem into deblurring and denoising sub-problems and then solves them in turn. Current deep-learning reconstruction methods [29,30] solve Eq. (1) with an end-to-end strategy: they acquire degraded images g(x) and ground-truth images f(x) and then train networks to map g(x) to f(x). The acquired data are in fact restricted to a particular case, in which the PSF, noise level, and structures used for training are fixed, and hence these methods are limited in generalization and fidelity. In contrast, with our method decoupling the degeneration, one can adjust the PSF and the denoising parameter according to the individual situation. Our reconstruction derives from the reliable optical imaging model of Eq. (1) and the MAP estimation, so each step has a logical explanation. The solution space of the original optimization problem is restricted by the likelihood term related to the numerical imaging model, which ensures good fidelity. Through the CNN denoisers, our method is robust to low-SNR inputs. Furthermore, for signal-dependent Poisson noise, the Anscombe VST is used to transform the original degeneration into a Gaussian form. As addressed in Fig. 6, our method also improves the spatial resolution near the diffraction limit (TIRF around 225 nm, our deconvolution around 195 nm, 488-nm laser).

From the point of view of the frequency domain, the effect of diffraction is modeled by an optical transfer function (OTF), and the blur corresponds to the Hadamard product of the OTF with the Fourier transform of the original fluorescence image. High-frequency information beyond the physical cut-off frequency is not transmitted by the lens and no longer exists in the imaging result, and the remaining high-frequency information within the cut-off band is weakened. Correspondingly, image details in the spatial domain vanish into haze. Noise is distributed over the entire frequency domain and especially pollutes the high-frequency bands, where the signal is faint. Reconstruction methods based on a numerical imaging model tend to restore high-frequency information by pixel-wise dividing the Fourier transform of the corrupted image by the OTF. Regularizations are introduced to ensure a unique and stable solution with noise suppression. Traditional regularization terms often introduce smoothness priors, which sacrifice high-frequency information in return for denoising and lead to an incomplete restoration of resolution. Learned priors, in contrast, avoid forcing smoothness on the reconstruction and are therefore more effective for recovering high-frequency content within the physical cut-off frequency. As a result, previously unidentifiable patterns become recognizable and the resolution, following the Rayleigh criterion, improves.
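The argument can be made concrete with a small Fourier-domain sketch: dividing by the OTF alone amplifies noise wherever |OTF| is small, whereas a positive constant in the denominator (the role played by ρ, and ultimately by the learned prior, in Eq. (6)) keeps the division stable. This is a toy illustration, not the reconstruction algorithm itself.

import numpy as np
from numpy.fft import fft2, ifft2

def inverse_filter(g, psf, eps=0.0):
    """Fourier-domain division by the OTF; eps > 0 stabilizes the bands where the OTF is tiny."""
    otf = fft2(np.fft.ifftshift(psf))
    return np.real(ifft2(fft2(g) * np.conj(otf) / (np.abs(otf) ** 2 + eps)))

# eps = 0: noise in the weak high-frequency bands is amplified without bound (may divide by ~0)
# eps > 0: noise amplification is capped, at the price of some residual blur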

Appendix

Solving the first sub-problem in Eq. (5): substituting Eq. (4) into Eq. (5) gives

$$\begin{aligned} {{\mathbf f}^{({t + 1} )}} &= \arg \mathop {\min }\limits_{\mathbf f} L({{\mathbf f},{{\mathbf z}^{(t )}},{{\mathbf v}^{(t )}}} )\\ & = \arg \mathop {\min }\limits_{\mathbf f} \frac{1}{2}||{{\mathbf {Hf}} - {\mathbf g}} ||_2^2 + \lambda R({{{\mathbf z}^{(t )}}} )+ {({{{\mathbf v}^{(t )}}} )^\textrm{T}}({{\mathbf f} - {{\mathbf z}^{(t )}}} )+ \frac{\rho }{2}||{{\mathbf f} - {{\mathbf z}^{(t )}}} ||_2^2\\ & = \arg \mathop {\min }\limits_{\mathbf f} \frac{1}{2}||{{\mathbf {Hf}} - {\mathbf g}} ||_2^2 + \frac{\rho }{2} \left\|{{\mathbf f} - \left( {{{\mathbf z}^{(t )}} - \frac{1}{\rho }{{\mathbf v}^{(t )}}} \right)} \right\|_2^2, \end{aligned}$$
then replacing v/ρ with u gives the closed-form solution:
$${{\mathbf f}^{(t+1)}} = ({{\mathbf H}^\textrm{T}}{\mathbf H} + \rho {\mathbf I})^{-1}({{\mathbf H}^\textrm{T}}{\mathbf g} + \rho ({{\mathbf z}^{(t)}} - {{\mathbf u}^{(t)}})).$$
Convert the matrix-vector form to the function form:
$${f^{(t + 1)}}({\mathbf x} )= {{\mathcal F}^{ - 1}}\left\{ {\frac{{{\mathcal F}{{[{h({\mathbf x} )} ]}^ \ast } \cdot {\mathcal F}[{g({\mathbf x} )} ]+ \rho {\mathcal F}[{{z^{(t )}}({\mathbf x} )- {u^{(t )}}({\mathbf x} )} ]}}{{{\mathcal F}{{[{h({\mathbf x} )} ]}^ \ast } \cdot {\mathcal F}[{h({\mathbf x} )} ]+ \rho }}} \right\}.$$
Solving the second sub-problem in Eq. (5):
$$\begin{aligned} {{\mathbf z}^{({t + 1} )}} &= \arg \mathop {\min }\limits_{\mathbf z} L({{{\mathbf f}^{({t + 1} )}},{\mathbf z},{{\mathbf v}^{(t )}}} ), \\ & = \arg \mathop {\min }\limits_{\mathbf z} \frac{1}{2}||{{\mathbf H}{{\mathbf f}^{({t + 1} )}} - {\mathbf g}} ||_2^2 + \lambda R({\mathbf z} )+ {({{{\mathbf v}^{(t )}}} )^\textrm{T}}({{{\mathbf f}^{({t + 1} )}} - {\mathbf z}} )+ \frac{\rho }{2}||{{{\mathbf f}^{({t + 1} )}} - {\mathbf z}} ||_2^2\\ & = \arg \mathop {\min }\limits_{\mathbf z} \lambda R({\mathbf z} )+ \frac{\rho }{2}\left\|{\left( {{{\mathbf f}^{({t + 1} )}} + \frac{1}{\rho }{{\mathbf v}^{(t )}}} \right) - {\mathbf z}} \right\|_2^2\\ & = {\mathop{\rm Network}\nolimits} [{({{f^{({t + 1} )}} + {u^{(t )}}} ),\textrm{ }\sigma_{net}^2} ], \end{aligned}$$
where $\sigma_{net}^2 = \lambda/\rho$.

The overall algorithm is given as follows:

[Algorithm box (rendered as an image in the original article): the decoupled ADMM deconvolution procedure.]
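The loop can be summarized by the sketch below (Python/NumPy), which reuses f_update, anscombe/inverse_anscombe and select_denoiser from the sketches in Section 2. The initialization, the stopping rule (a fixed number of iterations) and the default values of ρ and k are assumptions where the text does not fix them.

import numpy as np

def admm_deconvolve(g, psf, denoiser_bank, lam, rho0=1.0, k=1.2, n_iter=30, use_vst=True):
    """Decoupled reconstruction: alternate the FFT deblurring step (Eq. 6), a CNN denoising
    step (Eq. 7) and the dual update (Eq. 8); the penalty rho grows according to Eq. (9)."""
    f = g.astype(float).copy()
    z = f.copy()
    u = np.zeros_like(f)
    rho = rho0
    for _ in range(n_iter):
        f = f_update(g, psf, z, u, rho)                  # deblurring sub-problem, Eq. (6)
        sigma_net = np.sqrt(lam / rho)                   # denoiser strength for Eq. (7)
        v = f + u
        if use_vst:                                      # Gaussianize Poisson noise before denoising
            z = inverse_anscombe(select_denoiser(denoiser_bank, sigma_net)(anscombe(v)))
        else:
            z = select_denoiser(denoiser_bank, sigma_net)(v)
        u = u + (f - z)                                  # dual update, Eq. (8)
        rho = k * rho                                    # penalty schedule, Eq. (9)
    return f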

Funding

National Natural Science Foundation of China (31327901, 31428004, 31521062, 31570839, 61375018, 61672253).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. L. Rayleigh, “XV. On the theory of optical images, with special reference to the microscope,” The London, Edinburgh, and Dublin Philosophical Mag. J. Sci. 42(255), 167–195 (1896). [CrossRef]  

2. E. H. Stelzer, “Contrast, resolution, pixelation, dynamic range and signal-to-noise ratio: fundamental limits to resolution in fluorescence light microscopy,” J. Microsc. 189(1), 15–24 (1998). [CrossRef]  

3. J. S. Verdaasdonk, A. D. Stephens, J. Haase, and K. Bloom, “Bending the rules: widefield microscopy and the Abbe limit of resolution,” J. Cell. Physiol. 229(2), 132–138 (2014). [CrossRef]  

4. B. Herman, Fluorescence microscopy (Bios Scientific Publishers, 1998).

5. J.-B. Sibarita, “Deconvolution microscopy,” in Microscopy Techniques (Springer, 2005), pp. 201–243.

6. W. Wallace, L. H. Schaefer, and J. R. Swedlow, “A workingperson's guide to deconvolution in light microscopy,” BioTechniques 31(5), 1076–1097 (2001). [CrossRef]  

7. M. B. Cannell, A. McMorland, and C. Soeller, “Image enhancement by deconvolution,” in Handbook of biological confocal microscopy (Springer, 2006), pp. 488–500.

8. W. H. Richardson, “Bayesian-Based Iterative Method of Image Restoration*,” J. Opt. Soc. Am. 62(1), 55–59 (1972). [CrossRef]  

9. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745 (1974). [CrossRef]  

10. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60(1–4), 259–268 (1992).

11. N. Dey, L. Blanc-Feraud, C. Zimmer, P. Roux, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia, “Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microsc. Res. Tech. 69(4), 260–266 (2006). [CrossRef]  

12. H. Liu and S. Tan, “Image Regularizations Based on the Sparsity of Corner Points,” IEEE Trans. on Image Process. 28(1), 72–87 (2019). [CrossRef]  

13. M. Lysaker, A. Lundervold, and T. Xue-Cheng, “Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time,” IEEE Trans. on Image Process. 12(12), 1579–1590 (2003). [CrossRef]  

14. M. Arigovindan, J. C. Fung, D. Elnatan, V. Mennella, Y.-H. M. Chan, M. Pollard, E. Branlund, J. W. Sedat, and D. A. Agard, “High-resolution restoration of 3D structures from widefield images with extreme low signal-to-noise-ratio,” Proc. Natl. Acad. Sci. 110(43), 17344–17349 (2013). [CrossRef]  

15. S. Lefkimmiatis, A. Bourquard, and M. Unser, “Hessian-based norm regularization for image restoration with biomedical applications,” IEEE Trans. on Image Process. 21(3), 983–995 (2012). [CrossRef]  

16. T. Sun, N. Sun, J. Wang, and S. Tan, “Iterative CBCT reconstruction using Hessian penalty,” Phys. Med. Biol. 60(5), 1965–1987 (2015). [CrossRef]  

17. X. Huang, J. Fan, L. Li, H. Liu, R. Wu, Y. Wu, L. Wei, H. Mao, A. Lal, and P. Xi, “Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy,” Nat. Biotechnol. 36(5), 451–459 (2018). [CrossRef]  

18. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

19. L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” in Advances in Neural Information Processing Systems (2014), pp. 1790–1798.

20. C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European conference on computer vision (Springer, 2014), pp. 184–199.

21. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

22. K. Zhang, W. M. Zuo, S. H. Gu, and L. Zhang, “Learning Deep CNN Denoiser Prior for Image Restoration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2808–2817.

23. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2017), pp. 4681–4690.

24. C. J. Schuler, H. C. Burger, S. Harmeling, and B. Schölkopf, “A machine learning approach for non-blind image deconvolution,” in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on (IEEE, 2013), pp. 1067–1074.

25. Y. Li, F. Xu, F. Zhang, P. Xu, M. Zhang, M. Fan, L. Li, X. Gao, and R. Han, “DLBI: deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy,” Bioinformatics 34(13), i284–i294 (2018). [CrossRef]  

26. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]  

27. W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018). [CrossRef]  

28. Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. B. Zhang, H. D. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

29. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

30. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

31. S. C. Zhu and D. Mumford, “Prior learning and Gibbs reaction-diffusion,” IEEE Trans. Pattern Anal. Machine Intell. 19(11), 1236–1250 (1997). [CrossRef]  

32. D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in 2011 International Conference on Computer Vision (IEEE, 2011), pp. 479–486.

33. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” FNT in Machine Learning 3(1), 1–122 (2010). [CrossRef]  

34. M. F. Freeman and J. W. Tukey, “Transformations related to the angular and the square root,” Ann. Math. Statist. 21(4), 607–611 (1950). [CrossRef]  

35. D. Sage, L. Donati, F. Soulez, D. Fortun, G. Schmit, A. Seitz, R. Guiet, C. Vonesch, and M. Unser, “DeconvolutionLab2: An open-source software for deconvolution microscopy,” Methods 115, 28–41 (2017). [CrossRef]  

36. L. Azzari and A. Foi, “Variance stabilization in Poisson image deblurring,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) (2017), pp. 728–731.

37. F. Murtagh, J.-L. Starck, and A. Bijaoui, “Image restoration with noise suppression using a multiresolution support,” Astron. Astrophys. Suppl. Ser. 112 (1995).

38. S. H. Chan, X. R. Wang, and O. A. Elgendy, “Plug-and-Play ADMM for Image Restoration: Fixed-Point Convergence and Applications,” IEEE Trans. Comput. Imaging 3(1), 84–98 (2017). [CrossRef]  

39. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 770–778.

40. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

41. V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th international conference on machine learning (ICML-10) (2010), pp. 807–814.

42. F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122 (2015).

43. D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on (IEEE, 2001), pp. 416–423.

44. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European conference on computer vision (Springer, 2014), pp. 740–755.

45. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

46. A. Vedaldi and K. Lenc, “Matconvnet: Convolutional neural networks for matlab,” in Proceedings of the 23rd ACM international conference on Multimedia (ACM, 2015), pp. 689–692.

47. D. S. Smith, “Compressed Sensing MRI Phantom,” (2013), https://ww2.mathworks.cn/matlabcentral/fileexchange/29364-compressed-sensing-mri-phantom-v1-1?s_tid=srchtitle.

48. “Cell Image Library,” http://www.cellimagelibrary.org/home.

49. “The Gene Expression Nervous System Atlas (GENSAT) Project, NINDS Contracts N01NS02331 & HHSN271200723701C to The Rockefeller University (New York, NY),” http://www.gensat.org/imagenavigator.jsp?imageID=29109.

50. G. Lukinavičius, L. Reymond, E. D’este, A. Masharina, F. Göttfert, H. Ta, A. Güther, M. Fournier, S. Rizzo, and H. Waldmann, “Fluorogenic probes for live-cell imaging of the cytoskeleton,” Nat. Methods 11(7), 731–733 (2014). [CrossRef]  

51. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

52. J. Li, F. Luisier, and T. Blu, “PURE-LET image deconvolution,” IEEE Trans. on Image Process. 27(1), 92–105 (2018). [CrossRef]  

53. W. S. Rasband, “ImageJ,” (Bethesda, MD, 1997).
