
Deep learning approach for Fourier ptychography microscopy

Open Access

Abstract

Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e., a large space-bandwidth product (SBP), by taking a series of low resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image domain loss and a weighted Fourier domain loss, which leads to improved reconstruction of the high frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, data-driven image reconstruction techniques based on machine learning, in particular deep learning (DL) [1], have gained tremendous success in solving complex inverse problems [2], and can often provide results surpassing those of state-of-the-art model-based techniques. Traditionally, solving an inverse problem involves first explicitly formulating the imaging model and incorporating domain and prior knowledge (e.g. via regularization), and then finding a solution (e.g. through an optimization procedure) [3]. Unlike model-based approaches, the ‘end-to-end’ DL framework does not explicitly utilize any models or priors, and instead relies on large datasets to ‘learn’ the underlying inverse problem. The outcome of this DL approach consists of two important components. First, the result of the training stage is a CNN that approximates a plausible underlying mapping function relating the measurement to the solution. Second, the trained CNN can be used to make ‘predictions’ when presented with new measurements that were not used in the training stage. This second part comes with major practical benefits in computational cost and speed for typical image reconstruction problems, since the prediction process simply involves the feedforward computation of the CNN, which typically takes no more than a few seconds on an ordinary GPU. In contrast, most modern model-based techniques rely on iterative algorithms [4–6] that require much higher computational cost and longer running times; the same lengthy process must be repeated for each new measurement.

Here, we distinguish two classes of imaging problems: those involving independent datasets from often static objects, and those dealing with sequential, temporally correlated datasets from dynamic objects. For independent problems, CNNs have been demonstrated to provide superior performance on many challenging imaging problems, such as image super-resolution [7,8], denoising [9,10], segmentation [11], deconvolution [12,13], compressive imaging [14,15], tomography [16,17], digital labeling [18], holography [19,20], phase recovery [21], and imaging through diffusers [22,23]. What is common to this class of problems is that independently prepared input-output pairs (i.e. measurement and solution), obtained by repeating the same imaging process, are presented to the CNN in the training stage to optimize the network’s parameters. In sequential problems, the temporal correlation of a dynamic process contains additional information and is often recorded in video datasets. Various CNN frameworks have been proposed to learn this additional temporal information. For example, spatial super-resolution has been demonstrated by training a CNN on both the spatial and temporal dimensions of videos [24]. Temporal super-resolution on recurring processes is achieved by learning the underlying temporal statistics [25]. The motion information of dynamic objects is learned with an optical-flow based CNN [26]. Motion artifacts can be removed by jointly learning the blurring point-spread-function (PSF) and the deconvolution operation [27,28]. In all these cases, CNNs are designed to process a video sequence in order to extract the temporal information. The downside is that the CNN architectures inevitably become more complicated and require more computational resources compared to those used for the independent problems. Fundamentally, the complication stems from the fact that any single frame from the imaging techniques used does not contain sufficient temporal statistical information.

In this work, we develop a CNN architecture to reconstruct video sequences of dynamic live cells captured with a computational microscopy technique based on Fourier ptychographic microscopy (FPM) [29, 30]. The unique feature of the FPM is its ability to quantitatively reconstruct phase information with both wide field-of-view (FOV) and high spatial resolution, i.e., a large space-bandwidth product (SBP). This is not possible with traditional techniques, which must trade spatial or temporal resolution for FOV. For live-cell imaging applications, this allows one to simultaneously image a large cell population (e.g. more than 3400 cells in a single frame in [29]). Cells of the same type undergo similar morphological changes during different cell states, which then repeat over each cell cycle. If one records only a few cells at a time using conventional microscopy techniques [31], capturing the full dynamics would require a large sequence of measurements covering the entire cell cycle (typically ranging from a few hours to days). Our proposed technique is based on the observation that, in any live cell experiment without precise cell synchronization [32], a large cell population at any instant of time contains samples covering all cell states. In other words, it is possible to gather sufficient temporal statistical information about a single cell by imaging a large spatial ensemble simultaneously. Based on this idea, we propose a CNN that is trained using only a single frame from the FPM. We then show that this trained CNN is able to reconstruct large-SBP phase videos with high fidelity using datasets taken in a time-series live cell experiment.

Existing FPM techniques suffer from long acquisition times because the FPM algorithms require at least 65% overlap in the Fourier coverage of the images captured from neighboring LEDs [30]. Several illumination multiplexing techniques have been demonstrated to improve the acquisition speed [29,33]. However, the amount of data reduction is still limited by the Fourier overlap requirement. Here, we show that, similar to prior work on CNNs for FPM of static objects [34], our CNN can be sufficiently trained using far fewer images than needed by the model-based FPM algorithms for dynamic live-cell samples.

Distinct from computer vision applications, a particular challenge in applying DL to biomedical microscopy is the difficulty of gathering the ground truth data needed for training the network. Various strategies have been proposed, including synthetic data from simulations built with physical imaging models [35–37], semi-synthetic data that uses experimental data to guide simulations [36], experimental data captured with a different modality [8,19], and experimental data captured with the same modality [36]. Here, we propose to use the traditional FPM reconstructed phase images as the ground truth for training. Since our technique requires only a single frame for training, this does not add much overhead in data acquisition or computation. When experimental data are used as the ground truth, they are inevitably contaminated with noise. In FPM, the quality of the phase reconstruction is limited by spatially variant aberrations, system misalignment, and intensity-dependent noise [38]. Robust learning from noisy labeled data has been demonstrated for image classification and segmentation [39,40]. In essence, the CNN captures the invariants while filtering out the random fluctuations [41, 42]. Here, we show that our proposed CNN is also robust to phase noise in the ‘ground truth’ data when solving the inverse problem of FPM.

We build a CNN based on the conditional generative adversarial network (cGAN) framework, consisting of two sub-networks, the generator and the discriminator. The generator network uses the UNet architecture [11] with densely connected convolutional blocks (DenseNet) [43] to output a high-resolution phase image. The discriminator network distinguishes whether the output is real or fake. We compare five variants of the network, which differ in the input measurements obtained with different illumination patterns corresponding to different Fourier coverages. As in traditional FPM, the darkfield measurements lead to spatial resolution improvement in the reconstruction. To further refine the network, we introduce a mixed loss function that takes a weighted Fourier domain loss, in addition to the standard image domain loss for the generator and the adversarial loss for the discriminator. We show that this novel weighted Fourier domain loss leads to improved recovery of high-frequency information. We demonstrate our technique using live Hela cell FPM video data from [29]. We quantitatively assess the performance of our CNN over time against the traditional FPM results, and find that the ‘generalization’ degradation of the reconstructed phase is small over the entire time course (>4 hours).

The training is performed on a PC (Intel Core i7, 32 GB RAM, NVIDIA GeForce Titan Xp) for ∼16 hours using the Keras/Tensorflow framework. Once the network is trained, reconstructing a 12800×10800 pixel phase image requires only ∼25 seconds, which is approximately 50× faster than the model-based FPM algorithm [29].

Our technique demonstrates a promising deep learning approach to continuously image large live-cell populations over extended time and gather spatial and temporal information with sub-cellular resolution. Compared to existing FPM [29, 30], this CNN approach significantly improves the overall throughput by reducing both the acquisition and computation times, while requiring less data. The CNN reconstructed phase image provides high spatial resolution, wide FOV, and low noise-induced artifacts. We also show the flexibility of reconstructing other cell types using transfer learning, which makes our technique appealing for broad applications.

2. Method

2.1. Conditional generative adversarial network (cGAN)

Generally speaking, the proposed CNN-based FPM reconstruction algorithm takes a set of low-resolution intensity images Iα as the network input and outputs a single high-resolution phase image ϕG. The intensity images Iα are captured by illuminating the sample from α different illumination angles (LEDs) [Fig. 1(a)], of which αBF are brightfield (BF) and αDF are darkfield (DF) (Fig. 2). In the training stage, the ground truth phase image fed into the CNN is the high-resolution phase ϕFPM reconstructed by the FPM algorithm in [29] [Fig. 1(b)]. A key feature of the FPM is that it reconstructs a high-resolution phase image from a set of low-resolution intensity images; the resolution enhancement factor is r in each dimension. Obtaining the ground truth requires capturing the full FPM dataset of 173 images [29]. Since our DL scheme only requires training on the first ‘FPM frame’, each subsequent frame only requires α (< 173) images, which reduces the acquisition time, especially in a time-series experiment. We denote the set of α low-resolution images Iα as a tensor of dimension W × H × α and the corresponding ground truth ϕFPM as a tensor of dimension rW × rH × 1 [Fig. 1(b)].
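For concreteness, the tensor dimensions can be illustrated with a short sketch. This is our own illustration, not code from the paper; the specific values W = H = 64, α = 29 (9 BF + 20 DF), and r = 5 are borrowed from the patch size and illumination pattern described later in Subsection 2.3.

```python
import numpy as np

# Assumed illustrative dimensions (see Subsection 2.3):
# 64x64 low-resolution patches, alpha = 29 LED images, resolution gain r = 5.
W, H, alpha, r = 64, 64, 29, 5

# Network input: a stack of alpha low-resolution intensity patches.
I_alpha = np.zeros((W, H, alpha), dtype=np.float32)

# Training target: one high-resolution FPM phase patch (the 'ground truth').
phi_FPM = np.zeros((r * W, r * H, 1), dtype=np.float32)

print(I_alpha.shape, phi_FPM.shape)  # (64, 64, 29) (320, 320, 1)
```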


Fig. 1 The workflow of the proposed deep learning based Fourier ptychography video reconstruction. (A) The intensity data are captured by illuminating the sample from different angles with an LED array. (B) Training the CNN to reconstruct high-resolution phase images. The inputs to the CNN are low-resolution intensity images; the training target is the ground truth phase image reconstructed using the traditional FPM algorithm in [29]. The network is trained by optimizing the network’s parameters to minimize a loss function calculated from the network’s predicted output and the ground truth. (C) The network is fully trained using the first dataset at 0 min, and can then be used to predict phase videos of dynamic cell samples frame by frame.


Fig. 2 The proposed conditional generative adversarial network (cGAN) for FPM video reconstruction. The generator (top) and the discriminator (bottom) are constructed with the ConvBlock BN-ReLU-Conv(1 × 1)-BN-ReLU-Conv(3 × 3) and the ConvBlock Conv-BN-LeakyReLU, respectively. The generator output is the high-resolution phase. The discriminator tries to distinguish whether that output phase is fake or real. The generator uses the UNet architecture. For the discriminator, the generator-predicted phase or the ground truth phase is concatenated with the up-sampled intensity data as a conditional input to the discriminator network. The following color schemes are used: the two blocks oe-26-20-26470-i001.jpg and oe-26-20-26470-i002.jpg describe the dense concatenation inside the dense blocks in the down-sampling and up-sampling paths, respectively. oe-26-20-26470-i003.jpg and oe-26-20-26470-i004.jpg are transition layers interweaving with the dense blocks in the generator. oe-26-20-26470-i005.jpg denotes the convolutional layer, oe-26-20-26470-i006.jpg denotes batch-normalization with a nonlinear ReLU layer in the generator model, and oe-26-20-26470-i007.jpg batch-normalization with the leaky ReLU in the discriminator. In the last three layers of the discriminator, oe-26-20-26470-i008.jpg denotes fully-connected layers for high-level feature reasoning. oe-26-20-26470-i009.jpg is used at the end for binary classification. k#n#s# (# stands for an integer) denotes the filter size, number of channels, and stride of the convolution layer, respectively.


The proposed CNN that performs FPM video reconstruction [Fig. 1(c)] is based on the conditional generative adversarial network (cGAN) framework. It consists of two sub-networks, the generator G and the discriminator D (Fig. 2). The goal of the generator G is to be trained to predict a high-resolution phase ϕG = G(Iα) from the input low-resolution image set Iα. To simplify the notation, we drop the subscript α with the understanding that I always contains α low-resolution intensity images. The generator network G is defined by a set of parameters θG (weights and biases), which are optimized through training. The optimal θG is learned by minimizing a loss function l over N input-output training pairs:

$$\hat{\theta}_G = \operatorname*{argmin}_{\theta_G} \frac{1}{N} \sum_{n=1}^{N} l\!\left(G_{\theta_G}(I_n), \phi_n\right).$$
We emphasize that the choice of the loss function l significantly affects the quality of the training. We propose a mixed loss function that takes the weighted sum of multiple elementary loss functions, which will be detailed in Subsection 2.2.
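The optimization above can be read as ordinary empirical-risk minimization over the N training patches. The following is a minimal sketch of this step, assuming a Keras generator model `G` and a composite loss callable `loss_fn` implementing the mixed loss of Subsection 2.2; the names, batch handling, and learning rate are our own choices for illustration, not the authors’ released code.

```python
import tensorflow as tf

def train_generator(G, loss_fn, dataset, epochs=1, lr=1e-5):
    """Minimize the empirical loss over (I_n, phi_n) training pairs.

    G        : tf.keras.Model mapping a stack of low-res intensities to phase.
    loss_fn  : callable(phi_true, phi_pred) implementing the mixed loss l.
    dataset  : tf.data.Dataset yielding (I_n, phi_n) batches.
    """
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(epochs):
        for I_n, phi_n in dataset:
            with tf.GradientTape() as tape:
                phi_pred = G(I_n, training=True)
                loss = loss_fn(phi_n, phi_pred)
            grads = tape.gradient(loss, G.trainable_variables)
            opt.apply_gradients(zip(grads, G.trainable_variables))
```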

The generator G adopts the general “encoder-decoder” architecture used in UNet [11] to facilitate efficient learning of pixel-to-pixel information. UNet has been shown to increase the network’s performance by adapting to the high-complexity information in image datasets [44]. To enhance the efficiency of the training process, batch-normalization (BN) is used to offset the internal covariate shift [45]. In addition, dropout regularization [46] is employed to constrain the network’s adaptation to the data during training, which avoids overfitting and improves the network’s accuracy. A known problem of training a CNN is that it can saturate when the network becomes too deep [47]. To mitigate this problem, the dense block (DB) proposed in the densely connected network is used [43]. A DB connects each layer to its subsequent layers in a feed-forward fashion. The inputs to each layer are the feature-maps of all preceding layers; the current layer’s own feature-maps are inputs to all subsequent layers (see Fig. 2). The DB has several advantages, including (a) mitigation of the vanishing-gradient problem during training; (b) reduction of the total number of parameters; (c) enhancement of feature propagation and reuse. A typical L-layer DB is defined as follows:

$$x_L = H_L\!\left([x_0, x_1, \ldots, x_{L-2}, x_{L-1}]\right),$$
where [·] denotes the concatenation operation that connects the feature maps of all L layers in the block. The output at the end of each L-layer DB HL(·) has R0 + R × (L − 1) feature maps, where R0 is the number of feature maps in the first layer and the hyper-parameter R is referred to as the growth rate. Within each layer inside the DB (ConvBlock), a series of operations is performed, including batch-normalization (BN), nonlinear activation using the ReLU or LeakyReLU function [48], and convolution with filters of kernel size k × k [Conv(k × k)].

Our generator G contains a total of 11 DBs. The number of ConvBlock layers in each DB is L = [4, 5, 7, 10, 12, 10, 7, 5, 4, 4, 4] (marked as L# in Fig. 2, with # denoting the number of ConvBlock layers in each DB). In each ConvBlock layer, a stack of BN-ReLU-Conv(1 × 1)-BN-ReLU-Conv(3 × 3) operations is performed with R = 12 and R0 = 46.

Between two consecutive DBs, a transition block is used to facilitate the desired down-sampling or up-sampling operation. The down-sampling transition block contains Conv(1 × 1)-BN-ReLU-Conv(3 × 3, stride=2); the up-sampling transition block contains Conv(1 × 1)-BN-ReLU-Deconv(3 × 3, stride=2), where Deconv denotes the deconvolution (transpose convolution) layer [49]. The features of the input layer are extracted by an initial Conv(3 × 3)-BN-ReLU block before feeding them to the first DB. A Conv(1 × 1) is used to perform the final regression to generate the phase map ϕG.
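The building blocks described above can be sketched in Keras as follows. This is our own reading of the text, not the authors’ released code: the 4×R bottleneck width of the Conv(1 × 1) inside the ConvBlock and the transition-block filter counts are assumptions (following the DenseNet convention), and only the growth rate R = 12 and the layer ordering are taken from the text.

```python
from tensorflow.keras import layers

R = 12  # growth rate from the text

def conv_block(x):
    # BN-ReLU-Conv(1x1)-BN-ReLU-Conv(3x3), as used inside each dense block.
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(4 * R, 1, padding='same')(y)  # 4*R bottleneck width is an assumption
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(R, 3, padding='same')(y)
    return y

def dense_block(x, L):
    # Each layer sees the concatenation of all preceding feature maps.
    for _ in range(L):
        y = conv_block(x)
        x = layers.Concatenate()([x, y])
    return x

def transition_down(x, filters):
    # Conv(1x1)-BN-ReLU-Conv(3x3, stride 2)
    x = layers.Conv2D(filters, 1, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    return layers.Conv2D(filters, 3, strides=2, padding='same')(x)

def transition_up(x, filters):
    # Conv(1x1)-BN-ReLU-Deconv(3x3, stride 2)
    x = layers.Conv2D(filters, 1, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    return layers.Conv2DTranspose(filters, 3, strides=2, padding='same')(x)
```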

The discriminator network D aims to distinguish whether the output from G is real or fake. Following [50] and [51], we define a conditional generative adversarial network (cGAN) that solves the following adversarial min-max problem:

$$\min_{\theta_G} \max_{\theta_D} \; \mathbb{E}_{I,\phi}\!\left[\log D_{\theta_D}(I,\phi)\right] + \mathbb{E}_{I}\!\left[\log\!\left(1 - D_{\theta_D}\!\left(I, G(I)\right)\right)\right].$$

The general idea behind this network is to train a generator G to ‘fool’ the discriminator D. Here, D is trained to distinguish whether the high-resolution phase image predicted by G represents a real phase image. GANs are in general hard to train and may fail when the generator collapses to a parameter setting where it always produces the same output. A successful strategy to avoid this failure is to allow the discriminator to perform minibatch discrimination [51, 52]. In this case, the discriminator distinguishes whether the reconstructed phase image is real or fake by evaluating multiple sub-regions of the G-predicted image instead of the whole image.
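A minimal sketch of one conditional adversarial update is given below. It is our own illustration under several assumptions: the discriminator `D` ends in a sigmoid so its output can be treated as a probability, the intensity stack `I_up` has already been up-sampled to the phase grid so it can be concatenated as the conditional input, only the adversarial term of the generator loss is shown (in practice it is one component of the mixed loss in Subsection 2.2), and the sub-region (minibatch) discrimination is not made explicit.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def adversarial_step(G, D, I, I_up, phi_true, g_opt, d_opt):
    """One cGAN update. I is the low-res input stack; I_up is the same stack
    up-sampled to the phase grid, used as the conditional input to D."""
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        phi_fake = G(I, training=True)
        d_real = D(tf.concat([I_up, phi_true], axis=-1), training=True)
        d_fake = D(tf.concat([I_up, phi_fake], axis=-1), training=True)
        # D is pushed toward 1 on real phase and 0 on generated phase.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # The generator tries to make D output 1 on its prediction.
        g_adv = bce(tf.ones_like(d_fake), d_fake)
    d_grads = dt.gradient(d_loss, D.trainable_variables)
    g_grads = gt.gradient(g_adv, G.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, D.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, G.trainable_variables))
```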

2.2. Loss function

A motivation for using the discriminator network D is that the commonly used pixel-wise loss functions, such as the mean absolute error (MAE), mean square error (MSE), and structural similarity index (SSIM), may not be the most appropriate figures of merit, in particular when assessing a CNN’s ability to preserve high-frequency content in reconstructed images. Minimizing these pixel-wise loss functions can lead to solutions that ignore high-frequency details and favor smooth results with lower perceptual quality [53]. With the cGAN approach, the generator G can learn to create a solution that resembles realistic high-resolution images with high-frequency details.

For this purpose, we define the ‘perceptual loss function’ l as a weighted sum of multiple loss functions. This ensures that the model can learn the desired features containing both low-frequency and high-frequency information in the phase images. Specifically, our loss function consists of four components, including the pixel-wise spatial domain mean-absolute error (MAE) loss lMAE, the pixel-wise Fourier domain mean-absolute error (FMAE) loss lFMAE, the generator’s adversarial loss lG, and the weight regularization lθG, in the following form:

$$l = \lambda_1\left(\beta_1 l_{\mathrm{MAE}} + \beta_2 l_{\mathrm{FMAE}}\right) + \lambda_2 l_G + \lambda_3 l_{\theta_G},$$
where
$$l_{\mathrm{MAE}} = \frac{1}{r^2 W H}\left\| \phi - G_{\theta_G}(I) \right\|,$$
$$l_{\mathrm{FMAE}} = \frac{1}{r^2 W H}\left\| \mathcal{F}(\phi) - \mathcal{F}\!\left(G_{\theta_G}(I)\right) \right\|,$$
$$l_G = -\log D_{\theta_D}\!\left(I, G(I)\right),$$
$$l_{\theta_G} = \left\| \theta_G \right\|,$$
where $\mathcal{F}$ denotes the 2D Fourier transform and ‖ · ‖ is the L1-norm. (λ1, β1, β2, λ2, λ3) are hyper-parameters that control the relative weights of the loss components. We found that the Fourier loss function is sensitive to pixel-wise corruption during the early stage of the training process. As a result, we use it only to refine the outputs by enforcing similarity in the frequency domain [54] after initial training is done with the other three loss components (details in Subsection 2.4).
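The mixed loss can be sketched as follows. This is our own reading of the equations above, not the released code: the per-batch `reduce_mean` stands in for the 1/(r²WH) normalization, the discriminator output `d_fake` is assumed to be a sigmoid probability, and the default weights correspond to the values listed in Subsection 2.4.

```python
import tensorflow as tf

def mixed_loss(phi_true, phi_pred, d_fake, G, lam1=1e2, b1=0.95, b2=0.05,
               lam2=1.0, lam3=1e-5):
    """Weighted sum of image-domain MAE, Fourier-domain MAE, the generator's
    adversarial term, and an L1 weight penalty (a sketch of the loss above)."""
    # Image-domain MAE; reduce_mean approximates the 1/(r^2 W H) factor.
    l_mae = tf.reduce_mean(tf.abs(phi_true - phi_pred))
    # Fourier-domain MAE between the 2D FFTs of the phase maps.
    F_true = tf.signal.fft2d(tf.cast(phi_true[..., 0], tf.complex64))
    F_pred = tf.signal.fft2d(tf.cast(phi_pred[..., 0], tf.complex64))
    l_fmae = tf.reduce_mean(tf.abs(F_true - F_pred))
    # Adversarial term: d_fake is D's (sigmoid) output on the generated phase.
    l_g = -tf.reduce_mean(tf.math.log(d_fake + 1e-8))
    # L1 regularization on the generator parameters.
    l_theta = tf.add_n([tf.reduce_sum(tf.abs(w)) for w in G.trainable_variables])
    return lam1 * (b1 * l_mae + b2 * l_fmae) + lam2 * l_g + lam3 * l_theta
```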

2.3. Data preparation

To test our CNN technique, we use FPM video data from [29]. The time-series data were taken on Hela cells at 2 min intervals over the course of ∼4 hours, spanning several cell cycles. Each FPM dataset contains 173 low-resolution intensity images, of which 37 are brightfield (BF) and 136 are darkfield (DF). Each intensity image is 2560×2160 pixels in 16-bit grayscale.

To generate the data for training, FPM phase reconstructions from [29] are used as the ground truth. Each FPM reconstructed phase image contains 12800×10800 pixels, which is 5× larger than the raw intensity image in each dimension.

To prepare the dataset for training, we use only the first FPM frame in the time-lapse as the training set. Specifically, to prepare the ground truth data, the full-FOV phase image is first divided into 4×4 sub-regions, each containing 3440×2760 pixels. To avoid edge artifacts during training and reconstruction, neighboring sub-regions are chosen to have 320-pixel and 80-pixel overlaps along the horizontal and vertical directions, respectively. The corresponding intensity image in each sub-region is 688×552 pixels. The inputs to the CNN are BF and DF image patches, each 64×64 pixels, cropped from random locations within each sub-region image. Each training input is formed by stacking the BF and DF image patches into a 64×64×α tensor. To facilitate fast computation, the models are designed with a down-sampling path and an up-sampling path. Each input image patch is up-sampled to 80×80 using bilinear interpolation. The spatial dimensions of the layers in the CNN are 80, 40, 20, 10, 20, 40, 80, 160, and 320, respectively. The corresponding ground truth patch contains 320×320 pixels. Each raw BF image was preprocessed by the background subtraction procedure in [29]; each raw DF image was preprocessed to remove the dark current noise [29]. The same preprocessing steps are applied for training, validation, and testing.
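A minimal sketch of how one training pair could be prepared from a pre-processed sub-region is shown below; it is our own illustration, with function and variable names chosen for clarity rather than taken from the paper's code.

```python
import numpy as np
import tensorflow as tf

def make_training_pair(intensity_stack, phase_gt, rng, patch=64, up=80, r=5):
    """Crop a random 64x64xalpha intensity patch and the matching 320x320
    phase patch, then bilinearly up-sample the intensities to 80x80.

    intensity_stack : (H, W, alpha) pre-processed BF+DF images of one sub-region
    phase_gt        : (r*H, r*W) FPM phase of the same sub-region
    rng             : np.random.Generator for the random crop location
    """
    H, W, _ = intensity_stack.shape
    y = rng.integers(0, H - patch + 1)
    x = rng.integers(0, W - patch + 1)
    I_patch = intensity_stack[y:y + patch, x:x + patch, :]
    phi_patch = phase_gt[r * y:r * (y + patch), r * x:r * (x + patch)]
    # Bilinear up-sampling of the input stack to 80x80, as described above.
    I_up = tf.image.resize(I_patch, [up, up], method='bilinear').numpy()
    return I_up.astype(np.float32), phi_patch[..., None].astype(np.float32)

# Example usage: rng = np.random.default_rng(0)
```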

2.4. Training, evaluation, and testing

To investigate the interplay between the illumination pattern and the performance of the CNN, we train our network using several different combinations of BF and DF images. The illumination patterns along with the CNN models used are shown in Fig. 3(a). Each illumination pattern is plotted in Fourier space, in which a yellow circle indicates the NA of the objective lens. Intensity images taken from the LEDs within the circle are BF, whereas those outside the circle are DF. The LEDs in use are marked in red. To systematically study the relation between the reconstructed resolution and the illumination’s angular coverage, we designed patterns with (P1) 13 BF only with 0.2 illumination NA, (P2) 13 BF + 36 DF with 0.6 illumination NA, (P3) 13 BF + 10 DF with 0.25 illumination NA, and (P4) 9 BF + 20 DF with 0.4 illumination NA. The following networks are investigated: P1 is trained on two networks, U-B13, which implements the UNet without DB in [55], and U-B13-cGAN, which implements the UNet in [51] with the cGAN architecture (i.e. with the discriminator network D in Fig. 2); P2 is trained on the cGAN network in Fig. 2, D-B13D36-cGAN; P3 is trained on the cGAN network D-B13D10-cGAN; P4 is trained on a cGAN network without and with the Fourier loss function, denoted as D-B9D20-cGAN and D-B9D20-F-cGAN, respectively.


Fig. 3 (A) Summary of the illumination patterns and network structures investigated. The illumination angles (shown in Fourier space) in use are marked in red. The yellow circle indicates the NA of the imaging system. (B) A sample full-FOV high-SBP phase reconstruction (at 4 hours) predicted by the proposed network D-B9D20-F-cGAN. (C) The original intensity image, ground truth phase image, and the reconstructions from the CNN models for the zoom-in area [marked by the red square in (B)].


Each model was trained for ∼700–900 epochs. For UNet, the batch size was 16, whereas the batch size was 4 for UNet with DB due to memory limitations. We use the weight coefficients (λ1, β1, β2, λ2, λ3) = (10², 1, 0, 1, 10⁻⁵) when the Fourier loss is not used. When the Fourier loss is used, we first train the network with (λ1, β1, β2, λ2, λ3) = (10², 1, 0, 1, 10⁻⁵) for 700 epochs, and then with (λ1, β1, β2, λ2, λ3) = (10², 0.95, 0.05, 1, 10⁻⁵) for another 145 epochs. We observed that the network’s parameters are unstable in the early stage of training; to stabilize the training process, we added the Fourier loss only after 700 epochs. We used the ADAM optimizer [56] with an initial learning rate of 10⁻⁵ and a drop factor of 0.5 after every 10 epochs, in which each epoch contains 1000 iterations. In each iteration, the algorithm incrementally updates the model using a subset (set by the batch size) of the input. To fine-tune each network, as an optional step, we performed model validation using the FPM frame taken at 2 hours. The best models were selected based on the MAE metric calculated on the validation data.
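The two-stage schedule can be summarized in a short sketch. The `run_epoch` callback is hypothetical, standing in for the adversarial training loop sketched earlier; only the epoch counts, loss weights, and learning rate are taken from the text.

```python
def train_schedule(run_epoch):
    """Two-stage schedule described above. `run_epoch(weights, lr)` is a
    hypothetical callback performing one training epoch with the given
    loss weights (lambda1, beta1, beta2, lambda2, lambda3)."""
    stage1 = (1e2, 1.0, 0.0, 1.0, 1e-5)    # 700 epochs, no Fourier loss
    stage2 = (1e2, 0.95, 0.05, 1.0, 1e-5)  # 145 more epochs, Fourier loss added
    for _ in range(700):
        run_epoch(stage1, lr=1e-5)
    for _ in range(145):
        run_epoch(stage2, lr=1e-5)
```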

Once the CNN is trained, which only needs to be done once using the first FPM frame taken at 0 min, it is applied to reconstruct high-SBP phase video frames (i.e. the testing step). To perform the reconstruction, the same data preprocessing steps are followed as in the training phase. The raw intensity images are first divided into 4 × 4 sub-regions. Within each sub-region, image patches with the same size as the training patches (64×64×α) are used for reconstruction. Neighboring image patches have 15-pixel and 19-pixel overlaps in the horizontal and vertical directions, respectively. Each image patch is first up-sampled to 80×80 pixels with bilinear interpolation, and the predicted phase patch contains 320×320 pixels. Once reconstructions are performed on all 2288 patches, an alpha blending algorithm is used to form the full-FOV phase image containing 12800 × 10800 pixels. To reconstruct the video, we simply feed each FPM frame to the trained CNN to recover the high-SBP dynamic information from the time-series data. The time for reconstructing each full-FOV, high-SBP image is ∼25±2 seconds using our cGAN network with the added Fourier loss, D-B9D20-F-cGAN, which is ∼50× faster than the standard FPM algorithm (∼20 min per frame [29]). A detailed comparison of all networks is provided in Section 3 and Table 1.
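The stitching step can be illustrated with the following sketch of weighted (alpha) blending of overlapping predicted patches. This is our own illustration; the separable linear-ramp weight profile is an assumption, and the paper's exact blending profile may differ.

```python
import numpy as np

def blend_patches(patches, coords, out_shape, patch_size=320):
    """Feather overlapping 320x320 predictions into one large phase image.

    patches  : list of (320, 320) CNN outputs
    coords   : list of (row, col) upper-left positions on the output grid
    out_shape: (rows, cols) of the stitched full-FOV image
    """
    # Trapezoidal weight: flat in the middle, tapered toward the patch edges.
    ramp = np.minimum(np.linspace(0, 1, patch_size), np.linspace(1, 0, patch_size))
    ramp = np.clip(ramp * 4, 1e-3, 1.0)
    w2d = np.outer(ramp, ramp)
    acc = np.zeros(out_shape, dtype=np.float64)
    wsum = np.zeros(out_shape, dtype=np.float64)
    for p, (r, c) in zip(patches, coords):
        acc[r:r + patch_size, c:c + patch_size] += p * w2d
        wsum[r:r + patch_size, c:c + patch_size] += w2d
    return acc / np.maximum(wsum, 1e-12)
```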


Table 1. Performance metrics evaluated on the full-FOV testing data [Legend: * stands for -cGAN, • based on the region in Fig. 3(c), GT: ground truth]

3. Results and discussion

We present our results in real space (Fig. 3), in Fourier space (Fig. 4), and over different time points (Fig. 5).


Fig. 4 Fourier analysis of the CNN reconstructed phase images. We directly take the Fourier transform of the reconstructions in Fig. 3(c). They are compared with the raw intensity image from on-axis illumination and the ground truth from FPM. To illustrate the Fourier coverage in each model, we mark three circles in each image, in which the yellow circle corresponds to the support of the pupil function with a radius of 1×NA, the green circle corresponds to the support of the optical transfer function with a radius of 2×NA, and the orange circle is the support from the ground truth with a radius of 4×NA.


Fig. 5 Reconstructed temporal dynamic information using the proposed CNN. (A) The MAE metric is evaluated for every frame of the time-series experiment on all the CNN models. (B) Several frames of the reconstructed high-SBP phase video (see Visualization 1 for more examples) from a zoom-in region, where significant morphological changes are observed over the course of 4 hours.


Figure 3(a) summarizes all the illumination patterns used for training and testing along with the corresponding networks used. All networks are applied to reconstruct the entire time-series experiment. A sample large-SBP phase reconstruction across the full (1.7mm×2.1mm) FOV is shown in Fig. 3(b). In Fig. 3(c), we zoom-in on a sub-region to compare the results from different networks in the real space. For comparison, the raw low-resolution intensity image from the central BF illumination is shown, which was bilinearly up-sampled to have the same size as the network’s output.

The result from U-B13, which uses BF data only with the UNet (no DB or cGAN) and only the pixel-wise MAE loss, is a low-resolution phase image. It has been shown that the MAE loss function can lead to blurry results when solving an image reconstruction problem [51] because it does not place sufficient weight on the high-frequency content. To overcome this problem, we use a generative adversarial network with conditional input (cGAN) to reconstruct the phase image [50]. In U-B13-cGAN, the UNet is accompanied by a discriminator network in order to better learn high-frequency information. The introduction of the cGAN architecture allows us to better reconstruct sub-cellular structures with more perceptual details; however, the resolution still appears worse than the ground truth.

To further improve resolution, as in FPM, DF images are needed since they contain high spatial frequency information beyond the support of the optical transfer function (OTF). In addition, to deal with the added data size, we also seek a more efficient network structure with higher representation power. The dense block (DB) structure has been shown to provide efficient representation with a small number of parameters [43]. We present results from three illumination patterns with different angular coverage, all reconstructed with the DenseNet (UNet with DB) and cGAN structure. In D-B13D36-cGAN, we use 36 DF images covering up to 0.6 illumination NA. This leads to a moderate resolution improvement; however, the results are limited by the highly noisy data captured at very large NAs. In general, we observe that a higher illumination angle does not guarantee better resolution. The reason is that the DF data are subject to a much higher noise level than the BF data, and the noise level increases as the illumination angle increases [38]. When the signal-to-noise ratio (SNR) falls below a certain threshold, the inclusion of these DF data is no longer helpful. To confirm this, we first use a small amount of DF data from small angles in D-B13D10-cGAN, which yields a resolution improvement over D-B13D36-cGAN. It should be noted that the DF SNR could improve significantly if a dome-shaped LED array [57] were used instead of the planar array in [29]. Heuristically, we found that our CNN can reliably utilize DF data up to 0.4 illumination NA (P4). The reconstructions are further explored using two networks, D-B9D20-cGAN and D-B9D20-F-cGAN.

A major limitation of an image-space-only loss function is that the metric still favors low-frequency information [53] and under-weights high-frequency information. A recently proposed solution is to include an additional Fourier loss component [54]. The result from using this strategy is shown in D-B9D20-F-cGAN. Our reconstruction of the last frame on Hela cells is available at [58].

To better visualize the recovery of high-frequency information, Fig. 4 shows the Fourier transform of each image in Fig. 3(c). The spectrum of the on-axis BF image is mostly concentrated within the pupil region, i.e. the circular region with a radius of 1×NA, and extends up to the support of the OTF (i.e. 2×NA). It is well known that using only BF images can provide Fourier coverage up to the support of the OTF. As shown in the Fourier image of U-B13-cGAN, the network is able to fully recover this low-frequency information. The inclusion of DF images should lead to larger Fourier coverage; however, the improvement is not significant with an image-space-only loss function, as shown in the Fourier images of D-B13D36-cGAN, D-B13D10-cGAN, and D-B9D20-cGAN. The introduction of the Fourier domain loss significantly boosts the Fourier coverage up to the 0.4 illumination NA (compared to the 0.6 illumination NA of the ground truth), as shown in the Fourier image of D-B9D20-F-cGAN. We note that using the Fourier domain loss in the training process generally enhances the sharpness of the results and the frequency-domain sharpness metric (FM) [59]; however, it may trade off image-space metrics, such as MAE, SSIM, and PSNR, due to the different weighting schemes involved (see Table 1).
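For readers reproducing this analysis, the circles in Fig. 4 map an NA value to a pixel radius on the FFT grid. The sketch below is our own illustration; the wavelength and effective pixel size are assumed inputs, not values quoted in this section.

```python
import numpy as np

def fourier_radius_px(na, wavelength_um, pixel_um, n_pixels):
    """Pixel radius of the circle k = NA/lambda on an n_pixels-wide FFT grid.
    wavelength_um and pixel_um are assumed system parameters."""
    k_cutoff = na / wavelength_um        # spatial-frequency cutoff (cycles/um)
    dk = 1.0 / (n_pixels * pixel_um)     # frequency step of the FFT grid
    return k_cutoff / dk

def log_spectrum(phase):
    """Log-magnitude 2D spectrum of a reconstructed phase image."""
    F = np.fft.fftshift(np.fft.fft2(phase))
    return np.log1p(np.abs(F))
```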

Further inspecting the results from the CNN and comparing them to the FPM generated ‘ground truth’, we note that the ground truth image contains noisy structures, which are clearly visible in the background. All CNN reconstructed results are free from these background artifacts, demonstrating the robustness of the training process to noisy ground truth data.

A unique feature of our technique is the ability to reconstruct high-SBP phase videos with training data only from the first time point of a long time-series experiment. To demonstrate the effectiveness of this strategy, we show our CNN-predicted temporal frames over the course of more than 4 hours. During this period, a considerable amount of morphological (and hence phase distribution) change occurs due to cell division over several cell cycles. Figure 5(b) shows several frames (reconstructed with D-B9D20-F-cGAN) of a zoom-in region, where one cell grows and divides into multiple cells, and another cell’s membrane fluctuates rapidly. More example videos are provided in Visualization 1. A more quantitative evaluation of the ‘generalization error’ over time is presented in Fig. 5(a), in which the MAE metrics of all the networks studied are plotted for every frame in the time-series experiment. The error is low at the beginning of the experiment and grows slowly as time progresses.

4. Transfer learning

Practically, it is difficult to train a single network that can handle all sample types, which is a main drawback of the DL approach compared to model-based methods. To mitigate this problem, we investigate transfer learning, in which our CNN pre-trained on Hela cells is fine-tuned for other cell types. The effectiveness of this strategy for addressing the generalization limitation across sample types has also been demonstrated in other biomedical imaging applications [60].

We used D-B9D20-F-cGAN trained on Hela cells to predict the phase reconstruction of two other cell types (MCF10A, U2OS), with or without staining. The data were captured with the same setup as in [29]. In Fig. 6, we compare two results. First, we directly apply the D-B9D20-F-cGAN network to the new data. To further refine the results, we use transfer learning: we take the weights from the pre-trained network and continue training on the new cell data for ∼30 min. Note that these new cell data contain significant intensity differences. By fine-tuning the model, the CNN is able to produce high-quality reconstructions. During the transfer learning, we did not use any validation data and evaluated the new CNN’s performance directly after the 30-min training. The results show that transfer learning provides a practical way to broaden the utility of our technique.
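A minimal sketch of this fine-tuning procedure is given below. The `build_model` and `train_step` hooks are hypothetical placeholders for the architecture and training step sketched earlier; only the idea of loading the Hela-trained weights and continuing training for a fixed wall-clock budget is taken from the text.

```python
import time
import tensorflow as tf

def transfer_learn(pretrained_weights_path, build_model, new_dataset,
                   train_step, minutes=30):
    """Fine-tune the Hela-trained generator on a new cell type.

    build_model : callable returning the same generator architecture
    train_step  : callable(model, batch) performing one optimization step
    """
    model = build_model()
    model.load_weights(pretrained_weights_path)   # start from the Hela-cell weights
    deadline = time.time() + minutes * 60
    while time.time() < deadline:                 # train for the given time budget
        for batch in new_dataset:
            train_step(model, batch)
            if time.time() >= deadline:
                break
    return model
```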


Fig. 6 Transfer learning using the CNN (D-B9D20-F-cGAN) pre-trained on Hela cells to predict the phase of MCF10A, stained, and unstained U2OS cells. (a) The intensity images vary across cell types and before/after staining. The image patches are taken from the same FOV region and use the same illumination angle. (b) The regions used for testing and training to demonstrate the transfer learning. Phase reconstructed by (c1) directly applying the pre-trained CNN to the new data, (c2) after 30-min transfer learning, and (c3) the ground truth from [29].


5. Conclusion

We have demonstrated a deep learning framework for Fourier ptychography video reconstruction. The proposed CNN architecture fully exploits the unique high-SBP imaging capability of FPM so that it can be trained on a single frame and then generalized to a full time-series experiment. In addition, the CNN requires a reduced number of images for high-resolution phase recovery. The reconstruction of each high-SBP image takes less than 30 seconds. Overall, this technique significantly improves the imaging throughput of the FPM system by reducing both the acquisition and reconstruction times. The central idea of our technique is based on the observation that each FPM image contains a large cell ensemble covering all the morphological information present throughout the time-series experiment. By the principle of ergodicity, the statistical information learned from these large spatial ensembles in a single frame is shown to be sufficient to predict temporal dynamics with high fidelity. In practice, we showed that our trained CNN can successfully reconstruct a high-SBP phase video of dynamic live-cell populations with reduced noise artifacts. Using the conditional generative adversarial network (cGAN) framework and a weighted Fourier loss function, the proposed CNN is able to more effectively learn the high-resolution information encoded in the darkfield data. The technique may find wide applications in in vitro live-cell imaging, gathering large-scale spatial and temporal information in a data- and computation-efficient manner. We also demonstrate that transfer learning is a practical approach to image a broad range of new cell samples, bypassing the need to train an entirely new CNN from scratch.

Acknowledgments

We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]   [PubMed]  

2. A. Lucas, M. Iliadis, R. Molina, and A. K. Katsaggelos, “Using Deep Neural Networks for Inverse Problems in Imaging: Beyond Analytical Methods,” IEEE Signal Process. Mag. 35(1), 20–36 (2018). [CrossRef]  

3. M. Bertero and P. Boccacci, Introduction to inverse problems in imaging (IOP Publishing, 1998). [CrossRef]  

4. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19(9), 2345–2356 (2010). [CrossRef]  

5. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sciences 2(1), 183–202 (2009). [CrossRef]  

6. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” Foundations and Trends® in Machine Learning 3(1), 1–122 (2011). [CrossRef]  

7. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 105–144.

8. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

9. H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: Can plain neural networks compete with BM3D?” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.

10. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

11. O. Ronneberger, P. Fischer, and T. Brox “U-net: Convolutional networks for biomedical image segmentation,” https://arxiv.org/abs/1505.04597.

12. L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” Advances in Neural Information Processing Systems (NIPS, 2014), pp. 1790–1798.

13. M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, “Deconvolutional networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2528–2535.

14. H. Yao, F. Dai, D. Zhang, Y. Ma, S. Zhang, and Y. Zhang, “Dr2-net: Deep residual reconstruction network for image compressive sensing,” https://arxiv.org/abs/1702.05743.

15. K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 449–458.

16. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26(9), 4509–4522 (2017). [CrossRef]  

17. T. Nguyen, V. Bui, and G. Nehmetallah, “Computational optical tomography using 3-D deep convolutional neural networks,” Opt. Eng. 57(4), 043111 (2018).

18. E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, P. Ryan, A. Esteve, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In silico labeling: Predicting fluorescent labels in unlabeled images,” Cell 173(3), 792–803 (2018). [CrossRef]  

19. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. , 7(2), 17141 (2018). [CrossRef]  

20. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

21. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

22. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803 (2018). [CrossRef]  

23. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media,” https://arxiv.org/abs/1806.04139.

24. A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos, “Video super-resolution with convolutional neural networks,” IEEE Trans. Comput. Imaging 2(2), 109–122 (2016). [CrossRef]  

25. O. Shahar, A. Faktor, and M. Irani, “Space-time super-resolution from a single video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 3353–3360.

26. A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 2758–2766.

27. S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 257–265.

28. H. Chen, J. Gu, O. Gallo, M. Liu, A. Veeraraghavan, and J. Kautz, “Reblur2deblur: Deblurring videos via self-supervised learning,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2018), pp. 1–9.

29. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

30. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier Ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

31. D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cell imaging,” Science , 300(5616), 82–86 (2003). [CrossRef]   [PubMed]  

32. T. Ashihara and R. Baserga, “Cell synchronization,” Methods Enzymol. 58, 248–262 (1979). [CrossRef]  

33. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]   [PubMed]  

34. A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “Ptychnet: Cnn based fourier ptychography,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 1712–1716.

35. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-storm: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]  

36. M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: Pushing the limits of fluorescence microscopy,” https://www.biorxiv.org/content/early/2017/12/19/236463.

37. N. Boyd, E. Jonas, H. P. Babcock, and B. Recht, “Deeploco: Fast 3d localization microscopy using neural networks,” https://www.biorxiv.org/content/early/2018/02/16/267096.

38. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]  

39. T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, “Learning from massive noisy labeled data for image classification,” IEEE Conference on Computer Vision and Pattern Recognition, 2691–2699 (2015).

40. Z. Lu, Z. Fu, T. Xiang, P. Han, L. Wang, and X. Gao, “Learning from weak and noisy labels for semantic segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(3), 486–500 (2017). [CrossRef]  

41. Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013). [CrossRef]   [PubMed]  

42. M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” European conference on computer vision, Springer, 818–833 (2014).

43. G. Huang, Z. Liu, L. v. d. Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2261–2269.

44. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” https://arxiv.org/abs/1409.1556.

45. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” https://arxiv.org/abs/1502.03167.

46. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res. 15(1), 1929–1958 (2014).

47. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

48. F. Agostinelli, M. D. Hoffman, P. J. Sadowski, and P. Baldi, “Learning activation functions to improve deep neural networks,” https://arxiv.org/abs/1412.6830.

49. V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” https://arxiv.org/abs/1603.07285.

50. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NIPS, 2014), pp. 2672–2680.

51. P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 5967–5976.

52. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” in Advances in Neural Information Processing Systems (NIPS, 2016), pp. 2234–2242.

53. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

54. G. Yang, S. Yu, H. Dong, G. Slabaugh, P. L. Dragotti, X. Ye, F. Liu, S. Arridge, J. Keegan, Y. Guo, and D. Firmin, “Dagan: Deep de-aliasing generative adversarial networks for fast compressed sensing mri reconstruction,” IEEE Trans. Med. Imaging 37(6), 1310–1321 (2018). [CrossRef]   [PubMed]  

55. T. Nguyen, V. Bui, V. Lam, C. B. Raub, L.-C. Chang, and G. Nehmetallah, “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express 25(13), 15043–15057 (2017). [CrossRef]   [PubMed]  

56. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” https://arxiv.org/abs/1412.6980.

57. Z. F. Phillips, M. V. D’Ambrosio, L. Tian, J. J. Rulison, H. S. Patel, N. Sadras, A. V. Gande, N. A. Switz, D. A. Fletcher, and L. Waller, “Multi-contrast imaging and digital refocusing on a mobile microscope with a domed led array,” PLoS ONE 10(5), e0124938 (2015). [CrossRef]   [PubMed]  

58. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “DeepLearningFourierPtychographicMircoscopy,” https://github.com/32nguyen/DeepLearningFourierPtychographicMircoscopy (2018). Accessed: 2018-7-21.

59. K. De and V. Masilamani, “Image sharpness measure for blurred images in frequency domain,” Procedia Eng. 64, 149–158 (2013). [CrossRef]  

60. Y. Rivenson, H. Wang, Z. Wei, Y. Zhang, H. Gunaydin, and A. Ozcan, “Deep learning-based virtual histology staining using auto-fluorescence of label-free tissue,” https://arxiv.org/abs/1803.11293.

Supplementary Material (1)

Visualization 1: Gigapixel phase video of Hela cells dividing in vitro over 4 hours at 2-minute intervals, reconstructed using the generative adversarial network.
